• Title/Summary/Keyword: Distributed Cloud (분산 클라우드)

Deep Reinforcement Learning Based Distributed Offload Policy for Collaborative Edge Computing in Multi-Edge Networks (멀티 엣지 네트워크에서 협업 엣지컴퓨팅을 위한 심층강화학습 기반 분산 오프로딩 정책 연구)

  • Junho Jeong;Joosang Youn
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.29 no.5
    • /
    • pp.11-19
    • /
    • 2024
  • As task offloading from user devices transitions from the cloud to the edge, the demand for efficient resource management techniques has emerged. While numerous studies have employed reinforcement learning to address this challenge, many fail to adequately consider the overhead associated with real-world offloading tasks. This paper proposes a reinforcement learning-based distributed offloading policy generation method that incorporates task overhead. A simulation environment is constructed to validate the proposed approach. Experimental results demonstrate that the proposed method reduces edge queueing time, achieving up to 46.3% performance improvement over existing approaches.
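
The abstract does not give the paper's network model or reward design, so the following is only a minimal sketch of the general idea: a tabular Q-learning agent (rather than deep RL) that decides whether to run a task locally or offload it to one of two edge servers, with a reward that penalizes completion time including an assumed fixed offloading overhead. The action set and all constants are assumptions for illustration.

```python
# Minimal sketch (not the paper's model): tabular Q-learning for a single device
# deciding whether to run a task locally or offload it to one of two edge servers.
# The state is the queue length of each edge; the reward penalizes completion
# time including an assumed fixed offloading overhead.
import random
from collections import defaultdict

ACTIONS = ["local", "edge0", "edge1"]          # hypothetical action set
OFFLOAD_OVERHEAD = 2.0                          # assumed transfer/serialization cost
LOCAL_TIME = 8.0                                # assumed local execution time

def step(queues, action):
    """Return (completion_time, next_queues) for one task."""
    q = list(queues)
    if action == "local":
        t = LOCAL_TIME
    else:
        i = int(action[-1])
        t = OFFLOAD_OVERHEAD + 1.0 + q[i]       # wait in the edge queue, then execute
        q[i] += 1
    q = [max(0, x - 1) for x in q]              # each edge drains one task per step
    return t, tuple(q)

Q = defaultdict(float)
alpha, gamma, eps = 0.1, 0.9, 0.1
state = (0, 0)
for episode in range(5000):
    a = random.choice(ACTIONS) if random.random() < eps else \
        max(ACTIONS, key=lambda x: Q[(state, x)])
    t, nxt = step(state, a)
    reward = -t                                  # shorter completion time is better
    best_next = max(Q[(nxt, x)] for x in ACTIONS)
    Q[(state, a)] += alpha * (reward + gamma * best_next - Q[(state, a)])
    state = nxt

print({a: round(Q[((0, 0), a)], 2) for a in ACTIONS})
```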

A Study on the Cloud Service Model of CaaS Based on the Object Identification, ePosition, with a Structured Form of Texts (문자열로 구조화된 사물식별아이디 이포지션(ePosition) 기반의 클라우드 CaaS(Contents as a Service) 서비스 모델에 관한 연구)

  • Lee, Sang-Zee;Kang, Myung-Su;Cho, Won-Hee
    • Information Systems Review
    • /
    • v.15 no.3
    • /
    • pp.129-139
    • /
    • 2013
  • The Internet of Things (IoT), which refers to uniquely identifiable objects and their virtual representations in an Internet-like structure, is becoming a reality today. The amount of data on the IoT is expected to increase rapidly, and there are several key issues such as usefulness and interoperability among multiple distributed systems, services, and databases. In this paper, a methodology is proposed to realize a recently developed cloud service model, Contents as a Service (CaaS), a content delivery model referred to as 'on-demand contents'. In the proposed method, the global object identifier, ePosition, which has a structured form of two kinds of text strings joined by a separator symbol such as '#', is applied to identify a specific content item and is registered with that content on the same server. The approach is easy to realize and solves the interoperability problem systematically and logically. APIs for the proposed CaaS service can be combined to provide upgraded cloud service models such as 'CaaS-supported SaaS' and 'CaaS-supported PaaS'.
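
As a rough illustration of the identifier format described above, the sketch below parses and registers a two-part ePosition-style identifier joined by '#'. The field names (prefix, local_id) and the registry layout are assumptions for illustration, not the published ePosition specification.

```python
# Illustrative sketch only: a two-part identifier joined by '#', as described in
# the abstract. Field names and registry layout are assumptions.
from dataclasses import dataclass

@dataclass
class EPositionId:
    prefix: str      # e.g., a server/domain part (assumed)
    local_id: str    # e.g., a content-local part (assumed)

    SEPARATOR = "#"

    @classmethod
    def parse(cls, text: str) -> "EPositionId":
        prefix, sep, local_id = text.partition(cls.SEPARATOR)
        if not sep or not prefix or not local_id:
            raise ValueError(f"not a valid identifier: {text!r}")
        return cls(prefix, local_id)

    def __str__(self) -> str:
        return f"{self.prefix}{self.SEPARATOR}{self.local_id}"

# A content item can then be registered under its identifier on the same server:
registry = {}
eid = EPositionId.parse("example-server#content-0001")
registry[str(eid)] = {"title": "sample content", "type": "text"}
print(registry)
```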

A Practical Quality Model for Evaluation of Mobile Services Based on Mobile Internet Device (모바일 인터넷 장비에 기반한 모바일 서비스 평가를 위한 실용적인 품질모델)

  • Oh, Sang-Hun;La, Hyun-Jung;Kim, Soo-Dong
    • Journal of KIISE:Software and Applications
    • /
    • v.37 no.5
    • /
    • pp.341-353
    • /
    • 2010
  • Mobile Internet Devices (MIDs) allow users to flexibly use various forms of wireless Internet access such as Wi-Fi, GSM, CDMA, and 3G, and over such connections MID users can utilize application services. MID usage is expected to grow due to the benefits of portability, Internet accessibility, and other conveniences. However, MIDs have resource constraints such as limited CPU power, small memory size, limited battery life, and small screen size. Consequently, they cannot hold large, complex applications or process large amounts of data in memory. An effective solution to these limitations is to develop cloud services for the required application functionality, deploy them on the server side, and let MID users access the services over the Internet. A major concern in running cloud services for MIDs is the potential for low Quality of Service (QoS) due to the characteristics of MIDs, and even measuring the QoS of such services is more technically challenging than conventional quality measurement. In this paper, we first identify the characteristics of MIDs and of cloud services for MIDs. Based on these observations, we derive a number of quality attributes and their metrics for measuring the QoS of mobile services. A case study of applying the proposed quality model is presented to show its effectiveness and applicability.
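
The paper's own quality attributes and metrics are not listed in the abstract; the sketch below only illustrates how two generic QoS metrics (average response time and availability) might be computed from service call logs. The log format is made up.

```python
# Illustrative sketch only: two generic QoS metrics computed from a hypothetical
# call log of (response_time_ms, succeeded) entries. Not the paper's metrics.
from statistics import mean

calls = [(120, True), (340, True), (95, True), (None, False), (410, True)]

def avg_response_time(log):
    times = [t for t, ok in log if ok and t is not None]
    return mean(times) if times else float("nan")

def availability(log):
    return sum(1 for _, ok in log if ok) / len(log) if log else 0.0

print(f"avg response time: {avg_response_time(calls):.1f} ms")
print(f"availability: {availability(calls):.2%}")
```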

Storm-Based Dynamic Tag Cloud for Real-Time SNS Data (실시간 SNS 데이터를 위한 Storm 기반 동적 태그 클라우드)

  • Son, Siwoon;Kim, Dasol;Lee, Sujeong;Gil, Myeong-Seon;Moon, Yang-Sae
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.6 no.6
    • /
    • pp.309-314
    • /
    • 2017
  • In general, there are many difficulties in collecting, storing, and analyzing SNS (social network service) data, since such data have big data characteristics: they are generated very fast and mix structured and unstructured forms. In this paper, we propose a new data visualization framework that works on Apache Storm and is useful for real-time, dynamic analysis of SNS data. Apache Storm is a representative big data software platform that processes and analyzes real-time streaming data in a distributed environment. Using Storm, we collect and aggregate real-time Twitter data and dynamically visualize the aggregated results as a tag cloud. In addition to the Storm-based collection and aggregation functionality, we design and implement a Web interface through which a user enters keywords of interest and views the resulting tag cloud for those keywords. Finally, we show empirically that the framework lets users intuitively grasp changes in topics of interest in SNS data, and that the visualized results can be applied to other services such as thematic trend analysis, product recommendation, and customer needs identification.
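
Storm topologies are normally written in Java; the sketch below is a simplified Python analogue, not the authors' topology, showing only the core aggregation step: counting word frequencies over a sliding window of incoming messages and emitting the top terms that a tag cloud front end could render.

```python
# Simplified Python analogue (not the authors' Storm topology): aggregate word
# frequencies over a sliding window of incoming SNS messages and emit the top-N
# terms that a tag cloud front end could render, scaled by count.
from collections import Counter, deque

WINDOW = 100            # number of recent messages to aggregate (assumed)
TOP_N = 5

window = deque(maxlen=WINDOW)

def on_message(text: str):
    """Called for each incoming message; returns the current tag-cloud data."""
    window.append(text)
    counts = Counter(word.lower().strip("#,.!?")
                     for msg in window for word in msg.split()
                     if len(word) > 3)
    return counts.most_common(TOP_N)

stream = [
    "Cloud computing keeps trending on Twitter",
    "Real-time streaming with Storm and cloud services",
    "Tag cloud visualization of streaming tweets",
]
for msg in stream:
    print(on_message(msg))
```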

Blockchain-based multi-IoT verification model for overlay cloud environments (오버레이 클라우드 환경을 위한 블록체인 기반의 다중 IoT 검증 모델)

  • Jeong, Yoon-Su;Kim, Yong-Tae;Park, Gil-Cheol
    • Journal of Digital Convergence
    • /
    • v.19 no.4
    • /
    • pp.151-157
    • /
    • 2021
  • Recently, IoT technology has been applied to various cloud environments, which requires accurate verification of the diverse information generated by IoT devices. However, as IoT technologies converge with 5G technologies, IoT information is processed ever more rapidly, so accurate analysis is required. This paper proposes a blockchain-based multi-IoT verification model for overlay cloud environments. The proposed model processes IoT information in parallel by classifying it into two layers and distributing it, in bit units, into the blockchain, which minimizes bottlenecks in the overlay network while ensuring the integrity of information sent and received by embedded IoT devices within local IoT groups. Furthermore, the model lets each layer carry weight information, so that the server can process IoT information easily. In particular, the transmission and reception information exchanged between IoT devices is distributed into the blockchain in bit units and then weighted, which minimizes overlay network bottlenecks and facilitates server access.
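
The abstract does not specify the block structure or the two layers in detail, so the following is only a minimal sketch of the underlying integrity mechanism: a hash-chained ledger in which a server can verify that messages reported by IoT devices in a local group have not been tampered with. The block fields, including the 'weight' value, are assumptions for illustration.

```python
# Minimal sketch, not the proposed model: a hash-chained ledger that lets a server
# verify the integrity of messages reported by IoT devices in a local group.
# The block layout and the 'weight' field are assumptions for illustration.
import hashlib, json, time

def make_block(prev_hash: str, device_id: str, payload: dict, weight: float) -> dict:
    body = {"prev": prev_hash, "device": device_id, "payload": payload,
            "weight": weight, "ts": time.time()}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

def verify_chain(chain: list) -> bool:
    prev = "0" * 64
    for block in chain:
        body = {k: v for k, v in block.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != block["hash"] or block["prev"] != prev:
            return False
        prev = block["hash"]
    return True

chain, prev = [], "0" * 64
for i, reading in enumerate([{"temp": 21.5}, {"temp": 21.7}]):
    block = make_block(prev, f"sensor-{i}", reading, weight=0.5)
    chain.append(block)
    prev = block["hash"]
print(verify_chain(chain))          # True; any tampering makes this False
```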

Shared Distributed Big-Data Processing Platform Model: a Study (대용량 분산처리 플랫폼 공유 모델 연구)

  • Jeong, Hwanjin;Kang, Taeho;Kim, GyuSeok;Shin, YoungHo;Jeong, Jinkyu
    • KIISE Transactions on Computing Practices
    • /
    • v.22 no.11
    • /
    • pp.601-613
    • /
    • 2016
  • With the increasing need for big data processing, building a shared big data processing platform is important for minimizing time and monetary costs. In shared big data processing, multi-tenancy is a major requirement: each user should be given a single, isolated personal big data platform while the underlying hardware is shared among users to increase hardware utilization. In this paper, we explore two well-known shared big data processing platform models. One uses a native Hadoop cluster, and the other builds a virtual Hadoop cluster for each user. For each model, we verify whether it is sufficient to support multi-tenancy, and we present a method to complement the unsupported multi-tenancy features of the native Hadoop cluster model. Lastly, we build prototype platforms and compare the performance of both models.
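
As a rough illustration of the trade-off between the two models (not the paper's prototypes), the toy calculation below compares utilization when one shared native cluster can reuse idle capacity versus when capacity is statically partitioned into per-user virtual clusters. The demand figures are made up.

```python
# Illustrative sketch only (not the paper's prototypes): a toy comparison of the
# two sharing models -- one shared native cluster with a common pool versus
# statically partitioned per-user virtual clusters.
TOTAL_SLOTS = 100
USERS = {"alice": 70, "bob": 10, "carol": 0}     # hypothetical demand in slots

def shared_native_cluster(demand):
    """One pool: idle capacity left by inactive users can be reused."""
    return min(sum(demand.values()), TOTAL_SLOTS)

def per_user_virtual_clusters(demand):
    """Static partitions: each user is capped at an equal share."""
    share = TOTAL_SLOTS // len(demand)
    return sum(min(d, share) for d in demand.values())

print("shared native cluster utilization:", shared_native_cluster(USERS), "/", TOTAL_SLOTS)
print("virtual clusters utilization:", per_user_virtual_clusters(USERS), "/", TOTAL_SLOTS)
```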

A Study on Effective Peer Search Algorithm Considering Peer's Attribute using JXTA in Peer-to-Peer Network (JXTA를 이용한 Peer-to-Peer 환경에서 Peer의 성향을 고려한 Peer 탐색 알고리즘의 연구)

  • Lee, Jong-Seo;Moon, Il-Young
    • Journal of Advanced Navigation Technology
    • /
    • v.15 no.4
    • /
    • pp.632-639
    • /
    • 2011
  • Searching for distributed resources efficiently is very important in distributed and cloud computing environments. Distributed resource search can incur system overhead and take time proportional to the number of searches, because it has to visit many peers to find the requested information. The open-source community project JXTA defines an open set of standard protocols for ad hoc, pervasive, peer-to-peer computing, serving as a common platform for developing a wide variety of decentralized network applications. In this paper, we propose a peer search algorithm that considers peer attributes, evaluated with JXTA-Sim; the original JXTA peer search algorithm uses a loosely consistent DHT. Our lookup algorithm decreases the number of WALK_LOOKUP messages and thus reduces network load, while delivering the same search performance as the original algorithm.
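
The sketch below is not the JXTA implementation; it only illustrates the general pattern of a loosely consistent DHT lookup that consults the index first and falls back to a bounded random walk, counting how many lookup messages are sent. Peer names, the index layout, and the walk limit are assumptions.

```python
# Illustrative sketch, not the JXTA implementation: a loosely consistent DHT lookup
# that first asks the peers the index points to and only then falls back to a
# bounded random walk, counting the lookup messages sent.
import hashlib, random

PEERS = {f"peer{i}": {} for i in range(20)}                 # peer_id -> local store
INDEX = {}                                                  # loosely consistent index

def key_hash(key: str) -> int:
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

def publish(key: str, value, peer_id: str):
    PEERS[peer_id][key] = value
    # the index may be stale or incomplete, hence "loosely consistent"
    INDEX.setdefault(key_hash(key) % 8, set()).add(peer_id)

def lookup(key: str, walk_limit: int = 5):
    messages = 0
    # 1) consult the peers suggested by the DHT index
    for peer_id in INDEX.get(key_hash(key) % 8, ()):
        messages += 1
        if key in PEERS[peer_id]:
            return PEERS[peer_id][key], messages
    # 2) bounded random walk as a fallback (the cost WALK_LOOKUP-style messages pay
    #    when the index misses)
    for peer_id in random.sample(list(PEERS), walk_limit):
        messages += 1
        if key in PEERS[peer_id]:
            return PEERS[peer_id][key], messages
    return None, messages

publish("dataset-A", "value", "peer3")
print(lookup("dataset-A"))
```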

A DDMPF(Distributed Data Management Protocol using FAT) Design of Self-organized Storage for Negotiation among a Client and Servers based on Clouding (클라우딩 기반에서 클라이언트와 서버간 협상을 위한 자가 조직 저장매체의 DDMPF(Distributed Data Management Protocol using FAT) 설계)

  • Lee, Byung-Kwan;Jeong, Eun-Hee;Yang, Seung-Hae
    • Journal of Korea Multimedia Society
    • /
    • v.15 no.8
    • /
    • pp.1048-1058
    • /
    • 2012
  • This paper proposes the DDMPF (Distributed Data Management Protocol using FAT), which prevents data loss and preserves the security of self-organized storage, comprising a client, a storage server, and a verification server, in a cloud environment. The DDMPF builds a self-organized storage server and, in contrast to the centralized design of existing cloud storage, avoids the data loss caused by storage server problems by partitioning data and distributing the pieces across the storage; it also improves the efficiency of distributed data management with a FAT (File Allocation Table). In addition, the DDMPF improves data reliability by having the verification server check the data integrity of the storage server, and strengthens security through double encryption with the client's private key and the system's master key using the EC-DH algorithm. Furthermore, the DDMPF limits the number of verification servers, detects flooding attacks by attaching a TS (timestamp) to each verification request message, and detects replay attacks by using a newly generated nonce whenever verification is requested.
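
The following is a minimal sketch of the freshness checks mentioned in the abstract (timestamp and nonce), not the full DDMPF protocol: a verification server rejects requests whose timestamp is too old or whose nonce has already been seen. The skew window is an assumption.

```python
# Minimal sketch of timestamp/nonce freshness checks (not the full DDMPF protocol):
# reject requests that are too old (flooding/stale traffic) or whose nonce has
# been seen before (replay).
import time, secrets

MAX_SKEW_SECONDS = 30        # assumed acceptance window
seen_nonces = set()

def new_request(payload: str) -> dict:
    return {"payload": payload, "ts": time.time(), "nonce": secrets.token_hex(16)}

def accept(request: dict) -> bool:
    if abs(time.time() - request["ts"]) > MAX_SKEW_SECONDS:
        return False                      # stale timestamp
    if request["nonce"] in seen_nonces:
        return False                      # replayed nonce
    seen_nonces.add(request["nonce"])
    return True

req = new_request("verify block 17")
print(accept(req))       # True on first delivery
print(accept(req))       # False when the same message is replayed
```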

A Scheduling Algorithm for Performance Enhancement of Science Data Center Network based on OpenFlow (오픈플로우 기반의 과학실험데이터센터 네트워크의 성능 향상을 위한 스케줄링 알고리즘)

  • Kong, Jong Uk;Min, Seok Hong;Lee, Jae Yong;Kim, Byung Chul
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.21 no.9
    • /
    • pp.1655-1665
    • /
    • 2017
  • Recently, data centers have been actively built by many cloud service providers, enterprises, research institutes, and others. They are generally built on a tree topology and use the ECMP data forwarding scheme for load balancing. In this paper, we examine data center network topologies such as the tree and fat-tree topologies, and load balancing technologies such as MLAG and ECMP. We then propose a scheduling algorithm that efficiently transmits particular files stored on hosts inside the data center to a destination node outside it, assuming a fat-tree topology with the OpenFlow protocol between the infrastructure layer and the control layer. We analyze performance numerically and compare the results with those of ECMP; the comparison shows that the proposed algorithm outperforms ECMP in terms of throughput and file transfer completion time.
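
The paper's scheduling algorithm is not detailed in the abstract; the sketch below only contrasts ECMP-style hashing with a simple least-loaded assignment of file transfers to core uplinks, which is the general kind of imbalance a controller-driven scheduler can avoid. File names, sizes, and the number of uplinks are made up.

```python
# Illustrative sketch only (not the paper's algorithm): assign outgoing file
# transfers to the least-loaded core uplink of an edge switch, versus ECMP-style
# hashing that may map several large files onto the same uplink.
import hashlib

UPLINKS = 4
files = [("run-001.dat", 40), ("run-002.dat", 40), ("run-003.dat", 40), ("run-004.dat", 40)]

def ecmp_assign(files):
    load = [0] * UPLINKS
    for name, size in files:
        i = int(hashlib.md5(name.encode()).hexdigest(), 16) % UPLINKS   # hash-based pick
        load[i] += size
    return load

def least_loaded_assign(files):
    load = [0] * UPLINKS
    for _, size in sorted(files, key=lambda f: -f[1]):
        i = load.index(min(load))         # a controller picks the emptiest uplink
        load[i] += size
    return load

# Completion time is driven by the most loaded uplink (the max of each list).
print("ECMP hashing load per uplink:      ", ecmp_assign(files))
print("least-loaded scheduling per uplink:", least_loaded_assign(files))
```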

Distributed data deduplication technique using similarity based clustering and multi-layer bloom filter (SDS 환경의 유사도 기반 클러스터링 및 다중 계층 블룸필터를 활용한 분산 중복제거 기법)

  • Yoon, Dabin;Kim, Deok-Hwan
    • The Journal of Korean Institute of Next Generation Computing
    • /
    • v.14 no.5
    • /
    • pp.60-70
    • /
    • 2018
  • Software-defined storage (SDS) is being deployed in cloud environments to allow multiple users to virtualize physical servers, but a solution for optimizing space efficiency with limited physical resources is needed. In conventional data deduplication systems, it is difficult to deduplicate redundant data uploaded to distributed storage. In this paper, we propose a distributed deduplication method using similarity-based clustering and a multi-layer Bloom filter. A Rabin hash is applied to determine the degree of similarity between virtual machine servers and to cluster similar virtual machines, which improves deduplication efficiency compared to deduplicating each storage node individually. In addition, a multi-layer Bloom filter is incorporated into the deduplication process to shorten processing time by reducing the number of false positives. Experimental results show that the proposed method improves the deduplication ratio by 9% compared to a deduplication method using IP-address-based clusters, with no difference in processing time.
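
As a rough sketch of the Bloom-filter step (single-layer here, not the paper's multi-layer design), the code below consults a Bloom filter before the chunk index so that most new chunks skip the index lookup, while filter hits are confirmed against the index to rule out false positives.

```python
# Minimal sketch (single-layer, not the paper's multi-layer design): a Bloom filter
# consulted before the chunk index during deduplication; filter hits are confirmed
# against the real index so false positives never cause data loss.
import hashlib

class BloomFilter:
    def __init__(self, size_bits=8192, hashes=3):
        self.size, self.hashes = size_bits, hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, data: bytes):
        for i in range(self.hashes):
            h = hashlib.sha256(i.to_bytes(1, "big") + data).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, data: bytes):
        for p in self._positions(data):
            self.bits[p // 8] |= 1 << (p % 8)

    def maybe_contains(self, data: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(data))

bloom, chunk_index = BloomFilter(), {}          # chunk fingerprint -> storage location

def store_chunk(chunk: bytes) -> str:
    fp = hashlib.sha256(chunk).hexdigest()
    if bloom.maybe_contains(chunk) and fp in chunk_index:
        return "duplicate"                      # confirmed by the index
    bloom.add(chunk)
    chunk_index[fp] = f"node-{len(chunk_index) % 3}"
    return "stored"

print(store_chunk(b"block A"))   # stored
print(store_chunk(b"block A"))   # duplicate
print(store_chunk(b"block B"))   # stored
```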