• Title/Summary/Keyword: Network Latency

Resource Allocation for Heterogeneous Service in Green Mobile Edge Networks Using Deep Reinforcement Learning

  • Sun, Si-yuan;Zheng, Ying;Zhou, Jun-hua;Weng, Jiu-xing;Wei, Yi-fei;Wang, Xiao-jun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.7
    • /
    • pp.2496-2512
    • /
    • 2021
  • The requirements of emerging services for powerful computing capability, high capacity, low latency, and low energy consumption pose severe challenges to the fifth-generation (5G) network. As a promising paradigm, mobile edge networks can provide services in proximity to users by deploying computing components and caches at the edge, which effectively decreases service delay. However, the coexistence of heterogeneous services and the sharing of limited resources lead to competition among services for multiple resources. This paper considers two typical heterogeneous services, computing services and content delivery services; to configure resources properly, it is crucial to develop effective offloading and caching strategies. Given the high energy consumption of 5G base stations, this paper adopts a hybrid energy supply model combining the traditional power grid and green energy. It is therefore necessary to design an association mechanism that allocates more service load to base stations rich in green energy, improving green energy utilization. The paper formulates the joint optimization of computation offloading, caching, and resource allocation for heterogeneous services, with the objective of minimizing on-grid power consumption under limited-resource and QoS constraints. Since this joint problem is a mixed-integer nonlinear program that is intractable to solve directly, the paper uses a deep reinforcement learning method to learn a near-optimal strategy through extensive training. Extensive simulation experiments show that, compared with other schemes, the proposed scheme allocates resources to heterogeneous services according to the green energy distribution and effectively reduces on-grid energy consumption.
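As a rough illustration of the reinforcement-learning idea in this abstract, the toy sketch below uses tabular Q-learning (a simple stand-in for the paper's deep RL agent) to learn to steer requests toward the base station with more harvested green energy; the two-station environment, discretized energy levels, and reward definition are all assumptions made for the example.

```python
import random
from collections import defaultdict

# Toy environment (hypothetical): two base stations with fluctuating green
# energy; each step one service request must be assigned to one of them.
# Reward is the negative on-grid power, i.e. demand not covered by the
# station's harvested green energy.
GREEN_LEVELS = [0, 1, 2, 3]          # discretized harvested-energy buckets
POWER_PER_TASK = 2                   # power units one request consumes
ACTIONS = [0, 1]                     # index of the serving base station

def step(state, action):
    green = state[action]
    on_grid = max(0, POWER_PER_TASK - green)
    next_state = tuple(random.choice(GREEN_LEVELS) for _ in state)
    return next_state, -on_grid

# Tabular Q-learning stand-in for the deep RL agent described in the paper.
Q = defaultdict(float)
alpha, gamma, eps = 0.1, 0.9, 0.1
state = (2, 1)
for _ in range(20000):
    if random.random() < eps:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    next_state, reward = step(state, action)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

# After training, the greedy policy prefers the station with more green energy.
print(max(ACTIONS, key=lambda a: Q[((3, 0), a)]))  # expected: 0
```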

Provably-Secure and Communication-Efficient Protocol for Dynamic Group Key Exchange (안전성이 증명 가능한 효율적인 동적 그룹 키 교환 프로토콜)

  • Junghyun Nam;Jinwoo Lee;Sungduk Kim;Seungjoo Kim;Dongho Won
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.14 no.4
    • /
    • pp.163-181
    • /
    • 2004
  • Group key agreement protocols are designed to solve the fundamental problem of securely establishing a session key among a group of parties communicating over a public channel. Although a number of protocols have been proposed to solve this problem over the years, they are not well suited for a high-delay wide area network; their communication overhead is significant in terms of the number of communication rounds or the number of exchanged messages, both of which are recognized as the dominant factors that slow down group key agreement over a networking environment with high communication latency. In this paper we present a communication-efficient group key agreement protocol and prove its security in the random oracle model under the factoring assumption. The proposed protocol provides perfect forward secrecy and requires only a constant number of communication rounds for any group rekeying operation, while achieving optimal message complexity.
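For readers unfamiliar with constant-round group key agreement, the sketch below shows a classic Burmester-Desmedt-style two-round exchange in toy form; it is not the protocol proposed in the paper (which is proven secure under the factoring assumption), only a reminder of why round count, rather than per-member computation, dominates over a high-latency WAN. The parameters are illustrative and the messages are unauthenticated.

```python
import secrets

# Toy parameters only; a real deployment needs a large safe prime and
# authenticated broadcasts. 2**127 - 1 is a Mersenne prime.
p = 2**127 - 1
g = 3
n = 4                                   # group size
r = [secrets.randbelow(p - 2) + 1 for _ in range(n)]

# Round 1: every member i broadcasts z_i = g^{r_i} mod p.
z = [pow(g, r_i, p) for r_i in r]

# Round 2: every member i broadcasts X_i = (z_{i+1} / z_{i-1})^{r_i} mod p.
X = [pow(z[(i + 1) % n] * pow(z[(i - 1) % n], -1, p) % p, r[i], p)
     for i in range(n)]

def session_key(i):
    # Each member combines its own secret with the broadcast values.
    k = pow(z[(i - 1) % n], n * r[i], p)
    for j in range(n - 1):
        k = k * pow(X[(i + j) % n], n - 1 - j, p) % p
    return k

keys = {session_key(i) for i in range(n)}
assert len(keys) == 1                   # all members derive the same key
```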

Construction of a Virtual Mobile Edge Computing Testbed Environment Using the EdgeCloudSim (EdgeCloudSim을 이용한 가상 이동 엣지 컴퓨팅 테스트베드 환경 개발)

  • Lim, Huhnkuk
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.8
    • /
    • pp.1102-1108
    • /
    • 2020
  • Mobile edge computing prepares for a new era of cloud computing and compensates for its shortcomings by processing data near the edge of the network, where the data is generated, rather than in a centralized data center. Placing computing power at the edge and analyzing data there, instead of in a distant data center, makes low-latency, high-speed computing services possible. In this article, we develop a virtual mobile edge computing testbed environment in which the cloud and edge nodes divide computing tasks from mobile terminals, using the EdgeCloudSim simulator. Within this virtual environment, we evaluate and analyze the performance of offloading techniques that distribute computing tasks from mobile terminals between the central cloud and mobile edge computing nodes. By providing a virtual mobile edge computing environment and offloading capabilities, we aim to give industry engineers prior knowledge for building mobile edge computing nodes that collaborate with the cloud.
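The following back-of-envelope Python model (not EdgeCloudSim code, whose Java API is not reproduced here) illustrates the kind of local-versus-edge-versus-cloud latency comparison such a testbed lets you run; all task sizes, link rates, and round-trip times are made up for the example.

```python
def completion_time(task_mi, data_mb, cpu_mips, uplink_mbps, rtt_ms):
    """Transfer the input, process it, and return the total delay in ms."""
    transfer_ms = data_mb * 8 / uplink_mbps * 1000
    compute_ms = task_mi / cpu_mips * 1000
    return transfer_ms + compute_ms + rtt_ms

task_mi, data_mb = 4000, 1.5          # hypothetical task size and payload
options = {
    "local": completion_time(task_mi, 0.0,     cpu_mips=2000,  uplink_mbps=1,  rtt_ms=0),
    "edge":  completion_time(task_mi, data_mb, cpu_mips=10000, uplink_mbps=50, rtt_ms=5),
    "cloud": completion_time(task_mi, data_mb, cpu_mips=40000, uplink_mbps=10, rtt_ms=80),
}
# With these made-up numbers the edge node wins on total latency.
print(min(options, key=options.get), options)
```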

Extracting optimal moving patterns of edge devices for efficient resource placement in an FEC environment (FEC 환경에서 효율적 자원 배치를 위한 엣지 디바이스의 최적 이동패턴 추출)

  • Lee, YonSik;Nam, KwangWoo;Jang, MinSeok
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.1
    • /
    • pp.162-169
    • /
    • 2022
  • In a dynamically changing, time-varying network environment, the optimal moving pattern of edge devices can be applied to distributing computing resources to edge cloud servers or to deploying new edge servers in an FEC (Fog/Edge Computing) environment. It can also be used to build an environment capable of efficient computation offloading, alleviating the latency problems that are a drawback of cloud computing. This paper proposes a frequency-based algorithm that extracts the optimal moving pattern by analyzing the moving paths of multiple edge devices requesting application services in an arbitrary spatio-temporal environment. A comparative experiment with the A* and Dijkstra algorithms shows that the proposed algorithm runs faster, uses less memory, and extracts a more accurate optimal path. Furthermore, the comparison with the A* algorithm suggests that applying weights (preference, congestion, etc.) together with frequency can increase path extraction accuracy.
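A minimal sketch of frequency-based pattern extraction is shown below; it is an assumed simplification rather than the paper's algorithm: transition counts are collected from device traces, and the cheapest path under a non-negative log-frequency weight is taken as the most strongly supported moving pattern.

```python
import heapq, math
from collections import Counter

trajectories = [                      # hypothetical device traces (cell IDs)
    ["A", "B", "C", "F"],
    ["A", "B", "C", "F"],
    ["A", "D", "E", "F"],
    ["A", "B", "E", "F"],
]
freq = Counter((a, b) for t in trajectories for a, b in zip(t, t[1:]))
max_count = max(freq.values())

def most_frequent_path(src, dst):
    # Dijkstra with non-negative weights log(max_count / count): transitions
    # observed more often cost less, so the cheapest path is the pattern
    # most strongly supported by the traces.
    heap, seen = [(0.0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for (a, b), c in freq.items():
            if a == node and b not in seen:
                heapq.heappush(heap, (cost + math.log(max_count / c), b, path + [b]))
    return None

print(most_frequent_path("A", "F"))   # ['A', 'B', 'C', 'F'] for these traces
```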

Implementation of FPGA-based Accelerator for GRU Inference with Structured Compression (구조적 압축을 통한 FPGA 기반 GRU 추론 가속기 설계)

  • Chae, Byeong-Cheol
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.6
    • /
    • pp.850-858
    • /
    • 2022
  • To deploy Gated Recurrent Units (GRU) on resource-constrained embedded devices, this paper presents a reconfigurable FPGA-based GRU accelerator that enables structured compression. First, a dense GRU model is significantly reduced in size by hybrid quantization and structured top-k pruning. Second, the energy consumed by external memory access is greatly reduced by the proposed reuse computing pattern. Finally, the accelerator can handle a structured sparse model that benefits from an algorithm-hardware co-design workflow. Moreover, inference tasks can be performed flexibly across feature dimensions, sequence lengths, and numbers of layers. Implemented on the Intel DE1-SoC FPGA, the proposed accelerator achieves 45.01 GOPs on a structured sparse GRU network without batching. Compared with CPU and GPU implementations, the low-cost FPGA accelerator achieves 57x and 30x improvements in latency and 300x and 23.44x improvements in energy efficiency, respectively. The proposed accelerator thus serves as an early study for real-time embedded applications, demonstrating potential for further development.
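The short sketch below illustrates, in software, the two compression steps named in the abstract (structured top-k pruning and quantization) on a single weight matrix; the row granularity, 8-bit symmetric quantization, and tensor sizes are assumptions, not the paper's exact scheme.

```python
import numpy as np

def topk_prune_rows(W, k):
    """Keep the k largest-magnitude weights in each row, zero the rest."""
    keep = np.argsort(np.abs(W), axis=1)[:, -k:]
    mask = np.zeros_like(W, dtype=bool)
    np.put_along_axis(mask, keep, True, axis=1)
    return W * mask

def quantize_int8(W):
    """Symmetric uniform quantization to int8 with a per-tensor scale."""
    scale = np.abs(W).max() / 127.0
    q = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64)).astype(np.float32)
W_sparse = topk_prune_rows(W, k=8)           # 87.5% structured sparsity
W_q, s = quantize_int8(W_sparse)
print((W_sparse != 0).mean(), W_q.dtype, s)  # 0.125 int8 <scale>
```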

Partial Offloading System of Multi-branch Structures in Fog/Edge Computing Environment (FEC 환경에서 다중 분기구조의 부분 오프로딩 시스템)

  • Lee, YonSik;Ding, Wei;Nam, KwangWoo;Jang, MinSeok
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.10
    • /
    • pp.1551-1558
    • /
    • 2022
  • In this paper, we propose a two-tier cooperative computing system composed of a mobile device and an edge server for partial offloading of multi-branch structures in Fog/Edge Computing environments. The proposed system includes an algorithm that splits up application service processing by applying reconstructive linearization techniques to multi-branch structures, as well as an optimal collaboration algorithm based on partial offloading between the mobile device and the edge server. Furthermore, we formulate computation offloading and CNN layer scheduling as latency minimization problems and simulate the effectiveness of the proposed system. The experiments show that the proposed algorithm is suitable for both DAG and chain topologies, adapts well to different network conditions, and provides efficient task-processing strategies and shorter processing times compared with local-only or edge-only execution. The proposed system can also support research on model optimization for executing application services on mobile devices and on the efficient distribution of edge resource workloads.
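As a concrete illustration of partition-point selection on an already linearized chain (the abstract's reconstructive linearization of the multi-branch structure is assumed to have been applied), the sketch below enumerates split points and picks the one with the lowest device-plus-transfer-plus-edge latency; all layer sizes and link parameters are hypothetical.

```python
layer_flops = [2, 3, 30, 30, 6]              # per-layer work (GFLOP), hypothetical
layer_out_mb = [10.0, 0.2, 0.1, 0.05, 0.01]  # size of each layer's output (MB)
RAW_INPUT_MB = 20.0
LOCAL_GFLOPS, EDGE_GFLOPS, UPLINK_MBPS, RTT_S = 10, 120, 40, 0.005

def latency(split):
    """Run layers [0, split) on the device, ship the data, finish on the edge."""
    local = sum(layer_flops[:split]) / LOCAL_GFLOPS
    edge = sum(layer_flops[split:]) / EDGE_GFLOPS
    if split == len(layer_flops):            # everything stays local
        return local
    data_mb = layer_out_mb[split - 1] if split > 0 else RAW_INPUT_MB
    transfer = data_mb * 8 / UPLINK_MBPS + RTT_S
    return local + transfer + edge

best = min(range(len(layer_flops) + 1), key=latency)
print(best, round(latency(best), 3))         # best split point and its latency (s)
```

With these made-up numbers the minimum falls at an interior split point, i.e. a partial offload beats both local-only and edge-only execution.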

UAV-MEC Offloading and Migration Decision Algorithm for Load Balancing in Vehicular Edge Computing Network (차량 엣지 컴퓨팅 네트워크에서 로드 밸런싱을 위한 UAV-MEC 오프로딩 및 마이그레이션 결정 알고리즘)

  • A Young, Shin;Yujin, Lim
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.11 no.12
    • /
    • pp.437-444
    • /
    • 2022
  • Recently, research on mobile edge services has been conducted to handle computationally intensive and latency-sensitive tasks occurring in wireless networks. However, ground-fixed MEC cannot flexibly cope with situations where task processing requests increase sharply, such as during commuting hours. To solve this problem, technologies that provide edge services using UAVs (Unmanned Aerial Vehicles) have emerged. Unlike ground MEC servers, UAVs have limited battery capacity, so it is necessary to optimize energy efficiency through load balancing between UAV MEC servers. Therefore, in this paper, we propose a load balancing technique that considers the energy state of the UAVs and the mobility of vehicles. The proposed technique consists of a task offloading scheme using a genetic algorithm and a task migration scheme using Q-learning. To evaluate its performance, experiments were conducted with varying vehicle speeds and numbers of vehicles, and performance was analyzed in terms of load variance, energy consumption, communication overhead, and delay-constraint satisfaction rate.
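The sketch below illustrates only the genetic-algorithm offloading step with a load-variance fitness (the Q-learning migration step is omitted); the chromosome encoding, population size, and task loads are illustrative assumptions, not the paper's settings.

```python
import random
import statistics

TASK_LOAD = [5, 3, 8, 2, 7, 4, 6, 1]      # hypothetical task sizes
N_UAVS, POP, GENS = 3, 40, 200

def fitness(assign):
    # Chromosome: assign[t] = index of the UAV serving task t.
    loads = [0.0] * N_UAVS
    for task, uav in enumerate(assign):
        loads[uav] += TASK_LOAD[task]
    return -statistics.pvariance(loads)    # lower load variance = better balance

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(a, rate=0.1):
    return [random.randrange(N_UAVS) if random.random() < rate else g for g in a]

pop = [[random.randrange(N_UAVS) for _ in TASK_LOAD] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]               # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print(best, fitness(best))                 # near-balanced UAV loads
```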

Design of A new Algorithm by Using Standard Deviation Techniques in Multi Edge Computing with IoT Application

  • HASNAIN A. ALMASHHADANI;XIAOHENG DENG;OSAMAH R. AL-HWAIDI;SARMAD T. ABDUL-SAMAD;MOHAMMED M. IBRAHM;SUHAIB N. ABDUL LATIF
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.4
    • /
    • pp.1147-1161
    • /
    • 2023
  • The Internet of Things (IoT) requires a new processing model that allows scalability in cloud computing while reducing the time delay caused by data transmission within a network. Such a model can be achieved by using resources closer to the user, i.e., by relying on edge computing (EC). The amount of IoT data also grows with the number of IoT devices. However, building such a flexible model within a heterogeneous environment is difficult in terms of resources. Moreover, the increasing demand for IoT services necessitates shortening delay and response time through effective load balancing. IoT devices are expected to generate huge amounts of data in a short time and to be deployed dynamically, and IoT services will be assigned to EC devices or cloud servers so as to minimize resource costs while meeting the latency and quality of service (QoS) constraints of IoT applications whose devices sit at the network endpoint. EC is an emerging solution to the data processing problem in IoT. In this study, we improve the load balancing process and distribute resources fairly among tasks, which in turn improves QoS in the cloud and reduces processing time and, consequently, response time.
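A hedged sketch of the core idea, placing each task on the edge node that keeps the standard deviation of node loads smallest, is given below; node counts and load values are invented for illustration.

```python
import statistics

def place(loads, task):
    """Return the node index that minimizes the post-placement std deviation."""
    def std_if(i):
        trial = loads.copy()
        trial[i] += task
        return statistics.pstdev(trial)
    return min(range(len(loads)), key=std_if)

edge_loads = [12.0, 7.0, 9.0]            # current load on each edge node
for task in [4, 2, 6, 3, 5]:
    node = place(edge_loads, task)
    edge_loads[node] += task
    print(f"task {task} -> node {node}, loads={edge_loads}, "
          f"std={statistics.pstdev(edge_loads):.2f}")
```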

Design and Implementation of a Hardware-based Transmission/Reception Accelerator for a Hybrid TCP/IP Offload Engine (하이브리드 TCP/IP Offload Engine을 위한 하드웨어 기반 송수신 가속기의 설계 및 구현)

  • Jang, Han-Kook;Chung, Sang-Hwa;Yoo, Dae-Hyun
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.34 no.9
    • /
    • pp.459-466
    • /
    • 2007
  • TCP/IP processing imposes a heavy load on the host CPU when it is handled by the host CPU on a very high-speed network. Recently, the TCP/IP Offload Engine (TOE), which processes TCP/IP on a network adapter instead of the host CPU, has become an attractive solution for reducing this load. There have been two approaches to implementing a TOE: the software TOE, in which TCP/IP is processed by an embedded processor, and the hardware TOE, in which TCP/IP is processed by a dedicated ASIC. The software TOE has poor performance, and the hardware TOE is neither flexible nor expandable enough to add new features. In this paper we designed and implemented a hybrid TOE architecture, in which TCP/IP is processed by cooperating hardware and software, based on an FPGA with two embedded processor cores. The hybrid TOE achieves high performance by processing time-critical operations, such as creating and processing data packets, in hardware. Software based on embedded Linux performs operations that are not time-critical, such as connection establishment, flow control, and congestion control, giving the hybrid TOE sufficient flexibility and expandability. To improve its performance further, we developed a hardware-based transmission/reception accelerator that handles important operations such as creating data packets. In the experiments, the hybrid TOE shows a minimum latency of about 19 μs, CPU utilization below 6%, and a maximum bandwidth of about 675 Mbps.
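The Python sketch below is a purely conceptual, software-only illustration of the fast-path/slow-path split described above (it is not the FPGA design): data segments on established connections take the fast path, while connection setup, teardown, and other control work fall through to the slow path.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    flags: set          # e.g. {"SYN"}, {"ACK"}, {"FIN", "ACK"}
    payload: bytes

established = set()     # connections already set up by the slow path

def fast_path(conn, seg):
    # Hardware analogue: checksum, sequence update, DMA of the payload.
    return f"fast: {len(seg.payload)} bytes on {conn}"

def slow_path(conn, seg):
    # Embedded-Linux analogue: connection management, congestion control.
    if "SYN" in seg.flags:
        established.add(conn)
        return f"slow: open {conn}"
    if "FIN" in seg.flags:
        established.discard(conn)
        return f"slow: close {conn}"
    return f"slow: control segment on {conn}"

def dispatch(conn, seg):
    if conn in established and seg.payload and not ({"SYN", "FIN"} & seg.flags):
        return fast_path(conn, seg)
    return slow_path(conn, seg)

print(dispatch(("10.0.0.2", 80), Segment({"SYN"}, b"")))
print(dispatch(("10.0.0.2", 80), Segment({"ACK"}, b"GET / HTTP/1.1\r\n")))
print(dispatch(("10.0.0.2", 80), Segment({"FIN", "ACK"}, b"")))
```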

P2P-based Mobility Management Protocol for Global Seamless Handover in Heterogeneous Wireless Network (이기종망에서 글로벌 끊김 없는 핸드오버를 위한 P2P 기반 이동성 관리 프로토콜)

  • Chun, Seung-Man;Lee, Seung-Mu;Park, Jong-Tae
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.49 no.12
    • /
    • pp.73-80
    • /
    • 2012
  • In this article, we propose a P2P-based mobility management protocol for global seamless handover in heterogeneous wireless networks. Unlike previous mobility management protocols such as IETF MIPv4/6 and their variants, the proposed protocol can support global seamless handover without changing the existing network infrastructure. The idea is that the location management function is separated from the packet forwarding function, and bidirectional IP tunnels for packet transmission are constructed dynamically between the two end-to-end mobile hosts. In addition, an early handover technique based on the IEEE 802.21 Media Independent Handover functions has been developed to avoid large handover delays and packet losses. The architecture and signaling procedure of the proposed protocol have been designed in detail, and mathematical analysis and simulation have been carried out for performance evaluation. The results show that the proposed protocol outperforms the existing MIPv6 and HMIPv6 in terms of handover latency and packet loss.
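The toy sketch below illustrates the end-to-end idea in this abstract in heavily simplified form (it is not the paper's signaling procedure): a location mapping from host identifier to current address is kept separate from forwarding, and a direct bidirectional tunnel between the two hosts is refreshed after each binding update.

```python
location = {}                       # host ID -> current care-of address
tunnels = {}                        # (host A, host B) -> tunnel endpoint pair

def binding_update(host_id, new_ip):
    """Location management: record the host's new address after a handover."""
    location[host_id] = new_ip
    # Refresh every tunnel that involves the host that moved.
    for pair in list(tunnels):
        if host_id in pair:
            tunnels[pair] = (location[pair[0]], location[pair[1]])

def open_tunnel(a, b):
    """Packet forwarding: a direct bidirectional tunnel between the two hosts."""
    tunnels[(a, b)] = (location[a], location[b])

binding_update("MH-1", "203.0.113.5")
binding_update("CN-1", "198.51.100.7")
open_tunnel("MH-1", "CN-1")
binding_update("MH-1", "192.0.2.44")     # handover to a new network
print(tunnels[("MH-1", "CN-1")])         # ('192.0.2.44', '198.51.100.7')
```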