• Title/Summary/Keyword: computation intensive task

Search Results: 12

A Design of Superscalar Digital Signal Processor (Design of a Multiple-Instruction-Processing DSP)

  • Park, Sung-Wook
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.3 / pp.323-328 / 2008
  • This paper presents a digital signal processor that achieves high throughput for both decision-intensive and computation-intensive tasks. The proposed processor employs a multiplier, two ALUs, and a load/store unit as its operational units. These four units are controlled and operate in parallel under a superscalar control scheme, which differs from prior DSP architectures. Performance was evaluated by implementing the AC-3 decoding algorithm, and a 37.8% improvement was achieved. This study is especially valuable for consumer-electronics applications, which require very low cost.

Optimizing Energy-Latency Tradeoff for Computation Offloading in SDIN-Enabled MEC-based IIoT

  • Zhang, Xinchang;Xia, Changsen;Ma, Tinghuai;Zhang, Lejun;Jin, Zilong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.12 / pp.4081-4098 / 2022
  • To tackle the contradiction between computation-intensive industrial applications and resource-weak Edge Devices (EDs) in the Industrial Internet of Things (IIoT), a novel computation task offloading scheme for SDIN-enabled MEC-based IIoT is proposed in this paper. To reduce task completion latency and the energy consumption of EDs, a joint optimization method is proposed that optimizes the local CPU-cycle frequency, the offloading decision, and the allocation of wireless and computation resources. Based on this optimization, the task offloading problem is formulated as a Mixed Integer Nonlinear Programming (MINLP) problem, a large-scale NP-hard problem. To solve it with acceptable time complexity, a sub-optimal algorithm, GPCOA, based on hybrid evolutionary computation is proposed. Simulation results reveal that the proposed method outperforms the baseline methods, and the optimization results show that the latency-related weight is effective for reducing task execution delay and improving energy efficiency.
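As a rough illustration of the weighted latency-energy objective that such evolutionary offloading schemes search over, the sketch below runs a plain genetic-style search over binary offloading decisions. The task parameters, cost model, and GA settings are assumptions for illustration only and are not the paper's GPCOA.

```python
# Hypothetical sketch of an evolutionary search over binary offloading decisions.
# The cost model (latency/energy terms, weights) is illustrative, not the paper's GPCOA.
import random

N_TASKS = 8          # tasks per edge device (assumed)
LOCAL_DELAY  = [0.8, 1.2, 0.5, 2.0, 1.1, 0.9, 1.6, 0.7]   # s, if executed locally
EDGE_DELAY   = [0.3, 0.5, 0.4, 0.6, 0.5, 0.4, 0.7, 0.3]   # s, offloaded (incl. transmission)
LOCAL_ENERGY = [0.6, 0.9, 0.4, 1.5, 0.8, 0.7, 1.2, 0.5]   # J, local CPU energy
TX_ENERGY    = [0.2, 0.3, 0.2, 0.4, 0.3, 0.2, 0.4, 0.2]   # J, radio energy to offload
W_LATENCY = 0.5      # latency-related weight (the knob the abstract highlights)

def cost(decision):
    """Weighted latency-energy cost of one offloading decision vector (1 = offload)."""
    delay  = sum(EDGE_DELAY[i] if d else LOCAL_DELAY[i] for i, d in enumerate(decision))
    energy = sum(TX_ENERGY[i] if d else LOCAL_ENERGY[i] for i, d in enumerate(decision))
    return W_LATENCY * delay + (1 - W_LATENCY) * energy

def evolve(pop_size=30, generations=100, mutation=0.1):
    pop = [[random.randint(0, 1) for _ in range(N_TASKS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)                      # elitist selection: keep the cheaper half
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_TASKS)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < mutation else g for g in child]
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best = evolve()
print("best offloading decision:", best, "cost:", round(cost(best), 3))
```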

Cost-Aware Scheduling of Computation-Intensive Tasks on Multi-Core Server

  • Ding, Youwei;Liu, Liang;Hu, Kongfa;Dai, Caiyan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.11 / pp.5465-5480 / 2018
  • Energy-efficient task scheduling on multi-core servers is a fundamental issue in green cloud computing. Multi-core processors are widely used in mobile devices, personal computers, and servers. Existing energy-efficient task scheduling methods chiefly focus on reducing the energy consumption of the processor itself and assume that the cores of the processor are controlled independently. However, the cores of some processors on the market are divided into several voltage islands, in each of which the cores must operate in the same state; moreover, the cost of the server includes not only the energy cost of the processor but also the energy of the server's other components and the cost of user waiting time. In this paper, we propose ICAS, a cost-aware scheduling algorithm for computation-intensive tasks on a multi-core server. Tasks are first allocated to cores, the optimal frequency of each core is then computed, and the frequency of each voltage island is finally determined. Experimental results show that the cost of ICAS is much lower than that of the existing method.
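A minimal sketch of the island-constrained frequency-selection idea described in the abstract, under an assumed workload and deadline model (greedy task allocation, lowest deadline-meeting frequency per core, then one shared frequency per voltage island); it is not the paper's ICAS algorithm.

```python
# Illustrative sketch of island-constrained frequency selection (not the paper's ICAS).
# Assumed model: each core needs enough frequency to finish its assigned cycles by a
# deadline; every core in a voltage island must then run at the island's maximum demand.

TASK_CYCLES = [2e9, 1e9, 3e9, 1.5e9, 2.5e9, 0.5e9]   # cycles per task (assumed)
CORES = 4
ISLANDS = [[0, 1], [2, 3]]                            # cores grouped into voltage islands
DEADLINE = 2.0                                        # seconds

# 1) Greedy allocation: give each task to the currently least-loaded core.
load = [0.0] * CORES
for c in sorted(TASK_CYCLES, reverse=True):
    i = load.index(min(load))
    load[i] += c

# 2) Per-core optimal frequency: the lowest frequency that still meets the deadline.
core_freq = [l / DEADLINE for l in load]              # Hz

# 3) Island frequency: cores in an island share one setting, so take the maximum.
island_freq = [max(core_freq[c] for c in island) for island in ISLANDS]

for k, island in enumerate(ISLANDS):
    print(f"island {k}: cores {island}, frequency {island_freq[k] / 1e9:.2f} GHz")
```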

Efficient Task Offloading Decision Based on Task Size Prediction Model and Genetic Algorithm

  • Quan T. Ngo;Dat Van Anh Duong;Seokhoon Yoon
    • International Journal of Internet, Broadcasting and Communication / v.16 no.3 / pp.16-26 / 2024
  • Mobile edge computing (MEC) plays a crucial role in improving the performance of resource-constrained mobile devices by offloading computation-intensive tasks to nearby edge servers. However, existing methods often neglect the critical consideration of future task requirements when making offloading decisions. In this paper, we propose an innovative approach that addresses this limitation. Our method leverages recurrent neural networks (RNNs) to predict task sizes for future time slots. Incorporating this predictive capability enables more informed offloading decisions that account for upcoming computational demands. We employ genetic algorithms (GAs) to fine-tune fitness functions for current and future time slots to optimize offloading decisions. Our objective is twofold: minimizing total processing time and reducing energy consumption. By considering future task requirements, our approach achieves more efficient resource utilization. We validate our method using a real-world dataset from Google-cluster. Experimental results demonstrate that our proposed approach outperforms baseline methods, highlighting its effectiveness in MEC systems.
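The sketch below shows one way the task-size prediction step could look, using an LSTM over a synthetic per-slot task-size history; the data, window length, and hyperparameters are assumptions, not the paper's model.

```python
# Minimal sketch of next-slot task-size prediction with an LSTM (one possible RNN choice).
# Synthetic data and hyperparameters are assumed; the paper's actual model may differ.
import torch
import torch.nn as nn

torch.manual_seed(0)
# Synthetic history: task size (MB) per time slot with a daily-like periodic pattern.
t = torch.arange(0, 200, dtype=torch.float32)
sizes = 10 + 3 * torch.sin(2 * torch.pi * t / 24) + 0.5 * torch.randn_like(t)

WINDOW = 24
X = torch.stack([sizes[i:i + WINDOW] for i in range(len(sizes) - WINDOW)]).unsqueeze(-1)
y = sizes[WINDOW:].unsqueeze(-1)

class SizePredictor(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)          # (batch, window, hidden)
        return self.head(out[:, -1])   # predict the next slot from the last hidden state

model = SizePredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

next_size = model(X[-1:]).item()       # predicted task size for the upcoming slot
print(f"predicted next-slot task size: {next_size:.2f} MB")
```

A predicted size like this would then feed the GA's fitness for the future slot, alongside the fitness for the current slot.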

Energy-Efficient Resource Allocation for Application Including Dependent Tasks in Mobile Edge Computing

  • Li, Yang;Xu, Gaochao;Ge, Jiaqi;Liu, Peng;Fu, Xiaodong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.6 / pp.2422-2443 / 2020
  • This paper studies a single-user Mobile Edge Computing (MEC) system in which the mobile device (MD) runs an application consisting of multiple computation components, or tasks, with dependencies. The MD can offload part of each computation-intensive, latency-sensitive task to an access point (AP) integrated with an MEC server. To accomplish the application faultlessly, we derive the optimal task offloading strategy in a time-division manner for a predetermined execution order under the constraints of limited computation and communication resources. The problem is formulated as an optimization problem that minimizes the energy consumption of the mobile device while satisfying the constraints on the computation tasks and the mobile device's resources. The optimization problem is equivalently transformed into solving a nonlinear equation with a linear inequality constraint by leveraging the Lagrange multiplier method, and the proposed dual bisection-search algorithm, Bi-JOTD, solves this nonlinear equation efficiently. In the outer bisection search, the algorithm searches for the optimal Lagrangian multiplier variable between the lower and upper boundaries. The inner bisection search obtains the Lagrangian multiplier vector corresponding to the variable received from the outer layer. Numerical results demonstrate that the proposed algorithm achieves a significant performance improvement over the baselines. The novel scheme not only reduces the difficulty of solving the problem but also achieves lower energy consumption and better performance.
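A generic sketch of the outer step of such a scheme: a scalar bisection search for a Lagrange multiplier that drives a constraint residual to zero. The toy constraint function is an assumption for illustration and stands in for the paper's inner/outer Bi-JOTD structure.

```python
# Generic bisection on a scalar Lagrange multiplier, illustrating the outer loop of a
# dual-decomposition scheme of the kind described above (the residual function is assumed).
def bisect(g, lo, hi, tol=1e-9, max_iter=100):
    """Find lambda in [lo, hi] with g(lambda) ~ 0, assuming g is monotone on the interval."""
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if abs(g(mid)) < tol or hi - lo < tol:
            return mid
        if g(lo) * g(mid) <= 0:   # root lies in the lower half
            hi = mid
        else:                     # root lies in the upper half
            lo = mid
    return 0.5 * (lo + hi)

# Toy example: allocated offloaded bits b(lambda) = 1/lambda must meet a latency budget B.
B = 4.0
g = lambda lam: 1.0 / lam - B        # constraint residual; root at lambda = 1/B
lam_star = bisect(g, 1e-6, 10.0)
print(f"optimal multiplier: {lam_star:.4f} (expected {1 / B:.4f})")
```

In the paper's two-level structure, each evaluation of the outer residual would itself involve an inner bisection that recovers the multiplier vector for the given outer variable.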

An Efficient Algorithm for Mining Frequent Sequences In Spatiotemporal Data

  • Vhan Vu Thi Hong;Chi Cheong-Hee;Ryu Keun-Ho
    • Korea Spatial Information System Society Conference Proceedings / 2005.11a / pp.61-66 / 2005
  • Spatiotemporal data mining represents the confluence of several fields, including spatiotemporal databases, machine learning, statistics, geographic visualization, and information theory. Spatial data mining and temporal data mining have each received much attention in the knowledge discovery in databases (KDD) and data mining research communities. In this paper, we introduce Max_MOP, an algorithm for discovering moving sequences in mobile environments. Max_MOP mines only maximal frequent moving patterns. We exploit a characteristic of the problem domain, the spatiotemporal proximity between activities, to partition the spatiotemporal space. Finding moving sequences requires considering all temporally ordered combinations of associations, which demands intensive computation; exploiting the spatiotemporal proximity characteristic, however, makes this task computationally feasible. The proposed technique is applicable to location-based services such as traffic services, tourist services, and location-aware advertising.
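To illustrate only the notion of maximal frequent moving patterns, the toy sketch below counts frequent contiguous location subsequences and keeps those not contained in any longer frequent one; the trajectories and support threshold are invented, and the spatiotemporal-proximity partitioning that Max_MOP relies on is omitted.

```python
# Toy sketch of "maximal frequent moving patterns": frequent contiguous location
# subsequences that are not contained in a longer frequent subsequence. This only
# illustrates maximality; it is not the Max_MOP algorithm.
from collections import Counter

trajectories = [
    ["home", "station", "office", "cafe"],
    ["home", "station", "office"],
    ["gym", "station", "office", "cafe"],
    ["home", "station", "mall"],
]
MIN_SUP = 2

# Count every contiguous subsequence of length >= 2.
counts = Counter()
for traj in trajectories:
    for i in range(len(traj)):
        for j in range(i + 2, len(traj) + 1):
            counts[tuple(traj[i:j])] += 1

frequent = {seq for seq, c in counts.items() if c >= MIN_SUP}

def contained(short, long_):
    """True if `short` occurs contiguously inside `long_`."""
    n, m = len(short), len(long_)
    return any(long_[k:k + n] == short for k in range(m - n + 1))

maximal = [s for s in frequent
           if not any(s != t and contained(s, t) for t in frequent)]
print("maximal frequent moving patterns:", maximal)
```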


A DRL-Based Task Offloading Scheme for a Connected Home Using MEC

  • Ducsun Lim;Kyu-Seek Sohn
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.6 / pp.61-67 / 2023
  • The rise of 5G and the proliferation of smart devices have underscored the significance of multi-access edge computing (MEC). Amid this trend, interest in effectively processing computation-intensive and latency-sensitive applications has increased. This study investigated a novel task offloading strategy that considers a probabilistic MEC environment to address these challenges. Initially, we considered the frequency of dynamic task requests and the unstable conditions of wireless channels to propose a method for minimizing vehicle power consumption and latency. Subsequently, we developed a deep reinforcement learning (DRL) based offloading technique that balances local computation against offloading transmission power. We analyzed the power consumption and queuing latency of vehicles using the deep deterministic policy gradient (DDPG) and deep Q-network (DQN) techniques. Finally, we derived and validated the optimal performance-enhancement strategy in a vehicle-based MEC environment.
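A schematic, entirely hypothetical environment for the kind of offloading MDP sketched above: the continuous action splits effort between local computation and offloading transmission power, and the reward penalizes both power consumption and queuing backlog. Dynamics and constants are illustrative; an actual DDPG or DQN agent would be trained on top of such an interface.

```python
# Toy offloading environment: state = (queue backlog, channel gain, task arrivals),
# action = (local CPU fraction, transmit power fraction), reward = -(power + backlog cost).
import random

class OffloadEnv:
    def __init__(self):
        self.queue = 0.0                               # backlog in Mbits
        self.gain = 1.0
        self.arrival = 0.0

    def reset(self):
        self.queue = 0.0
        self._refresh()
        return self._state()

    def _refresh(self):
        self.gain = random.uniform(0.2, 1.0)           # unstable wireless channel
        self.arrival = random.uniform(0.0, 2.0)        # dynamic task arrivals (Mbits)

    def _state(self):
        return (self.queue, self.gain, self.arrival)

    def step(self, action):
        """action = (local_cpu_fraction, tx_power) in [0,1]^2; continuous, DDPG-style."""
        local_cpu, tx_power = action
        served_local = 1.5 * local_cpu                 # Mbits processed on the device
        served_offload = 3.0 * self.gain * tx_power    # Mbits pushed to the MEC server
        self.queue = max(0.0, self.queue + self.arrival - served_local - served_offload)
        power = 0.8 * local_cpu ** 2 + 1.0 * tx_power  # watts (toy model)
        reward = -(power + 0.5 * self.queue)           # penalize power and queuing backlog
        self._refresh()
        return self._state(), reward

env = OffloadEnv()
state = env.reset()
for _ in range(5):                                     # random-policy rollout for illustration
    action = (random.random(), random.random())
    state, reward = env.step(action)
    print(f"queue={state[0]:.2f} Mbits, reward={reward:.2f}")
```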

Performance Comparison of Task Partitioning Methods in MEC System

  • Moon, Sungwon;Lim, Yujin
    • KIPS Transactions on Computer and Communication Systems / v.11 no.5 / pp.139-146 / 2022
  • With the recent development of the Internet of Things (IoT) and the convergence of vehicle and IT technologies, high-performance applications such as autonomous driving are emerging, and multi-access edge computing (MEC) has attracted much attention as a next-generation technology. To serve these computation-intensive tasks with low latency, many methods have been proposed for partitioning tasks so that they can be performed through the cooperation of multiple MEC servers (MECSs). Conventional task partitioning methods either partition tasks on the vehicles, as mobile devices, and offload the parts to multiple MECSs, or offload tasks from vehicles to an MECS and then partition and migrate them to other MECSs. In this paper, the performance of task partitioning methods using offloading and migration is compared and analyzed in terms of service delay, blocking rate, and energy consumption, according to how the partitioning targets are selected and the number of partitions. As the number of partitions increases, service delay improves, but the blocking rate and energy consumption worsen.

Distributed In-Memory Caching Method for ML Workload in Kubernetes

  • Dong-Hyeon Youn;Seokil Song
    • Journal of Platform Technology / v.11 no.4 / pp.71-79 / 2023
  • In this paper, we analyze the characteristics of machine learning workloads and, based on them, propose a distributed in-memory caching technique to improve their performance. The core of a machine learning workload is model training, and model training is a computation-intensive task. Running machine learning workloads in a Kubernetes-based cloud environment in which the computing framework and storage are separated allows resources to be allocated effectively, but delays can occur because I/O must be performed over the network. In this paper, we propose a distributed in-memory caching technique that improves the performance of machine learning workloads in such an environment. In particular, we propose a new method that precaches the data required by machine learning workloads into the distributed in-memory cache by analyzing Kubeflow pipelines, a Kubernetes-based machine learning pipeline management tool.
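A hypothetical sketch of pipeline-aware precaching: while one step runs, the inputs declared by downstream steps are pulled from (slow) remote storage into an in-memory cache. The pipeline-spec format and the storage and cache objects are stand-ins, not Kubeflow or real distributed-cache APIs.

```python
# Hypothetical pipeline-aware precaching sketch. The spec, remote storage, and cache
# below are simple in-process stand-ins used only to show the warm-ahead pattern.
import time

pipeline_spec = [                      # simplified stand-in for a pipeline graph
    {"step": "preprocess", "inputs": ["raw.csv"]},
    {"step": "train",      "inputs": ["train.parquet", "labels.parquet"]},
    {"step": "evaluate",   "inputs": ["test.parquet"]},
]

remote_storage = {name: f"<{name} bytes>"
                  for spec in pipeline_spec for name in spec["inputs"]}
cache = {}                             # stand-in for a distributed in-memory cache

def fetch_remote(name):
    time.sleep(0.05)                   # simulate network I/O to the separated storage
    return remote_storage[name]

def precache_for(current_step):
    """Warm the cache with the inputs of every step after the one currently running."""
    idx = next(i for i, s in enumerate(pipeline_spec) if s["step"] == current_step)
    for spec in pipeline_spec[idx + 1:]:
        for name in spec["inputs"]:
            if name not in cache:
                cache[name] = fetch_remote(name)

def read_input(name):
    return cache[name] if name in cache else fetch_remote(name)   # hit avoids network I/O

precache_for("preprocess")             # while preprocess runs, warm data for train/evaluate
print("cached ahead of time:", sorted(cache))
print("train reads from cache:", read_input("train.parquet"))
```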


Efficient Resource Allocation Strategies Based on Nash Bargaining Solution with Linearized Constraints

  • Choi, Jisoo;Jung, Seunghyun;Park, Hyunggon
    • The Transactions of The Korean Institute of Electrical Engineers / v.65 no.3 / pp.463-468 / 2016
  • The overall performance of a multiuser system depends significantly on how effectively and fairly the resources shared by its users are managed. Efficient resource management strategies are even more important for multimedia users, since multimedia data is delay-sensitive and massive. In this paper, we focus on resource allocation based on a game-theoretic approach, the Nash bargaining solution (NBS), to provide a quality-of-service (QoS) guarantee to each user. While the NBS is known to be a fair and optimal resource management strategy, finding it efficiently is challenging because the task is computation-intensive. To reduce the computation required for the NBS, we propose an approach with significantly lower complexity even when networks consist of a large number of users and a large amount of resources. The proposed approach linearizes the utility function of each user and formulates the problem of finding the NBS as a convex optimization, leading to a nearly optimal solution with significantly reduced computational complexity. Simulation results confirm the effectiveness of the proposed approach.
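A small sketch of the linearized-NBS idea: with linear utilities u_i(x) = a_i x + b_i, maximizing the Nash product reduces to maximizing the sum of log(u_i(x_i) - d_i) under a total-resource constraint, which is a convex program. The coefficients, disagreement points, and the SciPy solver choice are assumptions for illustration, not the paper's formulation.

```python
# Linearized-NBS sketch: maximize sum_i log(a_i * x_i + b_i - d_i) subject to
# sum_i x_i <= R and x_i >= 0. Coefficients and the SLSQP solver are assumptions.
import numpy as np
from scipy.optimize import minimize

a = np.array([2.0, 1.0, 3.0])     # linearized utility slopes per user (assumed)
b = np.array([0.5, 0.5, 0.5])     # linearized utility intercepts (assumed)
d = np.array([0.0, 0.0, 0.0])     # disagreement points
R = 10.0                          # total resource budget

def neg_log_nash_product(x):
    return -np.sum(np.log(a * x + b - d))

res = minimize(
    neg_log_nash_product,
    x0=np.full(3, R / 3),                                   # start from an equal split
    method="SLSQP",
    bounds=[(0.0, R)] * 3,
    constraints=[{"type": "ineq", "fun": lambda x: R - np.sum(x)}],
)
print("NBS allocation:", np.round(res.x, 3), "total:", round(res.x.sum(), 3))
```

Because the log-transformed objective is concave and the constraints are linear, a standard convex solver reaches a near-optimal allocation without searching the original Nash product directly.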