• Title/Summary/Keyword: real-time task scheduling

Optimization Techniques for Power-Saving in Real-Time IoT Systems using Fast Storage Media (고속 스토리지를 이용한 실시간 IoT 시스템의 전력 절감 최적화 기술)

  • Yoon, Suji;Park, Heejin;Cho, Kyungwoon;Bahn, Hyokyung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.6 / pp.71-76 / 2021
  • Recently, as the size of IoT data grows, the memory power consumption of real-time systems increases rapidly. This is because real-time systems always place entire tasks in memory, which increases DRAM demand significantly. In this paper, we adopt emerging fast storage media and move a certain portion of real-time tasks from DRAM to storage. The portion of tasks kept in storage is then loaded into memory when it is actually used. We incorporate this memory/storage power-saving scheme into the dynamic voltage/frequency scaling of processors, thereby optimizing the power consumption of the CPU and memory simultaneously. Specifically, the proposed technique aims at minimizing CPU idle time and DRAM size by determining appropriate CPU voltage modes and the memory swap ratio without violating the deadlines of any task. Through simulation experiments, we show that the proposed technique significantly reduces the power consumption of real-time systems.
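
A minimal sketch of the trade-off described above, assuming a plain EDF utilization test and invented parameters (`freq_scale`, `swap_ratio`, `reload_overhead`); it only illustrates the shape of the joint CPU-frequency/swap-ratio decision, not the authors' actual optimization.

```python
# Hypothetical sketch: pick the lowest CPU frequency and the largest swap ratio
# whose combined slowdown still passes an EDF utilization test, so that both
# CPU and DRAM power can be reduced without missing deadlines.

def edf_feasible(tasks, freq_scale, swap_ratio, reload_overhead=0.2):
    """tasks: list of (wcet_at_max_freq, period); returns True if utilization <= 1."""
    utilization = 0.0
    for wcet, period in tasks:
        exec_time = wcet / freq_scale                     # slower clock stretches execution
        exec_time *= 1.0 + swap_ratio * reload_overhead   # cost of reloading swapped parts
        utilization += exec_time / period
    return utilization <= 1.0

def choose_config(tasks, freq_levels, swap_ratios):
    """Scan from the most power-saving configuration to the least."""
    for freq in sorted(freq_levels):                      # low frequency first
        for ratio in sorted(swap_ratios, reverse=True):   # large swap ratio first
            if edf_feasible(tasks, freq, ratio):
                return freq, ratio
    return max(freq_levels), 0.0                          # fall back: full speed, no swap

# Example: two periodic tasks (WCET, period) in milliseconds.
print(choose_config([(2, 10), (3, 20)], [0.5, 0.75, 1.0], [0.0, 0.25, 0.5]))
```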

Low-Energy Intra-Task Voltage Scheduling using Static Timing Analysis (정적 시간 분석을 이용한 저전력 태스크내 전압 스케줄링)

  • Sin, Dong-Gun;Kim, Ji-Hong;Lee, Seong-Su
    • Journal of KIISE:Computer Systems and Theory / v.28 no.11 / pp.561-572 / 2001
  • Since the energy consumption of CMOS circuits has a quadratic dependency on the supply voltage, lowering the supply voltage is the most effective way of reducing energy consumption. We propose an intra-task voltage scheduling algorithm for low-energy hard real-time applications. Based on a static timing analysis technique, the proposed algorithm controls the supply voltage within an individual task boundary. By fully exploiting all the slack times, a program scheduled by the proposed algorithm always completes its execution near the deadline, thus achieving a high energy reduction ratio. In order to validate the effectiveness of the proposed algorithm, we built a software tool that automatically converts a DVS-unaware program into an equivalent low-energy program. Experimental results show that the low-energy version of an MPEG-4 encoder/decoder (converted by the software tool) consumes less than 7~25% of the energy consumed by the original program running on a fixed-voltage system with a power-down mode.
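
A toy illustration of intra-task voltage scaling in general, using a hypothetical `next_speed` helper with normalized speeds; it shows only the core slack calculation (remaining worst-case work over time left until the deadline), not the static-timing-analysis machinery of the paper.

```python
# Hedged sketch: at selected points inside a task (e.g. after a branch whose
# shorter path was taken), the clock speed is lowered so that the *remaining*
# worst-case work just fits in the time left before the deadline.

def next_speed(remaining_wcet, now, deadline, s_min=0.25, s_max=1.0):
    """Return a normalized speed (1.0 = full clock) for the rest of the task."""
    time_left = deadline - now
    if time_left <= 0:
        return s_max                       # no slack left: run at full speed
    speed = remaining_wcet / time_left     # just enough to finish by the deadline
    return min(s_max, max(s_min, speed))

# Example: 4 ms of worst-case work remains, 10 ms until the deadline.
print(next_speed(remaining_wcet=4.0, now=0.0, deadline=10.0))  # -> 0.4
```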

Robust Wireless Sensor and Actuator Network for Critical Control System (크리티컬한 제어 시스템용 고강건 무선 센서 액추에이터 네트워크)

  • Park, Pangun
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.11 / pp.1477-1483 / 2020
  • Guaranteeing the stability of control systems over wireless networks is still challenging due to lossy links and node failures. This paper proposes a hierarchical cluster-based network protocol called robust wireless sensor and actuator network (R-WSAN) that combines time, channel, and space resource diversity. R-WSAN includes a scheduling algorithm to support network resource allocation and a control task sharing scheme to maintain the control stability of multiple plants. R-WSAN was implemented on a real testbed using the Zolertia RE-Mote embedded hardware platform running the Contiki-NG operating system. Our experimental results demonstrate that R-WSAN provides highly reliable and robust performance against lossy links and node failures. Furthermore, the proposed scheduling algorithm and the task sharing scheme meet the stability requirement of the control systems even if a controller fails to support its control task.

Low Power EccEDF Algorithm for Real-Time Operating Systems (실시간 운영체제를 위한 저전력 EccEDF 알고리듬)

  • Lee, Min-Seok;Lee, Cheol-Hoon
    • The Journal of the Korea Contents Association / v.15 no.1 / pp.31-43 / 2015
  • For battery-based real-time embedded systems, high performance to meet their real-time constraints and energy efficiency to extend battery life are both essential. Real-Time Dynamic Voltage Scaling (RT-DVS) has been a key technique for satisfying both requirements. In this paper, we present an efficient RT-DVS algorithm called EccEDF, designed on the basis of ccEDF. The proposed algorithm precisely calculates the maximum unused utilization by taking the elapsed time into account, while keeping the structural simplicity of ccEDF, which overlooks the elapsed time of a task when calculating the available slack. On completion of a task, the maximum unused utilization can be calculated by dividing the remaining execution time ($C_i-cc_i$) by the remaining time ($P_i-E_i$), and this is proved using the fluid scheduling model. We also show that the algorithm outperforms ccEDF in practical applications modelled using a PXA250 and a 0.28V-to-1.2V wide-operating-range IA-32 processor model.
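
The slack formula quoted in the abstract can be sketched directly. The symbols $C_i$, $cc_i$, $P_i$, $E_i$ come from the abstract; the frequency bookkeeping around them is an illustrative assumption, not the EccEDF implementation.

```python
# Hedged sketch of the quoted formula: when a task completes, the unused
# utilization is (C_i - cc_i) / (P_i - E_i), i.e. the worst-case execution time
# it did not use divided by the time remaining in its period.

def unused_utilization(C_i, cc_i, P_i, E_i):
    """C_i: WCET, cc_i: actual execution time, P_i: period,
    E_i: time elapsed since release when the task completes."""
    remaining_time = P_i - E_i
    if remaining_time <= 0:
        return 0.0                              # finished exactly at its period boundary
    return (C_i - cc_i) / remaining_time

def frequency_after_completion(base_utilization, C_i, cc_i, P_i, E_i):
    """Illustrative guess: scale the normalized CPU frequency down by the
    utilization freed by the task's early completion."""
    return max(base_utilization - unused_utilization(C_i, cc_i, P_i, E_i), 0.0)

# Example: WCET 5 ms, actually ran 3 ms, period 20 ms, finished 4 ms after release.
print(unused_utilization(C_i=5, cc_i=3, P_i=20, E_i=4))  # -> 0.125
```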

The Design of Expanded Time Slice Task Scheduling in $\mu$C/OS-II ($\mu$C/OS-II에서의 확장된 시분할 스케줄링 설계)

  • 김태호;김창수;박철동;장용호
    • Proceedings of the Korea Multimedia Society Conference / 2001.06a / pp.8-11 / 2001
  • As a number of RTOSs (Real-Time Operating Systems) have been developed, they are being applied in a wide range of fields. Because current RTOSs are designed to support special-purpose devices such as home appliances, their designs vary widely with their intended purpose. In this paper, we extend $\mu$C/OS-II version 2.03, which supports only a priority-based preemption mechanism, with a method for managing multiple tasks that share the same priority and with the ability to run such equal-priority tasks. To this end, we modified the kernel structure of $\mu$C/OS-II and carried out simulations.
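
A conceptual model of the extension described above, written in Python rather than the C of the actual $\mu$C/OS-II kernel: priority-based scheduling with round-robin time slicing among ready tasks that share a priority. The class and parameter names are invented for illustration.

```python
# Conceptual sketch only (the actual work modifies the uC/OS-II kernel in C):
# tasks of the same priority take turns on the CPU in fixed time slices, while
# the highest-priority level always wins, as in a preemptive priority scheduler.

from collections import defaultdict, deque

class TimeSliceScheduler:
    def __init__(self, quantum_ticks=2):
        self.ready = defaultdict(deque)   # priority -> FIFO of ready task names
        self.quantum = quantum_ticks
        self.ticks_left = quantum_ticks

    def add(self, task, priority):
        self.ready[priority].append(task)

    def on_tick(self):
        """Called from the timer tick; returns the task that should run next."""
        if not self.ready:
            return None
        top = min(self.ready)             # smaller number = higher priority (uC/OS-II style)
        queue = self.ready[top]
        self.ticks_left -= 1
        if self.ticks_left <= 0 and len(queue) > 1:
            queue.rotate(-1)              # quantum expired: move current task to the back
            self.ticks_left = self.quantum
        return queue[0]

sched = TimeSliceScheduler(quantum_ticks=2)
for name in ("A", "B"):
    sched.add(name, priority=10)
print([sched.on_tick() for _ in range(6)])  # -> ['A', 'B', 'B', 'A', 'A', 'B']
```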

Performance Evaluation and Analysis of Multiple Scenarios of Big Data Stream Computing on Storm Platform

  • Sun, Dawei;Yan, Hongbin;Gao, Shang;Zhou, Zhangbing
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.7 / pp.2977-2997 / 2018
  • In the big data era, fresh data grows rapidly every day. More than 30,000 gigabytes of data are created every second, and the rate is accelerating. Many organizations rely heavily on real-time streaming, and big data stream computing helps them spot opportunities and risks in real-time big data. Storm, one of the most common online stream computing platforms, has been used for big data stream computing, with response times ranging from milliseconds to sub-seconds. The performance of Storm plays a crucial role in different application scenarios; however, few studies have been conducted to evaluate it. In this paper, we investigate the performance of Storm under different application scenarios. Our experimental results show that the throughput and latency of Storm are greatly affected by the number of instances of each vertex in the task topology and by the number of available resources in the data center. The fault-tolerance mechanism of Storm works well in most big data stream computing environments. As a result, it is suggested that a dynamic topology, an elastic scheduling framework, and a memory-based fault-tolerance mechanism are necessary for providing high-throughput, low-latency services on the Storm platform.

Hardware-Software Cosynthesis of Multitask Multicore SoC with Real-Time Constraints (실시간 제약조건을 갖는 다중태스크 다중코어 SoC의 하드웨어-소프트웨어 통합합성)

  • Lee Choon-Seung;Ha Soon-Hoi
    • Journal of KIISE:Computer Systems and Theory / v.33 no.9 / pp.592-607 / 2006
  • This paper proposes a technique to select processors and hardware IPs and to map tasks onto the selected processing elements, aiming to achieve high performance at minimal system cost when multitask applications with real-time constraints run on a multicore SoC. Such a technique is called hardware-software cosynthesis. A cosynthesis technique was already presented in our earlier work [1], where we divided the complex cosynthesis problem into three subproblems and conquered each separately: selection of appropriate processing components, mapping and scheduling of function blocks onto the selected processing components, and schedulability analysis. Despite these good features, our previous technique has a serious limitation in that a task monopolizes the entire system resource to get the minimum schedule length. In general, however, higher performance can be obtained in a multitask multicore system if independent tasks run concurrently on different processor cores. In this paper, we present two mapping techniques, the task mapping avoidance technique (TMA) and the task mapping pinning technique (TMP), which are applicable to general cases with diverse operating policies in a multicore environment. We obtained significant performance improvements for a multimedia real-time application, a multi-channel Digital Video Recorder system, and for randomly generated multitask graphs from related works.
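
As a generic point of reference only (not the paper's TMA/TMP techniques), the sketch below maps independent tasks onto cores with a longest-processing-time-first greedy; it illustrates why letting independent tasks occupy different cores shortens the overall schedule compared with letting one task monopolize all resources.

```python
# Generic baseline sketch: greedily assign each task (longest first) to the core
# with the smallest accumulated load, minimizing the makespan heuristically.

import heapq

def greedy_map(task_times, num_cores):
    """Return (makespan, mapping) where mapping[i] is the core assigned to task i."""
    heap = [(0.0, core) for core in range(num_cores)]   # (current load, core id)
    heapq.heapify(heap)
    mapping = {}
    # Place longer tasks first so they do not unbalance the schedule at the end.
    for task in sorted(range(len(task_times)), key=lambda t: -task_times[t]):
        load, core = heapq.heappop(heap)
        mapping[task] = core
        heapq.heappush(heap, (load + task_times[task], core))
    return max(load for load, _ in heap), mapping

# Five independent tasks on two cores; both cores end up with 6 time units of work.
print(greedy_map([4.0, 3.0, 2.0, 2.0, 1.0], num_cores=2))
```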

An Improved Online Algorithm to Minimize Total Error of the Imprecise Tasks with 0/1 Constraint (0/1 제약조건을 갖는 부정확한 태스크들의 총오류를 최소화시키기 위한 개선된 온라인 알고리즘)

  • Song, Gi-Hyeon
    • Journal of KIISE:Computer Systems and Theory / v.34 no.10 / pp.493-501 / 2007
  • The imprecise real-time system provides flexibility in scheduling time-critical tasks. Most scheduling problems of satisfying both the 0/1 constraint and timing constraints while minimizing the total error are NP-complete when the optional tasks have arbitrary processing times. Liu suggested a reasonable strategy for scheduling tasks with the 0/1 constraint on uniprocessors to minimize the total error, and Song et al. suggested a reasonable strategy for scheduling such tasks on multiprocessors. However, these are both off-line algorithms. In online scheduling, the NORA algorithm can find a schedule with the minimum total error for an imprecise online task system. In the NORA algorithm, the EDF strategy is adopted for scheduling the optional parts. For a task system with the 0/1 constraint, however, EDF scheduling may not be optimal in the sense of minimizing the total error. Furthermore, when the optional tasks are scheduled in ascending order of their required processing times, the NORA algorithm, which adopts the EDF strategy, may not produce the minimum total error. Therefore, in this paper, an online algorithm is proposed to minimize the total error for an imprecise task system with the 0/1 constraint. To compare the performance of the proposed algorithm and the NORA algorithm, a series of experiments was performed. The comparison shows that the proposed algorithm produces a total error similar to that of the NORA algorithm when the optional tasks are scheduled in random order of their required processing times, but produces less total error than the NORA algorithm, especially when the optional tasks are scheduled in ascending order of their required processing times.
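
A simplified model of the 0/1-constraint setting described above, assuming a single pool of slack and a keep-the-largest greedy; it is neither NORA nor the proposed algorithm, only an illustration of how the total error is the summed processing time of the discarded optional parts.

```python
# Hedged sketch: each optional part must run completely or be discarded (the 0/1
# constraint), so deciding which optional parts to keep within a fixed slack is
# a knapsack-like problem; the greedy below is only an illustration.

def total_error(optional_times, slack):
    """Greedily keep the largest optional parts that still fit into the slack."""
    kept, remaining = [], slack
    for t in sorted(optional_times, reverse=True):
        if t <= remaining:
            kept.append(t)
            remaining -= t
    return sum(optional_times) - sum(kept)   # error = work of discarded parts

# Example: optional parts of 5, 4, 3, 2 time units and 7 units of slack.
print(total_error([5, 4, 3, 2], slack=7))    # keeps 5 and 2 -> error 7
```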

Design and Implementation of Low-Power Transcoding Servers Based on Transcoding Task Distribution (트랜스코딩 작업의 분배를 활용한 저전력 트랜스코딩 서버 설계 및 구현)

  • Lee, Dayoung;Song, Minseok
    • The Journal of Korean Institute of Next Generation Computing / v.15 no.4 / pp.18-29 / 2019
  • A dynamic adaptive streaming server consumes high processor power because it handles a large number of transcoding operations at a time. For this reason, a multi-processor architecture is mandatory, and effective transcoding task distribution strategies are essential for it. In this paper, we present the design and implementation details of the transcoding workload distribution schemes of a 2-tier (front-end node and back-end node) transcoding server. For this, we implemented the following schemes: 1) allocation of transcoding tasks to appropriate back-end nodes, 2) task scheduling within the back-end nodes, and 3) communication between the front-end and back-end nodes. Experiments were conducted on a real testbed to compare the estimated and the actual power consumption and to verify the efficacy of the system. They also showed that the system can reduce the load on each node so as to optimize the power and time used for transcoding.
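
A sketch of the front-end dispatch step, assuming a simple least-loaded policy and invented names (`FrontEnd`, `dispatch`); the paper's actual allocation scheme is more elaborate, so this only shows where such a decision sits in a 2-tier design.

```python
# Illustrative sketch: the front-end node assigns each incoming transcoding job
# to the back-end node with the smallest estimated outstanding workload.

class FrontEnd:
    def __init__(self, backend_names):
        self.load = {name: 0.0 for name in backend_names}   # estimated pending work (s)

    def dispatch(self, job_id, estimated_cost):
        """Pick the least-loaded back-end node and account for the new job."""
        node = min(self.load, key=self.load.get)
        self.load[node] += estimated_cost
        return node

    def job_done(self, node, estimated_cost):
        self.load[node] = max(0.0, self.load[node] - estimated_cost)

fe = FrontEnd(["backend-1", "backend-2"])
for job, cost in [("clip-a", 12.0), ("clip-b", 5.0), ("clip-c", 4.0)]:
    print(job, "->", fe.dispatch(job, cost))
# clip-a -> backend-1, clip-b -> backend-2, clip-c -> backend-2
```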

Risk Management System based on Grid Computing for the Improvement of System Efficiency (시스템 효율성 증대를 위한 그리드 컴퓨팅 기반의 위험 관리 시스템)

  • Jung, Jae-Hun;Kim, Sin-Ryeong;Kim, Young-Gon
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.16 no.1 / pp.283-290 / 2016
  • With the recent development of science and technology, high-performance computing resources are needed to solve complex problems. To meet this requirement, grid computing, which binds geographically dispersed heterogeneous high-performance computing resources into one huge system, has been actively studied. However, research on processes for obtaining the best results with limited resources, and on scheduling policies that accurately predict the total execution time of real-time tasks, is still insufficient. In this paper, in order to overcome these problems, we propose a grid computing-based risk management system. We describe the system structure and the process for improving system efficiency, a grid computing-based working methodology, a risk policy module that can efficiently manage problems in the work of resources (agents), a scheduling technique and an allocation method that can re-allocate resources when problems occur, and monitoring that can manage the resources (agents).