• Title/Summary/Keyword: total execution time

A Genetic Algorithm for Minimizing Query Processing Time in Distributed Database Design: Total Time Versus Response Time (분산 데이타베이스에서의 질의실행시간 최소화를 위한 유전자알고리즘: 총 시간 대 반응시간)

  • Song, Suk-Kyu
    • The KIPS Transactions: Part D, v.16D no.3, pp.295-306, 2009
  • Query execution time minimization is an important objective in distributed database design. While total time minimization is the objective for On-Line Transaction Processing (OLTP), response time minimization is the objective for decision-support queries. We formulate the sub-query allocation problem using analytical models and solve it with a genetic algorithm (GA). We show that query execution plans produced under the total time minimization objective are inefficient from the response time perspective, and vice versa. The procedure is tested with simulation experiments on queries of up to 20 joins. Comparison with exhaustive enumeration indicates that the GA produced optimal solutions in all cases in much less time. (A toy GA contrasting the two objectives is sketched below.)
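
The contrast between the two objectives can be made concrete with a toy GA. The sketch below is a minimal illustration under assumed random cost numbers, not the paper's analytical model: an individual assigns each sub-query to a site, total time sums all site work, and response time takes the bottleneck site.

```python
# Minimal GA sketch for sub-query allocation (illustrative costs, not the
# paper's model): total time = sum of all site work (OLTP objective),
# response time = load of the slowest site (decision-support objective).
import random

N_SUBQUERIES, N_SITES = 8, 3
proc = [[random.uniform(1, 5) for _ in range(N_SITES)] for _ in range(N_SUBQUERIES)]

def total_time(plan):
    return sum(proc[q][s] for q, s in enumerate(plan))

def response_time(plan):
    load = [0.0] * N_SITES
    for q, s in enumerate(plan):
        load[s] += proc[q][s]
    return max(load)

def evolve(fitness, pop_size=30, gens=100):
    pop = [[random.randrange(N_SITES) for _ in range(N_SUBQUERIES)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)                        # minimize the objective
        survivors = pop[:pop_size // 2]              # keep the elite half
        while len(survivors) < pop_size:
            a, b = random.sample(survivors[:pop_size // 2], 2)
            cut = random.randrange(1, N_SUBQUERIES)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:                # point mutation
                child[random.randrange(N_SUBQUERIES)] = random.randrange(N_SITES)
            survivors.append(child)
        pop = survivors
    return min(pop, key=fitness)

best_total = evolve(total_time)
best_resp = evolve(response_time)
# A plan tuned for one objective is typically poor under the other:
print(response_time(best_total), "vs", response_time(best_resp))
```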

Real-Time Control System

  • Gharbi, Atef
    • International Journal of Computer Science & Network Security, v.21 no.4, pp.19-27, 2021
  • Task scheduling has been gaining attention in both industry and research. Scheduling that ensures independent task execution is critical in real-time systems. While task scheduling has received much attention in recent years, few works have been implemented in a real-time architecture, and the efficiency of classical scheduling strategies in real-time systems, in particular, is still understudied. To reduce total waiting time, we apply three scheduling approaches in this paper: First In/First Out (FIFO), Shortest Execution Time (SET), and Shortest-Longest Execution Time (SLET). Experimental results demonstrate the efficacy of SLET in comparison with the others in most cases across a wide range of configurations. (A toy comparison of the three policies follows.)
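
To fix ideas, here is a toy single-processor comparison of the three policies. The SLET interleaving rule below (alternating shortest and longest tasks) is an assumption standing in for the paper's definition:

```python
# Total waiting time of a task order on one processor: each task waits
# for everything scheduled before it.
def total_waiting_time(order):
    waited, elapsed = 0, 0
    for t in order:
        waited += elapsed
        elapsed += t
    return waited

tasks = [7, 2, 9, 4, 1]                       # execution times (illustrative)
fifo = list(tasks)                            # First In / First Out
set_order = sorted(tasks)                     # Shortest Execution Time first
asc, desc = sorted(tasks), sorted(tasks, reverse=True)
slet = [x for pair in zip(asc, desc) for x in pair][:len(tasks)]  # assumed rule

for name, order in (("FIFO", fifo), ("SET", set_order), ("SLET", slet)):
    print(name, total_waiting_time(order))    # FIFO 56, SET 25, SLET 42
```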

A Study on VLSI-Oriented 2-D Systolic Array Processor Design for APP (Algebraic Path Problem) (VLSI 지향적인 APP용 2-D SYSTOLIC ARRAY PROCESSOR 설계에 관한 연구)

  • Lee, Hyun-Soo;Bang, Jung-Hee
    • Journal of the Korean Institute of Telematics and Electronics B, v.30B no.7, pp.1-13, 1993
  • In this paper, the problems of conventional special-purpose array processors, such as their lack of flexibility, are investigated. A modified methodology is then suggested and applied to obtain a common solution for three typical APP algorithms that are solved by similar methods: SP (Shortest Path), TC (Transitive Closure), and MST (Minimum Spanning Tree). In the newly proposed APP parallel algorithm, real-time processing is possible without structural enhancement or functional restriction. In addition, we design a two-dimensional bit-parallel lower-triangular systolic array processor, and a single PE, in detail. For its evaluation, we consider the computational complexity according to the bit-processing method and describe the relationship between total chip size and execution time. The proposed processor, which accepts large data inputs in real time, achieves an execution time of 3n-4, which is optimal O(n) time complexity, with O(n^2) space complexity in terms of total gate count and a pipeline period rate of one. (The shared semiring skeleton is sketched below.)
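
The "common solution" the paper exploits is that SP and TC (and, in its framework, MST as well) are the same triple loop instantiated over different semirings; the systolic array implements this loop in hardware. A minimal software rendering of that skeleton, shown for the SP and TC cases:

```python
# The algebraic path problem as one parameterized triple loop: swap the
# semiring (plus, times) to get different classical algorithms.
INF = float("inf")

def algebraic_path(a, plus, times):
    n = len(a)
    for k in range(n):                    # Floyd-Warshall / Warshall skeleton
        for i in range(n):
            for j in range(n):
                a[i][j] = plus(a[i][j], times(a[i][k], a[k][j]))
    return a

# Shortest path: (min, +) semiring on edge weights
sp = algebraic_path([[0, 3, INF], [INF, 0, 1], [2, INF, 0]],
                    min, lambda x, y: x + y)
# Transitive closure: (or, and) semiring on a 0/1 adjacency matrix
tc = algebraic_path([[1, 1, 0], [0, 1, 1], [0, 0, 1]],
                    lambda x, y: x | y, lambda x, y: x & y)
print(sp, tc)
```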

Modified TDS (Task Duplicated based Scheduling) Scheme Optimizing Task Execution Time (태스크 실행 시간을 최적화한 개선된 태스크 중복 스케줄 기법)

  • Jang, Sei-Ie;Kim, Sung-Chun
    • Journal of KIISE: Computer Systems and Theory, v.27 no.6, pp.549-557, 2000
  • A Distributed Memory Machine (DMM) is necessary for the effective computation of complicated and very large data. Task scheduling is a method that reduces the communication time among tasks in order to reduce the total execution time of an application program, and it is very important for improving DMM performance. The Task Duplication based Scheduling (TDS) method improves execution time by reducing the communication time of tasks; it uses a clustering method that schedules tasks with large communication times on the same processor. However, TDS cannot optimize the communication time between a task sending data and a task receiving it. Hence, this paper proposes a new method which solves this problem. The Modified Task Duplication based Scheduling (MTDS) method, which can approximately optimize the communication time between sending and receiving tasks by checking an optimality condition, minimizes task execution time by reducing the communication time among tasks. System modeling shows that the task execution time of MTDS is about 70% faster than that of TDS in the best case and equal to it in the worst case, which proves that MTDS is better than TDS. (The payoff of duplication is illustrated below.)
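
The benefit of duplication that TDS and MTDS both exploit can be seen in a two-task example; the numbers below are illustrative and the sketch does not reproduce MTDS's optimality condition:

```python
# If task B's processor re-executes (duplicates) its parent A locally,
# the A -> B communication cost disappears from B's start time.
comp = {"A": 2, "B": 3}                  # computation costs (illustrative)
comm_AB = 5                              # inter-processor transfer cost A -> B

def finish_time_B(duplicate_A):
    if duplicate_A:
        start_B = comp["A"]              # B's processor re-runs A itself
    else:
        start_B = comp["A"] + comm_AB    # B waits for A's result to arrive
    return start_B + comp["B"]

print(finish_time_B(False), finish_time_B(True))   # 10 without, 5 with
```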

A deferring strategy to improve schedulability for the imprecise convergence on-line tasks (부정확한 융복합 온라인 태스크들의 스케쥴가능성을 향상시키기 위한 지연 전략)

  • Song, Gi-Hyeon
    • Journal of the Korea Convergence Society, v.12 no.2, pp.15-20, 2021
  • Imprecise real-time scheduling can be used to minimize the bad effects of timing faults by leaving less important tasks unfinished, if necessary, when a transient overload occurs. In imprecise scheduling, every time-critical task can be logically decomposed into two tasks: a mandatory task and an optional task. Recently, some studies in this field showed good schedulability performance and minimum total error by deferring the optional tasks, but that performance holds only when the execution time of each optional task is less than or equal to the execution time of its corresponding mandatory task. Therefore, this paper proposes a new deferring strategy under the reverse execution-time restriction. The strategy nevertheless produces schedulability performance comparable or superior to the previous studies and can minimize total error as well. (The mandatory/optional split is sketched below.)
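
The mandatory/optional decomposition and the effect of deferring can be sketched as follows. The deferring rule here (all optional parts run after all mandatory parts) is a simplification, not the paper's strategy; note that the optional parts are deliberately longer than their mandatory parts, matching the reversed restriction:

```python
# Imprecise tasks: (mandatory_time, optional_time, deadline). Mandatory
# parts are scheduled EDF; optional parts are deferred and run in the
# remaining slack, with skipped optional time counted as error.
tasks = [
    (1, 3, 4),     # optional part longer than mandatory part
    (2, 5, 9),
]

t, error = 0, 0
for m, _, d in sorted(tasks, key=lambda x: x[2]):   # EDF over mandatory parts
    t += m
    assert t <= d, "mandatory part missed its deadline"
for _, o, d in sorted(tasks, key=lambda x: x[2]):   # deferred optional parts
    run = max(0, min(o, d - t))                     # whatever fits before d
    t += run
    error += o - run                                # unexecuted optional time
print("total error:", error)
```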

Implementation of a Labview Based Time-Frequency Domain Reflectometry Real Time System using the PXI Modules (PXI모듈을 이용한 랩뷰 기반 시간-주파수 영역 반사파 실시간 계측 시스템 구현)

  • Park, Tae-Geun;Kwak, Ki-Seok;Park, Jin-Bae;Yoon, Tae-Sung
    • Proceedings of the KIEE Conference, 2006.04a, pp.336-338, 2006
  • One of the important topics concerning the safety of electrical and electronic systems is the reliability of the wiring. Time-Frequency Domain Reflectometry (TFDR) is a state-of-the-art method for detecting and locating faults on a wire or cable. The purpose of this paper is to implement a LabVIEW-based real-time TFDR system using PCI eXtensions for Instrumentation (PXI) instruments. The system consists of five parts: reference signal design, signal generation, signal acquisition, algorithm execution, and results display. For signal generation and acquisition we adopt Arbitrary Waveform Generator (AWG) and Digital Storage Oscilloscope (DSO) PXI modules, which offer commonality, compatibility, and easy integration at low cost. The PXI modules are controlled through LabVIEW programming, and the total system process is likewise executed by the LabVIEW application software. (The core processing chain is sketched below.)
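
Outside the LabVIEW/PXI specifics, the processing chain reduces to "design a reference chirp, acquire its reflection, correlate to locate the fault." A rough numpy rendering under assumed parameters (sampling rate, chirp slope, propagation velocity; real data would come from the AWG/DSO modules):

```python
# TFDR-style fault location sketch: matched-filter a reflected chirp.
import numpy as np

fs = 1e9                                  # assumed 1 GS/s sampling
t = np.arange(0, 1e-6, 1 / fs)
# windowed linear chirp sweeping 10 MHz -> 50 MHz over 1 us (assumed design)
ref = np.sin(2 * np.pi * (1e7 + 0.5 * 4e13 * t) * t) * np.hanning(t.size)

delay = 200                               # simulated round-trip delay (samples)
acquired = np.concatenate([np.zeros(delay), 0.4 * ref])[: t.size]

corr = np.correlate(acquired, ref, mode="full")
lag = corr.argmax() - (ref.size - 1)      # estimated round-trip delay
v = 2e8                                   # assumed propagation velocity (m/s)
print(f"fault at ~{lag / fs * v / 2:.1f} m")   # ~20 m here
```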

The Bayesian Approach of Software Optimal Release Time Based on Log Poisson Execution Time Model (포아송 실행시간 모형에 의존한 소프트웨어 최적방출시기에 대한 베이지안 접근 방법에 대한 연구)

  • Kim, Hee-Cheul;Shin, Hyun-Cheul
    • Journal of the Korea Society of Computer and Information, v.14 no.7, pp.1-8, 2009
  • In this paper, we study the decision problem of choosing an optimal release time: after testing a software system in the development phase, when should it be transferred to the user? The generally accepted optimal software release policy minimizes the total average software cost of development and maintenance under the constraint of satisfying a software reliability requirement. Bayesian parametric inference for the log Poisson execution time model is carried out with Markov chain Monte Carlo tools (Gibbs sampling and the Metropolis algorithm). A numerical example on the T1 data illustrates estimating the optimal software release time by both maximum likelihood estimation and Bayesian parametric estimation. (The cost trade-off is sketched below.)
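
The cost trade-off being minimized can be written down directly. The sketch below uses the log Poisson (Musa-Okumoto) mean value function with illustrative constants and a plain grid search; the paper's contribution, estimating the parameters by MLE and by Gibbs/Metropolis sampling, is not reproduced here:

```python
# Optimal release time under the log Poisson execution time model:
# mu(t) = ln(1 + lam*theta*t) / theta, with pre-release fix cost c1,
# post-release fix cost c2 > c1, and testing cost c3 per unit time.
import math

lam, theta = 20.0, 0.05        # illustrative model parameters
c1, c2, c3 = 1.0, 5.0, 0.5     # illustrative cost coefficients
T_LIFE = 1000.0                # assumed software life-cycle length

def mu(t):
    return math.log(1 + lam * theta * t) / theta

def cost(T):
    return c1 * mu(T) + c2 * (mu(T_LIFE) - mu(T)) + c3 * T

T_opt = min((cost(T), T) for T in [0.5 * k for k in range(1, 2000)])[1]
print("optimal release time ~", T_opt)     # ~159 with these constants
```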

Real-Time Job Scheduling Strategy for Grid Computing (그리드 컴퓨팅을 위한 실시간 작업 스케줄링 정책)

  • Choe, Jun-Young;Lee, Won-Joo;Jeon, Chang-Ho
    • Journal of the Korea Society of Computer and Information, v.15 no.2, pp.1-8, 2010
  • In this paper, we propose a scheduling strategy for grid environments that reduces resource cost. The strategy considers resource cost and job failure rate to allocate local computing resources efficiently. The key idea is two-level scheduling with a remote and a local scheduler. The remote scheduler determines the expected total execution time of each job, using the current network and local system status maintained in its resource database, and allocates each job to the local system with the minimum expected total execution time. The local scheduler recalculates the waiting time and execution time of the allocated job and uses them to determine whether the job can be processed within the specified deadline; if it cannot finish in time, the job is migrated to another local system. Through simulation, we show that the proposed strategy reduces resource cost and improves performance compared to the previous Greedy strategy. (The two-level decision is sketched below.)
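
The two levels can be sketched in a few lines; the fields and numbers below are illustrative stand-ins for the resource database the remote scheduler maintains:

```python
# Remote level: rank systems by expected total execution time. Local
# level: re-check the deadline and pass the job on if it cannot finish.
from dataclasses import dataclass

@dataclass
class LocalSystem:
    name: str
    queue_wait: float    # current waiting time in this system's queue
    speed: float         # relative processing speed

def expected_total_time(s, job_size):
    return s.queue_wait + job_size / s.speed

def schedule(job_size, deadline, systems):
    for s in sorted(systems, key=lambda s: expected_total_time(s, job_size)):
        if expected_total_time(s, job_size) <= deadline:
            return s.name            # first system that can meet the deadline
    return None                      # no system can; job is rejected/deferred

systems = [LocalSystem("A", 5.0, 1.0), LocalSystem("B", 1.0, 0.5)]
print(schedule(job_size=4.0, deadline=10.0, systems=systems))
```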

A Relative Performance Index-based Job Migration in Grid Computing Environment (그리드 컴퓨팅 환경에서의 상대성능지수에 기반한 작업 이주)

  • Kim Young-Gyun;Oh Gil-Ho;Cho Kum Won;Ko Soon-Heum
    • Journal of KIISE: Computing Practices and Letters, v.11 no.4, pp.293-304, 2005
  • In this paper, we study job migration in a grid computing environment with Cactus and MPICH-G2 based on Globus. The concept is to perform job migration by finding a site with enough computational resources to decrease execution time in the grid. The Migration Manager recovers the job from its checkpointing files and restarts it on the selected site. To select a migration site, the proposed method considers each system's performance index, CPU load, the network traffic required to send the migrating job's files, and the predicted execution time at the candidate site, and then selects the site with the maximal performance gain. By choosing a site with minimum migration time and minimum execution time, this approach yields a more efficient grid computing environment. The proposed method is validated by an effective decrease in total execution time on the K*Grid. (The selection rule is sketched below.)
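
The selection rule amounts to comparing "stay put" against "move the checkpoint and finish elsewhere." A minimal sketch with made-up stand-ins for the performance index, CPU load, and traffic measurements:

```python
# Migrate only if some site's checkpoint-transfer cost plus predicted
# remaining execution time beats finishing on the current site.
def remaining_time(site, work_left):
    # higher performance index and lower load -> faster execution (assumed)
    return work_left / (site["perf_index"] * (1 - site["cpu_load"]))

def best_site(current, candidates, work_left, ckpt_mb):
    best, best_time = None, remaining_time(current, work_left)
    for s in candidates:
        move = ckpt_mb / s["bandwidth_mb_s"]    # checkpoint file transfer
        total = move + remaining_time(s, work_left)
        if total < best_time:
            best, best_time = s, total
    return best                                  # None = do not migrate

here = {"perf_index": 1.0, "cpu_load": 0.8, "bandwidth_mb_s": 10}
there = {"perf_index": 2.0, "cpu_load": 0.1, "bandwidth_mb_s": 10}
print(best_site(here, [there], work_left=100.0, ckpt_mb=200.0))
```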

An Efficient Scheduling Method Taking into Account Resource Usage Patterns on Desktop Grids (데스크탑 그리드에서 자원 사용 경향성을 고려한 효율적인 스케줄링 기법)

  • Hyun Ju-Ho;Lee Sung-Gu;Kim Sang-Cheol;Lee Min-Gu
    • Journal of KIISE: Computer Systems and Theory, v.33 no.7, pp.429-439, 2006
  • A desktop grid, a computing grid composed of the idle computing resources in a large network of desktop computers, is a promising platform for compute-intensive distributed computing applications. However, due to the limited reliability and predictability of its computing resources, effective scheduling of parallel computing applications on such a platform is a difficult problem. This paper proposes a new scheduling method aimed at reducing the total execution time of a parallel application on a desktop grid. The proposed method is based on utilizing the execution-behavior histories of individual computing nodes in the scheduling algorithm. To test the feasibility of this idea, execution trace data were collected from a set of 40 desktop workstations over a period of seven weeks. Based on these data, the execution of several representative parallel applications was simulated using trace-driven simulation. The simulation results show that the proposed method improves the execution time of the target applications significantly compared to previous desktop grid scheduling methods, with fewer instances of application suspension and failure. (A history-based scoring rule is sketched below.)
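
The core idea, turning each node's usage history into a scheduling score, can be sketched as below. The single availability fraction is an assumption; the paper's algorithm exploits richer usage patterns than this:

```python
# Score each node by (raw speed) x (historical availability) and send
# tasks to the highest-scoring nodes first.
histories = {                      # 1 = node was idle/usable in that slot
    "node1": [1, 1, 0, 1, 1, 1],
    "node2": [1, 0, 0, 0, 1, 0],
}
speeds = {"node1": 1.0, "node2": 3.0}

def effective_speed(node):
    h = histories[node]
    availability = sum(h) / len(h)         # fraction of past slots available
    return speeds[node] * availability

tasks = ["t%d" % i for i in range(4)]
ranked = sorted(histories, key=effective_speed, reverse=True)
assignment = {t: ranked[i % len(ranked)] for i, t in enumerate(tasks)}
print(assignment)   # faster-but-flaky vs. slower-but-reliable trade-off
```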