• Title/Summary/Keyword: processing time

Design of Image Processing Unit for Real Time Processing (Real Time Processing을 위한 Image Processing Unit의 설계)

  • 김진욱;김석태
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 1998.11a
    • /
    • pp.194-197
    • /
    • 1998
  • Image processing is not easy to carry out in real time, because image data are large in volume and the information embedded in them is interrelated in parallel. In this study, we propose an IPU (Image Processing Unit) for high-speed real-time image processing and IPASM (Image Processing Assembly), a high-speed real-time image processing language for driving it. We first explain the basic concept of the IPU and design an IPLU (Image Processing Logic Unit) for implementing it. We then build an IPASM interpreter that runs in a Windows 98 environment and use it to indirectly examine how the IPU operates.

  • PDF
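
A minimal sketch of what an interpreter for an image-processing assembly language might look like, in the spirit of the IPASM interpreter above. The abstract does not give IPASM's instruction set, so the mnemonics (LOAD, THRESH, INVERT, STORE) and the register model below are hypothetical illustrations only.

```python
# Toy interpreter for an image-processing "assembly" in the spirit of IPASM.
# The opcodes and register model are hypothetical; the abstract does not give
# IPASM's actual instruction set.

import numpy as np

def run(program, images):
    regs = {}                                  # named image registers
    for line in program.strip().splitlines():
        op, *args = line.split()
        if op == "LOAD":                       # LOAD dst name
            regs[args[0]] = images[args[1]].astype(np.float32)
        elif op == "THRESH":                   # THRESH dst src value
            regs[args[0]] = (regs[args[1]] > float(args[2])).astype(np.float32)
        elif op == "INVERT":                   # INVERT dst src
            regs[args[0]] = 1.0 - regs[args[1]]
        elif op == "STORE":                    # STORE name src
            images[args[0]] = regs[args[1]]
        else:
            raise ValueError(f"unknown opcode {op}")
    return images

images = {"frame": np.random.rand(4, 4)}
run("""
LOAD   r0 frame
THRESH r1 r0 0.5
INVERT r2 r1
STORE  mask r2
""", images)
print(images["mask"])
```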

Chaotic Behavior of a Single Machine Scheduling Problem with an Expected Mean Flow Time Measure (기대 평균흐름시간 최소화를 위한 단일설비 일정계획의 성능변동 분석)

  • Joo, Un Gi
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.41 no.1
    • /
    • pp.87-98
    • /
    • 2016
  • A single machine scheduling problem for jobs with stochastic processing times is considered in this study. Shortest processing time (SPT) sequencing according to the expected processing times of the jobs is optimal for minimizing the expected mean flow time when all the jobs to be scheduled have arrived and their expected processing times are known. However, SPT sequencing by expected processing time may not minimize the mean flow time once the actual processing times of the jobs are realized. This study evaluates the complexity of SPT sequencing by comparing the mean flow times of schedules based on the expected and the actual processing times of randomly generated jobs. The evaluation shows that SPT sequencing by expected processing time exhibits chaotic variation relative to the optimal mean flow time, and that the relative deviation from the optimum increases as the number of jobs, the processing time, or the coefficient of variation increases.
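
A small simulation sketch of the comparison described above: jobs are sequenced by SPT on their expected processing times, and the resulting mean flow time is compared with the optimum obtained by SPT on the realized processing times. The uniform/lognormal distributions and parameter values are assumptions for illustration, not the paper's experimental design.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_flow_time(proc_times):
    """Mean flow time of a single-machine schedule run in the given order."""
    completion = np.cumsum(proc_times)
    return completion.mean()

n_jobs, cv = 20, 0.5                                   # assumed size and coefficient of variation
expected = rng.uniform(1.0, 10.0, n_jobs)              # expected processing times
sigma = np.sqrt(np.log(1.0 + cv**2))                   # lognormal noise with the chosen CV
actual = expected * rng.lognormal(-sigma**2 / 2, sigma, n_jobs)   # realized times, mean = expected

spt_on_expected = np.argsort(expected)                 # schedule chosen before actual times are known
spt_on_actual = np.argsort(actual)                     # optimal schedule in hindsight

f_expected = mean_flow_time(actual[spt_on_expected])
f_optimal = mean_flow_time(actual[spt_on_actual])
print(f"relative deviation from optimum: {100 * (f_expected - f_optimal) / f_optimal:.2f}%")
```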

A Model for Performance Analysis of the Information Processing System with Time Constraint (시간제약이 있는 정보처리시스템의 성능분석 모형)

  • Hur, Sun;Joo, Kook-Sun;Jeong, Seok-Yun;Yun, Joo-Deok
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.36 no.2
    • /
    • pp.138-145
    • /
    • 2010
  • In this paper, we consider an information processing system that organizes collected data into meaningful information once the number of data items gathered from multiple sources reaches a predetermined threshold, and then either acts on the processed information or transmits it to other devices or systems. We derive an analytical model for the time it takes to finish processing the information after data collection begins. From this model, we develop design criteria for controlling the various parameters of the information processing system so that processing is completed within a given time constraint. We also analyze a discrete-time model for packet-switching networks in which the data follow no particular arrival or drop pattern. Through numerical examples, we analyze the relationship between the number of required packets and the average information processing time, and we show that the proposed model can be used to design a system that meets user requirements while balancing information quality against information processing time under time constraints.
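
The analytical model itself is not reproduced here; the sketch below only simulates the quantity the paper targets, namely the time from the start of data collection until processing finishes, and the probability of meeting a time constraint. Independent Poisson sources, the constant processing time, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def time_to_information(n_required, arrival_rates, proc_time):
    """Time from the start of collection until processing finishes: data items
    arrive from several independent Poisson sources and processing starts once
    n_required items have been collected."""
    total_rate = sum(arrival_rates)                    # superposed Poisson stream
    interarrivals = rng.exponential(1.0 / total_rate, n_required)
    return interarrivals.sum() + proc_time

deadline = 5.0                                         # time constraint (seconds)
samples = np.array([time_to_information(n_required=20,
                                        arrival_rates=[2.0, 3.0, 5.0],
                                        proc_time=0.5)
                    for _ in range(10_000)])
print("mean completion time:", samples.mean())
print("P(complete within deadline):", (samples <= deadline).mean())
```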

Squall: A Real-time Big Data Processing Framework based on TMO Model for Real-time Events and Micro-batch Processing (Squall: 실시간 이벤트와 마이크로-배치의 동시 처리 지원을 위한 TMO 모델 기반의 실시간 빅데이터 처리 프레임워크)

  • Son, Jae Gi;Kim, Jung Guk
    • Journal of KIISE
    • /
    • v.44 no.1
    • /
    • pp.84-94
    • /
    • 2017
  • Recently, the importance of velocity, one of the characteristics of big data (5V: Volume, Variety, Velocity, Veracity, and Value), has been emphasized in data processing, which has led to several studies on real-time stream processing, a technology for quick and accurate processing and analysis of big data. In this paper, we propose the Squall framework, built on Time-triggered Message-triggered Object (TMO) technology, a model that is widely used for real-time big data processing, and we describe the framework and its operation on a single node. TMO is an object model that supports non-regular, condition-triggered real-time processing as well as regular periodic processing over a given interval. Squall can handle both real-time event streams and micro-batch processing of big data, with better performance than Apache Storm and Spark Streaming. However, additional development is needed to process real-time streams across multiple nodes, as most existing frameworks do. In conclusion, the advantages of the TMO model can overcome the drawbacks of Apache Storm and Spark Streaming in real-time big data processing, and the TMO model has potential as a useful model for real-time big data processing.
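
A heavily simplified, thread-based illustration of the two TMO method kinds mentioned above: a message-triggered path that handles each event as it arrives and a time-triggered path that periodically drains a micro-batch. This is not the Squall framework or a TMO execution engine; all names, timings, and parameters are assumptions.

```python
import threading
import time

class MiniTMO:
    """Toy object with a time-triggered method (periodic micro-batch) and a
    message-triggered method (per-event), echoing the two TMO method kinds."""

    def __init__(self, period_s=1.0):
        self.period_s = period_s
        self.batch = []
        self._stop = threading.Event()

    def message_triggered(self, event):
        """Called for every incoming event (real-time path)."""
        print("event processed immediately:", event)
        self.batch.append(event)                 # also queue it for the next micro-batch

    def time_triggered(self):
        """Runs periodically and drains the accumulated micro-batch."""
        while not self._stop.is_set():
            time.sleep(self.period_s)
            drained, self.batch = self.batch, []
            print(f"micro-batch of {len(drained)} events:", drained)

    def run(self, incoming):
        threading.Thread(target=self.time_triggered, daemon=True).start()
        for e in incoming:
            time.sleep(0.2)                      # pretend events trickle in
            self.message_triggered(e)
        time.sleep(self.period_s + 0.1)          # let one last batch fire
        self._stop.set()

MiniTMO(period_s=0.5).run(range(5))
```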

CPU Scheduling with a Round Robin Algorithm Based on an Effective Time Slice

  • Tajwar, Mohammad M.;Pathan, Md. Nuruddin;Hussaini, Latifa;Abubakar, Adamu
    • Journal of Information Processing Systems
    • /
    • v.13 no.4
    • /
    • pp.941-950
    • /
    • 2017
  • The round robin algorithm is regarded as one of the most efficient and effective CPU scheduling techniques in computing. It centres on the processing time required for a CPU to execute available jobs. Although there are other CPU scheduling algorithms based on processing time which use different criteria, the round robin algorithm has gained much popularity due to its optimal time-shared environment. The effectiveness of this algorithm depends strongly on the choice of time quantum. This paper presents a new effective round robin CPU scheduling algorithm. The effectiveness here lies in the fact that the proposed algorithm depends on a dynamically allocated time quantum in each round. Its performance is compared with both traditional and enhanced round robin algorithms, and the findings demonstrate an improved performance in terms of average waiting time, average turnaround time and context switching.
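
A short simulation sketch of a round robin scheduler with a per-round dynamic time quantum, reporting average waiting time, average turnaround time, and dispatch count. The abstract does not state the paper's quantum formula, so the mean of the remaining burst times is used here purely as an assumed stand-in.

```python
def round_robin_dynamic(bursts):
    """Round robin where, at the start of each round, the time quantum is set to
    the mean of the remaining burst times (an assumed rule; the paper's exact
    formula is not given in the abstract).  All jobs arrive at time 0."""
    remaining = list(bursts)
    completion = [0.0] * len(bursts)
    clock, dispatches = 0.0, 0
    while any(r > 0 for r in remaining):
        alive = [r for r in remaining if r > 0]
        quantum = sum(alive) / len(alive)            # dynamic quantum for this round
        for i, r in enumerate(remaining):
            if r <= 0:
                continue
            run = min(r, quantum)
            clock += run
            remaining[i] -= run
            dispatches += 1
            if remaining[i] <= 0:
                completion[i] = clock
    turnaround = completion                          # arrival times are all 0
    waiting = [t - b for t, b in zip(turnaround, bursts)]
    return sum(waiting) / len(bursts), sum(turnaround) / len(bursts), dispatches

avg_wait, avg_tat, ctx = round_robin_dynamic([24, 3, 3, 12, 8])
print(f"avg waiting {avg_wait:.1f}, avg turnaround {avg_tat:.1f}, dispatches {ctx}")
```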

ANALYSIS ON PROCESSING PERFORMANCE OF COMS LHGS

  • Bae, Hee-Jin;Ahn, Sang-Il
    • Proceedings of the KSRS Conference
    • /
    • 2007.10a
    • /
    • pp.105-108
    • /
    • 2007
  • The COMS LRIT/HRIT broadcast service should satisfy a 15-minute timeliness requirement. The timeliness requirement is important enough to impact the overall performance of the LHGS. Therefore, in this paper, LHGS processing was simulated with the LHGS prototype. First, the processing time of each LHGS process (per module) is measured without I/O time. Then, LHGS processing is performed under the worst-case scenario and the processing time is measured. Finally, the processing time and the time constraint are analyzed.

  • PDF
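
The LHGS modules themselves are not available here; the sketch below only illustrates the measurement pattern the paper describes, timing each processing stage separately (excluding I/O) and comparing the sum against the 15-minute timeliness budget. The stage names are placeholders, not the actual LHGS modules.

```python
import time

TIMELINESS_BUDGET_S = 15 * 60          # 15-minute LRIT/HRIT timeliness requirement

def timed(stage_times, name):
    """Decorator that records the pure processing time of one pipeline stage."""
    def wrap(fn):
        def inner(*args, **kwargs):
            t0 = time.perf_counter()
            out = fn(*args, **kwargs)
            stage_times[name] = stage_times.get(name, 0.0) + time.perf_counter() - t0
            return out
        return inner
    return wrap

stage_times = {}

# Placeholder stages standing in for the real LHGS modules (not the actual code).
@timed(stage_times, "compress")
def compress(data): return data

@timed(stage_times, "encrypt")
def encrypt(data): return data

@timed(stage_times, "format")
def format_lrit(data): return data

format_lrit(encrypt(compress(b"image segment")))

total = sum(stage_times.values())
for name, t in stage_times.items():
    print(f"{name:10s} {t * 1e3:8.3f} ms")
print("within budget:", total <= TIMELINESS_BUDGET_S)
```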

Query Processing based Branch Node Stream for XML Message Broker

  • Ko, Hye-Kyeong
    • International journal of advanced smart convergence
    • /
    • v.10 no.2
    • /
    • pp.64-72
    • /
    • 2021
  • XML message brokers are important because XML has become a practical standard for data exchange in many applications, and the brokers considered here serve many users. This paper studies the processing of twig pattern queries over XML documents using branch node streams within an XML message broker architecture, and proposes a method that reduces query processing time when documents are matched against XML twig patterns. When a twig query is received from a client, the proposed method removes from the branch node stream the branch-node values that do not contain the labeling values required by the query, so that the document nodes matching the leaf nodes of the twig query can be found with less navigation. As a result, leaf-node discovery time is reduced, the overall time needed to answer client queries decreases, and rapid query-response processing becomes possible.
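
A highly simplified sketch of the pruning idea described above: branch-node entries whose labels cannot contribute to the twig query are dropped before leaf matching. The region encoding, the one-level twig shape, and all names are illustrative assumptions, not the paper's algorithm or data structures.

```python
# Nodes carry region encoding (start, end, level); node A is an ancestor of B
# iff A.start < B.start and B.end < A.end.

from collections import namedtuple

Node = namedtuple("Node", "label start end level")

def prune_branch_stream(branch_stream, query_branch_label):
    """Drop branch-node entries whose label does not appear in the twig query."""
    return [n for n in branch_stream if n.label == query_branch_label]

def is_descendant(anc, desc):
    return anc.start < desc.start and desc.end < anc.end

def match_twig(branch_stream, leaf_streams, query):
    """Return (branch, leaf candidates) pairs satisfying a one-level twig query."""
    branch_label, leaf_labels = query
    results = []
    for b in prune_branch_stream(branch_stream, branch_label):
        leaves = []
        for lab in leaf_labels:
            cand = [l for l in leaf_streams.get(lab, []) if is_descendant(b, l)]
            if not cand:
                break                      # this branch node cannot satisfy the twig
            leaves.append(cand)
        else:
            results.append((b, leaves))
    return results

# Tiny document: <a><b/><c/></a><a><b/></a>, twig query //a[b][c]
branch_stream = [Node("a", 1, 6, 1), Node("a", 7, 10, 1)]
leaf_streams = {"b": [Node("b", 2, 3, 2), Node("b", 8, 9, 2)],
                "c": [Node("c", 4, 5, 2)]}
print(match_twig(branch_stream, leaf_streams, ("a", ["b", "c"])))
```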

Development of Real-Time Image Processing System Using GPU (GPU를 이용한 실시간 이미지 프로세싱 시스템)

  • Oh Jae-Hong;Kang Hoon;Lee Ja-Yong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.11 no.5
    • /
    • pp.393-397
    • /
    • 2005
  • When a real-time image processing application is implemented on a general-purpose computer, the CPU (Central Processing Unit) is usually heavily loaded, and in many cases the CPU alone cannot meet the real-time requirement at all. Most modern computers are equipped with powerful Graphics Processing Units (GPUs) to accelerate graphics operations, and the computing power of the GPU is increasingly outgrowing that of the CPU. If the powerful GPU is applied to more general operations beyond pure graphics, processing time can be reduced. In this study, we present techniques that apply the GPU to general operations such as image processing procedures. Our experimental results show that a significant speed-up can be achieved by using the GPU.
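
A present-day sketch of the same idea, offloading an image-filtering operation to the GPU and timing it against the CPU. It assumes CuPy and a CUDA-capable GPU are available; it is not the shader-based technique of the 2005 paper.

```python
# Offload a 9x9 box blur from CPU (NumPy/SciPy) to GPU (CuPy) and compare timings.
# Assumes CuPy is installed and a CUDA-capable GPU is present.

import time
import numpy as np
import scipy.ndimage as ndi
import cupy as cp
import cupyx.scipy.ndimage as cndi

img = np.random.rand(1920, 1080).astype(np.float32)   # synthetic grayscale frame

t0 = time.perf_counter()
cpu_out = ndi.uniform_filter(img, size=9)              # 9x9 box blur on the CPU
cpu_ms = (time.perf_counter() - t0) * 1e3

gpu_img = cp.asarray(img)                              # host -> device copy
cp.cuda.Device(0).synchronize()
t0 = time.perf_counter()
gpu_out = cndi.uniform_filter(gpu_img, size=9)         # same blur on the GPU
cp.cuda.Device(0).synchronize()                        # wait for the kernel to finish
gpu_ms = (time.perf_counter() - t0) * 1e3

print(f"CPU: {cpu_ms:.1f} ms  GPU: {gpu_ms:.1f} ms")
print("max abs diff:", float(cp.abs(gpu_out - cp.asarray(cpu_out)).max()))
```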

QoS Analysis of a Distributed System Considering the Processing Time (처리시간을 고려한 분산시스템의 서비스 품질분석)

  • Kim, Jung-Ho;Park, Jong-Hun
    • Journal of Korean Society for Quality Management
    • /
    • v.39 no.3
    • /
    • pp.412-421
    • /
    • 2011
  • In this paper, we introduce a Quality of Service (QoS) analytic model of a distributed system in which the processing nodes performing each task are decentralized and cooperate through a network. The model extends the service reliability model of Dai et al. (2003) by taking the processing time into account. The service is assumed to be provided by a centralized heterogeneous distributed system composed of subsystems managed by a control center. QoS is defined as the probability that a service is provided successfully within an allowed time, so we consider both the hardware/software reliability and the processing time, which includes the program execution time and the data transfer time. We derive the processing time distribution for a required service through convolution of the corresponding probability density functions. An application example is used to explain the procedure for computing the quality of service.
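
A minimal numerical sketch of the convolution step described above: an execution-time density and a transfer-time density are convolved into the total processing-time density, and the QoS is approximated as the reliability times the probability of finishing within the allowed time. The exponential densities, the reliability value, and the deadline are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

dt = 0.001                                   # time grid resolution (seconds)
t = np.arange(0, 10, dt)

exec_pdf = 2.0 * np.exp(-2.0 * t)            # execution time ~ Exp(rate=2), assumed
xfer_pdf = 5.0 * np.exp(-5.0 * t)            # transfer time  ~ Exp(rate=5), assumed

total_pdf = np.convolve(exec_pdf, xfer_pdf)[:len(t)] * dt   # density of the sum
total_cdf = np.cumsum(total_pdf) * dt

deadline = 2.0                               # allowed service time (seconds)
reliability = 0.99                           # assumed hardware/software reliability
qos = reliability * total_cdf[int(deadline / dt)]
print(f"P(service succeeds within {deadline}s) ~= {qos:.4f}")
```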

A Study on the Cost Accounting of Materials Processing in University Libraries (대학도서관 자료처리 원가계산에 관한 연구)

  • 이경호;심의순
    • Journal of Korean Library and Information Science Society
    • /
    • v.10
    • /
    • pp.157-191
    • /
    • 1983
  • The purpose of the study is to build a general cost accounting model for university libraries and to clarify the possible areas of its application by employing job cost accounting and process cost accounting methods. System analysis is performed on the acquisition, processing (cataloging and classification), and book shelving systems. The existing operation processes and the time required for each operation of these three systems are analyzed, from which detailed system flowcharts were drawn and job descriptions and job contents were identified. The results of the study can be summarized as follows. (1) Processing time per book in each system. Oriental books: acquisition, 8 min. 21 sec. under job cost accounting (after purchasing) and 15 min. 7 sec. under process cost accounting; processing, 34 min. 40 sec. for a non-duplicate, 8 min. 49 sec. for a duplicate, and 4 min. 44 sec. when more than two copies are purchased at a time; book shelving, 1 min. 43 sec. Western books: acquisition, 9 min. 1 sec. under job cost accounting and 15 min. 7 sec. under process cost accounting; processing, 32 min. 58 sec. for a non-duplicate, 9 min. 26 sec. for a duplicate, and 5 min. 33 sec. when more than two copies are purchased at a time; book shelving, 1 min. 43 sec. (2) Total processing time and processing cost per book (including material cost). Oriental books: non-duplicate, 51 min. 30 sec., 2,791 won; duplicate, 25 min. 39 sec., 1,580 won; more than two copies purchased at a time, 21 min. 34 sec., 1,368 won. Western books: non-duplicate, 49 min. 48 sec., 3,189 won; duplicate, 26 min. 16 sec., 1,846 won; more than two copies purchased at a time, 22 min. 23 sec., 1,388 won.

  • PDF
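
A quick arithmetic check of the per-book totals in result (2): each total equals the acquisition time under process cost accounting plus the processing time plus the shelving time from result (1). Costs are not recomputed because the underlying labor rates are not given in the abstract.

```python
def sec(m, s):
    return m * 60 + s

def fmt(total):
    return f"{total // 60} min. {total % 60} sec."

acquisition = sec(15, 7)                       # process cost accounting, both book types
shelving = sec(1, 43)
processing = {
    ("Oriental", "non-duplicate"): sec(34, 40),
    ("Oriental", "duplicate"): sec(8, 49),
    ("Oriental", ">2 copies"): sec(4, 44),
    ("Western", "non-duplicate"): sec(32, 58),
    ("Western", "duplicate"): sec(9, 26),
    ("Western", ">2 copies"): sec(5, 33),
}

for (books, case), p in processing.items():
    print(f"{books:8s} {case:14s} total {fmt(acquisition + p + shelving)}")
```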