• Title/Summary/Keyword: Parallel Distributed Processing


Real-Time Monitoring of Resource for Distributed/Parallel Framework on the Web (웹 기반 분산/병렬 프레임워크상에서 실시간 자원 모니터링)

  • Kim, Su-Ja;Jeong, Jae-Hong;Song, Eun-Ha;Han, Sung-Kook;Joo, Su-Chong;Jeong, Young-Sik
    • Annual Conference of KIPS / 2003.05a / pp.117-120 / 2003
  • In a distributed/parallel system that uses the web's diverse resources for high-performance job processing, evaluating the performance of each host is important for balanced job allocation. However, such performance evaluations are hard to trust over time, and it is difficult to predict changes in a host's performance while a job is running. Efficient job scheduling that responds to performance changes is therefore needed, and the resource manager must monitor the hosts executing jobs. In this paper, we monitor the resources of each host so that efficient resource policies can be proposed to the resource manager and the system administrator, and we define performance evaluation criteria for each host based on the distributed/parallel system's job allocation mechanism. We also provide various visualizations so that the administrator can manage, in real time, resource information that reflects each host's changing performance.

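A minimal sketch of the kind of per-host resource sampling such a monitor needs, using the psutil library. The metrics chosen and the scoring weights below are illustrative assumptions, not the paper's evaluation criteria:

```python
import time
import psutil  # pip install psutil

def sample_host_metrics():
    """Collect a snapshot of the host resources a monitor would report."""
    return {
        "cpu_percent": psutil.cpu_percent(interval=1.0),  # 1-second CPU sample
        "mem_available_mb": psutil.virtual_memory().available / 2**20,
        "load_avg_1m": psutil.getloadavg()[0],            # 1-minute load average
    }

def performance_score(m):
    """Illustrative scoring rule: favor idle CPU and free memory.
    The weights are assumptions, not the paper's criteria."""
    return (100 - m["cpu_percent"]) * 0.7 + min(m["mem_available_mb"] / 1024, 1.0) * 30

if __name__ == "__main__":
    while True:
        metrics = sample_host_metrics()
        print(f"score={performance_score(metrics):.1f}  {metrics}")
        time.sleep(5)  # report to the resource manager every 5 seconds
```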

Continuous Control Message Exchange in Distributed Cognitive Radio Networks

  • Arega, Zerabruk G.;Kim, Bosung;Roh, Byeong-hee
    • Annual Conference of KIPS / 2014.04a / pp.206-209 / 2014
  • Control message exchange is a major task for cognitive radios (CRs), which must use spectrum opportunistically. This exchange requires a common control channel (CCC). Once the CCC is occupied by a primary user (PU), communication stops until a new CCC is set up; this takes substantial time, and if no free channel can be found, the halt continues for a long time. To prevent such interruptions, we propose combining two networks, WLAN and UWB, to keep communication going. In the proposed scheme, every cognitive radio has two transceivers: one for the overlay (WLAN) network and one for the UWB network. If the CCC of a CR in the WLAN is affected by a primary user, the CR switches from WLAN to UWB and continues communicating, since UWB is not affected by the PU: it keeps exchanging control messages over the UWB network while, in parallel, negotiating a new CCC over the WLAN network. This scheme removes the communication interruption that otherwise lasts until a new CCC is set up.
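A toy state machine sketching the proposed fallback behavior. The paper gives no implementation; spectrum sensing and CCC negotiation are stubbed out here as hypothetical random callbacks:

```python
import random

class CognitiveRadio:
    """Toy model of the dual-transceiver scheme: control messages move to
    UWB when a primary user (PU) hits the WLAN common control channel (CCC)."""

    def __init__(self):
        self.control_link = "WLAN"  # current link carrying control messages

    def pu_detected_on_ccc(self):
        # Stub for spectrum sensing; a real CR would detect the PU's signal.
        return random.random() < 0.3

    def negotiate_new_ccc(self):
        # Stub for CCC negotiation carried out over the WLAN transceiver.
        return random.random() < 0.5

    def step(self):
        if self.control_link == "WLAN" and self.pu_detected_on_ccc():
            self.control_link = "UWB"   # keep exchanging control messages on UWB
        elif self.control_link == "UWB" and self.negotiate_new_ccc():
            self.control_link = "WLAN"  # new CCC found; return to the overlay network
        return self.control_link

cr = CognitiveRadio()
print([cr.step() for _ in range(10)])
```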

Real-Time Stock Price Prediction using Apache Spark (Apache Spark를 활용한 실시간 주가 예측)

  • Dong-Jin Shin;Seung-Yeon Hwang;Jeong-Joon Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.4 / pp.79-84 / 2023
  • Apache Spark, one of the fastest of recent distributed and parallel processing technologies, provides both real-time streaming and machine learning functionality. Although official documentation guides each function individually, no method is provided for fusing them to predict a specific value in real time. In this paper, we therefore study how to predict data values in real time by fusing these functions. The overall pipeline first collects stock price data downloaded via the Python programming language, then builds a regression model using the machine learning functionality, and finally predicts the adjusted closing price within the stock data in real time by fusing the real-time streaming functionality with the machine learning model.
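A minimal PySpark sketch of the fusion the abstract describes: a regression model trained on a batch of historical prices is applied to a streaming source. The socket source, file name, column names, and feature set are illustrative assumptions, not the paper's pipeline:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import split
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.appName("StockStreamPrediction").getOrCreate()

# 1) Batch side: train a regression model on historical prices
#    (assumed CSV with open/high/low/adj_close columns).
hist = spark.read.csv("history.csv", header=True, inferSchema=True)
assembler = VectorAssembler(inputCols=["open", "high", "low"], outputCol="features")
model = LinearRegression(featuresCol="features", labelCol="adj_close") \
    .fit(assembler.transform(hist))

# 2) Streaming side: parse incoming "open,high,low" lines and apply the model.
lines = spark.readStream.format("socket") \
    .option("host", "localhost").option("port", 9999).load()
parts = split(lines["value"], ",")
ticks = lines.select(
    parts.getItem(0).cast("double").alias("open"),
    parts.getItem(1).cast("double").alias("high"),
    parts.getItem(2).cast("double").alias("low"),
)
preds = model.transform(assembler.transform(ticks)) \
    .select("open", "high", "low", "prediction")

# 3) Print each micro-batch's predicted adjusted closing price.
preds.writeStream.outputMode("append").format("console").start().awaitTermination()
```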

Real-time multi-GPU-based 8KVR stitching and streaming on 5G MEC/Cloud environments

  • Lee, HeeKyung;Um, Gi-Mun;Lim, Seong Yong;Seo, Jeongil;Gwak, Moonsung
    • ETRI Journal / v.44 no.1 / pp.62-72 / 2022
  • In this study, we propose a multi-GPU-based 8KVR stitching system that operates in real time in both local and cloud machine environments. The proposed system first obtains multiple 4K video inputs, decodes them, and generates a stitched 8KVR video stream in real time. The generated 8KVR video stream can be downloaded and rendered omnidirectionally in player apps on smartphones, tablets, and head-mounted displays. To speed up processing, we adopt group-of-pictures-based distributed decoding/encoding and buffering with the NV12 format, along with multi-GPU-based parallel processing. Furthermore, we develop several algorithms, such as equirectangular-projection-based color correction, real-time CG overlay, and object-motion-based seam estimation and correction, to improve the stitching quality. From experiments in both local and cloud machine environments, we confirm the feasibility of the proposed 8KVR stitching system, with stitching speeds of up to 83.7 fps for six-channel and 62.7 fps for eight-channel inputs. In addition, in an 8KVR live streaming test on the 5G MEC/cloud, the proposed system achieves stable performance at 8K@30 fps in both indoor and outdoor environments, even during motion.
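A schematic of the one-worker-per-GPU multiprocessing pattern that multi-GPU parallel processing of this kind relies on. The actual decode/stitch/encode kernels are not described in the abstract, so the worker body below is a placeholder and the GPU count is an assumption:

```python
import os
import multiprocessing as mp

NUM_GPUS = 4  # illustrative; the paper uses multiple GPUs per machine

def gpu_worker(gpu_id, gop_queue):
    """Each process is pinned to one GPU and drains group-of-pictures jobs."""
    os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)  # pin before any CUDA init
    while True:
        gop = gop_queue.get()
        if gop is None:          # poison pill: no more GOPs to process
            break
        # Placeholder for the real pipeline: decode GOP -> stitch -> encode.
        print(f"GPU {gpu_id}: processed GOP {gop}")

if __name__ == "__main__":
    queue = mp.Queue()
    workers = [mp.Process(target=gpu_worker, args=(i, queue)) for i in range(NUM_GPUS)]
    for w in workers:
        w.start()
    for gop_index in range(32):   # distribute GOPs to idle GPUs via the shared queue
        queue.put(gop_index)
    for _ in workers:
        queue.put(None)
    for w in workers:
        w.join()
```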

The Application and Integration of an Improvement Technique for Layers of NETCONF (NETCONF 계층에 대한 개선 기법 적용 및 통합)

  • Lee, YangMin;Lee, JaeKee
    • Journal of KIISE / v.43 no.2 / pp.256-268 / 2016
  • Modern networks consist of various heterogeneous equipment and are often installed in a distributed manner, so the NETCONF standard was established to manage such networks centrally and efficiently. In this paper, we present a method that integrates each NETCONF layer into a single system based on the results of previous studies. In the RPC Layer, an asynchronous communication channel and parallel processing become possible through multi-threading. In the Operation Layer, operational efficiency is increased by using data groups with dependencies between pieces of equipment configuration data and by improving the data structure, enabling efficient processing of XML queries even with multiple managers. The data modeling techniques and grouping methods in the Content Layer are presented in detail for interoperability between the Operation Layer and the Content Layer. Finally, a GUI program was implemented, and its implementation is reported. We performed an experiment comparing the improved NETCONF with the standard NETCONF, measuring factors such as query processing ratio, query processing speed, and CPU utilization. The improved NETCONF demonstrated excellent query processing ratio and speed, whereas the standard NETCONF had excellent CPU utilization.
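A sketch of the multi-threaded RPC-layer idea using the ncclient library: several devices are queried concurrently instead of serially. The host addresses and credentials are placeholders, and this is a generic illustration rather than the authors' implementation:

```python
from concurrent.futures import ThreadPoolExecutor
from ncclient import manager  # pip install ncclient

DEVICES = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # placeholder management addresses

def fetch_running_config(host):
    """Open a NETCONF session and retrieve the running configuration."""
    with manager.connect(host=host, port=830, username="admin",
                         password="admin", hostkey_verify=False) as m:
        return host, m.get_config(source="running").data_xml

# Parallel <get-config> RPCs: one worker thread per managed device.
with ThreadPoolExecutor(max_workers=len(DEVICES)) as pool:
    for host, config_xml in pool.map(fetch_running_config, DEVICES):
        print(host, len(config_xml), "bytes of config")
```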

A Novel High Performance List Scheduling Algorithm for Distributed Heterogeneous Computing Systems (분산 이기종 컴퓨팅 시스템을 위한 새로운 고성능 리스트 스케줄링 알고리즘)

  • Yoon, Wan-Oh;Yoon, Jun-Chul;Yoon, Jung-Hee;Choi, Sang-Bang
    • Journal of the Institute of Electronics Engineers of Korea CI / v.47 no.1 / pp.135-145 / 2010
  • Efficient Directed Acyclic Graph (DAG) scheduling is critical for achieving high performance in a Distributed Heterogeneous Computing System (DHCS). In this paper, we present a new high-performance scheduling algorithm for DHCS, called the LCFT (Levelized Critical First Task) algorithm. LCFT is a list-based scheduling algorithm that uses a new attribute to efficiently select tasks for scheduling in a DHCS. The complexity of LCFT is $O((v+e)(p+\log v))$. The performance of the algorithm has been observed through its application to some practical DAGs and through comparison with existing scheduling algorithms such as PETS, HPS, HCPT, and GCA in terms of schedule length and speedup. The comparison shows that LCFT significantly outperforms PETS, HPS, HCPT, and GCA in both schedule length and speedup.
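The abstract does not detail LCFT's new attribute, but the general shape of list scheduling on heterogeneous processors can be sketched as follows: walk the tasks in priority order and greedily map each to the processor giving the earliest finish time. This is an assumption-level illustration of the algorithm family, not LCFT itself, and it ignores communication costs for brevity:

```python
def list_schedule(tasks, succ, cost, n_procs):
    """Generic list scheduling: 'tasks' in priority (topological) order,
    succ[t] = successors of t, cost[t][p] = runtime of task t on processor p."""
    pred = {t: set() for t in tasks}
    for t, successors in succ.items():
        for s in successors:
            pred[s].add(t)
    finish, proc_free = {}, [0.0] * n_procs
    for t in tasks:
        ready = max((finish[p] for p in pred[t]), default=0.0)
        # Pick the processor that minimizes this task's finish time.
        best_p = min(range(n_procs),
                     key=lambda p: max(ready, proc_free[p]) + cost[t][p])
        start = max(ready, proc_free[best_p])
        finish[t] = proc_free[best_p] = start + cost[t][best_p]
    return max(finish.values())  # schedule length (makespan)

# Tiny DAG: A -> B, A -> C on two heterogeneous processors.
succ = {"A": ["B", "C"], "B": [], "C": []}
cost = {"A": [2, 3], "B": [4, 1], "C": [3, 3]}
print(list_schedule(["A", "B", "C"], succ, cost, n_procs=2))  # -> 5.0
```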

SSQUSAR : A Large-Scale Qualitative Spatial Reasoner Using Apache Spark SQL (SSQUSAR : Apache Spark SQL을 이용한 대용량 정성 공간 추론기)

  • Kim, Jonghoon;Kim, Incheol
    • KIPS Transactions on Software and Data Engineering / v.6 no.2 / pp.103-116 / 2017
  • In this paper, we present the design and implementation of a large-scale qualitative spatial reasoner that can efficiently derive new qualitative spatial knowledge representing both topological and directional relationships between two arbitrary spatial objects, using Apache Spark SQL. Apache Spark SQL is well known as a distributed parallel programming environment that provides both efficient join operations and query processing over a variety of data in Hadoop cluster computer systems. In our spatial reasoner, the overall reasoning process is divided into six jobs: knowledge encoding, inverse reasoning, equal reasoning, transitive reasoning, relation refining, and knowledge decoding; the execution order of these jobs is determined in consideration of both logical causal relationships and computational efficiency. The knowledge encoding job reduces the size of the knowledge base to reason over by transforming the input knowledge from XML/RDF form into a more precise form. Repetition of the transitive reasoning and relation refining jobs usually consumes most of the computation time and storage of the overall reasoning process. To improve these jobs, our reasoner identifies the minimal disjunctive relations for qualitative spatial reasoning and, based on them, both reduces the composition table used for the transitive reasoning job and optimizes the relation refining job. Through experiments using a large-scale benchmark spatial knowledge base, the proposed reasoner showed high performance and scalability.
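The core of a transitive reasoning job of this kind is a relational self-join of the fact table against a composition table. A minimal PySpark SQL sketch of one such iteration follows; the relation names and composition entries are toy assumptions, not the reasoner's actual tables:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("SpatialReasonerSketch").getOrCreate()

# Known facts: (subject, relation, object), using toy topological relations.
facts = spark.createDataFrame(
    [("a", "inside", "b"), ("b", "inside", "c")], ["s", "r", "o"])
# Composition table: relation r1 composed with r2 yields r3.
comp = spark.createDataFrame(
    [("inside", "inside", "inside")], ["r1", "r2", "r3"])

facts.createOrReplaceTempView("facts")
comp.createOrReplaceTempView("comp")

# One transitive-reasoning pass as a distributed join; a full reasoner
# repeats this until no new tuples appear (a fixpoint).
derived = spark.sql("""
    SELECT f1.s AS s, c.r3 AS r, f2.o AS o
    FROM facts f1
    JOIN facts f2 ON f1.o = f2.s
    JOIN comp  c  ON f1.r = c.r1 AND f2.r = c.r2
""").distinct()
derived.show()   # -> (a, inside, c)
```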

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created while computer systems operate, are utilized in many processes, from system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive log data of banks. Most of the log data generated during banking operations come from handling clients' business, so a separate system is needed to gather, store, categorize, and analyze these log data. However, existing computing environments make it difficult both to expand storage flexibly for a massive amount of unstructured log data and to run the many functions required to categorize and analyze the stored data. In this study, we therefore use cloud computing technology to realize a cloud-based log processing system for unstructured log data that are difficult to handle with the existing computing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment, so computing resources such as storage space and memory can be expanded flexibly when storage must grow or log data increase rapidly. Moreover, to overcome the limits of existing analysis tools when the aggregated unstructured log data must be analyzed in real time, the system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive log data. Because HDFS (Hadoop Distributed File System) stores copies of each block of the aggregated log data, the system can automatically restore itself and continue operating after a malfunction. Finally, by building a distributed database on the NoSQL-based MongoDB, the system processes unstructured log data effectively. Relational databases such as MySQL have strict, complex schemas that are inappropriate for unstructured log data and make it hard to distribute stored data across additional nodes when the amount of data grows rapidly. NoSQL does not provide the complex computations that relational databases offer, but it expands easily through node dispersion when data increase rapidly; it is a non-relational database with a structure appropriate for unstructured data. NoSQL data models are usually classified as key-value, column-oriented, or document-oriented; of these, the proposed system uses MongoDB, a representative document-oriented store with a flexible schema. MongoDB was chosen because its flexible schema makes unstructured log data easy to process, it supports flexible node expansion as data grow rapidly, and it provides an Auto-Sharding function that expands storage automatically.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over a bank's entire client business process are sent to the cloud server, the log collector module collects and classifies them by log type and distributes them to the MongoDB and MySQL modules. The log graph generator module generates the log analysis results of the MongoDB, Hadoop-based analysis, and MySQL modules per analysis time and type of the aggregated log data, and presents them to the user through a web interface. Log data requiring real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's various analysis conditions, and are also parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation of log insertion and query performance against a system that uses only MySQL demonstrates the proposed system's superiority, and an optimal chunk size is identified by evaluating MongoDB's log insert performance over various chunk sizes.
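A minimal pymongo sketch of the MongoDB side of such a pipeline: schema-free insertion of heterogeneous log records followed by a simple per-type aggregation of the kind a log graph module would plot. The database, collection, and field names are illustrative assumptions:

```python
from datetime import datetime, timezone
from pymongo import MongoClient  # pip install pymongo

client = MongoClient("mongodb://localhost:27017")
logs = client["bank"]["logs"]    # database/collection names are placeholders

# Unstructured logs: documents in one collection may carry different fields.
logs.insert_many([
    {"type": "transfer", "amount": 150000, "branch": "Seoul",
     "ts": datetime.now(timezone.utc)},
    {"type": "login_fail", "client_ip": "203.0.113.7",
     "ts": datetime.now(timezone.utc)},
])

# Count events per log type: a summary the log graph generator could chart.
for row in logs.aggregate([{"$group": {"_id": "$type", "n": {"$sum": 1}}}]):
    print(row["_id"], row["n"])
```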

A Study on the Multi-Stage Inventory System - Especially with the Inventory Management of Fisheries Processing Industries- (다단계 재고시스템에 관한 연구 -수산물가공업의 재고관리를 중심으로-)

  • 이강우
    • The Journal of Fisheries Business Administration / v.21 no.2 / pp.55-84 / 1990
  • The objective of this study is to develop an inventory model for managing a stocking point that sells processed fisheries products. The study first sets up fisheries processing, food, apparel, pharmaceutical, and electronic/electrical companies as a population, then empirically compares the inventory characteristics of their final products through a survey of a randomly selected sample. The major inventory characteristics of processed fisheries products obtained from this analysis can be summarized as follows: 1) Demand for processed fisheries products shows wide seasonal fluctuations, because the supply of raw materials (i.e., fisheries products) depends heavily on the productive capacity of nature. 2) Fisheries processing companies were found to be the worst at inventory management among the industries sampled, even though their inventory managers' self-ratings of their inventory management systems were relatively higher than those at the other companies. 3) The portion of inventory holding cost in total inventory-relevant cost is very high for processed fisheries products compared with the final products of the other industries. 4) Processed fisheries products are distributed to final consumers through roughly two distribution echelons and use a parallel-type inventory system in their distribution structure. To reflect these characteristics, an inventory model with partial backorders under stochastic lead time is developed, and an iterative solution method is provided for the model. The study then analyzes the model's sensitivity to the standard deviation of lead time through numerical examples.

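The abstract does not reproduce the model itself. As a rough orientation, the deterministic special case with fully planned backorders can be evaluated in closed form as sketched below; the paper's actual model adds partial backorders and stochastic lead time, which this sketch deliberately omits, and all parameter values are invented for illustration:

```python
import math

# Textbook EOQ with planned backorders: the deterministic special case of
# the family of models discussed above.
D, K, h, p = 1200.0, 50.0, 2.0, 8.0  # demand/yr, order cost, holding, backorder cost

def total_cost(Q, b):
    """Ordering + holding + backordering cost per year for lot size Q
    with a planned maximum backorder level b."""
    return K * D / Q + h * (Q - b) ** 2 / (2 * Q) + p * b ** 2 / (2 * Q)

# Closed-form optimum for this special case.
Q_star = math.sqrt(2 * K * D / h * (h + p) / p)
b_star = Q_star * h / (h + p)
print(f"Q*={Q_star:.1f}, b*={b_star:.1f}, cost={total_cost(Q_star, b_star):.2f}")
```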

Enhancing the performance of taxi application based on in-memory data grid technology (In-memory data grid 기술을 활용한 택시 애플리케이션 성능 향상 기법 연구)

  • Choi, Chi-Hwan;Kim, Jin-Hyuk;Park, Min-Kyu;Kwon, Kaaen;Jung, Seung-Hyun;Nazareno, Franco;Cho, Wan-Sup
    • Journal of the Korean Data and Information Science Society / v.26 no.5 / pp.1035-1045 / 2015
  • Recent studies in big data analysis are showing promising results from utilizing main memory for rapid data processing. In-memory computing technology can be highly advantageous on high-performance servers with tens of gigabytes of RAM and multi-core processors, and the network constraints of such infrastructure can be lessened by combining in-memory technology with distributed parallel processing. This paper discusses applying this concept to a test taxi-hailing application without discarding its underlying RDBMS structure. Applying IMDG technology in the application's backend API, without restructuring the database schema, yields a 6- to 9-fold increase in data processing performance and throughput. In particular, throughput changes very little even as the data processing load increases.
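The pattern behind such a speedup is a read-through cache sitting between the API and the RDBMS. A minimal sketch follows, with a plain in-process dict standing in for a real distributed IMDG cluster and a sleep standing in for the database query; both are assumptions for illustration:

```python
import time

class ReadThroughCache:
    """Serve reads from memory; fall back to the RDBMS only on a miss.
    A real IMDG distributes this map across nodes; a dict stands in here."""

    def __init__(self, db_query, ttl_seconds=30.0):
        self.db_query = db_query          # slow backing lookup (placeholder)
        self.ttl = ttl_seconds
        self._store = {}                  # key -> (value, expiry time)

    def get(self, key):
        value, expires = self._store.get(key, (None, 0.0))
        if time.monotonic() < expires:
            return value                  # cache hit: no database round trip
        value = self.db_query(key)        # cache miss: hit the RDBMS once
        self._store[key] = (value, time.monotonic() + self.ttl)
        return value

def slow_taxi_lookup(taxi_id):
    time.sleep(0.05)                      # stand-in for an RDBMS query
    return {"taxi_id": taxi_id, "status": "available"}

cache = ReadThroughCache(slow_taxi_lookup)
for _ in range(3):                        # first call misses, the rest hit
    t0 = time.monotonic()
    cache.get("taxi-42")
    print(f"lookup took {(time.monotonic() - t0) * 1000:.1f} ms")
```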