Title/Summary/Keyword: Parallel Processing


Solving the Monkey and Banana Problem Using DNA Computing (DNA 컴퓨팅을 이용한 원숭이와 바나나 문제 해결)

  • 박의준;이인희;장병탁
    • Korean Journal of Cognitive Science, v.14 no.2, pp.15-25, 2003
  • The Monkey and Banana Problem is a classic example used to illustrate simple problem solving. It can be solved by conventional approaches, but these process inferences procedurally, and this procedural character becomes a limitation when solving complex problems. If we instead use DNA computing methods, which naturally realize massive parallel processing, the Monkey and Banana Problem can be solved effectively without weakening the fundamental aims above. In this paper, we design a representation of the problem using DNA molecules and show, through computer simulations based on this design, that a variety of solutions are generated. The simulation results are interesting in that they contrast with a conventional Prolog implementation of the Monkey and Banana Problem, which yields only a single optimal solution. In this sense, DNA computing overcomes the limitations of conventional approaches.
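As a rough illustration of the generate-and-filter idea behind this DNA computing approach, the Python sketch below encodes candidate action sequences as random "strands" and keeps only those that reach the goal; the action names and state layout are assumptions for illustration, not the paper's actual molecular encoding.

```python
import random

# Hypothetical encoding of the Monkey and Banana world; names and state layout
# are illustrative assumptions, not the paper's DNA encoding scheme.
ACTIONS = ["walk_to_box", "push_box_to_banana", "climb_box", "grasp_banana"]

def apply(state, action):
    """Return the next state, or None if the action is not applicable."""
    monkey, box, on_box, has_banana = state
    if action == "walk_to_box" and not on_box:
        return (box, box, False, has_banana)
    if action == "push_box_to_banana" and monkey == box and not on_box:
        return ("banana", "banana", False, has_banana)
    if action == "climb_box" and monkey == box and not on_box:
        return (monkey, box, True, has_banana)
    if action == "grasp_banana" and on_box and box == "banana":
        return (monkey, box, True, True)
    return None

def random_plan(max_len=6):
    """Analogue of one DNA strand: a random candidate action sequence."""
    state = ("door", "window", False, False)   # monkey, box, on_box?, banana?
    plan = []
    for _ in range(max_len):
        action = random.choice(ACTIONS)
        nxt = apply(state, action)
        if nxt is None:
            return None                        # infeasible strand is filtered out
        plan.append(action)
        state = nxt
        if state[3]:                           # banana grasped: goal reached
            return plan
    return None

# Massively parallel generate-and-filter, emulated sequentially here:
solutions = {tuple(p) for p in (random_plan() for _ in range(100000)) if p}
for plan in solutions:
    print(" -> ".join(plan))
```

Because every candidate strand is evaluated independently, a real DNA computation explores all of them at once, which is what yields multiple distinct solutions rather than the single plan returned by a deterministic Prolog search.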


GPU-accelerated Lattice Boltzmann Simulation for the Prediction of Oil Slick Movement in Ocean Environment (GPU 가속 기술을 이용한 격자 볼츠만법 기반 원유 확산 과정 시뮬레이션)

  • Ha, Sol;Ku, Namkug;Roh, Myung-Il
    • Korean Journal of Computational Design and Engineering, v.18 no.6, pp.399-406, 2013
  • This paper describes a new simulation technique for advection-diffusion phenomena over the sea surface using the lattice Boltzmann method (LBM), capable of predicting oil dispersion from tankers. The LBM is used to solve the pollutant transport problem within the framework of the ocean environment. The sea space is represented by lattices, where each lattice cell carries information on oil transport. Since dispersed oil (i.e., oil droplets) at sea is transported by convection due to waves, buoyancy, and turbulent diffusion, the conservation of mass and several physical oil-transport rules are used in the prediction model. Since the LBM is built on uniform lattices and simple local rules, it can easily be accelerated by parallel mechanisms such as GPU computing. The proposed LBM model is used to simulate a simple pollution event with 10,000 kL of oil pollutants. The simulation results indicate that the GPU-accelerated LBM is six times faster than the same simulation without the GPU.
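For readers unfamiliar with the method, a minimal D2Q5 lattice Boltzmann sketch for advection-diffusion of a concentration field is given below; the grid size, drift velocity, and relaxation time are illustrative assumptions rather than the paper's parameters, and NumPy vectorization stands in here for the GPU's data parallelism.

```python
import numpy as np

# Minimal D2Q5 lattice Boltzmann sketch for advection-diffusion of an oil
# concentration field; all parameters are assumed illustrative values.
NX, NY = 200, 200
w = np.array([1/3, 1/6, 1/6, 1/6, 1/6])                   # D2Q5 weights
e = np.array([[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1]])  # lattice velocities
tau = 0.8                                                  # relaxation time (sets diffusivity)
u = np.array([0.05, 0.02])                                 # imposed drift (wind/current)

C = np.zeros((NX, NY))
C[NX // 2 - 5:NX // 2 + 5, NY // 2 - 5:NY // 2 + 5] = 1.0  # initial oil patch

def feq(C, u):
    """Equilibrium distributions for the advection-diffusion LBM."""
    cs2 = 1.0 / 3.0
    return np.stack([w[i] * C * (1 + (e[i] @ u) / cs2) for i in range(5)])

f = feq(C, u)
for _ in range(500):
    # Streaming: shift each population along its lattice velocity (periodic).
    for i in range(5):
        f[i] = np.roll(f[i], shift=(e[i][0], e[i][1]), axis=(0, 1))
    # Collision (BGK): relax toward the local equilibrium.
    C = f.sum(axis=0)
    f += (feq(C, u) - f) / tau

print("total mass:", C.sum())   # mass is conserved by the scheme
```

Each lattice site is updated from purely local information (stream, then relax toward equilibrium), which is why the scheme maps so naturally onto GPU threads.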

Incorporating Resource Dynamics to Determine Generation Adequacy Levels in Restructured Bulk Power Systems

  • Felder, Frank A.
    • KIEE International Transactions on Power Engineering, v.4A no.2, pp.100-105, 2004
  • Installed capacity markets in the northeast of the United States ensure that adequate generation exists to satisfy the regional loss of load probability (LOLP) criterion. LOLP studies are conducted to determine the amount of capacity that is needed, but they do not consider several factors that substantially affect the calculated distribution of available capacity. These studies do not account for the fact that generation availability increases during periods of high demand, and therefore high prices; for common-cause failures that result in multiple generation units being unavailable at the same time; or for the negative correlation between load and available capacity due to temperature and humidity. A categorization of incidents in an existing bulk power reliability database is proposed to analyze the existence and frequency of independent failures and of those associated with resource dynamics. These findings are augmented with other empirical results. Monte Carlo methods are proposed to model these resource dynamics. Using the IEEE Reliability Test System as a single-bus case study, the LOLP results change substantially when these factors are considered. Better data collection is necessary to support the more comprehensive modeling of resource adequacy that is proposed. In addition, a parallel processing method is used to offset the increase in computational time required to model these dynamics.
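A hedged Monte Carlo sketch of the kind of LOLP calculation described here is shown below; the unit capacities, outage rates, load distribution, and common-cause model are illustrative assumptions, not values from the study or the IEEE Reliability Test System.

```python
import random

# Monte Carlo LOLP sketch with a load-correlated outage rate and a simple
# common-cause failure mode; all numbers are illustrative assumptions.
units = [200.0] * 10 + [100.0] * 10                 # unit capacities (MW)
base_for = 0.05                                      # baseline forced outage rate
common_cause_prob = 0.002                            # prob. several units fail together
peak_load, load_spread = 2500.0, 300.0

def sample_hourly_state():
    load = random.gauss(peak_load, load_spread)
    # Resource dynamics: availability improves at high load (and high prices),
    # modeled here as a crude scaling of the outage rate.
    outage_rate = base_for * (0.7 if load > peak_load else 1.0)
    available = sum(c for c in units if random.random() > outage_rate)
    if random.random() < common_cause_prob:          # common-cause event
        available -= sum(random.sample(units, 3))    # three units lost at once
    return load, max(available, 0.0)

trials, loss_of_load = 200_000, 0
for _ in range(trials):
    load, capacity = sample_hourly_state()
    if capacity < load:
        loss_of_load += 1

print("estimated LOLP:", loss_of_load / trials)
```

Because the sampled hours are independent, the trials can be split across processors, which is the parallel-processing remedy the paper mentions for the added computational cost.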

Design and Implementation of Precision Time Synchronization in Wireless Networks Using ZigBee (ZigBee를 이용한 무선 네트워크 환경에서의 정밀 시각 동기 기법 설계 및 구현)

  • Cho, Hyun-Tae;Son, Sang-Hyun;Baek, Yun-Ju
    • The Journal of Korean Institute of Communications and Information Sciences, v.33 no.5A, pp.561-570, 2008
  • Time synchronization is essential for a number of network applications such as high-speed communication and parallel/distributed processing systems. As the era of ubiquitous computing is ushered in, highly precise time synchronization in wireless networks has become necessary. This paper presents the design and implementation of high-precision time synchronization in wireless networks using ZigBee. To meet the precision requirements, we analyze and reduce error factors such as the latency and jitter of the protocol stack in wireless environments. In addition, this paper includes experiments and performance evaluations of our system. The results show that nodes in the network maintain their clocks to within a 50-nanosecond offset from the reference clock.
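As a generic illustration of the clock-offset arithmetic such a scheme relies on, the sketch below computes offset and path delay from a two-way timestamp exchange; the exchange pattern and the variable names are standard assumptions, not the paper's exact protocol.

```python
# Offset estimation from a two-way timestamp exchange, the kind of calculation
# a ZigBee node could perform against a reference node. The exchange pattern
# is a generic assumption, not the paper's exact protocol.
def estimate_offset(t1, t2, t3, t4):
    """
    t1: request sent (client clock)     t2: request received (reference clock)
    t3: reply sent (reference clock)    t4: reply received (client clock)
    Assumes a symmetric propagation delay; MAC-layer timestamping is what keeps
    protocol-stack latency and jitter out of these values.
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # reference clock minus client clock
    delay = ((t4 - t1) - (t3 - t2)) / 2.0    # one-way propagation delay
    return offset, delay

# Example with microsecond timestamps: client runs 150 us ahead of the
# reference, symmetric 100 us path delay -> prints offset -150.0, delay 100.0.
offset, delay = estimate_offset(t1=1_000_150, t2=1_000_100,
                                t3=1_000_120, t4=1_000_370)
print(offset, delay)
```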

Intelligent Control of Multivariable Process Using Immune Network System

  • Kim, Dong-Hwa
    • Proceedings of the KIEE Conference, 2001.07d, pp.2126-2128, 2001
  • This paper suggests that an immune network algorithm based on fuzzy sets can be used effectively to tune a PID controller for multivariable or nonlinear processes. The artificial immune network provides a parallel, decentralized processing mechanism for various situations, since antibodies communicate with each other among different species of antibodies/B-cells through stimulation and suppression chains that form a large-scale network. In addition, the structure of the network is not fixed but varies continuously. A number of tuning technologies have been considered for PID controllers; as less common methods, fuzzy and neural network techniques, or their combinations, have also been applied. However, the latter has yet to see practical application, and the former requires considerable experience and skill during the tuning procedure. Accordingly, this paper uses fuzzy sets so that the stimulation and suppression relationship between antibody and antigen can be controlled more adaptively against external conditions, including noise or plant disturbances. The fuzzy-set-based immune network suggested here is applied to PID controller tuning for a multivariable process with two inputs and one output, and simulation results are presented.
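A loose sketch of an immune-network-style search for PID gains is given below; the plant model, affinity measure, and stimulation/suppression update are illustrative assumptions and do not reproduce the paper's fuzzy-set formulation.

```python
import random

# Immune-network-style PID tuning sketch: antibodies are candidate gain
# triples, affinity to the "antigen" (the control task) is measured by a
# simple plant simulation, and concentrations follow a crude
# stimulation-minus-suppression rule. All details are illustrative assumptions.
def plant_cost(kp, ki, kd, steps=200, dt=0.05):
    """Integrated absolute error of a first-order plant under PID control."""
    y, integral, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(steps):
        err = 1.0 - y                       # unit step reference
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv
        y += dt * (-y + u)                  # first-order plant: dy/dt = -y + u
        prev_err = err
        cost += abs(err) * dt
    return cost

# Antibodies: candidate (kp, ki, kd) triples, each with a concentration.
antibodies = [[random.uniform(0, 5), random.uniform(0, 2), random.uniform(0, 1), 1.0]
              for _ in range(30)]

for generation in range(40):
    for ab in antibodies:
        affinity = 1.0 / (1e-6 + plant_cost(*ab[:3]))      # stimulation by the antigen
        similar = sum(1 for other in antibodies
                      if abs(other[0] - ab[0]) < 0.2 and other is not ab)
        ab[3] += 0.1 * affinity - 0.05 * similar            # stimulation - suppression
    antibodies.sort(key=lambda a: a[3], reverse=True)
    # Replace the weakest antibodies with mutated copies of the strongest.
    for i in range(-5, 0):
        parent = random.choice(antibodies[:5])
        antibodies[i] = [max(0.0, g + random.gauss(0, 0.1)) for g in parent[:3]] + [1.0]

best = antibodies[0]
print("tuned gains kp=%.2f ki=%.2f kd=%.2f" % tuple(best[:3]))
```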


Design of Sequential System Controller Using Incidence Matrix (접속 행렬을 이용한 순차 시스템 제어기 설계)

  • 전호익;류창근;우광준
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers, v.12 no.1, pp.85-92, 1998
  • In this paper, we design a sequential system controller capable of processing parallel sequences, based on an incidence-matrix analysis of the control specification described by a Petri net. The sequential system controller consists of an input conditioning unit and a Petri net control unit, which is composed of a token control unit and a firing unit. The firing unit determines the firing condition of the transfer signal on the basis of the token status in the token control unit. With the proposed scheme, we can easily develop and implement sequential system controllers for automated warehousing systems, automated transportation systems, elevator systems, and so on, since the control specification can be modified simply by changing the contents of the incidence-matrix ROM and the functional capacity can easily be expanded thanks to the modular design.
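The incidence-matrix firing rule such a controller is built on can be summarized in a few lines; the small 3-place, 2-transition net below is an illustrative assumption, not the paper's control specification.

```python
import numpy as np

# Incidence-matrix view of a Petri net: a transition is enabled when its input
# places hold enough tokens, and firing updates the marking as M' = M + C @ f,
# where C is the incidence matrix and f the firing vector.
C_in = np.array([[1, 0],      # tokens consumed: rows = places, cols = transitions
                 [0, 1],
                 [0, 0]])
C_out = np.array([[0, 0],     # tokens produced
                  [1, 0],
                  [0, 1]])
C = C_out - C_in              # incidence matrix (the contents of the "incidence matrix ROM")

M = np.array([1, 0, 0])       # initial marking: one token in place 0

def enabled(M, t):
    """A transition is enabled if every input place has enough tokens."""
    return np.all(M >= C_in[:, t])

def fire(M, t):
    f = np.zeros(C.shape[1], dtype=int)
    f[t] = 1
    return M + C @ f

for t in (0, 1):              # fire the two transitions in sequence
    if enabled(M, t):
        M = fire(M, t)
        print("fired t%d, marking ->" % t, M)
```

A firing vector with several 1s fires non-conflicting transitions simultaneously, which is how the incidence-matrix form naturally accommodates the parallel sequences mentioned above.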


Flash Translation Layer for the Multi-channel and Multi-way Solid State Disk (다중-채널 및 다중-웨이반도체 디스크를 위한 플래시 변환 계층)

  • Park, Hyun-Chul;Shin, Dong-Kun
    • Journal of KIISE: Computing Practices and Letters, v.15 no.9, pp.685-689, 2009
  • Flash memory has several attractive features, such as low power consumption and fast access, so there has been a variety of research on using flash memory as a new storage medium. In particular, the solid state disk (SSD), which is composed of flash memory chips, has recently been replacing the hard disk. At present, SSDs adopt a multi-channel and multi-way architecture to exploit the advantages of parallel access. In this architecture, data are written to the SSD in units of a superblock, which groups multiple blocks together. This paper proposes two schemes that select, segment, and re-compose victim superblocks to optimize concurrent processing when a buffer flush occurs. The experimental results show that victim selection reduces superblock-based write operations by 35%, and superblock re-composition reduces them by an additional 9%.
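As a rough sketch of the victim re-composition idea (not the paper's actual two schemes), the snippet below picks, per channel, the candidate block with the fewest valid pages so that a flush can drive every channel in parallel; the geometry and data layout are assumptions for illustration.

```python
import random

# Rough sketch of re-composing a victim superblock in a multi-channel/multi-way
# FTL; the geometry, block bookkeeping, and policy are illustrative assumptions.
CHANNELS, WAYS, PAGES_PER_BLOCK = 4, 2, 64

# One candidate block per (channel, way), with a count of still-valid pages.
blocks = {(ch, way): random.randint(0, PAGES_PER_BLOCK)
          for ch in range(CHANNELS) for way in range(WAYS)}

def compose_victim_superblock(blocks):
    """Per channel, pick the block with the fewest valid pages (cheapest to reclaim)."""
    victim = {}
    for ch in range(CHANNELS):
        way = min(range(WAYS), key=lambda w: blocks[(ch, w)])
        victim[ch] = (way, blocks[(ch, way)])
    return victim

victim = compose_victim_superblock(blocks)
copy_back = sum(valid for _, valid in victim.values())   # pages to migrate before erase
print("victim superblock:", victim)
print("valid pages to copy back:", copy_back)
```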

Implementation and Performance Analysis of PC Clusters using Fast PCs & High Speed Network (초고속 네트워크를 이용한 PC 클러스터의 구현과 성능 평가)

  • Kim, Young-Tae;Lee, Yonh-Hee;Choi, Jun-Tae;Oh, Jai-Ho
    • Journal of KIISE: Computer Systems and Theory, v.29 no.2, pp.57-64, 2002
  • We implemented two fast PC clusters using fast PCs and a high-speed network. First, we built the first-generation cluster of 16 PCs and have used it for real-time operation at the Cheju Regional Meteorological Office. Next, we built the second-generation cluster of 16 dual-CPU PCs, which was improved based on a performance analysis of the first-generation cluster. In this research we also analyzed the performance of the two clusters, which have different CPUs and communication devices, using the parallel model MM5, which has been used for real-time weather forecasting.

Force monitoring of steel cables using vision-based sensing technology: methodology and experimental verification

  • Ye, X.W.;Dong, C.Z.;Liu, T.
    • Smart Structures and Systems, v.18 no.3, pp.585-599, 2016
  • Steel cables serve as key structural components in long-span bridges, and the force state of the steel cable is deemed to be one of the most important determinant factors representing the safety condition of bridge structures. The disadvantages of traditional cable force measurement methods have long been recognized, and the development of an effective alternative is still desired. In the last decade, vision-based sensing technology has been rapidly developed and broadly applied in the field of structural health monitoring (SHM). With the aid of a vision-based multi-point structural displacement measurement method, monitoring of the tensile force of the steel cable can be realized. In this paper, a novel cable force monitoring system integrated with a multi-point pattern matching algorithm is developed. The feasibility and accuracy of the developed vision-based force monitoring system have been validated by conducting uniaxial tensile tests of steel bars, steel wire ropes, and parallel strand cables on a universal testing machine (UTM), as well as a series of moving-load experiments on a scale arch bridge model. The comparative study of the experimental outcomes indicates that the results obtained by the vision-based system are consistent with those measured by the traditional method for cable force measurement.
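The abstract does not spell out the displacement-to-force conversion, but one common route, shown below purely as a generic illustration, extracts the cable's fundamental frequency from the vision-measured displacement history and applies the taut-string relation T = 4 m L^2 f1^2; the synthetic signal, mass per unit length, and span are assumed values.

```python
import numpy as np

# Back-of-the-envelope vibration route to cable force: in the real system the
# displacement history would come from the vision-based multi-point pattern
# matching; here a synthetic 2.5 Hz signal and the cable properties are
# illustrative assumptions.
fs = 60.0                                   # camera frame rate (Hz)
t = np.arange(0, 20, 1 / fs)
displacement = 2.0 * np.sin(2 * np.pi * 2.5 * t) + 0.1 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(displacement - displacement.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
f1 = freqs[spectrum.argmax()]               # fundamental frequency of the cable

m, L = 50.0, 100.0                          # mass per unit length (kg/m), span (m)
T = 4 * m * L**2 * f1**2                    # taut-string estimate of cable tension (N)
print("f1 = %.2f Hz, estimated tension = %.1f kN" % (f1, T / 1000))
```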

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services, v.14 no.6, pp.71-84, 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computing environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide a flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tools when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that allow the system to continue operating after it recovers from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data. Further, the strict schemas of relational databases make it difficult to add nodes for distributing the stored data when the amount of data increases rapidly. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is adopted in the proposed system because it makes it easy to process unstructured log data through a flexible schema structure, facilitates flexible node expansion when the amount of data is rapidly increasing, and provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to the type of log data and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis performed by the MongoDB module, the Hadoop-based analysis module, and the MySQL module, per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation is carried out against a log data processing system that uses only MySQL for inserting log data and estimating query performance; this evaluation demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a log data insert performance evaluation of MongoDB for various chunk sizes.
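A minimal pymongo sketch of the MongoDB side of such a pipeline is given below, assuming a MongoDB instance reachable at localhost; the database, collection, and field names are illustrative assumptions, not the system's actual configuration.

```python
from datetime import datetime
from pymongo import MongoClient

# Minimal sketch of storing free-schema log documents and aggregating per-hour
# counts for plotting; host, database, and field names are assumptions.
client = MongoClient("mongodb://localhost:27017")
logs = client["bank_logs"]["raw"]

logs.insert_one({
    "timestamp": datetime.utcnow(),
    "branch": "branch-042",                 # hypothetical field
    "type": "transaction",
    "payload": "free-form text of the original log line ...",
})

# Aggregate log volume per type and per hour, the kind of summary the
# log graph generator module would plot.
pipeline = [
    {"$group": {
        "_id": {"type": "$type",
                "hour": {"$dateToString": {"format": "%Y-%m-%d %H:00",
                                           "date": "$timestamp"}}},
        "count": {"$sum": 1},
    }},
    {"$sort": {"_id.hour": 1}},
]
for row in logs.aggregate(pipeline):
    print(row["_id"], row["count"])
```

The free schema means each log record can carry whatever fields it happens to have, while sharding and the Hadoop-side analysis handle growth in volume without changes to this insertion code.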