• Title/Summary/Keyword: massive parallel system

Search Results: 43

Spiral Magnetic Field Lines in a Hub-Filament Structure, Monoceros R2

  • Hwang, Jihye;Kim, Jongsoo
    • The Bulletin of The Korean Astronomical Society / v.45 no.1 / pp.59.3-60 / 2020
  • We present the results of polarization observations at submillimeter wavelengths towards Monoceros R2 (Mon R2). The polarized thermal dust emission was obtained with SCUBA-2/POL-2 at 450 µm and 850 µm simultaneously. This observation is part of the JCMT BISTRO survey project. The polarization angle distributions at 450 µm and 850 µm are similar, and the mean angle difference between the two wavelengths is 5.5 degrees. Mon R2 is one of the massive star-forming regions containing a clear hub-filament structure. The hub region shows star-formation activity, and the surrounding filaments provide channels along which matter moves into the hub. The role of magnetic fields in a hub-filament structure is not well known. Some studies have shown well-ordered polarization segments along a filamentary structure, and the magnetic field morphology traced by these segments is interpreted as helping gas flow along the filament. Our observations show that the filaments in Mon R2 have a spiral structure and that the magnetic field lines are parallel to the filaments. We interpret the spiral structure as being formed by a rotating hub-filament system with gas flowing along the filaments to the hub. We found several dust clumps at the central part of the hub region of Mon R2; they seem to be formed at locations where the spiral field lines meet each other. These results provide one observational example of magnetic fields playing a role in gas flow.
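
A note on the 5.5-degree figure above: polarization angles are only defined modulo 180 degrees, so a meaningful mean angle difference must wrap each difference into (-90, +90] before averaging. A minimal sketch of that statistic (the angle values below are invented for illustration, not BISTRO data):

```python
import numpy as np

def mean_angle_difference(theta_450, theta_850):
    """Mean absolute difference between two sets of polarization
    angles (degrees), respecting the 180-degree ambiguity."""
    diff = np.asarray(theta_450) - np.asarray(theta_850)
    # Wrap into (-90, +90] so that 179 deg and 1 deg count as
    # 2 degrees apart, not 178.
    wrapped = (diff + 90.0) % 180.0 - 90.0
    return np.mean(np.abs(wrapped))

# Hypothetical angles for illustration only.
print(mean_angle_difference([10.0, 95.0, 178.0], [12.0, 90.0, 2.0]))
# -> about 3.7 degrees
```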

Performance Analysis of Implementation on Image Processing Algorithm for Multi-Access Memory System Including 16 Processing Elements (16개의 처리기를 가진 다중접근기억장치를 위한 영상처리 알고리즘의 구현에 대한 성능평가)

  • Lee, You-Jin;Kim, Jea-Hee;Park, Jong-Won
    • Journal of the Institute of Electronics Engineers of Korea CI / v.49 no.3 / pp.8-14 / 2012
  • Improving the speed of image processing is in great demand given the spread of high-quality visual media and massive image applications such as 3D TV, movies, and AR (augmented reality). A SIMD computer attached to a host computer can accelerate various image processing tasks and massive data operations. MAMS is a multi-access memory system which, together with multiple processing elements (PEs), is adequate for building a high-performance pipelined SIMD machine. MAMS supports simultaneous access to pq data elements within a horizontal, vertical, or block subarray with a constant interval at an arbitrary position in an M×N array of data elements, where the number of memory modules (MMs), m, is a prime number greater than pq. MAMS-PP4 is the first realization of the MAMS architecture, consisting of four PEs in a single chip and five MMs. This paper presents the implementation of image processing algorithms and a performance analysis for MAMS-PP16, which consists of 16 PEs with 17 MMs, as an extension of the prior work, MAMS-PP4. The newly designed MAMS-PP16 has a 64-bit instruction format and an application-specific instruction set. The authors developed a simulator of the MAMS-PP16 system on which the implemented algorithms can be executed. The performance analysis was done with this simulator executing the implemented image processing algorithms. The results verify the consistent response of MAMS-PP16 through the pyramid operation in image processing algorithms compared with a Pentium-based serial processor: executing the pyramid operation on MAMS-PP16 yields consistent processing times, whereas the serial processor's response times vary at random.
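
The abstract's requirement that the number of memory modules m be a prime greater than pq is the classic condition for conflict-free skewed storage. The exact MAMS mapping function is not given above, so the linear skewing below is only an illustrative stand-in; for this configuration (m = 17, pq = 16) it does place every element of a p×q block, and of any row or column run, in a distinct module:

```python
# Illustrative prime-modulus skewing for conflict-free subarray access.
# NOTE: a generic textbook skewing function, not the actual MAMS mapping,
# which the abstract does not specify.
p, q = 4, 4            # subarray shape fetched per cycle (pq = 16 PEs)
m = 17                 # number of memory modules: prime, m > pq

def module(i, j):
    """Memory module assumed to hold array element (i, j)."""
    return (i * q + j) % m

def conflict_free(cells):
    """True if all cells land in distinct modules, i.e. they can
    be fetched in a single memory cycle."""
    mods = [module(i, j) for i, j in cells]
    return len(set(mods)) == len(mods)

# A p x q block at an arbitrary position (r, c):
r, c = 3, 5
block = [(r + a, c + b) for a in range(p) for b in range(q)]
print(conflict_free(block))   # True for this skewing and block shape
```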

A Character Analysis of the Woodland Edge in point of Landscape Ecology (수림가장자리의 경관생태적 특성분석)

  • Cho, Hyun-Ju;Ra, Jung-Hwa
    • Current Research on Agriculture and Life Sciences / v.25 / pp.13-18 / 2007
  • The aim of this research is to establish improvement guidelines through a landscape-ecological character analysis of the woodland edge, in order to cope with the ecological dysfunction of woodlands caused by massive development projects and thoughtless development in rural areas. The research results are summarized as follows. 1) From the landscape-ecological characteristic analysis of woodlands in all seven research sites, site 5 showed the most satisfactory proportion of appearance by vegetation layer and condition of composition. 2) The width of the woodland edge was 7.5 m at minimum, 17.0 m at maximum, and 11.4 m on average, and the minimum edge width was set at 10 m according to an integrated analysis of the example sites. 3) As a result of the flexibility analysis, sites 1, 2, and 5 showed a high value of 3, and it appears that curved rather than straight edges should be maintained in order to increase ecological function. A tendency toward straightening was prominent, and as woodland edges the sites were somewhat unsatisfactory as green networks and buffering systems. 4) Based on the results of the landscape-ecological character analysis, the main guidelines for improvement of the woodland edge were categorized into five for the parallel structure and three for the vertical structure. The improvement guidelines suggested by this research are meaningful in that they can serve as basic material for managing, more systematically and in a landscape-friendly way, the forest damage caused by road construction, various development projects, and the enlargement of agricultural land.

Implementation of Real-time Data Stream Processing for Predictive Maintenance of Offshore Plants (해양플랜트의 예지보전을 위한 실시간 데이터 스트림 처리 구현)

  • Kim, Sung-Soo;Won, Jongho
    • Journal of KIISE / v.42 no.7 / pp.840-845 / 2015
  • In recent years, Big Data has been a topic of great interest for the production and operation work of offshore plants as well as for enterprise resource planning. The ability to predict future equipment performance based on historical results can be useful for shuttling assets to more productive areas. Specifically, the centrifugal compressor is one of the major pieces of equipment in offshore plants. This machinery is very dangerous because it can explode on failure, so it is necessary to monitor its performance in real time. In this paper, we present a stream data processing architecture that can be used to compute the performance of a centrifugal compressor. Our system consists of two major components: a virtual tag stream generator and a real-time data stream manager. To provide scalability, we exploit a parallel programming approach that uses multi-core CPUs to process the massive amount of stream data. In addition, we provide experimental evidence demonstrating improvements in stream data processing for the centrifugal compressor.
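
Since the abstract only names the approach (multi-core parallel processing of windowed stream data), the sketch below is a generic Python rendering of that idea; the field names and the per-window performance formula are hypothetical, not the paper's virtual tags:

```python
from multiprocessing import Pool

def performance(window):
    # Stand-in per-window metric: ratio of mean discharge to mean
    # suction pressure. The paper's actual formula is not given.
    p_in = sum(s["p_in"] for s in window) / len(window)
    p_out = sum(s["p_out"] for s in window) / len(window)
    return p_out / p_in

def process_stream(windows, workers=4):
    # Fan the independent windows out across CPU cores.
    with Pool(processes=workers) as pool:
        return pool.map(performance, windows)

if __name__ == "__main__":
    # Hypothetical sensor windows, 10 samples each.
    stream = [[{"p_in": 1.0 + i * 0.01, "p_out": 3.0}] * 10
              for i in range(1000)]
    print(process_stream(stream)[:3])
```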

Gene Expression Analysis of Rat Liver Epithelial Cells in Response to Thioacetamide

  • Park, Joon-Suk;Yeom, Hye-Jung;Jung, Jin-Wook;Hwang, Seung-Yong;Lee, Yong-Soon;Kang, Kyung-Sun
    • Molecular & Cellular Toxicology / v.1 no.3 / pp.203-208 / 2005
  • Thioacetamide (TA) is a potent hepatotoxicant that requires metabolic activation by mixed-function oxidases. Microarray technology, which allows massively parallel gene expression profiling in a single hybridization experiment, has provided a powerful molecular genetic tool for studying biological responses to toxicants. In this study we focus on the use of toxicogenomics for gene expression analysis associated with hepatotoxicity in the rat liver epithelial cell line WB-F344 (WB). The WB cells were used to assess the toxic effects of TA. Two TA doses, causing 20% and 50% cell death, were chosen, and the cells were exposed for periods of 2 and 24 h. Our data revealed that following the 2-h exposure at both doses and the 24-h exposure at the low dose, few changes in gene expression were detected. However, after 24-h exposure of the cells to the high concentration, multiple changes in gene expression were observed. TA treatment gave rise predominantly to up-regulation of genes involved in the cell cycle and cell death, but down-regulation of genes involved in cell adhesion and calcium ion binding. Overall, exposure of WB cells to the higher dose of TA produced more changes in gene expression. These results show that TA regulates the expression of numerous genes via direct molecular signaling mechanisms in liver cells.
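
For readers unfamiliar with how "up-regulation" and "down-regulation" are called from microarray data, a common convention is a two-fold change cutoff on the treated/control expression ratio. A minimal sketch under that assumption (gene names, ratios, and thresholds invented for illustration, not from the paper):

```python
import pandas as pd

# Hypothetical treated/control expression ratios.
ratios = pd.Series(
    {"Ccnb1": 2.4, "Casp3": 1.9, "Cdh1": 0.45, "Calb1": 0.5, "Actb": 1.05}
)

UP, DOWN = 2.0, 0.5   # common two-fold cutoffs
up_regulated = ratios[ratios >= UP]
down_regulated = ratios[ratios <= DOWN]

print("up:", list(up_regulated.index))      # e.g., cell cycle / death genes
print("down:", list(down_regulated.index))  # e.g., adhesion / Ca binding
```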

Development of High-yielding Mutants of Streptomyces avermitilis for Avermectin B1a Production through Protoplast Fusion (원형질체 융합에 의한 Avermectin B1a 고생산성 Streptomyces avermitilis 균주 개발)

  • 김경희;송성기;정연호;정용섭;전계택
    • Microbiology and Biotechnology Letters / v.32 no.2 / pp.101-109 / 2004
  • In order to enhance the productivity of AVM B1a, produced by Streptomyces avermitilis as a secondary metabolite, we established a basic protocol for protoplast fusion with high-producing strains as fusion partners, and then obtained various kinds of fusants by adopting a massive strain-development procedure (a miniaturized strain-screening system). An alternative fusion method using UV and/or NTG mutation of protoplasts was developed to screen genetic recombinants without specific selectable markers. In this method, the mutants obtained by protoplast fusion after UV and/or NTG treatment (95% death rate) of the respective fusion partners (protoplasts of the respective mutants resistant to L-isoleucine antimetabolites such as O-methylthreonine and/or azaleucine) were regarded as DNA-recombined protoplast fusants. Notably, most of the protoplast recombinants obtained by the UV mutation method were able to biosynthesize AVM B1a at almost three times the average level of the parallel mother strains, a level almost equal to industrial productivity.

Relationships Between the Characteristics of the Business Data Set and Forecasting Accuracy of Prediction models (시계열 데이터의 성격과 예측 모델의 예측력에 관한 연구)

  • 이원하;최종욱
    • Journal of Intelligence and Information Systems / v.4 no.1 / pp.133-147 / 1998
  • Recently, many researchers have been trying to find deterministic equations that can accurately predict future events, based on chaos theory or fractal theory. The theory says that some events which seem very random but are internally deterministic can be accurately predicted by fractal equations. In contrast to conventional methods such as the AR, MA, or ARIMA models, the fractal equation attempts to discover a deterministic order inherent in the time series data set. In discovering deterministic order, researchers have found that neural networks are much more effective than conventional statistical models. Even though the prediction accuracy of a network can differ depending on its topological structure and modifications of the algorithm, many researchers have asserted that neural network systems outperform other systems because of the non-linear behaviour of the network models, their massive parallel processing mechanisms, and their generalization capability based on adaptive learning. However, a recent survey shows that the prediction accuracy of forecasting models can be determined by the model structure and the data structure. In experiments based on actual economic data sets, it was found that the prediction accuracy of the neural network model is similar to the performance level of the conventional forecasting model. In particular, for a data set that is deterministically chaotic, the AR model, a conventional statistical model, was not significantly different from the MLP model, a neural network model. This result shows that the forecasting model appropriate to a prediction task should be selected based on the characteristics of the time series data set. The characteristics of the data sets were analyzed by fractal analysis, measurement of the Hurst index, and measurement of Lyapunov exponents. In conclusion, no significant difference was found between a conventional forecasting model and a typical neural network model in forecasting future events for time series data that are deterministically chaotic.
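
The Hurst index mentioned above is commonly estimated by rescaled-range (R/S) analysis: the slope of log(R/S) against log(window size) approximates H, with H near 0.5 for random increments, H > 0.5 for persistent series, and H < 0.5 for mean-reverting ones. A rough illustrative estimator (not the paper's implementation):

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Crude rescaled-range (R/S) estimate of the Hurst exponent."""
    x = np.asarray(x, dtype=float)
    sizes, rs = [], []
    n = min_chunk
    while n <= len(x) // 2:
        chunks = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
        vals = []
        for c in chunks:
            dev = np.cumsum(c - c.mean())    # cumulative deviation
            r = dev.max() - dev.min()        # range R
            s = c.std()                      # standard deviation S
            if s > 0:
                vals.append(r / s)
        sizes.append(n)
        rs.append(np.mean(vals))
        n *= 2
    # Slope of log(R/S) versus log(n) estimates H.
    return np.polyfit(np.log(sizes), np.log(rs), 1)[0]

rng = np.random.default_rng(0)
print(hurst_rs(rng.normal(size=4096)))  # white noise: H near 0.5
```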

Distributed AI Learning-based Proof-of-Work Consensus Algorithm (분산 인공지능 학습 기반 작업증명 합의알고리즘)

  • Won-Boo Chae;Jong-Sou Park
    • The Journal of Bigdata / v.7 no.1 / pp.1-14 / 2022
  • The proof-of-work consensus algorithm used by most blockchains causes a massive waste of computing resources in the form of mining. Useful proof-of-work consensus algorithms have been studied to reduce this waste, but resource waste and mining-centralization problems remain at block creation. In this paper, the resource-waste problem in block generation is solved by replacing the relatively inefficient computation for block generation with distributed artificial intelligence model learning. In addition, by providing fair rewards to nodes participating in the learning process, nodes with weak computing power are motivated to participate, while performance similar to the existing centralized AI learning method is maintained. To show the validity of the proposed methodology, we implemented a blockchain network capable of distributed AI learning, experimented with reward distribution through resource verification, and compared the results of the existing centralized learning method with those of the blockchain-based distributed AI learning method. As future work, the paper concludes by suggesting problems and development directions that may arise when scaling up the blockchain mainnet and the artificial intelligence model.
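
The abstract does not spell out the block-acceptance rule, so the following is only a toy rendering of the general idea of replacing a hash puzzle with a training step that must demonstrably reduce a validation loss; every name and the acceptance criterion are assumptions, not the paper's protocol:

```python
import hashlib, json, random

def train_step(weights, seed):
    # Stand-in for one distributed-learning step on local data.
    rng = random.Random(seed)
    return [w + rng.uniform(-0.01, 0.01) for w in weights]

def val_loss(weights):
    # Stand-in validation loss: distance from an assumed optimum (0).
    return sum(w * w for w in weights)

def mine_block(prev_hash, weights, node_id):
    """A node 'mines' by producing a model update that actually
    improves the shared validation loss ("work" = useful learning)."""
    new_w = train_step(weights, seed=hash((prev_hash, node_id)))
    if val_loss(new_w) < val_loss(weights):
        body = {"prev": prev_hash, "node": node_id, "w": new_w}
        return hashlib.sha256(json.dumps(body).encode()).hexdigest(), new_w
    return None, weights   # no improvement, no block
```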

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from computer system inspection and process optimization to customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing a client's business, a separate log data processing system needs to be established. However, in existing computing environments it is difficult to realize flexible storage expansion for a massive amount of unstructured log data and to execute the considerable number of functions needed to categorize and analyze it. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the existing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, such as storage space and memory, under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that let the system continue operating after recovering from a malfunction. Finally, by establishing a distributed database using NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data; further, strict schemas like those of relational databases cannot expand nodes when the stored data must be distributed across nodes as the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases may provide, but it can easily expand through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system: it makes it easy to process unstructured log data through a flexible schema structure, facilitates flexible node expansion when the amount of data is rapidly increasing, and provides an Auto-Sharding function that automatically expands storage.
    The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to the type of log data and distributes it to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, measuring log insert and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a log data insert performance evaluation of MongoDB for various chunk sizes.
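
As a concrete flavor of the MongoDB module's role, the schema-free insertion below shows why a document store suits heterogeneous bank logs; the connection string, database, collection, and field names are hypothetical, not the paper's:

```python
from pymongo import MongoClient

# Hypothetical log-collector step: route raw bank log records into
# MongoDB as schema-free documents.
client = MongoClient("mongodb://localhost:27017")
logs = client["banklogs"]["unstructured"]

def collect(record: dict):
    # No fixed schema: records with entirely different fields can
    # coexist in one collection, which is the point of using
    # a document store for unstructured logs.
    logs.insert_one(record)

collect({"branch": "A01", "type": "transfer", "raw": "OK 120ms ..."})
collect({"branch": "B07", "type": "login", "user": "u123", "err": None})
```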

Performance Analysis on Declustering High-Dimensional Data by GRID Partitioning (그리드 분할에 의한 다차원 데이터 디클러스터링 성능 분석)

  • Kim, Hak-Cheol;Kim, Tae-Wan;Li, Ki-Joune
    • The KIPS Transactions:PartD / v.11D no.5 / pp.1011-1020 / 2004
  • A lot of work has been done to improve the I/O performance of systems that store and manage a massive amount of data by distributing the data across multiple disks and accessing them in parallel. Most of the previous work has focused on an efficient mapping from a grid cell, which is determined by the interval number of each dimension, to a disk number, on the assumption that each dimension is split into disjoint intervals so that the entire data space is GRID-partitioned. However, such work has ignored the effects of the GRID partitioning scheme itself on declustering performance. In this paper, we enhance the performance of mapping-function-based declustering algorithms by applying a good GRID partitioning method. For this, we propose an estimation model to count the number of grid cells intersected by a range query, and we apply the GRID partitioning scheme that minimizes the query result size among the possible schemes. While it is common to use binary partitioning for high-dimensional data, we choose fewer dimensions than binary partitioning requires and split several times along those dimensions, so that we can reduce the number of grid cells touched by a query. Several experimental results show that the proposed estimation model is accurate to within a 0.5% error ratio regardless of query size and dimensionality. We also improve the performance of a declustering algorithm based on the mapping function called the Kronecker Sequence, which has been known to be the best among mapping functions for high-dimensional data, by up to 23 times by applying an efficient GRID partitioning scheme.
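
To make the two ingredients concrete, the sketch below counts the grid cells an axis-aligned range query intersects under a given per-dimension split, and assigns cells to disks; the simple modular mapping is an illustrative stand-in, not the Kronecker Sequence evaluated in the paper:

```python
import math
from itertools import product

splits = [4, 4, 2]          # intervals per dimension (the GRID scheme)
num_disks = 5

def cells_touched(lo, hi):
    """Grid cells intersecting the axis-aligned query [lo, hi) in
    the unit cube, given uniform per-dimension splits."""
    ranges = []
    for d, k in enumerate(splits):
        first = int(math.floor(lo[d] * k))
        last = min(int(math.ceil(hi[d] * k)) - 1, k - 1)
        ranges.append(range(first, last + 1))
    return list(product(*ranges))

def disk_of(cell):
    # Stand-in modular mapping from grid cell to disk number.
    return sum(cell) % num_disks

cells = cells_touched([0.1, 0.3, 0.0], [0.6, 0.7, 0.4])
print(len(cells), {disk_of(c) for c in cells})  # 6 cells, their disks
```

Fewer, coarser splits along well-chosen dimensions shrink the cell count a query touches, which is exactly the lever the paper's partitioning scheme optimizes before any disk-mapping function is applied.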