• Title/Summary/Keyword: massive parallel system

Analysis of NEESgrid Computing and System for Korean Construction Test Equipments Infrastructure (NEESgrid 시스템의 구성과 기능별 역할 분석을 통한 우리나라 건설실험시설의 네트워크 시스템 구축)

  • Jeong, Tai Kyeong;Shim, Nak Hoon;Park, Young Suk
    • KSCE Journal of Civil and Environmental Engineering Research / v.26 no.4A / pp.689-692 / 2006
  • This paper presents the development of a Grid computing architecture that uses data and resources from distributed and parallel systems for construction test equipment, i.e., large-scale computer networks meant to provide access to massive computational facilities for very large communities of users, drawing on the experience of existing Grid architectures. On this basis, we present an efficient way to construct an infrastructure for construction test equipment.

An Efficient Design and Implementation of an MdbULPS in a Cloud-Computing Environment

  • Kim, Myoungjin;Cui, Yun;Lee, Hanku
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.8 / pp.3182-3202 / 2015
  • Flexibly expanding the storage capacity required to process a large amount of rapidly increasing unstructured log data is difficult in a conventional computing environment. In addition, implementing a log processing system that categorizes and analyzes unstructured log data is extremely difficult. To overcome these limitations, we propose and design a MongoDB-based unstructured log processing system (MdbULPS) for collecting, categorizing, and analyzing log data generated by banks. The proposed system includes a Hadoop-based analysis module for reliable parallel-distributed processing of massive log data. Furthermore, because the Hadoop distributed file system (HDFS) stores collected log data by replicating it in block units, the proposed system offers automatic recovery from system failures and data loss. Finally, by building a distributed database on the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. To evaluate the system, we conducted three performance tests on a local twelve-node test bed: comparing our system with a MySQL-based approach, comparing it with an HBase-based approach, and varying the chunk-size option. The experiments showed that our system performs better in processing unstructured log data. (A minimal sketch of the collect-and-categorize flow follows below.)
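
A minimal sketch of the collect-and-categorize flow described in the abstract, assuming a locally running mongod; the database, collection, and field names (`mdbulps`, `logs`, `level`, `branch`) are invented for illustration and are not the paper's:

```python
# Minimal sketch of MongoDB-based log collection and categorization,
# assuming a local mongod; field names are hypothetical, not from the paper.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
logs = client["mdbulps"]["logs"]

# Collect: insert unstructured log documents as-is (schema-free storage).
logs.insert_many([
    {"level": "ERROR", "branch": "seoul-01", "message": "timeout on transfer"},
    {"level": "INFO",  "branch": "busan-02", "message": "login ok"},
])

# Categorize: group log records by level with an aggregation pipeline.
for row in logs.aggregate([{"$group": {"_id": "$level", "count": {"$sum": 1}}}]):
    print(row["_id"], row["count"])
```

In the paper, categorization and analysis are backed by a Hadoop module over HDFS replicas; the aggregation pipeline here only stands in for the categorization step.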

Maximum Ratio Transmission for Space-Polarization Division Multiple Access in Dual-Polarized MIMO System

  • Hong, Jun-Ki;Jo, Han-Shin;Mun, Cheol;Yook, Jong-Gwan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.8 / pp.3054-3067 / 2015
  • High channel cross-polarization discrimination (XPD) is mainly observed in future wireless technologies such as small-cell networks and massive multiple-input multiple-output (MIMO) systems. Exploiting high XPD is therefore important, and space-polarization division multiple access (SPDMA) with a dual-polarized MIMO system could be a suitable solution for high-speed transmission in high-XPD environments, while also reducing the array size at the base station (BS). With SPDMA in a dual-polarized MIMO system, two parallel data signals can be transmitted on vertically and horizontally polarized antennas to serve different mobile stations (MSs) simultaneously, in contrast to conventional space division multiple access (SDMA) with a single-polarized MIMO system. This paper analyzes the performance of SPDMA with maximum ratio transmission (MRT) in a time division duplexing (TDD) system using a proposed dual-polarized MIMO spatial channel model (SCM), in comparison with conventional SDMA. Simulation results indicate how SPDMA exploits the high XPD as the number of MSs increases, and show that SPDMA performs very close to conventional SDMA with the same number of antenna elements but half the array size at the BS. (A minimal MRT sketch follows below.)
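
As background for the MRT precoding analyzed above: under TDD channel reciprocity, the MRT weight vector is the conjugate of the estimated downlink channel, normalized to unit transmit power. A minimal sketch with a randomly drawn dual-polarized channel (the antenna count and XPD value are illustrative assumptions, not the paper's SCM):

```python
# Minimal MRT sketch: weights are the normalized conjugate of the channel.
# Channel dimensions and XPD level are illustrative, not the paper's SCM.
import numpy as np

rng = np.random.default_rng(0)
nt = 4                       # transmit antennas per polarization (assumed)
xpd_db = 10.0                # cross-polarization discrimination (assumed)
leak = 10 ** (-xpd_db / 10)  # power leaking into the orthogonal polarization

# Co-polarized and cross-polarized channel coefficients for one MS.
h_co = (rng.standard_normal(nt) + 1j * rng.standard_normal(nt)) / np.sqrt(2)
h_cx = np.sqrt(leak) * (rng.standard_normal(nt)
                        + 1j * rng.standard_normal(nt)) / np.sqrt(2)
h = np.concatenate([h_co, h_cx])   # stacked dual-polarized channel

w = h.conj() / np.linalg.norm(h)   # MRT beamforming weights
print("received signal power gain:", abs(h @ w) ** 2)
```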

Implementation of Viterbi Decoder on Massively Parallel GPU for DVB-T Receiver (DVB-T 수신기를 위한 대규모 병렬처리 GPU 기반의 비터비 복호기 구현)

  • Lee, KyuHyung;Lee, Ho-Kyoung;Heo, Seo Weon
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.9 / pp.3-11 / 2013
  • Recently, much research has applied the massively parallel processing of GPUs to the implementation of communication systems. In this paper, we reduce software simulation time by applying a GPU with the sliding-block method to the Viterbi decoder of the DVB-T system, one of the European DTV standards. First, we implement the DVB-T system on a CPU and measure the time it takes to process one OFDM symbol. Second, we implement the Viterbi decoder in software on NVIDIA's massively parallel GPU. Stream processing is applied to reduce the overhead of data transfers between the CPU and GPU, memory coalescing is used to lower the global-memory access time, and the data structures are designed to maximize shared-memory usage. Consequently, the proposed method runs the Viterbi decoding process approximately 11 times faster in 2K mode and 60 times faster in 8K mode. (A sequential reference implementation of the decoder is sketched below.)
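
For reference, the sketch below is a plain sequential hard-decision Viterbi decoder for the rate-1/2, constraint-length-7 convolutional code with generator polynomials 171/133 (octal) specified by DVB-T; it shows the trellis search that the paper maps onto CUDA with the sliding-block method, not the GPU implementation itself:

```python
# Plain sequential hard-decision Viterbi decoder for the rate-1/2, K=7
# convolutional code (generators 171/133 octal) specified by DVB-T.
# The paper parallelizes this trellis search on a GPU; this sketch only
# shows the algorithm, not the CUDA implementation.
import numpy as np

G = (0o171, 0o133)        # generator polynomials (octal)
K = 7                     # constraint length
NSTATES = 1 << (K - 1)    # 64 trellis states

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << (K - 1)) | state          # newest bit enters the register
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi(received):
    INF = 1 << 30
    metric = [0] + [INF] * (NSTATES - 1)      # encoder starts in state 0
    paths = [[] for _ in range(NSTATES)]
    for i in range(0, len(received), 2):
        new_metric = [INF] * NSTATES
        new_paths = [None] * NSTATES
        for s in range(NSTATES):
            if metric[s] >= INF:
                continue                      # state not yet reachable
            for b in (0, 1):
                reg = (b << (K - 1)) | s
                ns = reg >> 1                 # successor state
                expected = [bin(reg & g).count("1") & 1 for g in G]
                m = (metric[s] + (expected[0] != received[i])
                     + (expected[1] != received[i + 1]))
                if m < new_metric[ns]:        # keep the survivor path
                    new_metric[ns], new_paths[ns] = m, paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[int(np.argmin(metric))]

bits = [int(b) for b in np.random.default_rng(1).integers(0, 2, 32)]
assert viterbi(encode(bits)) == bits          # exact recovery of a clean stream
```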

The photometric and spectroscopic study of the near-contact binary XZ CMi

  • Kim, Hye-Young;Kim, Chun-Hwey;Hong, Kyeongsoo;Lee, Jae Woo;Park, Jang-Ho;Lee, Chung-Uk;Song, Mi-Hwa
    • The Bulletin of The Korean Astronomical Society / v.43 no.2 / pp.60-60 / 2018
  • XZ CMi is known to be a near-contact binary composed of a hotter, more massive main-sequence primary star close to its Roche lobe and a Roche-lobe-filling giant/subgiant secondary. Previous investigations, however, disagree considerably, with mass ratios ranging from 0.38 to 0.83 and temperatures from 7,000 K to 8,876 K. To help resolve these two discrepancies, we conducted spectroscopic and photometric observations. A total of 34 high-resolution spectra were obtained over four nights between 2010 and 2018 with the Bohyunsan Optical Echelle Spectrograph (BOES) at the Bohyunsan Optical Astronomy Observatory (BOAO). In parallel, BVRI multi-band photometric observations were carried out over five nights in 2010 at the Sobaeksan Optical Astronomy Observatory (SOAO). In this presentation, we report the physical parameters of XZ CMi obtained from simultaneous analyses of the new double-lined radial velocity curves and new light curves, and briefly discuss the evolutionary status of the system. (A sketch of how the spectroscopic mass ratio follows from the two velocity curves is given below.)
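
For context on the mass-ratio question above: with double-lined radial velocities and a circular orbit, the spectroscopic mass ratio is q = K1/K2, the ratio of the fitted velocity semi-amplitudes of the two components. A sketch with synthetic data (all values are invented, not the BOES measurements):

```python
# Sketch: spectroscopic mass ratio from double-lined radial velocities,
# q = K1/K2 for circular orbits. The data below are synthetic, not XZ CMi's.
import numpy as np
from scipy.optimize import curve_fit

def rv(phase, gamma, K):
    # Circular-orbit radial velocity folded on the known ephemeris.
    return gamma + K * np.sin(2 * np.pi * phase)

rng = np.random.default_rng(3)
phase = rng.uniform(0, 1, 34)                  # 34 spectra, as in the abstract
gamma, K1, K2 = 10.0, 80.0, 160.0              # km/s, invented values
v1 = rv(phase, gamma, K1) + rng.normal(0, 2, 34)   # primary RVs
v2 = rv(phase, gamma, -K2) + rng.normal(0, 2, 34)  # secondary moves oppositely

(g1, k1), _ = curve_fit(rv, phase, v1)
(g2, k2), _ = curve_fit(rv, phase, v2)
print(f"mass ratio q = K1/K2 = {abs(k1 / k2):.2f}")  # ~0.50 here
```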


Acceleration of ECC Computation for Robust Massive Data Reception under GPU-based Embedded Systems (GPU 기반 임베디드 시스템에서 대용량 데이터의 안정적 수신을 위한 ECC 연산의 가속화)

  • Kwon, Jisu;Park, Daejin
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.7 / pp.956-962 / 2020
  • Recently, as the size of the data used in embedded systems has grown, the need for ECC decoding to receive massive data robustly has been emphasized. In this paper, we propose a method to accelerate the computation that derives the syndrome vectors when ECC decoding with a Hamming code is performed on an embedded system with a built-in GPU. The proposed method stores the parity-check matrix in the CSR format, one of the data structures for representing sparse matrices, and performs the matrix-vector multiplication of the decoding operation in parallel in a CUDA kernel on the GPU. We evaluated the proposed method on a target embedded board with a GPU; the results show that the ECC decoding operation runs faster when accelerated on the GPU than on the CPU alone. (A compact CPU illustration of the syndrome computation follows below.)
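
The paper's kernel runs on a GPU in CUDA; as a compact CPU illustration of the same idea, the sketch below stores a parity-check matrix H in CSR format and computes the syndrome s = H·r mod 2 with a sparse matrix-vector product. The (7,4) Hamming code and the injected error are illustrative choices, not the paper's configuration:

```python
# Illustrative CPU version of the paper's idea: syndrome computation as a
# CSR sparse matrix-vector product over GF(2). The paper runs this SpMV in
# a CUDA kernel; the (7,4) Hamming code below is just a small example.
import numpy as np
from scipy.sparse import csr_matrix

# Parity-check matrix of the (7,4) Hamming code, stored in CSR format.
H = csr_matrix(np.array([
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
], dtype=np.uint8))

codeword = np.array([1, 0, 1, 1, 0, 1, 0], dtype=np.uint8)
assert not ((H @ codeword) % 2).any()      # valid codeword: zero syndrome

received = codeword.copy()
received[4] ^= 1                           # flip one bit (illustrative error)
syndrome = (H @ received) % 2              # CSR SpMV, then reduce mod 2
print("error position:", int(syndrome @ [1, 2, 4]))  # 1-indexed position 5
```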

PARALLEL IMAGE RECONSTRUCTION FOR NEW VACUUM SOLAR TELESCOPE

  • Li, Xue-Bao;Wang, Feng;Xiang, Yong Yuan;Zheng, Yan Fang;Liu, Ying Bo;Deng, Hui;Ji, Kai Fan
    • Journal of The Korean Astronomical Society / v.47 no.2 / pp.43-47 / 2014
  • Many advanced ground-based solar telescopes improve the spatial resolution of observed images using an adaptive optics (AO) system. As any AO correction remains only partial, post-processing image reconstruction techniques such as speckle masking or shift-and-add (SAA) are necessary to reconstruct a high-spatial-resolution image from atmospherically degraded solar images. In the New Vacuum Solar Telescope (NVST), the spatial resolution of solar images is improved by frame selection and SAA. To cope with the burden of processing massive speckle data, we investigate the possibility of running the speckle reconstruction program in real time at the telescope site. The code is written in the C programming language and optimized for parallel processing in a multi-processor environment. We analyze the scalability of the code to identify possible bottlenecks, and conclude that it can run in real-time reconstruction applications at NVST and future large-aperture solar telescopes, provided the multi-processor environment has low latencies between the computation nodes. (The SAA step is sketched below.)
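
The NVST pipeline itself is written in C; as a language-neutral illustration of the SAA step it parallelizes, the sketch below aligns each frame to a reference via the circular cross-correlation peak and averages the aligned frames, distributing the per-frame work across processes (frame sizes, counts, and noise levels are made up):

```python
# Illustrative shift-and-add (SAA): align each short-exposure frame to a
# reference via the cross-correlation peak, then average. The NVST code is
# written in C for multi-processor machines; this sketch parallelizes the
# per-frame alignment across Python processes instead.
import numpy as np
from multiprocessing import Pool

def align(args):
    frame, ref = args
    # Circular cross-correlation via FFT; the peak gives the (dy, dx) shift.
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return np.roll(np.roll(frame, dy, axis=0), dx, axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    ref = rng.standard_normal((64, 64))          # stand-in for a solar frame
    frames = [np.roll(ref, s, axis=1) + 0.1 * rng.standard_normal((64, 64))
              for s in range(8)]                 # shifted, noisy copies
    with Pool(4) as pool:
        aligned = pool.map(align, [(f, ref) for f in frames])
    result = np.mean(aligned, axis=0)            # SAA output frame
    print(result.shape)
```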

Real Time Distributed Parallel Processing to Visualize Noise Map with Big Sensor Data and GIS Data for Smart Cities (스마트시티의 빅 센서 데이터와 빅 GIS 데이터를 융합하여 실시간 온라인 소음지도로 시각화하기 위한 분산병렬처리 방법론)

  • Park, Jong-Won;Sim, Ye-Chan;Jung, Hae-Sun;Lee, Yong-Woo
    • Journal of Internet Computing and Services / v.19 no.4 / pp.1-6 / 2018
  • In smart cities, data from various kinds of sensors are collected and processed to provide smart services to citizens. Noise information services built on noise maps derived from the sensor data of ubiquitous sensor networks are one such service. This paper presents a method for generating three-dimensional (3D) noise maps for smart cities in real time. Making a noise map requires fusing many heterogeneous data sources, including big geographic-information image data and massive sensor data. Generating such a 3D noise map in real time therefore requires processing the stream data from the ubiquitous sensor networks and performing the fusion in real time, both of which are very challenging. We developed our own methodology for real-time distributed and parallel processing and, with it, our own real-time 3D noise-map generation system built on open-source software; the system introduced in this paper uses Apache Storm. We evaluated its performance on a cloud-computing platform and confirmed that the system works properly, performs well, and can produce 3D noise maps in real time. The performance evaluation results are also given in this paper. (The stream flow is illustrated in the sketch below.)
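
Apache Storm topologies are typically written in Java; the sketch below is only a minimal, single-process illustration of the stream flow (a spout emitting sensor readings, a bolt maintaining per-cell noise averages for the map). The grid-cell scheme and field names are invented, not the paper's topology:

```python
# Minimal illustration of the stream flow in a Storm-style topology:
# a "spout" emits sensor readings and a "bolt" keeps running per-cell
# noise averages for the map. Grid cells and field names are invented;
# the paper's actual topology runs distributed on Apache Storm.
import random
from collections import defaultdict

def sensor_spout(n):
    """Emit simulated (cell_id, noise_dB) readings."""
    for _ in range(n):
        yield (random.randrange(100), random.uniform(40.0, 90.0))

class NoiseMapBolt:
    """Aggregate readings into per-cell mean noise levels."""
    def __init__(self):
        self.total = defaultdict(float)
        self.count = defaultdict(int)

    def process(self, cell_id, noise_db):
        self.total[cell_id] += noise_db
        self.count[cell_id] += 1

    def snapshot(self):
        return {c: self.total[c] / self.count[c] for c in self.count}

bolt = NoiseMapBolt()
for cell, db in sensor_spout(10_000):
    bolt.process(cell, db)          # in Storm, tuples arrive over the network
print(len(bolt.snapshot()), "map cells updated")
```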

A Study on the Data Classification in Engineering Stage of Pipeline Project in Extreme Cold Weather (극한지 파이프라인 프로젝트 설계단계에서의 데이터 분류에 관한 연구)

  • Kim, Chang-Han;Won, Seo-Kyung;Lee, Jun-Bok;Han, Choong-Hee
    • Proceedings of the Korean Institute of Building Construction Conference / 2014.11a / pp.214-215 / 2014
  • Recently, Russia decided to export 7.5 million tons of natural gas annually to Korea for 30 years starting in 2015 and, in a deal with China, planned to build a pipeline of about 4,000 km connecting Siberia to the Shandong Peninsula. Pipeline projects in extremely cold regions require careful risk management, because of concerns about how the pipe behaves under seasonal changes in soil temperature and about the strain arising in long-distance pipeline construction. A data management plan must be prepared in parallel to support such sophisticated risk management, because the data are massive and are generated and accumulated in real time. This research therefore aims to classify the data items in the engineering stage of a pipeline project, based on previous studies, so that the data generated by the detailed work activities in extremely cold regions can be managed. We expect this to provide the foundation of an efficient classification system for the data generated over the pipeline project's life cycle.


Improvement and verification of the DeCART code for HTGR core physics analysis

  • Cho, Jin Young;Han, Tae Young;Park, Ho Jin;Hong, Ser Gi;Lee, Hyun Chul
    • Nuclear Engineering and Technology / v.51 no.1 / pp.13-30 / 2019
  • This paper presents recent improvements to the DeCART code for HTGR analysis. A new 190-group DeCART cross-section library based on ENDF/B-VII.0 was generated using the KAERI library processing system for HTGRs. Two methods for the eigen-mode adjoint flux calculation were implemented. An azimuthal-angle discretization method based on Gaussian quadrature was implemented to reduce the discretization error (see the sketch below). A two-level parallelization using MPI and OpenMP was adopted for massively parallel computations. A quadratic depletion solver was implemented to reduce the error involved in Gd depletion. A module to generate equivalent group constants was implemented for nodal codes. The geometry-handling capabilities of DeCART were improved, including an approximate treatment of a cylindrical outer boundary, an explicit border model, the R-G-B checkerboard model, and a super-cell model for hexagonal geometry. The newly improved and implemented functionalities were verified against various numerical benchmarks, such as the OECD/MHTGR-350 benchmark phase III problems, two-dimensional high-temperature gas-cooled reactor benchmark problems derived from the MHTGR-350 reference design, and numerical benchmark problems based on the compact nuclear power source experiment, by comparing the DeCART solutions with Monte Carlo reference solutions obtained using the McCARD code.
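
The azimuthal-angle discretization mentioned above can be illustrated by mapping Gauss-Legendre nodes from [-1, 1] onto an azimuthal interval; the sketch below uses 8 angles on [0, π/2], which is an illustrative choice and not DeCART's actual quadrature set:

```python
# Illustration of azimuthal-angle discretization by Gaussian quadrature:
# map Gauss-Legendre nodes from [-1, 1] onto [0, pi/2]. The angle count
# and interval are illustrative choices, not DeCART's quadrature set.
import numpy as np

n = 8                                         # azimuthal angles (assumed)
x, w = np.polynomial.legendre.leggauss(n)     # nodes/weights on [-1, 1]
a, b = 0.0, np.pi / 2                         # target azimuthal interval
angles = 0.5 * (b - a) * x + 0.5 * (b + a)    # affine map of the nodes
weights = 0.5 * (b - a) * w                   # rescaled quadrature weights

# Sanity check: the rule integrates sin(phi) over [0, pi/2] essentially exactly.
print(np.sum(weights * np.sin(angles)))       # ~1.0
```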