• Title/Summary/Keyword: Parallel data processing

Parallel Data Extraction Architecture for High-speed Playback of High-density Optical Disc (고용량 광 디스크의 고속 재생을 위한 병렬 데이터 추출구조)

  • Choi, Goang-Seog
    • Journal of Korea Multimedia Society
    • /
    • v.12 no.3
    • /
    • pp.329-334
    • /
    • 2009
  • When an optical disc is played, the pick-up first converts light into an analog signal. The analog signal is equalized to remove inter-symbol interference, and the equalized signal is then converted into a digital signal from which the synchronized data and clock signals are extracted. Many algorithms minimize the BER in extracting the synchronized data and clock when a high-density optical disc such as BD is played at low speed. When the high-density optical disc is played at high speed, however, it is difficult to apply the same extraction algorithms to the data PLL and PRML architectures used in low-speed applications, because those architectures would then have to process a signal of more than 800 MHz. In 0.13-${\mu}m$ CMOS technology this generally requires high-speed analog cores and a great deal of layout effort. In this paper, a parallel data PLL and PRML architecture is proposed as the data and clock extraction circuit, capable of processing at BD 8x speed, the maximum speed of the high-density optical disc. Test results show that the proposed architecture operates without processing errors at BD 8x speed.

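As a rough illustration of the parallelization idea summarized in the abstract above, the sketch below splits a high-rate serial sample stream into several lower-rate lanes so that each lane only has to run at a fraction of the input rate. It is a hypothetical software analogy (the lane count, toy data, and threshold detector are made up), not the paper's parallel data PLL/PRML hardware.

```python
import numpy as np

# Conceptual sketch only: a parallel hardware data path is emulated by
# splitting a high-rate sample stream into N_LANES lower-rate lanes.
# N_LANES and the toy detector below are hypothetical, not from the paper.
N_LANES = 4

def deserialize(samples: np.ndarray, n_lanes: int = N_LANES) -> np.ndarray:
    """Reshape a serial sample stream into n_lanes parallel streams.

    Lane k receives samples k, k+n_lanes, k+2*n_lanes, ..., so each lane
    only needs to run at 1/n_lanes of the input sample rate.
    """
    usable = len(samples) - len(samples) % n_lanes
    return samples[:usable].reshape(-1, n_lanes).T

def toy_threshold_detector(lane: np.ndarray) -> np.ndarray:
    """Stand-in for the per-lane bit decision (a real PRML detector would
    run a sequence detector such as Viterbi over the equalized samples)."""
    return (lane > 0).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stream = rng.standard_normal(1024)           # toy "equalized" samples
    lanes = deserialize(stream)                  # shape: (N_LANES, 256)
    bits_per_lane = [toy_threshold_detector(l) for l in lanes]
    # Re-interleave the per-lane decisions back into serial bit order.
    bits = np.stack(bits_per_lane).T.reshape(-1)
    print(bits[:16])
```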

Deployment and Performance Analysis of Data Transfer Node Cluster for HPC Environment (HPC 환경을 위한 데이터 전송 노드 클러스터 구축 및 성능분석)

  • Hong, Wontaek;An, Dosik;Lee, Jaekook;Moon, Jeonghoon;Seok, Woojin
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.9 no.9
    • /
    • pp.197-206
    • /
    • 2020
  • Collaborative research in science applications based on HPC services needs rapid transfers of massive data between research colleagues over wide area networks. With regard to this requirement, studies on enhancing data transfer performance between major superfacilities in the U.S. have recently been conducted. In this paper, we deploy multiple data transfer nodes (DTNs) over high-speed science networks in order to rapidly move large amounts of data in the parallel filesystem of KISTI's Nurion supercomputer, and perform transfer experiments between endpoints with approximately 130 ms round-trip time. We present and compare the transfer throughput obtained for file sets of different sizes. In addition, it is confirmed that a DTN cluster with three nodes can provide about 1.8 and 2.7 times higher transfer throughput than a single node under two types of concurrency and parallelism settings.
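
The concurrency and parallelism settings mentioned in the abstract above, several files in flight at once versus several streams per file, can be sketched with Python's standard library. Everything below (the counts, the placeholder transfer function, the file glob) is hypothetical and is not the DTN software used in the experiments.

```python
import concurrent.futures as cf
from pathlib import Path

# Hypothetical sketch: CONCURRENCY = number of files transferred at once,
# PARALLELISM = number of streams (byte ranges) per file. transfer_chunk()
# is a placeholder; a real DTN would use a tool such as GridFTP/Globus.
CONCURRENCY = 3
PARALLELISM = 4

def transfer_chunk(path: Path, chunk_index: int, n_chunks: int) -> int:
    """Pretend to send one byte range of `path`; returns bytes 'sent'."""
    size = path.stat().st_size
    start = size * chunk_index // n_chunks
    end = size * (chunk_index + 1) // n_chunks
    return end - start  # placeholder for an actual network transfer

def transfer_file(path: Path) -> int:
    """Send one file as PARALLELISM concurrent byte-range streams."""
    with cf.ThreadPoolExecutor(max_workers=PARALLELISM) as pool:
        sent = pool.map(lambda i: transfer_chunk(path, i, PARALLELISM),
                        range(PARALLELISM))
        return sum(sent)

def transfer_fileset(paths) -> int:
    """Keep CONCURRENCY files in flight at the same time."""
    with cf.ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        return sum(pool.map(transfer_file, paths))

if __name__ == "__main__":
    files = sorted(Path(".").glob("*.dat"))   # hypothetical file set
    print(f"transferred {transfer_fileset(files)} bytes (simulated)")
```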

Overlapping Effects of Circular Shift Communication and Computation (원형 쉬프트 통신의 중첩 효과 분석)

  • Kim, Jung-Hwan;Rho, Jung-Kyu;Song, Ha-Yoon
    • The KIPS Transactions:PartA
    • /
    • v.9A no.2
    • /
    • pp.197-206
    • /
    • 2002
  • Many researchers have been interested in the optimization of parallel programs through latency hiding by overlapping communication with computation. We analyzed the overlapping effects in circular shift communication, one of the collective communications frequently used in many data-parallel programs. We measured the time that can be overlapped and the time that cannot be overlapped over the whole circular shift communication period on an Ethernet switch-based cluster system. The result from each platform may be used as input for optimizing compilers. Previous performance models usually have one of two drawbacks: some are based only on point-to-point communication and are therefore not appropriate for analyzing the overall effects of collective communications, while others provide the performance of collective communication but no overlapping effect. In this paper we extended the previous models and analyzed the experimental results of the extended model.
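
The overlap being measured, posting the circular-shift communication and doing independent computation before waiting on it, looks roughly like the sketch below. It assumes mpi4py and a toy local computation; it is not the authors' benchmark code.

```python
from mpi4py import MPI
import numpy as np

# Rough sketch of overlapping a circular shift with computation (mpi4py).
# Run with e.g.: mpiexec -n 4 python shift_overlap.py
comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size

send_buf = np.full(1_000_000, rank, dtype=np.float64)
recv_buf = np.empty_like(send_buf)

# 1) Post the non-blocking circular shift (send right, receive from left).
requests = [comm.Isend(send_buf, dest=right, tag=0),
            comm.Irecv(recv_buf, source=left, tag=0)]

# 2) Do computation that does not depend on the incoming data; the time
#    spent here can hide (overlap) part of the communication latency.
local = np.sin(send_buf).sum()

# 3) Wait for the shift to complete before touching recv_buf.
MPI.Request.Waitall(requests)
if rank == 0:
    print("overlap sketch done, sample:", local + recv_buf[0])
```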

Complexity Estimation Based Work Load Balancing for a Parallel Lidar Waveform Decomposition Algorithm

  • Jung, Jin-Ha;Crawford, Melba M.;Lee, Sang-Hoon
    • Korean Journal of Remote Sensing
    • /
    • v.25 no.6
    • /
    • pp.547-557
    • /
    • 2009
  • LIDAR (LIght Detection And Ranging) is an active remote sensing technology which provides 3D coordinates of the Earth's surface by performing range measurements from the sensor. Early small-footprint LIDAR systems recorded multiple discrete returns from the back-scattered energy. Recent advances in LIDAR hardware now make it possible to record full digital waveforms of the returned energy. LIDAR waveform decomposition involves separating the return waveform into a mixture of components which are then used to characterize the original data. The most common statistical mixture model used for this process is the Gaussian mixture. Waveform decomposition plays an important role in LIDAR waveform processing, since the resulting components are expected to represent reflection surfaces within waveform footprints. Hence the decomposition results ultimately affect the interpretation of LIDAR waveform data. Computational requirements in the waveform decomposition process result from two factors: (1) the number of components in a mixture and the resulting parameter estimates are inter-related and cannot be solved separately, and (2) the parameter optimization has no closed-form solution and thus must be solved iteratively. The current state-of-the-art airborne LIDAR system acquires more than 50,000 waveforms per second, so decomposing the enormous number of waveforms is challenging using a traditional single-processor architecture. To tackle this issue, four parallel LIDAR waveform decomposition algorithms with different work load balancing schemes - (1) no weighting, (2) a decomposition results-based linear weighting, (3) a decomposition results-based squared weighting, and (4) a decomposition time-based linear weighting - were developed and tested with a varying number of processors (8-256). The results were compared in terms of efficiency. Overall, the decomposition time-based linear weighting approach to work load balancing yielded the best performance among the four approaches.
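
A weighting-based work load balancing scheme of the kind compared above can be realized by greedily assigning waveforms to the currently least-loaded processor, using an estimated per-waveform cost as the weight. The sketch below is a generic illustration with made-up costs; the paper's schemes derive the weights from previous decomposition results or decomposition times.

```python
import heapq

def balance(weights, n_workers):
    """Greedily assign items to the currently least-loaded worker.

    weights   : estimated cost per waveform (e.g. previous component count,
                its square, or a previous decomposition time).
    n_workers : number of processors.
    Returns one list of item indices per worker.
    """
    heap = [(0.0, w) for w in range(n_workers)]      # (load, worker id)
    heapq.heapify(heap)
    assignment = [[] for _ in range(n_workers)]
    # Assign heavy items first so they get spread across workers.
    for idx in sorted(range(len(weights)), key=lambda i: -weights[i]):
        load, worker = heapq.heappop(heap)
        assignment[worker].append(idx)
        heapq.heappush(heap, (load + weights[idx], worker))
    return assignment

if __name__ == "__main__":
    # Hypothetical per-waveform costs (e.g. estimated Gaussian components).
    costs = [3, 1, 4, 1, 5, 2, 2, 6, 1, 3]
    for w, items in enumerate(balance(costs, 3)):
        print(f"worker {w}: items {items}, load {sum(costs[i] for i in items)}")
```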

A study on the process of mapping data and conversion software using PC-clustering (PC-clustering을 이용한 매핑자료처리 및 변환소프트웨어에 관한 연구)

  • WhanBo, Taeg-Keun;Lee, Byung-Wook;Park, Hong-Gi
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.7 no.2 s.14
    • /
    • pp.123-132
    • /
    • 1999
  • With the rapid increase in the amount of data and computation, parallelization of computing algorithms has become more necessary than ever. Until the mid-1990s, however, parallelization was carried out mostly on supercomputers, so it was out of reach of general users because of the high price, the complexity of use, and so on. Since the late 1990s a new approach to parallel processing has emerged in the form of PC-clustering, and it has become an excellent alternative for applications that need high computing power at relatively low cost, although installation and use are still difficult for general users. The mapping algorithms in GIS (cut, join, resizing, warping, conversion from raster to vector and vice versa, etc.) are well suited to parallelization because of the characteristics of their data structures. If those algorithms are run with PC-clustering, the result is satisfactory in terms of cost and performance, since they are processed in near real time at low cost. In this paper the tools and libraries for parallel processing and PC-clustering are introduced, and it is shown how those tools and libraries are applied to mapping algorithms in GIS. Parallel programs were developed for the mapping algorithms, and the experimental results show that for most algorithms the performance increases almost linearly with the number of nodes.

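Mapping operations of the kind listed in the abstract parallelize naturally by tiling the raster and handing tiles to separate nodes or cores. The sketch below uses Python's multiprocessing on a single machine as a stand-in for a PC cluster; the tile size and the per-tile operation (a toy 2x downsampling) are hypothetical.

```python
import numpy as np
from multiprocessing import Pool

TILE = 256  # hypothetical tile edge length, in pixels

def resample_tile(tile: np.ndarray) -> np.ndarray:
    """Toy per-tile mapping operation: 2x downsampling by block averaging.
    A real cluster job could instead run warping, raster/vector conversion, etc."""
    h, w = tile.shape[0] // 2 * 2, tile.shape[1] // 2 * 2
    t = tile[:h, :w]
    return t.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def split_tiles(raster: np.ndarray, tile: int = TILE):
    """Yield square tiles covering the raster."""
    for r in range(0, raster.shape[0], tile):
        for c in range(0, raster.shape[1], tile):
            yield raster[r:r + tile, c:c + tile]

if __name__ == "__main__":
    raster = np.random.rand(1024, 1024)   # stand-in for mapping data
    with Pool() as pool:                  # one worker process per CPU core
        results = pool.map(resample_tile, list(split_tiles(raster)))
    print(f"processed {len(results)} tiles in parallel")
```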

Multiple Color and ToF Camera System for 3D Contents Generation

  • Ho, Yo-Sung
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.6 no.3
    • /
    • pp.175-182
    • /
    • 2017
  • In this paper, we present a multi-depth generation method using a time-of-flight (ToF) fusion camera system. Multi-view color cameras in a parallel arrangement and ToF depth sensors are used for 3D scene capture. Although each ToF depth sensor can measure the depth information of the scene in real time, it has several problems to overcome. Therefore, after we capture low-resolution depth images with the ToF depth sensors, we perform post-processing to solve these problems. Then, the depth information of the depth sensor is warped to the color image positions and used as initial disparity values. In addition, the warped depth data is used to generate a depth-discontinuity map for efficient stereo matching. By applying stereo matching using belief propagation with the depth-discontinuity map and the initial disparity information, we obtain more accurate and stable multi-view disparity maps in reduced time.
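
One step described above, turning ToF depth into initial disparity values for the color stereo pair, follows the standard rectified-stereo relation disparity = focal length x baseline / depth. The snippet below illustrates only that conversion plus a simple depth-discontinuity map, with made-up camera parameters; it does not reproduce the paper's full warping and belief-propagation pipeline.

```python
import numpy as np

# Hypothetical rectified-stereo parameters (not the paper's calibration).
FOCAL_PX = 1000.0    # focal length in pixels
BASELINE_M = 0.10    # baseline between the two color cameras, in metres

def depth_to_disparity(depth_m: np.ndarray) -> np.ndarray:
    """Convert a ToF depth map (metres) into an initial disparity map
    (pixels) for a rectified pair: d = f * B / Z. Zero depth stays invalid."""
    disparity = np.zeros_like(depth_m)
    valid = depth_m > 0
    disparity[valid] = FOCAL_PX * BASELINE_M / depth_m[valid]
    return disparity

def discontinuity_map(depth_m: np.ndarray, thresh_m: float = 0.05) -> np.ndarray:
    """Mark pixels whose depth differs from a neighbour by more than thresh_m;
    such a map can keep stereo smoothing from crossing object boundaries."""
    dz_x = np.abs(np.diff(depth_m, axis=1, prepend=depth_m[:, :1]))
    dz_y = np.abs(np.diff(depth_m, axis=0, prepend=depth_m[:1, :]))
    return np.maximum(dz_x, dz_y) > thresh_m

if __name__ == "__main__":
    depth = 1.0 + np.random.rand(120, 160)    # toy low-resolution ToF depth
    print(depth_to_disparity(depth).mean(), discontinuity_map(depth).sum())
```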

Fast Double Random Phase Encoding by Using Graphics Processing Unit (GPU 컴퓨팅에 의한 고속 Double Random Phase Encoding)

  • Saifullah, Saifullah;Moon, In-Kyu
    • Proceedings of the Korea Multimedia Society Conference
    • /
    • 2012.05a
    • /
    • pp.343-344
    • /
    • 2012
  • With the growth of sensitive data and the need for its secure transmission and storage, the use of encryption techniques has become widespread. The performance of encoding depends largely on the computation time, so a system with a shorter computation time is more suitable than its counterpart. Double Random Phase Encoding (DRPE) is an algorithm with many sub-functions that consumes considerable time when executed serially; the computation time can be significantly reduced by implementing the important functions in parallel on a Graphics Processing Unit (GPU). Computing the convolution using the Fast Fourier Transform is the most important part of DRPE, and the paper shows that performing this portion on the GPU reduces the execution time of the process by a substantial amount; the result is compared with MATLAB for performance analysis. An NVIDIA GeForce 310 graphics card is used with CUDA C as the programming language.

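Double random phase encoding itself is compact enough to sketch: the image is multiplied by one random phase mask in the spatial domain, transformed, multiplied by a second random phase mask in the Fourier domain, and transformed back. The NumPy version below shows that structure; on a GPU essentially the same code can be run by swapping NumPy for CuPy, whose FFT API mirrors numpy.fft. This is a generic sketch in the spirit of, but not identical to, the CUDA C implementation described above.

```python
import numpy as np
# import cupy as np   # possible GPU variant: CuPy mirrors the numpy.fft API

def drpe_encrypt(image: np.ndarray, rng: np.random.Generator):
    """Double Random Phase Encoding of a real-valued image.
    Returns the complex ciphertext and the two phase masks (the keys)."""
    phase1 = np.exp(2j * np.pi * rng.random(image.shape))   # spatial-domain mask
    phase2 = np.exp(2j * np.pi * rng.random(image.shape))   # Fourier-domain mask
    cipher = np.fft.ifft2(np.fft.fft2(image * phase1) * phase2)
    return cipher, phase1, phase2

def drpe_decrypt(cipher, phase1, phase2):
    """Invert the encoding with the conjugate phase masks."""
    return np.abs(np.fft.ifft2(np.fft.fft2(cipher) * np.conj(phase2)) * np.conj(phase1))

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    img = rng.random((256, 256))
    cipher, p1, p2 = drpe_encrypt(img, rng)
    print("max reconstruction error:", np.abs(drpe_decrypt(cipher, p1, p2) - img).max())
```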

The Design and Implementation of VIA-based Cluster System for spatial data's parallelism (공간 데이터의 병렬성을 고려한 VIA 기반의 클러스트 시스템 설계 및 구현)

  • Park, Si-Yong;Park, Sung-Ho;Chung, Ki-Dong
    • Annual Conference of KIPS
    • /
    • 2000.10a
    • /
    • pp.653-656
    • /
    • 2000
  • In this paper, we propose a cluster system that takes the parallelism of spatial data into account. The cluster system is built on VIA (Virtual Interface Architecture), which reduces the message transfer overhead caused by the multi-level protocol stack, a major drawback of cluster systems. Across the storage servers, data is placed according to the locality of the spatial data, while within each storage server the data is arranged with an EPR (Enhanced Parallel R-tree) to exploit the parallelism of spatial data. Based on this cluster system, experiments were conducted to determine an appropriate transfer data size and number of transfers.

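One element of the design, placing spatial data across storage servers by locality, can be illustrated with a simple grid partitioning sketch. The scheme below (a 4x4 grid whose 2x2 blocks of cells map to one of four servers) is a generic stand-in with hypothetical extents, not the paper's EPR (Enhanced Parallel R-tree) placement.

```python
# Generic sketch of locality-based placement of spatial objects across
# storage servers: nearby objects fall in the same grid block and therefore
# on the same server. Grid size, extents and server count are hypothetical.
GRID = 4          # cells per axis
BLOCK = 2         # cells per axis grouped onto one server
X_MIN, X_MAX, Y_MIN, Y_MAX = 0.0, 100.0, 0.0, 100.0

def grid_cell(x: float, y: float):
    """Map a point to a (col, row) cell of the GRID x GRID partition."""
    col = min(int((x - X_MIN) / (X_MAX - X_MIN) * GRID), GRID - 1)
    row = min(int((y - Y_MIN) / (Y_MAX - Y_MIN) * GRID), GRID - 1)
    return col, row

def server_for(x: float, y: float) -> int:
    """Contiguous BLOCK x BLOCK groups of cells share one storage server."""
    col, row = grid_cell(x, y)
    return (row // BLOCK) * (GRID // BLOCK) + (col // BLOCK)

if __name__ == "__main__":
    for p in [(12.0, 7.5), (13.1, 8.0), (88.0, 91.0)]:
        print(p, "-> server", server_for(*p))
```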

KOMPSAT EOC Grid Reference System

  • Kim, Youn-Soo;Kim, Yong-Seung;Benton, William
    • Proceedings of the KSRS Conference
    • /
    • 1998.09a
    • /
    • pp.349-354
    • /
    • 1998
  • The grid reference system (GRS) has been useful for identifying the geographical location of satellite images. In this study we derive a GRS for the KOMPSAT Electro-Optical Camera (EOC) images. The derivation substantially follows the way SPOT defines its GRS, but incorporates the KOMPSAT orbital characteristics. The KOMPSAT EOC GRS (KEGRS) is designed as a (K,J) coordinate system. The K coordinate, parallel to the KOMPSAT ground track, denotes the relative longitudinal position, and the J coordinate represents the relative latitudinal position. The numbering of K begins at the prime meridian with K=1 and increases eastward, and the numbering of J uses a fixed value of J=500 at all center points on the equator, with J increasing northward. The lateral and vertical grid intervals are set to about 12.5 km at 38$^{\circ}$ latitude to allow some margin for value-added processing. These design factors are being implemented in a satellite programming module of the KOMPSAT Receiving and Processing System (KRPS) to facilitate EOC data collection planning over the Korean peninsula.

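The numbering convention quoted above (K starting at 1 at the prime meridian and increasing eastward, J fixed at 500 on the equator and increasing northward, roughly 12.5 km spacing) can be turned into a toy index calculator. The sketch below deliberately ignores the real orbital geometry and treats the grid as regular east/north distances, so it only illustrates the indexing convention, not the actual KEGRS definition.

```python
# Toy illustration of the (K, J) numbering convention only: K = 1 at the
# reference meridian increasing eastward, J = 500 on the equator increasing
# northward. The flat-geometry assumption and inputs are simplifications;
# the real KEGRS follows the KOMPSAT ground track.
GRID_KM = 12.5

def kj_from_offsets(east_km: float, north_km: float):
    """Map east/north offsets in km from the references to a toy (K, J)."""
    k = 1 + round(east_km / GRID_KM)      # K = 1 at the reference meridian
    j = 500 + round(north_km / GRID_KM)   # J = 500 on the equator
    return k, j

if __name__ == "__main__":
    # e.g. a scene centre 250 km east of the reference meridian and
    # 4200 km north of the equator
    print(kj_from_offsets(250.0, 4200.0))   # -> (21, 836)
```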

An Analysis on Signal to Interchannel Interference Ratio of MC-CDMA System in Time Selective Fading Environments (시간선택적 페이딩 환경에서 MC-CDMA 시스템의 신호대 채널간 간섭의 비에 대한 분석)

  • 김명진;김성필;오종갑
    • Proceedings of the Korea Institute of Convergence Signal Processing
    • /
    • 2001.06a
    • /
    • pp.33-36
    • /
    • 2001
  • In MC-CDMA systems, the effects of channel delay spread are reduced by the increased symbol duration obtained by transmitting data symbols simultaneously on parallel subcarriers. However, the increased symbol duration makes the system more vulnerable to time-selective fading. In this paper, we investigate the effects of the time-selective fading characteristics of the mobile channel in terms of the ratio of desired signal power to inter-carrier interference power at the combiner output of the MC-CDMA receiver.

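The transmitter principle mentioned in the abstract, spreading each data symbol over parallel subcarriers so that the effective symbol duration grows, can be sketched as frequency-domain spreading followed by an IFFT. The code below is a generic MC-CDMA modulation sketch with made-up parameters (subcarrier count, Walsh code index); it does not reproduce the paper's signal-to-interference analysis under time-selective fading.

```python
import numpy as np

# Generic MC-CDMA transmit sketch: each data symbol is copied onto all
# subcarriers, weighted by one chip of the user's spreading code, and each
# block of subcarriers is sent in parallel via an IFFT (OFDM-style).
# N_SC and the chosen Walsh code are illustrative, not the paper's setup.
N_SC = 8   # number of subcarriers == spreading factor

def walsh_code(index: int, length: int) -> np.ndarray:
    """Row `index` of a Sylvester Hadamard matrix of size `length` (power of 2)."""
    h = np.array([[1.0]])
    while h.shape[0] < length:
        h = np.block([[h, h], [h, -h]])
    return h[index]

def mc_cdma_modulate(symbols: np.ndarray, code: np.ndarray) -> np.ndarray:
    """Spread each symbol over the subcarriers, then IFFT each block."""
    freq = symbols[:, None] * code[None, :]    # shape (n_symbols, N_SC)
    return np.fft.ifft(freq, axis=1)           # one OFDM block per symbol

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = (2 * rng.integers(0, 2, size=16) - 1).astype(complex)   # BPSK
    tx = mc_cdma_modulate(data, walsh_code(3, N_SC))
    print(tx.shape)   # (16, 8): 16 blocks of 8 parallel subcarriers
```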