• Title/Summary/Keyword: Massively Parallel Processors (MPP)

Search results: 4

Development of High Performance Massively Parallel Processing Simulator for Semiconductor Etching Process (건식 식각 공정을 위한 초고속 병렬 연산 시뮬레이터 개발)

  • Lee, Jae-Hee;Kwon, Oh-Seob;Ban, Yong-Chan;Won, Tae-Young
    • Journal of the Korean Institute of Telematics and Electronics D / v.36D no.10 / pp.37-44 / 1999
  • This paper reports the implementation of a Monte Carlo (MC) numerical calculation of ion distributions in a plasma dry-etching chamber, and of a surface evolution simulator based on the cell removal method for the topographical evolution of a surface exposed to etching ions. The energy and angular distributions of ions across the plasma sheath were calculated with the MC algorithm. The high-performance MPP (Massively Parallel Processing) algorithm developed in this paper enables efficient parallel and distributed simulation, achieving an efficiency above 95% and a speedup of 16 on 16 processors. Parallelizing the cell-removal surface evolution simulator reduces the simulation time dramatically, to 15 minutes, and extends the simulator's capacity to problems requiring an enormous memory size of 600 MB.
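The near-linear speedup reported above is possible because Monte Carlo samples are independent, so the work decomposes cleanly across processors. A minimal sketch of that idea using Python's `multiprocessing` as a stand-in for the paper's MPP algorithm (the Gaussian angular spread and the 5-degree cutoff are hypothetical toy physics, not the authors' model):

```python
import multiprocessing as mp
import random

def mc_ion_chunk(args):
    """Monte Carlo sampling for one chunk of ions.

    Toy physics (assumption, not the paper's model): the ion's angular
    deviation after crossing the sheath is Gaussian; count ions that
    arrive within 5 degrees of normal incidence.
    """
    n_ions, seed = args
    rng = random.Random(seed)
    return sum(1 for _ in range(n_ions) if abs(rng.gauss(0.0, 3.0)) < 5.0)

def parallel_mc(n_total, n_procs):
    """Split the independent samples across workers and merge.

    Because the samples are independent, the only serial work is the
    final reduction -- the property behind near-ideal MC efficiency.
    """
    chunk = n_total // n_procs
    with mp.Pool(n_procs) as pool:
        hits = pool.map(mc_ion_chunk, [(chunk, seed) for seed in range(n_procs)])
    return sum(hits) / (chunk * n_procs)

if __name__ == "__main__":
    print(f"near-normal fraction: {parallel_mc(100_000, 4):.3f}")
```

Each worker gets its own seed so the chunks sample disjoint random streams; the final division normalizes the pooled hit count into a fraction.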


Prestack Reverse Time Depth Migration Using Monochromatic One-way Wave Equation (단일 주파수 일방향 파동방정식을 이용한 중합 전 역 시간 심도 구조보정)

  • Yoon Kwang Jin;Jang Mi Kyung;Suh Jung Hee;Shin Chang Soo;Yang Sung Jin;Ko Seung Won;Yoo Hae Soo;Jang Jae Kyung
    • Geophysics and Geophysical Exploration / v.3 no.2 / pp.70-75 / 2000
  • In seismic migration, Kirchhoff migration and reverse time migration are the methods in general use. Reverse time migration based on the wave equation can employ either the two-way or the one-way wave equation. The one-way approach uses an approximately computed downward-continuation extrapolator, and therefore needs less computation and core memory than the two-way approach. In this paper, we applied the one-way wave equation to prestack reverse time migration. In the frequency-space domain, forward propagation of the source wavefield and back propagation of the measured wavefield were carried out with the monochromatic one-way wave equation, and the zero-lag cross-correlation of the two wavefields yielded the image of the subsurface. We implemented the prestack migration on a massively parallel processor (MPP), the Cray T3E, and found that the algorithm studied here applies efficiently to prestack migration because of its suitability for parallelization.
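The imaging condition described above takes a particularly simple form in the frequency-space domain: the zero-lag cross-correlation is the sum over frequencies of the source wavefield times the complex conjugate of the back-propagated receiver wavefield. A NumPy sketch (array shapes and names are illustrative, not the authors' code):

```python
import numpy as np

def zero_lag_image(src, rcv):
    """Zero-lag cross-correlation imaging condition in frequency-space:
    image(x) = sum over w of Re[ S(x, w) * conj(R(x, w)) ].
    src, rcv: complex arrays of shape (n_freq, n_x), one monochromatic
    wavefield slice per frequency.
    """
    return np.real(np.sum(src * np.conj(rcv), axis=0))

# Toy check: where the forward-propagated source field and the
# back-propagated receiver field agree in phase (a "reflector"),
# the correlation stacks coherently across frequencies and peaks.
rng = np.random.default_rng(0)
n_freq, n_x = 64, 10
src = np.exp(1j * rng.uniform(0, 2 * np.pi, (n_freq, n_x)))
rcv = np.exp(1j * rng.uniform(0, 2 * np.pi, (n_freq, n_x)))
rcv[:, 3] = src[:, 3]          # phases coincide at x = 3
image = zero_lag_image(src, rcv)
print(int(image.argmax()))     # the peak sits at the reflector
```

At the coherent point the n_freq unit-magnitude terms add to n_freq exactly, while elsewhere the random phase differences cancel to roughly sqrt(n_freq), which is why the image highlights reflectors. This per-frequency independence is also what made the monochromatic formulation parallelize well on the Cray T3E.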


Comparative and Combined Performance Studies of OpenMP and MPI Codes (OpenMP와 MPI 코드의 상대적, 혼합적 성능 고찰)

  • Lee Myung-Ho
    • The KIPS Transactions:PartA / v.13A no.2 s.99 / pp.157-162 / 2006
  • Recent High Performance Computing (HPC) platforms can be classified as Shared-Memory Multiprocessors (SMP), Massively Parallel Processors (MPP), and clusters of computing nodes. These platforms are deployed in many scientific and engineering applications that place a very high demand on computing power. To realize optimal performance for these applications, it is crucial to find and use suitable computing platforms and programming paradigms. In this paper, we use the SPEC HPC 2002 benchmark suite, implemented in several parallel programming models (MPI, OpenMP, and a hybrid of MPI/OpenMP), to identify optimal computing environments and programming paradigms through performance analysis.
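The two paradigms compared above differ mainly in memory model: OpenMP threads share one address space, while MPI processes exchange results by message passing. A rough Python analogy, with threads standing in for OpenMP and a process pool standing in for MPI (illustrative only; production MPI/OpenMP codes like the SPEC HPC suite are written in C or Fortran):

```python
import threading
import multiprocessing as mp

def partial_sum(lo, hi):
    """Work for one worker: sum of the integers in [lo, hi)."""
    return sum(range(lo, hi))

def threaded_sum(n, n_threads):
    """OpenMP-style analogy: threads write into one shared results list."""
    results = [0] * n_threads
    def worker(i):
        lo, hi = i * n // n_threads, (i + 1) * n // n_threads
        results[i] = partial_sum(lo, hi)      # shared-memory write
    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(results)

def process_sum(n, n_procs):
    """MPI-style analogy: isolated processes return results as messages."""
    bounds = [(i * n // n_procs, (i + 1) * n // n_procs)
              for i in range(n_procs)]
    with mp.Pool(n_procs) as pool:
        return sum(pool.starmap(partial_sum, bounds))
```

The hybrid MPI/OpenMP model studied in the paper combines the two: message passing between nodes, shared-memory threading within each node.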

Parallelization of sheet forming analysis program using MPI (MPI를 이용한 판재성형해석 프로그램의 병렬화)

  • Kim, Eui-Joong;Suh, Yeong-Sung
    • Transactions of the Korean Society of Mechanical Engineers A / v.22 no.1 / pp.132-141 / 1998
  • A parallel version of a sheet forming analysis program was developed. This version is compatible with any parallel computer that supports MPI, one of the most recent and popular message passing libraries. For this purpose, SERI-SFA, a vector version running on the Cray Y-MP C90, a sequential vector computer, was used as the source code. For effectiveness, the parallelization focused on selected parts, chosen by ranking CPU consumption in an exemplary calculation on the Cray Y-MP C90; the subroutines associated with the contact algorithm were selected as the target parts. MPI was used as the message passing library. For performance verification, an oil pan and an S-rail forming simulation were carried out, and the kernel and total CPU times were checked against the theoretical performance given by Amdahl's Law. The results showed some performance improvement within the limits of the selective parallelization.
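The Amdahl's Law bound mentioned above explains why selective parallelization yields only limited improvement: if just a fraction p of the serial runtime is parallelized, the speedup on n processors cannot exceed 1 / ((1 - p) + p / n). A small sketch of the formula (the 0.9 fraction below is an illustrative value, not a figure from the paper):

```python
def amdahl_speedup(p, n):
    """Amdahl's Law: theoretical speedup on n processors when a
    fraction p of the serial runtime is parallelized,
    S(n) = 1 / ((1 - p) + p / n).
    """
    return 1.0 / ((1.0 - p) + p / n)

# Even with 90% of the runtime parallelized, 16 processors give
# only about a 6.4x speedup -- the serial remainder dominates.
print(f"{amdahl_speedup(0.9, 16):.2f}")   # → 6.40
```

This is why parallelizing only the contact-algorithm subroutines, rather than the whole code, bounds the achievable overall speedup regardless of processor count.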