
Optimization of Parallel Code for Noise Prediction in an Axial Fan Using MPI One-Sided Communication

  • Oh-Kyoung Kwon (Korea Institute of Science and Technology Information);
  • Keuntae Park (Department of Mechanical and Aerospace Engineering, Seoul National University);
  • Haecheon Choi (Department of Mechanical and Aerospace Engineering, Seoul National University)
  • Received: 2018.01.15
  • Reviewed: 2018.01.21
  • Published: 2018.03.31

Abstract

Recently, noise reduction in an axial fan, a turbomachine that produces a small pressure rise and a large flow rate, has been recognized as essential. This study describes the design and optimization of an MPI parallel program that simulates the flow-induced noise around the fan. To handle roughly 100 million grid points for the flow and 70,000 points for the noise sources, the code is parallelized with a 2D domain decomposition. In large-scale runs, however, the code slows down severely as more computing cores are involved because of MPI communication overhead among nodes, especially in the noise simulation. MPI one-sided communication is therefore adopted to reduce this overhead. In addition, memory allocation and inter-core communication are optimized, improving performance by up to 2.97x over the original code. Finally, on the KISTI Tachyon2 supercomputer, the full simulation achieves speedups of up to 12x for the flow computation on 6,144 cores and up to 6x for the noise computation on 128 cores, relative to runs on 256 and 16 cores, respectively.
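
The abstract credits much of the speedup to replacing heavy two-sided exchanges between MPI processes with one-sided (RMA) communication. The paper's code is not reproduced on this page, so the following is only a minimal C sketch of that pattern using the standard MPI-2 RMA calls MPI_Win_create, MPI_Win_fence, and MPI_Get; the array length, the ring-neighbor access pattern, and all names are illustrative assumptions, not the authors' implementation.

```c
/*
 * A minimal sketch, NOT the authors' code: each MPI rank exposes its local
 * noise-source array in an RMA window, and a rank that needs remote values
 * pulls them with MPI_Get inside a fence epoch. LOCAL_N and the neighbor
 * pattern are hypothetical.
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define LOCAL_N 1000 /* hypothetical number of noise-source points per rank */

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Local data that other ranks are allowed to read directly. */
    double *src = malloc(LOCAL_N * sizeof(double));
    for (int i = 0; i < LOCAL_N; i++)
        src[i] = rank + 1e-3 * i; /* stand-in for computed source terms */

    /* Expose src through an RMA window instead of send/recv pairs. */
    MPI_Win win;
    MPI_Win_create(src, LOCAL_N * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    double remote[LOCAL_N];
    int target = (rank + 1) % nprocs; /* read from the next rank in a ring */

    MPI_Win_fence(0, win);               /* open access epoch              */
    MPI_Get(remote, LOCAL_N, MPI_DOUBLE, /* pull target's array; no        */
            target, 0, LOCAL_N,          /* matching call needed on target */
            MPI_DOUBLE, win);
    MPI_Win_fence(0, win);               /* close epoch; remote[] is valid */

    printf("rank %d read %.3f from rank %d\n", rank, remote[0], target);

    MPI_Win_free(&win);
    free(src);
    MPI_Finalize();
    return 0;
}
```

Because the target rank never posts a matching receive inside the epoch, the send/receive handshake that the abstract identifies as the bottleneck disappears; whether MPI_Get, MPI_Put, or passive-target locking fits best depends on the solver's access pattern, which this page does not detail.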

Keywords
