• Title/Summary/Keyword: NURION Supercomputer


Analysis of Traffic and Attack Frequency in the NURION Supercomputing Service Network (누리온 슈퍼컴퓨팅서비스 네트워크에서 트래픽 및 공격 빈도 분석)

  • Lee, Jae-Kook;Kim, Sung-Jun;Hong, Taeyoung
    • KIPS Transactions on Computer and Communication Systems / v.9 no.5 / pp.113-120 / 2020
  • KISTI (Korea Institute of Science and Technology Information) provides HPC (High Performance Computing) services to users from universities, institutes, government and affiliated organizations, companies, and so on. NURION, the supercomputer that launched its official service on Jan. 1, 2019, is the fifth supercomputer established by KISTI and has a computational performance of 25.7 petaflops. Understanding how supercomputing services are used and how researchers are using them is critical for system operators and managers, and monitoring and analyzing network traffic is central to this. In this paper, we briefly introduce the NURION system and the supercomputing service network with its security configuration, and we describe the monitoring system that checks the status of supercomputing services in real time. We analyze inbound/outbound traffic and abnormal (attack) IP address data collected in the NURION supercomputing service network over 11 months (from January to November 2019) using time series and correlation analysis methods.
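
The time-series correlation analysis described above can be sketched as follows. The daily traffic and attack-count series here are synthetic placeholders, not the paper's actual NURION measurements, and the Pearson coefficient is one plausible choice for the correlation method the abstract mentions.

```python
# Hedged sketch: correlating a daily inbound-traffic series with a
# blocked-attack-count series, in the spirit of the analysis above.
# All numbers below are synthetic placeholders, not NURION data.

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Synthetic daily series: inbound traffic (GB) and attack counts.
traffic_gb = [120, 135, 150, 110, 160, 170, 155]
attacks    = [ 30,  33,  41,  28,  45,  50,  43]

r = pearson(traffic_gb, attacks)
print(f"correlation between traffic and attacks: {r:.3f}")
```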

Implementation of automatic checking function of calculation node of Supercomputer 5th using Hook of PBS job scheduler (PBS 작업 스케줄러 Hook를 이용한 슈퍼컴퓨터 5호기 계산노드 자동 점검 기능 구현)

  • Kwon, Min-Woo;Yoon, JunWeon;Hong, TaeYoung
    • Proceedings of the Korea Information Processing Society Conference / 2019.10a / pp.101-102 / 2019
  • The Korea Institute of Science and Technology Information (KISTI) launched the official service of Nurion, the fifth supercomputer, in January 2019. Nurion is a very large computing system equipped with 8,432 compute nodes, and its stable operation requires considerable manpower. This paper introduces a technique that improves operational efficiency by implementing automatic fault checking of the compute nodes using the Hook feature of the PBS job scheduler used on Nurion.
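
PBS Professional hooks are written in Python, and the node check described above could rest on a helper like the one below; in a real execjob_begin hook its result would decide whether to reject the job and offline the node. The `check_node_health` function, its thresholds, and the sample node stats are illustrative assumptions, not the authors' actual implementation.

```python
# Hedged sketch of the kind of compute-node check a PBS hook could run
# before a job starts. In a real PBS Professional hook, a non-empty
# fault list would lead to pbs.event().reject(...) and the node being
# taken offline. Thresholds here are invented for illustration.

def check_node_health(stats, max_load=64.0, min_free_mem_gb=4.0,
                      required_mounts=("/scratch",)):
    """Return a list of fault descriptions; an empty list means healthy."""
    faults = []
    if stats.get("load_avg", 0.0) > max_load:
        faults.append(f"load average too high: {stats['load_avg']}")
    if stats.get("free_mem_gb", 0.0) < min_free_mem_gb:
        faults.append(f"free memory too low: {stats['free_mem_gb']} GB")
    for mount in required_mounts:
        if mount not in stats.get("mounts", []):
            faults.append(f"missing filesystem mount: {mount}")
    return faults

# Example: a node that lost its /scratch mount is flagged before the
# job lands on it, instead of the job failing mid-run.
node = {"load_avg": 1.2, "free_mem_gb": 80.0, "mounts": ["/", "/home"]}
print(check_node_health(node))
```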

Horizon Run 5: the largest cosmological hydrodynamic simulation

  • Kim, Juhan;Shin, Jihye;Snaith, Owain;Lee, Jaehyun;Kim, Yonghwi;Kwon, Oh-Kyung;Park, Chan;Park, Changbom
    • The Bulletin of The Korean Astronomical Society / v.44 no.1 / pp.33.2-33.2 / 2019
  • Horizon Run 5 is the largest cosmological hydrodynamic simulation performed to date. Owing to the large spatial volume ($717{\times}80{\times}80\,[cMpc/h]^3$) and the high resolution down to 1 kpc, we can study cosmological effects on star and galaxy formation over a wide range of mass scales, from dwarfs to clusters. We have modified the publicly available RAMSES code to harness the power of OpenMP parallelism, which is necessary for running simulations on such a huge KISTI supercomputer as Nurion. We reached z=2.3 from z=200 over a simulation period of 50 days using 2,500 compute nodes of Nurion. During the simulation run, we saved snapshot data at 97 redshifts and two light-cone space data sets, which will later be used for studies in various research fields of galaxy formation and cosmology. We will close this talk by listing possible research topics that will play a crucial role in helping us take the lead in those areas.


MPI-GWAS: a supercomputing-aided permutation approach for genome-wide association studies

  • Paik, Hyojung;Cho, Yongseong;Cho, Seong Beom;Kwon, Oh-Kyoung
    • Genomics & Informatics / v.20 no.1 / pp.14.1-14.4 / 2022
  • Permutation testing is a robust and popular approach for significance testing in genomic research, with the advantage of reducing inflated type 1 error rates; however, its computational cost is notorious in genome-wide association studies (GWAS). Here, we developed a supercomputing-aided approach to accelerate permutation testing for GWAS, based on the message-passing interface (MPI) on a parallel computing architecture. Our application, called MPI-GWAS, conducts MPI-based permutation testing using a parallel computing approach on our supercomputing system, Nurion (8,305 compute nodes and 563,740 central processing units [CPUs]). For 10^7 permutations of one locus in MPI-GWAS, the computation took 600 s using 2,720 CPU cores. For 10^7 permutations of ~30,000-50,000 loci in over 7,000 subjects, the total elapsed time was ~4 days on the Nurion supercomputer. Thus, MPI-GWAS enables us to feasibly compute permutation-based GWAS within a reasonable time by harnessing the power of parallel computing resources.
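
The permutation test that MPI-GWAS parallelizes can be sketched in serial form as below; in the real application each MPI rank would run an independent slice of the permutations and the hit counts would be reduced across ranks. The toy genotype/phenotype data and the mean-difference statistic are illustrative assumptions, not the paper's actual pipeline.

```python
import random

def perm_test_pvalue(genotype, phenotype, n_perm=10_000, seed=0):
    """One-locus permutation test: shuffle phenotypes and count how often
    the permuted statistic is at least as extreme as the observed one."""
    rng = random.Random(seed)

    def stat(geno, pheno):
        # Illustrative statistic: phenotype mean difference between
        # carriers (genotype > 0) and non-carriers.
        carriers = [p for g, p in zip(geno, pheno) if g > 0]
        others = [p for g, p in zip(geno, pheno) if g == 0]
        if not carriers or not others:
            return 0.0
        return abs(sum(carriers) / len(carriers) - sum(others) / len(others))

    observed = stat(genotype, phenotype)
    shuffled = list(phenotype)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        if stat(genotype, shuffled) >= observed:
            hits += 1
    # In an MPI-GWAS-style run, per-rank `hits` would be summed
    # (e.g. via an MPI reduce) before forming the p-value.
    return (hits + 1) / (n_perm + 1)

geno  = [0, 0, 0, 0, 1, 1, 1, 1]
pheno = [1.0, 1.1, 0.9, 1.0, 2.0, 2.1, 1.9, 2.2]
print(f"permutation p-value: {perm_test_pvalue(geno, pheno, n_perm=2000):.4f}")
```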

A Study of Dark Photon at the Electron-Positron Collider Experiments Using KISTI-5 Supercomputer

  • Park, Kihong;Cho, Kihyeon
    • Journal of Astronomy and Space Sciences / v.38 no.1 / pp.55-63 / 2021
  • The universe is well known to consist of dark energy, dark matter, and standard model (SM) particles, with dark matter dominating the matter density of the universe. Dark matter is thought to be linked with the dark photon, a hypothetical hidden-sector particle similar to the photon in electromagnetism and proposed as a force carrier. Due to the extremely small cross-section of dark matter, a large amount of data needs to be processed; therefore, we need to optimize the central processing unit (CPU) time. In this work, using MadGraph5 as a simulation toolkit, we examined the CPU time and the cross-section of dark matter at an electron-positron collider considering three parameters: the center-of-mass energy, the dark photon mass, and the coupling constant. The signal process pertained to a dark photon that couples only to heavy leptons, and we only dealt with the case of the dark photon decaying into two muons. We used a simplified model that covers dark matter and dark photon particles as well as the SM particles. To compare the CPU time of the simulation, one or more cores of the KISTI-5 supercomputer Nurion (Knights Landing and Skylake) and a local Linux machine were used. Our results can help optimize high-energy physics software through high-performance computing and enable users to incorporate parallel processing.

A Study on Performance Comparison of Computational Structural Engineering using the NURION Supercomputer (슈퍼컴퓨터 누리온을 활용한 전산구조공학 성능 비교 연구)

  • Lee, Jae-Kook;An, Do-Sik;Hong, Taeyoung
    • Proceedings of the Korea Information Processing Society Conference / 2020.11a / pp.98-101 / 2020
  • The Nurion system is the fifth supercomputer, established in 2018 and operated by the National Supercomputing Center of the Korea Institute of Science and Technology Information (KISTI). This paper introduces how to use application software widely employed in computational structural engineering, such as ABAQUS and NASTRAN, on the Nurion system. A simple computational structural engineering model is then analyzed with ABAQUS on both the Nurion system and the previously operated fourth supercomputer, the Sinbaram system, and their performance is compared.

Deployment and Performance Analysis of Data Transfer Node Cluster for HPC Environment (HPC 환경을 위한 데이터 전송 노드 클러스터 구축 및 성능분석)

  • Hong, Wontaek;An, Dosik;Lee, Jaekook;Moon, Jeonghoon;Seok, Woojin
    • KIPS Transactions on Computer and Communication Systems / v.9 no.9 / pp.197-206 / 2020
  • Collaborative research in science applications based on HPC services requires rapid transfers of massive data between research colleagues over a wide area network. Regarding this requirement, research on enhancing data transfer performance between major superfacilities in the U.S. has been conducted recently. In this paper, we deploy multiple data transfer nodes (DTNs) over high-speed science networks in order to move large amounts of data rapidly in the parallel filesystem of KISTI's Nurion supercomputer, and we perform transfer experiments between endpoints with approximately 130 ms round-trip time. We present and compare the transfer throughput achieved for file sets of different sizes. In addition, we confirmed that a DTN cluster with three nodes can provide about 1.8 and 2.7 times higher transfer throughput than a single node under two types of concurrency and parallelism settings.
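
Why a three-node DTN cluster yields less than a 3x speedup can be illustrated with a toy load-balancing model; the per-node rate, file sizes, and assignment policy below are invented for illustration and are not measurements or methods from the paper.

```python
# Toy model: stripe a file set across N data transfer nodes (DTNs).
# Each node moves its share at a fixed rate, and the transfer finishes
# when the slowest node does, so skewed file sizes cap the speedup.
# All numbers are synthetic, not the paper's measurements.

def cluster_transfer_time(file_sizes_gb, n_nodes, node_rate_gbps=10.0):
    """Elapsed seconds to move the file set using n_nodes DTNs."""
    shares = [0.0] * n_nodes
    # Largest-first assignment to the least-loaded node balances shares.
    for size in sorted(file_sizes_gb, reverse=True):
        idx = shares.index(min(shares))
        shares[idx] += size
    gb_per_s = node_rate_gbps / 8.0  # Gbit/s -> GByte/s
    return max(shares) / gb_per_s

files = [50.0] * 12  # twelve 50 GB files: a uniform synthetic file set
t1 = cluster_transfer_time(files, n_nodes=1)
t3 = cluster_transfer_time(files, n_nodes=3)
print(f"speedup with 3 DTNs: {t1 / t3:.2f}x")
```

With a uniform file set the model gives the ideal 3x; one oversized file that a single node must carry alone drags the cluster speedup below the node count, which is one reason measured speedups (1.8x-2.7x in the paper) fall short of ideal.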

The Improvement of Computational Efficiency in KIM by an Adaptive Time-step Algorithm (적응시간 간격 알고리즘을 이용한 KIM의 계산 효율성 개선)

  • Hyun Nam;Suk-Jin Choi
    • Atmosphere / v.33 no.4 / pp.331-341 / 2023
  • Numerical forecasting models usually predict future states by performing time integration with a fixed static time-step. A time-step that is too long can cause model instability and failure of the forecast simulation, while a time-step that is too short causes unnecessary time integration calculations. Thus, in numerical models the time-step size can be determined by the CFL (Courant-Friedrichs-Lewy) condition, which acts as a necessary condition for finding a numerical solution. Using the same fixed time-step for every integration is called a static time-step; applying a different time-step to each integration while guaranteeing the stability of the solution in time advancement is called an adaptive time-step. The adaptive time-step algorithm is a method of determining the maximum usable time-step for each integration based on the CFL condition. In this paper, the adaptive time-step algorithm is applied to the Korean Integrated Model (KIM), and suitable parameters for the algorithm are determined through monthly verifications of 10-day simulations (during January and July 2017) at about 12 km resolution. Compared with the numerical results obtained by applying a 25-second static time-step to KIM on Supercomputer 5 (Nurion), the algorithm shows similar results in terms of forecast quality, provides the maximum available time-step for each integration, and improves computational efficiency by reducing the total number of time integrations by 19%.
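
The CFL-based adaptive time-step idea can be sketched on a toy advection setting: each step uses the largest dt that keeps the Courant number below a safety factor. The velocity, safety factor, and integration loop below are illustrative assumptions, not KIM's actual numerics; only the ~12 km grid spacing echoes the paper's setup.

```python
# Hedged sketch of CFL-limited adaptive time-stepping.
# dt <= cfl * dx / |u|max keeps the Courant number below the safety
# factor `cfl`; the flow speed and cfl value are invented for illustration.

def cfl_timestep(u_max, dx, cfl=0.9):
    """Largest stable time-step under the CFL condition."""
    return cfl * dx / u_max

def integrate(t_end, dx, velocity_of_t, cfl=0.9):
    """Advance from t=0 to t_end, choosing dt adaptively at each step.
    Returns the number of time integrations performed."""
    t, steps = 0.0, 0
    while t < t_end - 1e-9:
        dt = cfl_timestep(velocity_of_t(t), dx, cfl)
        dt = min(dt, t_end - t)  # do not overshoot the end time
        t += dt
        steps += 1
    return steps

dx = 12_000.0  # ~12 km grid spacing, as in the paper's experiments

# With a moderate flow speed, the adaptive scheme needs far fewer steps
# over one hour than a fixed 25 s time-step would.
steps_adaptive = integrate(3600.0, dx, velocity_of_t=lambda t: 100.0)
steps_static = int(3600.0 / 25.0)
print(steps_adaptive, steps_static)
```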