• Title/Summary/Keyword: infiniband


Implementation of Storage Service Protocol on Infiniband based Network (인피니밴드 네트웍에서 RDMA 기반의 저장장치 서비스 프로토콜개발)

  • Joen Ki-Man;Park Chang-Won;Kim Young-Hwan
    • Proceedings of the Korea Institute of Information and Telecommunication Facilities Engineering / 2006.08a / pp.77-81 / 2006
  • As network use grows rapidly, systems struggle to tolerate the resulting communication overhead. Research has therefore turned to user-level communication technologies that offer higher performance and lower latency than TCP/IP, which relies on the kernel to process messages. InfiniBand is one such technology: the InfiniBand Trade Association (IBTA) has proposed it as an industry standard both for communication between processing nodes and I/O devices and for inter-processor communication. It replaces the traditional bus-based interconnect with a switch-based network connecting processing nodes and I/O devices, and it uses RDMA (Remote DMA) to keep the CPU and OS of remote nodes largely out of the communication path. In this paper, we develop SRP (SCSI RDMA Protocol), a storage access protocol for InfiniBand networks, and compare it with FC (Fibre Channel)-based storage access and with iSCSI (Internet SCSI), which is used to access storage over an Ethernet fabric. (A sketch of the RDMA primitive SRP builds on appears after the PDF link.)

  • PDF
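
At the core of SRP is the RDMA read/write service that InfiniBand host channel adapters expose through the verbs API. The sketch below shows how an initiator-style program might post an RDMA READ with libibverbs to pull a block from a remote buffer; queue-pair setup and the out-of-band exchange of the remote address and rkey are omitted, and all names are illustrative rather than taken from the paper.

```c
/* Sketch: posting an RDMA READ with libibverbs, the verbs API that
 * SRP-style initiators build on. Connection setup (QP creation,
 * INIT->RTR->RTS transitions, exchange of raddr/rkey) is omitted; qp, mr,
 * and the remote parameters are assumed to exist. Compile with -libverbs. */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int rdma_read_block(struct ibv_qp *qp, struct ibv_mr *mr, void *local_buf,
                    size_t len, uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf, /* local destination buffer */
        .length = (uint32_t)len,
        .lkey   = mr->lkey,             /* from ibv_reg_mr() on local_buf */
    };
    struct ibv_send_wr wr, *bad_wr = NULL;

    memset(&wr, 0, sizeof(wr));
    wr.wr_id               = 1;
    wr.opcode              = IBV_WR_RDMA_READ;  /* remote CPU not involved */
    wr.send_flags          = IBV_SEND_SIGNALED; /* generate a completion */
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.wr.rdma.remote_addr = remote_addr;       /* exchanged out of band */
    wr.wr.rdma.rkey        = rkey;

    if (ibv_post_send(qp, &wr, &bad_wr)) {
        fprintf(stderr, "ibv_post_send failed\n");
        return -1;
    }
    return 0; /* poll the CQ with ibv_poll_cq() to learn when data arrived */
}
```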

PERFORMANCE ANALYSIS OF THE PARALLEL CUPID CODE IN DISTRIBUTED MEMORY SYSTEM BASED ETHERNET AND INFINIBAND NETWORK (이더넷과 인피니밴드 네트워크 기반의 분산 메모리 시스템에서 병렬성능 분석)

  • Jeon, B.J.;Choi, H.G.
    • Journal of computational fluids engineering / v.19 no.2 / pp.24-29 / 2014
  • In this study, the parallel performance of the CUPID code has been investigated on both Ethernet- and InfiniBand-based distributed memory systems to examine the effects of cache memory and network speed. The bi-conjugate gradient solver of the CUPID code was parallelised using a domain decomposition method and the Message Passing Interface (MPI). It is shown that the parallel performance of the Ethernet system is worse than that of the InfiniBand system because of the slower network and the smaller cache memory. It is also found that the parallel performance of each system deteriorates for small problems because of communication overhead, with the InfiniBand system again ahead owing to its much faster network. For large problems, the parallel performance depends less on the network system. (The solver's communication pattern is sketched below.)
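
The collective step that makes small problems network-bound in a solver of this kind is the global dot product performed every iteration. Below is a minimal MPI sketch of that pattern, assuming a simple 1-D block decomposition (the paper's actual decomposition and solver details are not reproduced here).

```c
/* Sketch: the distributed dot product at the heart of a parallel BiCG
 * solver. Each rank holds a block of the vectors (1-D decomposition for
 * brevity); partial sums are combined with MPI_Allreduce, the collective
 * whose latency dominates small problems. Compile with mpicc. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const long n_local = 1 << 20;           /* local block length */
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double *x = malloc(n_local * sizeof *x);
    double *y = malloc(n_local * sizeof *y);
    for (long i = 0; i < n_local; i++) { x[i] = 1.0; y[i] = 2.0; }

    double local = 0.0, global = 0.0;
    for (long i = 0; i < n_local; i++)
        local += x[i] * y[i];               /* per-rank partial sum */

    /* One network round per dot product: this is where interconnect
     * latency shows up in every solver iteration. */
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("dot = %.1f on %d ranks\n", global, size);
    free(x); free(y);
    MPI_Finalize();
    return 0;
}
```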

Application level performance evaluation of Infiniband Network (인피니밴드 네트워크에 대한 응용 레벨 성능 분석)

  • Cha, Kwang-Ho;Kim, Sung-Ho
    • Proceedings of the Korean Information Science Society Conference / 2005.11a / pp.1003-1005 / 2005
  • The SAN (System Area Network) is a field whose importance is emphasized in connection with the performance of cluster systems, and the release of InfiniBand products in particular has drawn even more attention to it. In this paper, we therefore experiment with and analyze how InfiniBand affects the performance of application programs, beyond its simple network-level characteristics. (An application-level ping-pong is sketched after the PDF link.)

  • PDF
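
A common application-level probe of an interconnect is a two-rank ping-pong, which measures the round-trip time of small messages as an application sees it through MPI. A minimal sketch follows (illustrative, not the authors' benchmark).

```c
/* Sketch: an application-level ping-pong measuring small-message
 * round-trip latency between two MPI ranks. Run with exactly 2 ranks,
 * one per node. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int iters = 1000;
    char byte = 0;
    int rank;
    MPI_Status st;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &st);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &st);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)  /* half the round trip is the one-way latency */
        printf("one-way latency: %.2f us\n",
               (t1 - t0) / iters / 2 * 1e6);
    MPI_Finalize();
    return 0;
}
```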

Design of OpenStack Cloud Storage Systems - Applying Infiniband Storage Network and Storage Virtualization Performance Evaluation (인피니밴드 스토리지 네트워크를 적용한 오픈스택 클라우드 스토리지 시스템의 설계 및 스토리지 가상화 성능평가)

  • Heo, Hui-Seong;Lee, Kwang-Soo;Pirahandeh, Mehdi;Kim, Deok-Hwan
    • KIISE Transactions on Computing Practices / v.21 no.7 / pp.470-475 / 2015
  • OpenStack is open-source software that enables developers to build IaaS (Infrastructure as a Service) cloud platforms. OpenStack can virtualize servers, networks, and storage and provide them to users. This paper proposes the structure of an OpenStack cloud storage system that applies InfiniBand to remove the bottleneck that can occur between server and storage nodes when the server performs I/O operations. Furthermore, we implement high-performance, all-flash-array-based Cinder storage volumes, usable from Nova virtual machines, by applying a distributed RAID-60 structure across three 8-bay SSD storages, and we show that an InfiniBand storage network applied to OpenStack is suitable for virtualizing high-performance storage. (A throughput-measurement sketch follows below.)
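
One way to quantify the server-to-storage path such a design optimizes is a raw sequential-read measurement against a provisioned volume with the page cache bypassed. The sketch below is a generic illustration of that style of measurement; the device path is a placeholder, not a detail from the paper.

```c
/* Sketch: measuring raw sequential-read throughput of a (virtualized)
 * volume with O_DIRECT, bypassing the page cache. Point the path at a
 * scratch test volume, never at a disk holding data you care about. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    const char *dev  = "/dev/vdb";  /* hypothetical attached test volume */
    const size_t blk = 1 << 20;     /* 1 MiB requests */
    const int count  = 1024;        /* read 1 GiB total */
    void *buf;

    /* O_DIRECT requires an aligned buffer. */
    if (posix_memalign(&buf, 4096, blk)) return 1;

    int fd = open(dev, O_RDONLY | O_DIRECT);
    if (fd < 0) { perror(dev); return 1; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < count; i++)
        if (read(fd, buf, blk) != (ssize_t)blk) { perror("read"); return 1; }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.1f MiB/s\n", count * (blk / 1048576.0) / sec);
    close(fd);
    free(buf);
    return 0;
}
```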

All Flash Array Storage Virtualisation using SCST (SCST를 이용한 All Flash Array 스토리지 가상화)

  • Heo, Huiseong;Pirahandeh, Mehdi;Lee, Kwangsoo;Kim, Deokhwan
    • KIISE Transactions on Computing Practices / v.20 no.10 / pp.525-533 / 2014
  • SCST (the generic SCSI target subsystem for Linux) enables developers to build SCSI target storage and supports various SCSI network protocols such as iSCSI, FC, and SRP. In this paper, we propose a storage virtualization method using SCST, virtualize an all-flash array as high-performance storage over 4Gb Fibre Channel, 10Gb Ethernet, and 40Gb InfiniBand, and evaluate the performance of each. Experimental results show that the 40Gb InfiniBand network appliance outperforms the others: for sequential/random reads it delivers 78% and 79%, and for sequential/random writes 83% and 88%, of the performance of a local all-flash array attached to the SCSI target system. (A SCSI pass-through sketch follows below.)
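
A device exported by an SCST target looks like any other SCSI device to the initiator, so it can be probed with the Linux SG_IO pass-through interface. The following sketch issues a SCSI INQUIRY and prints the vendor and product strings; the device path is an example.

```c
/* Sketch: probing an SCST-exported device with a SCSI INQUIRY through the
 * Linux SG_IO pass-through interface. Run against the SCSI generic node
 * the initiator created for the target. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <scsi/sg.h>

int main(void)
{
    unsigned char cdb[6] = { 0x12, 0, 0, 0, 96, 0 }; /* INQUIRY, 96 bytes */
    unsigned char data[96], sense[32];
    struct sg_io_hdr io;

    int fd = open("/dev/sg0", O_RDWR);     /* example device node */
    if (fd < 0) { perror("open"); return 1; }

    memset(&io, 0, sizeof io);
    io.interface_id    = 'S';              /* mandatory magic for SG_IO */
    io.cmd_len         = sizeof cdb;
    io.cmdp            = cdb;
    io.dxfer_direction = SG_DXFER_FROM_DEV;
    io.dxferp          = data;
    io.dxfer_len       = sizeof data;
    io.sbp             = sense;            /* sense buffer on errors */
    io.mx_sb_len       = sizeof sense;
    io.timeout         = 5000;             /* ms */

    if (ioctl(fd, SG_IO, &io) < 0) { perror("SG_IO"); return 1; }

    /* Standard INQUIRY data: vendor at bytes 8-15, product at 16-31. */
    printf("vendor: %.8s  product: %.16s\n", data + 8, data + 16);
    close(fd);
    return 0;
}
```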

Implementation of Ring Topology Interconnection Network with PCIe Non-Transparent Bridge Interface (PCIe Non-Transparent Bridge 인터페이스 기반 링 네트워크 인터커넥트 시스템 구현)

  • Kim, Sang-Gyum;Lee, Yang-Woo;Lim, Seung-Ho
    • KIPS Transactions on Computer and Communication Systems / v.8 no.3 / pp.65-72 / 2019
  • HPC (High Performance Computing) systems connect many computing nodes with a high-performance interconnect network. The interconnect is one of the key factors in building a high-performance system, and InfiniBand or Ethernet is mainly used for it. Nowadays PCIe is the dominant interface within a computer system, with the host CPU connecting high-performance peripheral devices through a PCIe bridge. The PCIe Non-Transparent Bridge (NTB) standard can connect two computing nodes, but in its original form it links only two hosts. To provide a cost-effective interconnect built on PCIe technology, we develop a prototype interconnect network system with PCIe NTB. In the prototype, computing nodes are connected to each other via the PCIe NTB interface, forming a switchless interconnect such as a ring network, and a prototype data sharing mechanism is implemented on top of it. The PCIe NTB-based interconnect network system is cost-effective and provides competitive data-transfer bandwidth within the interconnect. (A memory-window write is sketched below.)
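
The paper's code is not published; as a rough illustration of the mechanism, the sketch below memory-maps a PCI BAR of an NTB device through sysfs (an NTB translates writes into such a window to the peer's memory) and deposits a payload followed by a ready flag. The device path, window layout, and flag convention are all assumptions.

```c
/* Sketch (illustrative only): pushing a payload to a ring neighbour
 * through an NTB memory window. Writes into the mapped BAR region are
 * translated by the NTB to the peer's memory, so an ordinary memcpy lands
 * in the neighbour's buffer. The sysfs path, window layout, and ready-flag
 * convention are assumptions, not details from the paper. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define WIN_SIZE (1 << 16)  /* assumed 64 KiB window */

int main(void)
{
    /* BAR of the NTB function as exposed by Linux sysfs (example path). */
    int fd = open("/sys/bus/pci/devices/0000:03:00.1/resource2", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    unsigned char *win = mmap(NULL, WIN_SIZE, PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, 0);
    if (win == MAP_FAILED) { perror("mmap"); return 1; }

    const char msg[] = "hello, next hop";
    memcpy(win, msg, sizeof msg);   /* payload lands in the peer's buffer */
    __sync_synchronize();           /* order the payload before the flag */
    win[WIN_SIZE - 1] = 1;          /* raise the ready flag the peer polls */

    munmap(win, WIN_SIZE);
    close(fd);
    return 0;
}
```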

Development of Monitoring Tool for Small SMP Cluster System (소규모 SMP 클러스터 시스템 모니터링 개발)

  • Sung, JinWoo;Lee, YoungJoo;Choi, YounKeun;Park, ChanYeol
    • Proceedings of the Korea Contents Association Conference / 2007.11a / pp.535-538 / 2007
  • A system manager needs a monitoring tool (S/W) to manage a cluster system, but it is difficult to choose a suitable one for a small SMP cluster system. This document describes the design and development of such a monitoring tool, mon, for a small SMP cluster system that uses an InfiniBand network switch. The tool monitors the computing nodes (7 nodes), the InfiniBand network switch, and PBS jobs. (A per-node probe is sketched after the PDF link.)

  • PDF
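
The per-node side of a tool like mon reduces to periodically sampling node health from /proc and shipping it to a front end. A minimal sketch of such a probe follows (the transport to the collector is not shown, and none of this is the paper's code).

```c
/* Sketch: the per-node probe a small cluster monitoring tool might run.
 * It samples the 1-minute load average and free memory from /proc; a
 * front end would gather this output from each computing node. */
#include <stdio.h>

int main(void)
{
    double load1 = 0.0;
    long mem_free_kb = -1;
    FILE *f;

    if ((f = fopen("/proc/loadavg", "r")) != NULL) {
        if (fscanf(f, "%lf", &load1) != 1)  /* first field: 1-min load */
            load1 = -1.0;
        fclose(f);
    }
    if ((f = fopen("/proc/meminfo", "r")) != NULL) {
        char line[128];
        while (fgets(line, sizeof line, f))
            if (sscanf(line, "MemFree: %ld kB", &mem_free_kb) == 1)
                break;                      /* stop at the MemFree line */
        fclose(f);
    }
    printf("load1=%.2f memfree=%ld kB\n", load1, mem_free_kb);
    return 0;
}
```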

A Performance Evaluation for SDP(Socket Direct Protocol) in Channel based Network (고속 채널 기반 네트웍에서 SDP 프로토콜 성능 평가)

  • Park, Chang-Won;Kim, Young-Hwan
    • Journal of The Institute of Information and Telecommunication Facilities Engineering / v.3 no.2 / pp.18-25 / 2004
  • As network use increases rapidly, system performance deteriorates because of overhead and bottlenecks. High-speed I/O network standards such as InfiniBand and PCI Express have emerged to overcome the limits of the traditional I/O bus. InfiniBand provides several protocols for applications, such as SDP, SRP, and IPoIB. In this paper, we explain the architecture of SDP (Socket Direct Protocol) and its features in a channel-based I/O network, and we present a performance evaluation of SDP against the current network protocol. Our experimental results show that SDP outperforms TCP/IP. (An SDP client is sketched after the PDF link.)

  • PDF
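
SDP's appeal is that it preserves the Berkeley sockets API while replacing the transport underneath with InfiniBand, so a TCP client becomes an SDP client by changing only the address family. The sketch below follows the OFED convention of AF_INET_SDP = 27 (an assumption to verify against your stack); the server address and port are placeholders. OFED's libsdp could alternatively redirect unmodified TCP binaries via LD_PRELOAD.

```c
/* Sketch: a client socket over SDP. The only change from a TCP client is
 * the address family; the read/write path is ordinary sockets code.
 * AF_INET_SDP = 27 follows the OFED convention (assumed constant). */
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define AF_INET_SDP 27   /* OFED's SDP address family (assumption) */

int main(void)
{
    struct sockaddr_in srv;
    int fd = socket(AF_INET_SDP, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket (is the SDP module loaded?)"); return 1; }

    memset(&srv, 0, sizeof srv);
    srv.sin_family = AF_INET_SDP;          /* otherwise a normal sockaddr_in */
    srv.sin_port   = htons(5000);          /* placeholder port */
    inet_pton(AF_INET, "192.168.0.10", &srv.sin_addr); /* placeholder IP */

    if (connect(fd, (struct sockaddr *)&srv, sizeof srv) < 0) {
        perror("connect");
        return 1;
    }
    const char msg[] = "hello over SDP\n";
    write(fd, msg, sizeof msg - 1);        /* same I/O path as TCP sockets */
    close(fd);
    return 0;
}
```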

Implementation of Non-Transparent Bridge Interface-based Ring Topology (Non-Transparent Bridge 기반 링 네트워크 통신 방식 구현)

  • Kim, Sang-Gyum;Lee, Yang-Woo;Lim, Seung-Ho
    • Proceedings of the Korea Information Processing Society Conference / 2018.10a / pp.67-70 / 2018
  • In HPC, where many computing nodes are connected by a very-high-performance interconnect to form a cluster system, technologies such as InfiniBand and Ethernet are widely used as the interconnect. For PCIe-based direct node-to-node connection there is NTB (Non-Transparent Bridge) interconnection, but NTB fundamentally shares separated memory between just two nodes. This paper covers the design and implementation of a data sharing method using NTB communication over a switchless network of multiple hosts directly connected through multiple NTB ports. A ring network is built from the two NTB ports attached to each host, and a data sharing scheme using NTB interconnection is implemented on that ring. A PCIe-based switchless network thus yields a cost-effective HPC interconnect. (The ring-routing decision is sketched below.)
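
With two NTB ports per host, each node can forward either clockwise or counter-clockwise around the ring, and the natural policy is to take the shorter arc. A generic sketch of that routing decision follows (an illustration, not the paper's implementation).

```c
/* Sketch: choosing the shorter direction around an N-node ring -- the
 * forwarding decision each host in a switchless NTB ring makes before
 * sending through its left or right port. */
#include <stdio.h>

/* Returns +1 to send via the "right" port, -1 via the "left" port. */
static int ring_direction(int self, int dest, int n)
{
    int cw = (dest - self + n) % n;   /* hops going clockwise */
    return (cw <= n - cw) ? +1 : -1;  /* pick the shorter arc */
}

int main(void)
{
    int n = 6;                        /* example ring size */
    for (int d = 0; d < n; d++)
        printf("node 0 -> node %d: %s\n", d,
               ring_direction(0, d, n) > 0 ? "right" : "left");
    return 0;
}
```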

High Performance Network Benchmarking Based on Linux Cluster System (리눅스 클러스터 시스템 기반의 고성능 연결 망 벤치마크)

  • Hong, In-Pyo
    • Proceedings of the Korean Information Science Society Conference / 2007.10b / pp.468-471 / 2007
  • Cluster systems, which connect multiple computing systems over a network, are widely used in many engineering fields because of their excellent price-performance ratio, and recently also in enterprise workloads that demand high scalability and stability. Since the performance of a cluster system depends heavily on its network system, research on high-performance networks continues. It is not easy, however, to determine how a new network system affects the performance of real application software. Benchmarking is therefore essential when selecting a new high-performance network for a cluster that must serve several application and computational software packages in an enterprise environment. In this study, we benchmark recently released high-performance network systems (InfiniBand, Myrinet) and compare and analyze inter-node data communication efficiency using benchmark tools. (A bandwidth sweep is sketched after the PDF link.)

  • PDF
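
Interconnect benchmarks of this kind typically sweep message sizes between two nodes and report sustained bandwidth per size. A minimal MPI sketch of such a sweep follows (illustrative, not one of the benchmark tools used in the paper).

```c
/* Sketch: a two-node bandwidth sweep. Rank 0 streams messages of
 * increasing size to rank 1 and prints MB/s per size. Run with exactly
 * 2 ranks, one per node. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int reps = 100;
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char *buf = malloc(1 << 22);            /* up to 4 MiB messages */

    for (size_t len = 1 << 10; len <= (1 << 22); len <<= 1) {
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        if (rank == 0) {
            for (int i = 0; i < reps; i++)
                MPI_Send(buf, (int)len, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, 1, MPI_CHAR, 1, 1, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);    /* ack: all data delivered */
        } else if (rank == 1) {
            for (int i = 0; i < reps; i++)
                MPI_Recv(buf, (int)len, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            MPI_Send(buf, 1, MPI_CHAR, 0, 1, MPI_COMM_WORLD);
        }
        double t1 = MPI_Wtime();
        if (rank == 0)
            printf("%8zu B : %8.1f MB/s\n", len,
                   reps * len / (t1 - t0) / 1e6);
    }
    free(buf);
    MPI_Finalize();
    return 0;
}
```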