• Title/Abstract/Keyword: Graphics Processing Units (GPUs)


New Two-Level L1 Data Cache Bypassing Technique for High Performance GPUs

  • Kim, Gwang Bok;Kim, Cheol Hong
    • Journal of Information Processing Systems
    • /
    • Vol. 17, No. 1
    • /
    • pp.51-62
    • /
    • 2021
  • On-chip caches of graphics processing units (GPUs) have contributed to improved GPU performance by reducing long memory access latency. However, cache efficiency remains low even though recent GPUs have considerably mitigated the bottleneck problem of the L1 data cache. Although the cache miss rate is a reasonable metric for cache efficiency, it is not necessarily proportional to GPU performance. In this study, we introduce a second key determinant, based on the observation that the miss rate alone is not an accurate predictor of the performance gains from the L1 data cache. The proposed technique estimates the benefit of the cache by measuring the balance between cache efficiency and throughput. The throughput of the cache is predicted from the warp occupancy information in the warp pool, and the warp occupancy is then used in a second bypass phase when a workload shows an ambiguous miss rate. In our proposed architecture, the L1 data cache is turned off for a long period when the warp occupancy is not high. Our two-level bypassing technique can be applied to recent GPU models and improves performance by 6% on average compared to an architecture without bypassing. Moreover, it outperforms conventional bottleneck-based bypassing techniques.
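
The two-phase decision described in this abstract can be pictured as a small piece of control logic. The sketch below only illustrates that idea: the WarpPoolStats fields, the threshold values, and the function name are assumptions made for the example, not figures or code from the paper. In hardware this logic would sit next to the L1D controller; it is written here as ordinary host-side code purely for readability.

```cuda
// Minimal sketch of the two-level bypass decision (illustrative only).
struct WarpPoolStats {
    float l1dMissRate;    // observed L1D miss rate over a sampling window
    float warpOccupancy;  // fraction of warp-pool slots holding ready warps
};

enum class L1DMode { Use, Bypass };

L1DMode decideL1DMode(const WarpPoolStats& s) {
    const float kLowMiss      = 0.4f;  // clearly cache-friendly (assumed value)
    const float kHighMiss     = 0.8f;  // clearly cache-unfriendly (assumed value)
    const float kMinOccupancy = 0.5f;  // throughput proxy threshold (assumed value)

    // Level 1: unambiguous miss-rate cases are decided directly.
    if (s.l1dMissRate < kLowMiss)  return L1DMode::Use;
    if (s.l1dMissRate > kHighMiss) return L1DMode::Bypass;

    // Level 2: ambiguous miss rate -> fall back to warp occupancy,
    // which the paper uses as a proxy for cache throughput.
    return (s.warpOccupancy >= kMinOccupancy) ? L1DMode::Use : L1DMode::Bypass;
}
```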

GPGPU Task Management Technique to Mitigate Performance Degradation of Virtual Machines due to GPU Operation in Cloud Environments

  • 강지훈;길준민
    • KIPS Transactions on Computer and Communication Systems
    • /
    • Vol. 9, No. 9
    • /
    • pp.189-196
    • /
    • 2020
  • Recently, GPU cloud computing, in which GPU (Graphics Processing Unit) devices capable of high-performance computation are provided to virtual machines, has been widely used in cloud environments. A GPU device allocated to a virtual machine can perform computations faster than a CPU through massive parallel processing, which yields significant benefits when high-performance computing services from various fields are operated in the cloud. Although the GPU device greatly improves virtual machine performance, the virtual machine scheduler, which operates based on CPU usage time, does not take GPU usage time into account, so the performance of other virtual machines is affected. This paper verifies and analyzes the performance degradation of other virtual machines caused by a virtual machine performing GPGPU (General-Purpose computing on Graphics Processing Units) work in a direct pass-through GPU virtualization environment, which is widely used when allocating GPUs to virtual machines in the cloud, and proposes a GPGPU task management technique for virtual machines to resolve this problem.

Warp-Based Load/Store Reordering to Improve GPU Time Predictability

  • Huangfu, Yijie;Zhang, Wei
    • Journal of Computing Science and Engineering
    • /
    • Vol. 11, No. 2
    • /
    • pp.58-68
    • /
    • 2017
  • While graphics processing units (GPUs) can be used to improve the performance of real-time embedded applications that require high throughput, it is challenging to estimate the worst-case execution time (WCET) of GPU programs, because modern GPUs are designed to improve average-case performance rather than time predictability. In this paper, a reordering framework is proposed to regulate access to the GPU data cache, which improves the accuracy of GPU L1 data cache miss-rate estimation at low performance overhead. With the improved miss-rate estimation, tighter WCET estimates can be achieved for GPU programs.

Workload Characteristics-based L1 Data Cache Switching-off Mechanism for GPUs

  • Do, Thuan Cong;Kim, Gwang Bok;Kim, Cheol Hong
    • Journal of the Korea Society of Computer and Information
    • /
    • Vol. 23, No. 10
    • /
    • pp.1-9
    • /
    • 2018
  • Modern graphics processing units (GPUs) have become one of the most attractive platforms for exploiting high thread-level parallelism with the support of new programming tools such as CUDA and OpenCL. Recent GPUs have adopted a cache hierarchy to support irregular memory access patterns; however, the L1 data cache (L1D) exhibits poor efficiency in the GPU. This paper shows that the L1D does not always benefit applications in terms of GPU performance and energy efficiency; for many applications, using the L1D actually harms performance. Our proposed technique exploits the characteristics of the currently executing application to predict the performance impact of the L1D on the GPU and then decides whether to keep using the cache for that application. Our experimental results show that the proposed technique improves GPU performance by 9.4% and saves up to 52.1% of the power consumption in the L1D.
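
The paper proposes switching the L1D off at the hardware level, but a rough software analogue exists on current NVIDIA GPUs: global loads can be issued with the "cache global" (.cg) policy so that they are cached in L2 only and skip L1. The copy kernel below is a generic illustration of that mechanism, not the authors' technique; the __ldcg() intrinsic and the -Xptxas -dlcm=cg compiler flag are standard CUDA facilities, while the kernel itself is made up for this example.

```cuda
// Illustrative copy kernel whose global loads bypass the L1 data cache.
// __ldcg() issues an ld.global.cg load (cached in L2 only); whole-program
// bypassing can also be requested at compile time with -Xptxas -dlcm=cg.
__global__ void copyBypassL1(const float* __restrict__ in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        out[i] = __ldcg(in + i);  // load skips L1, so it does not pollute it
    }
}
```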

Trends in AI Processor Technology

  • 이미영;정재훈;이주현;한진호;권영수
    • Electronics and Telecommunications Trends
    • /
    • Vol. 35, No. 3
    • /
    • pp.66-75
    • /
    • 2020
  • As the increasing expectations of practical AI (Artificial Intelligence) services make AI algorithms more complicated, efficient processors for executing these algorithms are required. To meet this requirement, processors optimized for parallel processing, such as GPUs (Graphics Processing Units), have been widely employed. However, the GPU has a generalized structure intended for a wide range of applications, so it is not optimized for AI algorithms. Therefore, research on AI processors optimized for AI algorithm processing has been actively conducted. This paper briefly introduces AI processors, especially for inference acceleration, developed by the Electronics and Telecommunications Research Institute (ETRI), South Korea, and by other global vendors for mobile and server platforms.

Accelerating Numerical Analysis of Reynolds Equation Using Graphic Processing Units

  • 명훈주;강지훈;오광진
    • Tribology and Lubricants
    • /
    • Vol. 28, No. 4
    • /
    • pp.160-166
    • /
    • 2012
  • This paper presents a Reynolds equation solver for hydrostatic gas bearings, implemented to run on graphics processing units (GPUs). The original analysis code for the central processing unit (CPU) was modified for the GPU by using the compute unified device architecture (CUDA). The red-black Gauss-Seidel (RBGS) algorithm was employed instead of the original Gauss-Seidel algorithm for the iterative pressure solver, because the latter has data dependencies between neighboring nodes. The implemented GPU program was tested on an NVIDIA GTX580 system and compared to the original CPU program on an AMD Llano system. In the iterative pressure calculation, the GPU program was 20-100 times faster than the original CPU code. A comparison of wall-clock times, including all pre- and post-processing code, showed that the GPU code still delivered 4-12 times faster performance than the CPU code for our target problem.
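
The kernel below is a minimal sketch of one red-black Gauss-Seidel sweep, shown for a 2D Poisson-like problem with a five-point stencil for brevity; the paper solves the Reynolds equation for hydrostatic gas bearings, so the stencil coefficients, uniform spacing h, and fixed boundaries here are simplifying assumptions. Because every point of one color depends only on points of the other color, all updates within a launch are independent and the sweep is race-free on the GPU.

```cuda
// One red-black Gauss-Seidel sweep over a regular nx*ny grid (illustrative).
// Launch twice per iteration: once with color = 0 (red), once with color = 1 (black).
__global__ void rbgsSweep(float* p, const float* rhs, int nx, int ny,
                          float h2, int color) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // x index
    int j = blockIdx.y * blockDim.y + threadIdx.y;   // y index
    if (i <= 0 || j <= 0 || i >= nx - 1 || j >= ny - 1) return;  // interior only
    if (((i + j) & 1) != color) return;              // update only one color per launch

    int idx = j * nx + i;
    // Five-point stencil update; all neighbors belong to the other color,
    // so no thread reads a value being written in the same launch.
    p[idx] = 0.25f * (p[idx - 1] + p[idx + 1] + p[idx - nx] + p[idx + nx]
                      - h2 * rhs[idx]);
}
```

Each solver iteration would launch this kernel once per color and repeat until the pressure field converges.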

Integer-Pel Motion Estimation for HEVC on Compute Unified Device Architecture (CUDA)

  • Lee, Dongkyu;Sim, Donggyu;Oh, Seoung-Jun
    • IEIE Transactions on Smart Processing and Computing
    • /
    • Vol. 3, No. 6
    • /
    • pp.397-403
    • /
    • 2014
  • A new video compression standard called High Efficiency Video Coding (HEVC) has recently been released. HEVC provides higher coding performance than previous standards, but at the cost of a significant increase in encoding complexity, particularly in motion estimation (ME). At the same time, the computing capabilities of Graphics Processing Units (GPUs) have become more powerful. This paper proposes a parallel integer-pel ME (IME) algorithm for HEVC on the GPU using the Compute Unified Device Architecture (CUDA). In the proposed IME, concurrent parallel reduction (CPR) is introduced. CPR performs several parallel reduction (PR) operations concurrently to solve two problems of conventional PR: low thread utilization and high thread synchronization latency. The proposed encoder reduces the portion of IME in the encoder to almost zero with a 2.3% increase in bitrate, and the proposed IME is up to 172.6 times faster than the IME in the HEVC reference model.
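
Parallel reduction is the building block that the CPR scheme reorganizes. The kernel below is a generic minimum-cost reduction in which each thread block handles the candidate SAD costs of one prediction unit, so many reductions run concurrently across blocks; the fixed block size of 256 threads, the cost-array layout, and the kernel name are assumptions made for this sketch, not the authors' implementation.

```cuda
// Generic block-level minimum reduction over motion-search candidate costs.
// One block per prediction unit (PU); launch as minCostReduce<<<numPU, 256>>>.
__global__ void minCostReduce(const unsigned int* cost,   // [numPU][numCand], row-major
                              int numCand,
                              unsigned int* bestCost) {   // [numPU]
    __shared__ unsigned int s[256];                       // assumes blockDim.x == 256
    int pu  = blockIdx.x;
    int tid = threadIdx.x;

    // Each thread first folds several candidates into a single value.
    unsigned int best = 0xFFFFFFFFu;
    for (int c = tid; c < numCand; c += blockDim.x) {
        unsigned int v = cost[pu * numCand + c];
        if (v < best) best = v;
    }
    s[tid] = best;
    __syncthreads();

    // Classic tree reduction in shared memory.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride && s[tid + stride] < s[tid]) s[tid] = s[tid + stride];
        __syncthreads();
    }
    if (tid == 0) bestCost[pu] = s[0];
}
```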

From WiFi to WiMAX: Efficient GPU-based Parameterized Transceiver across Different OFDM Protocols

  • Li, Rongchun;Dou, Yong;Zhou, Jie;Li, Baofeng;Xu, Jinbo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 7, No. 8
    • /
    • pp.1911-1932
    • /
    • 2013
  • Orthogonal frequency-division multiplexing (OFDM) has become a popular modulation scheme for wireless protocols because of its spectral efficiency and robustness against multipath interference. Although the components of various OFDM protocols are functionally similar, they remain distinct because of the characteristics of their target environments. Recently, graphics processing units (GPUs) have been used to accelerate signal processing in the physical layer (PHY) because of their great computational power, high development efficiency, and flexibility. In this paper, we describe the implementation of parameterized baseband modules on GPUs for two different OFDM protocols, namely 802.11a and 802.16. First, we introduce the modules in the modulator/demodulator parts of the transmitter and receiver and analyze the computational complexity of each module. We then describe the integration of the GPU-based baseband modules of the two protocols using the parameterized method. The GPU-based implementations are described to explain how baseband processing is accelerated to achieve real-time throughput. Finally, the performance of each signal processing module is evaluated and analyzed. The experiments show that the GPU-based 802.11a and 802.16 PHYs meet the real-time requirement and demonstrate good bit error ratio (BER) performance. The performance comparison indicates that our GPU-based modules offer better flexibility and throughput than existing implementations.
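
As a flavor of what a "parameterized" baseband module can look like on the GPU, the sketch below maps interleaved bits to QPSK symbols with a single kernel that both protocols could share; only the total symbol count differs between 802.11a (48 data subcarriers per OFDM symbol) and the 256-FFT 802.16 profile (192 data subcarriers). The bit layout, Gray mapping, and scaling are generic textbook choices, not details taken from the paper.

```cuda
#include <cuComplex.h>

// Generic QPSK mapper: two input bits per output symbol, unit average energy.
// The same kernel serves either protocol; only numSymbols changes per frame.
__global__ void qpskMap(const unsigned char* bits,   // one bit per byte, 2*numSymbols entries
                        cuFloatComplex* symbols,
                        int numSymbols) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numSymbols) return;

    const float a = 0.70710678f;              // 1/sqrt(2)
    float re = bits[2 * i]     ? a : -a;      // first bit selects the I sign
    float im = bits[2 * i + 1] ? a : -a;      // second bit selects the Q sign
    symbols[i] = make_cuFloatComplex(re, im);
}
```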

A Review of Computational Phantoms for Quality Assurance in Radiology and Radiotherapy in the Deep-Learning Era

  • Peng, Zhao;Gao, Ning;Wu, Bingzhi;Chen, Zhi;Xu, X. George
    • Journal of Radiation Protection and Research
    • /
    • Vol. 47, No. 3
    • /
    • pp.111-133
    • /
    • 2022
  • The exciting advances in the "modeling of the digital human", in the form of computational phantoms for radiation dose calculations, are closely tied to the recent surge of deep learning. The advent of deep learning or artificial intelligence (AI) technology involving convolutional neural networks has brought an unprecedented level of innovation to the field of organ segmentation. In addition, graphics processing units (GPUs) are utilized as boosters for both real-time Monte Carlo simulations and AI-based image segmentation applications. These advancements make it feasible to create three-dimensional (3D) geometric details of the human anatomy from tomographic imaging and to perform Monte Carlo radiation transport simulations using increasingly fast and inexpensive computers. This review first introduces the history of three types of computational human phantoms: stylized medical internal radiation dosimetry (MIRD) phantoms, voxelized tomographic phantoms, and boundary representation (BREP) deformable phantoms. Then, the development of a person-specific phantom is demonstrated by introducing AI-based organ autosegmentation technology. Next, a new development in GPU-based Monte Carlo radiation dose calculations is introduced. Examples of applying computational phantoms and a new Monte Carlo code named ARCHER (Accelerated Radiation-transport Computations in Heterogeneous EnviRonments) to problems in radiation protection, imaging, and radiotherapy are presented from research projects performed by students at Rensselaer Polytechnic Institute (RPI) and the University of Science and Technology of China (USTC). Finally, this review discusses challenges and future research opportunities. We find that, owing to the latest computer hardware and AI technology, computational human body models are moving closer to real human anatomy for accurate radiation dose calculations.

CUDA-based Parallel Bi-Conjugate Gradient Matrix Solver for BioFET Simulation

  • 박태정;우준명;김창헌
    • Journal of the Institute of Electronics Engineers of Korea (CI)
    • /
    • Vol. 48, No. 1
    • /
    • pp.90-100
    • /
    • 2011
  • This study proposes a parallel Bi-CG (Bi-Conjugate Gradient) method for solving linear systems on modern graphics processors (GPUs), which allow a large-scale parallel processing environment to be built at low cost, targeting the computationally heavy Bio-FET simulation. The proposed parallel method solves, in parallel, the Poisson equation, whose solution concentrates a large amount of computation and thus lengthens the overall simulation in various fields including semiconductor device simulation, computational fluid dynamics (CFD), and heat transfer simulation. As a result, for the 3D FDM problem space used in the tests of this paper, computation was accelerated by a factor of more than 30 compared to a single CPU. The implementation was carried out in NVIDIA's CUDA (Compute Unified Device Architecture) environment, which enables general-purpose parallel programming on Tesla-architecture GPUs. Unlike previous work, which was mostly performed with 32-bit (single-precision) floating-point numbers, this study uses 64-bit (double-precision) floating point, improving the convergence of the Bi-CG solver. In particular, since CUDA is relatively easy to code in but difficult to optimize, this paper also discusses optimization directions for the proposed Bi-CG solver.
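
The heaviest operation inside such a Bi-CG iteration is applying the discretized Poisson operator. The double-precision kernel below sketches a seven-point finite-difference Laplacian on a regular 3D grid, which is the kind of matrix-vector product the solver performs every iteration; the grid layout, unit spacing, and interior-only update are illustrative assumptions, and a complete Bi-CG loop would add dot-product and AXPY steps (for example via cuBLAS) around it.

```cuda
// Double-precision 7-point Laplacian y = A*x on an nx*ny*nz grid (interior
// points only; boundary values are assumed to be handled separately).
__global__ void applyLaplacian3D(const double* __restrict__ x, double* y,
                                 int nx, int ny, int nz) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    int k = blockIdx.z * blockDim.z + threadIdx.z;
    if (i <= 0 || j <= 0 || k <= 0 || i >= nx - 1 || j >= ny - 1 || k >= nz - 1)
        return;

    long plane = (long)nx * ny;
    long idx   = (long)k * plane + (long)j * nx + i;
    y[idx] = 6.0 * x[idx]
           - x[idx - 1]     - x[idx + 1]       // x-direction neighbors
           - x[idx - nx]    - x[idx + nx]      // y-direction neighbors
           - x[idx - plane] - x[idx + plane];  // z-direction neighbors
}
```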