• Title/Abstract/Keyword: Graphics Processing Units (GPUs)

Analysis on Memory Characteristics of Graphics Processing Units for Designing Memory System of General-Purpose Computing on Graphics Processing Units

  • 최홍준; 김철홍
    • 스마트미디어저널 / Vol. 3, No. 1 / pp. 33-38 / 2014
  • Due to problems such as increasing power consumption, it has become difficult to keep improving computing system performance with microprocessors alone. In this situation, general-purpose computing on graphics processing units (GPGPU), which offloads general-purpose work traditionally handled by the CPU onto GPUs specialized for massively parallel computation, has drawn attention as a way to improve system performance. However, because graphics applications and general-purpose applications have very different characteristics, a GPU running general-purpose applications often cannot exploit its abundant computational resources due to many constraints. In general, general-purpose applications issue far more memory requests than graphics applications, so memory system design is critical for using GPGPU technology efficiently. In particular, off-chip memory requests with long access latencies impose a large performance overhead. Therefore, if a multi-level cache hierarchy that reduces the number of off-chip memory accesses can be used efficiently, GPU performance will clearly improve significantly. In this paper, we quantitatively analyze GPU performance across multi-level cache configurations using various benchmark programs.
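
The cache-sensitivity analysis described above is typically carried out with microbenchmarks. Below is a minimal CUDA sketch, not from the paper, that reads an array at a configurable stride so that small strides are served by the L1/L2 caches while large strides force off-chip DRAM accesses; all sizes and names are illustrative.

```cuda
// Minimal microbenchmark sketch: reads an array at a configurable stride.
// Small strides keep spatial locality (cache hits); large strides defeat
// it and force off-chip DRAM traffic.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void stridedRead(const float* in, float* out, int n, int stride) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int idx = (i * stride) % n;   // wrap to stay in bounds
    out[i] = in[idx] * 2.0f;
}

int main() {
    const int n = 1 << 24;        // 16M floats (64 MiB)
    float *in, *out;
    cudaMalloc(&in, n * sizeof(float));
    cudaMalloc(&out, n * sizeof(float));

    for (int stride = 1; stride <= 64; stride *= 2) {
        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);
        cudaEventRecord(start);
        stridedRead<<<n / 256, 256>>>(in, out, n, stride);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);
        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("stride %2d: %.3f ms\n", stride, ms);  // latency grows with stride
        cudaEventDestroy(start);
        cudaEventDestroy(stop);
    }
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```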

Parallel Processing Algorithm of JPEG2000 Using GPU

  • 이동하; 조시원; 이동욱
    • 전기학회논문지 / Vol. 57, No. 6 / pp. 1075-1080 / 2008
  • Most modern computers and game consoles are equipped with powerful graphics processing units (GPUs) to accelerate graphics operations. However, since the graphics engines in these GPUs are specially designed for graphics operations, their computing power cannot easily be used for more general, non-graphics operations. In this paper, we study the GPU's graphics engine in order to accelerate image processing. Specifically, we implement a JPEG2000 encoding/decoding framework that uses both OpenMP and the GPU. Initial experimental results show that significant speed-up can be achieved by utilizing GPU power.
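
As a rough illustration of the GPU side of such a pipeline, the sketch below implements one lifting pass of the reversible (5,3) integer wavelet used in JPEG2000, one row per thread block. This is an assumed simplification in CUDA, not the paper's implementation; tiling, full boundary handling, and the OpenMP-side entropy coding are omitted.

```cuda
// One forward (5,3) lifting pass over image rows, in place: after the
// kernel, odd positions hold high-pass and even positions hold low-pass
// coefficients (still interleaved). Assumes an even row width.
__global__ void dwt53Rows(int* img, int width, int height) {
    int row = blockIdx.x;                 // one thread block per row
    if (row >= height) return;
    int* x = img + row * width;
    int half = width / 2;

    // Predict step: odd samples become high-pass coefficients.
    // Reads touch only even indices, writes only odd, so no conflicts.
    for (int i = threadIdx.x; i < half; i += blockDim.x) {
        int left  = x[2 * i];
        int right = (2 * i + 2 < width) ? x[2 * i + 2] : left; // symmetric edge
        x[2 * i + 1] -= (left + right) >> 1;
    }
    __syncthreads();

    // Update step: even samples become low-pass coefficients.
    for (int i = threadIdx.x; i < half; i += blockDim.x) {
        int prev = (2 * i - 1 >= 0) ? x[2 * i - 1] : x[1];     // symmetric edge
        int next = x[2 * i + 1];
        x[2 * i] += (prev + next + 2) >> 2;
    }
}
```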

High-Performance Korean Morphological Analyzer Using the MapReduce Framework on the GPU

  • Cho, Shi-Won; Lee, Dong-Wook
    • Journal of Electrical Engineering and Technology / Vol. 6, No. 4 / pp. 573-579 / 2011
  • To meet the scalability and performance requirements of data analyses, which often involve voluminous data, efficient parallel or concurrent algorithms and frameworks are essential. We present a high-performance Korean morphological analyzer that employs the MapReduce framework on the graphics processing unit (GPU). MapReduce is a programming framework introduced by Google to aid the development of web search applications on large numbers of central processing units (CPUs). GPUs are designed as special-purpose co-processors, and their programming interfaces are typically formulated for graphics applications. Compared to CPUs, GPUs have greater computational power and memory bandwidth; however, they are more difficult to program because of their architecture. The performance of the Korean morphological analyzer using the MapReduce framework on the GPU is evaluated against a CPU-based model. The proposed analyzer shows promising, scalable performance for distributed computing on the GPU.
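
For intuition, here is a minimal CUDA sketch of the MapReduce pattern, not the paper's analyzer: the map step derives a key from each input token and the reduce step aggregates counts per key with atomics. The token-length keys are a hypothetical stand-in for morpheme categories.

```cuda
// MapReduce pattern on a GPU, condensed: map(token) -> key, then
// reduce-by-key via atomicAdd into a small histogram of counts.
#include <cstdio>
#include <cuda_runtime.h>

const int NUM_KEYS = 16;

__global__ void mapReduceCount(const int* tokenLens, int n, int* counts) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    int key = tokenLens[i] % NUM_KEYS;   // map: derive a key per token
    atomicAdd(&counts[key], 1);          // reduce: merge values by key
}

int main() {
    const int n = 1024;
    int h_lens[n];
    for (int i = 0; i < n; ++i) h_lens[i] = (i * 7) % 23 + 1; // fake token lengths
    int *d_lens, *d_counts;
    cudaMalloc(&d_lens, n * sizeof(int));
    cudaMalloc(&d_counts, NUM_KEYS * sizeof(int));
    cudaMemcpy(d_lens, h_lens, n * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemset(d_counts, 0, NUM_KEYS * sizeof(int));
    mapReduceCount<<<(n + 255) / 256, 256>>>(d_lens, n, d_counts);
    int h_counts[NUM_KEYS];
    cudaMemcpy(h_counts, d_counts, sizeof(h_counts), cudaMemcpyDeviceToHost);
    for (int k = 0; k < NUM_KEYS; ++k) printf("key %2d: %d\n", k, h_counts[k]);
    cudaFree(d_lens);
    cudaFree(d_counts);
    return 0;
}
```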

Parallel Implementation of the Recursive Least Square for Hyperspectral Image Compression on GPUs

  • Li, Changguo
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 11, No. 7 / pp. 3543-3557 / 2017
  • Compression is a very important technique for remotely sensed hyperspectral images. Lossless compression based on the recursive least square (RLS) method, which eliminates the redundancy of hyperspectral images using both spatial and spectral correlations, is an extremely powerful tool for this purpose, but its relatively high computational complexity limits its application in time-critical scenarios. To improve the computational efficiency of the algorithm, we optimize its serial version and develop a new parallel implementation on graphics processing units (GPUs). First, an optimized recursive least square method based on an optimal number of prediction bands is introduced. We then use this approach as a case study to illustrate the advantages and potential challenges of applying GPU parallel-optimization principles to the problem at hand. The proposed parallel method properly exploits the low-level architecture of GPUs and is implemented using the compute unified device architecture (CUDA). The GPU parallel implementation is compared with the serial implementation on the CPU. Experimental results indicate remarkable acceleration factors and real-time performance, while retaining exactly the same bit rate as the serial version of the compressor.
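
The pixel-parallel structure described above might look like the following CUDA sketch, which predicts each pixel of band b from the same pixel in the previous P bands and emits the residual to be entropy-coded. The adaptive RLS weight update, the heart of the actual algorithm, is omitted, and all names are illustrative.

```cuda
// One thread per pixel: linear spectral prediction from the P previous
// bands, then the integer residual (actual minus rounded prediction),
// which a decoder can invert losslessly. Weight adaptation omitted.
__global__ void predictBand(const short* cube, short* residual,
                            const float* weights, int pixelsPerBand,
                            int band, int P) {
    int p = blockIdx.x * blockDim.x + threadIdx.x;
    if (p >= pixelsPerBand) return;
    float pred = 0.0f;
    // Accumulate the prediction over the previous spectral bands.
    for (int k = 1; k <= P && band - k >= 0; ++k)
        pred += weights[k - 1] * cube[(band - k) * pixelsPerBand + p];
    // Residual = actual - rounded prediction.
    residual[band * pixelsPerBand + p] =
        cube[band * pixelsPerBand + p] - (short)lrintf(pred);
}
```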

A Study of the Performance Prediction Models of Mobile Graphics Processing Units

  • Kim, Cheong Ghil
    • 반도체디스플레이기술학회지 / Vol. 18, No. 1 / pp. 123-128 / 2019
  • Mobile services are on the verge of full commercialization with the arrival of 5G mobile communication. A first goal could be to preempt the 5G market through realistic media services based on VR (Virtual Reality) and AR (Augmented Reality) technologies, which users can experience most easily. This movement builds on the advanced development of smart devices and the high-quality graphics processing power of mobile application processors. Accordingly, mobile GPUs are growing in importance, and a central issue is a model for predicting the power and performance needed to run high-quality mobile content smoothly. In many cases, mobile GPU performance has been characterized in terms of power consumption, using dynamic voltage and frequency scaling (DVFS) and throttling functions for power and thermal management. This paper surveys several studies of mobile GPU performance prediction models that take user-friendly approaches, unlike conventional power-centric prediction models.

An Efficient Block Cipher Implementation on Many-Core Graphics Processing Units

  • Lee, Sang-Pil; Kim, Deok-Ho; Yi, Jae-Young; Ro, Won-Woo
    • Journal of Information Processing Systems / Vol. 8, No. 1 / pp. 159-174 / 2012
  • This paper presents a study on a high-performance design for a block cipher algorithm implemented on modern many-core graphics processing units (GPUs). Recent advances in VLSI technology make it feasible to fabricate multiple processing cores on a single chip, enabling general-purpose computation on a GPU (GPGPU). The GPGPU strategy offers significant performance improvements for general-purpose computation and can support a broad variety of applications, including cryptography. We propose an efficient implementation of the encryption/decryption operations of a block cipher algorithm, SEED, on off-the-shelf NVIDIA many-core graphics processors. In a thorough experiment, we achieved performance capable of supporting a network speed of up to 9.5 Gbps on an NVIDIA GTX285 system (which has 240 processing cores). Our implementation provides up to 4.75 times higher encoding and decoding throughput than an Intel 8-core system.
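
A block cipher maps onto a GPU by giving each thread one independent 128-bit block (ECB-style), which is how such a cipher spreads across thousands of CUDA threads. The sketch below shows this dispatch with a generic Feistel network; the round function is a deliberate placeholder, not SEED's real F-function, which uses S-boxes, a G-function, and a 16-round key schedule.

```cuda
// Block-parallel encryption sketch: each thread owns one 128-bit block,
// stored as a (left, right) pair of 64-bit words, and runs a generic
// Feistel network over it. roundF is a placeholder mixer, NOT SEED.
#include <cstdint>

__device__ uint64_t roundF(uint64_t half, uint64_t subkey) {
    // Placeholder mixing; stands in for the cipher's real round function.
    uint64_t t = half ^ subkey;
    return (t << 13 | t >> 51) + (t ^ 0x9E3779B97F4A7C15ULL);
}

__global__ void feistelEncrypt(uint64_t* left, uint64_t* right,
                               const uint64_t* subkeys, int rounds, int nBlocks) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nBlocks) return;
    uint64_t L = left[i], R = right[i];
    for (int r = 0; r < rounds; ++r) {   // standard Feistel structure
        uint64_t tmp = L ^ roundF(R, subkeys[r]);
        L = R;
        R = tmp;
    }
    left[i] = L;
    right[i] = R;
}
```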

Parallel Approximate String Matching with k-Mismatches for Multiple Fixed-Length Patterns in DNA Sequences on Graphics Processing Units

  • 호 티엔 루안; 김현진; 오승록
    • 전기학회논문지 / Vol. 66, No. 6 / pp. 955-961 / 2017
  • In this paper, we propose a parallel approximate string matching algorithm with k-mismatches for multiple fixed-length patterns (PMASM) in DNA sequences. PMASM extends parallel single-pattern approximate string matching algorithms to efficiently calculate Hamming distances for multiple patterns of a fixed length. In the preprocessing phase of PMASM, all target patterns are binary encoded and stored in a look-up memory. For each character read from the input string, the Hamming distances between the current substring and all patterns can be updated simultaneously based on the binary encoding information in the look-up memory. Moreover, PMASM uses graphics processing units (GPUs) to perform the computations in parallel. This paper presents three PMASM implementation methods on GPUs: the thread, block-thread, and shared-mem PMASM methods. The shared-mem PMASM method shows how to make effective use of the GPU's parallel capacity, and it also exploits features of the CUDA (Compute Unified Device Architecture) memory hierarchy to optimize performance. In experiments with DNA sequences, the proposed PMASM on the GPU is 385, 77, and 64 times faster than the traditional naive algorithm, the shift-add algorithm, and the single-thread PMASM implementation on the CPU, respectively. With the same NVIDIA GPU model, the performance of the proposed approach is up to 44% and 21% higher than that of the naive and shift-add algorithms, respectively.
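
A condensed CUDA sketch of this matching style is shown below: DNA bases are 2-bit encoded, each thread checks one (position, pattern) pair, and mismatches are counted with XOR plus popcount. The paper's look-up-memory encoding and shared-memory staging are simplified away, and the kernel assumes patterns of length m <= 32.

```cuda
// k-mismatch check, one thread per (text position, pattern) pair.
// Patterns are pre-packed 2 bits per base into one 64-bit word each.
#include <cstdint>

__global__ void kMismatch(const uint8_t* text, int textLen,
                          const uint64_t* pats2b, int numPats,
                          int m, int k, int* hits) {
    int pos = blockIdx.x * blockDim.x + threadIdx.x; // alignment position
    int pat = blockIdx.y;                            // pattern index
    if (pos > textLen - m || pat >= numPats) return;

    // Pack the substring text[pos .. pos+m) as 2-bit codes (A,C,G,T -> 0..3).
    uint64_t sub = 0;
    for (int j = 0; j < m; ++j) {
        uint8_t c = text[pos + j];
        uint64_t code = (c == 'A') ? 0 : (c == 'C') ? 1 : (c == 'G') ? 2 : 3;
        sub |= code << (2 * j);
    }
    // XOR leaves nonzero 2-bit groups exactly at mismatching bases;
    // fold each group to its low bit and count with popcount.
    uint64_t diff = sub ^ pats2b[pat];
    uint64_t mask = (diff | (diff >> 1)) & 0x5555555555555555ULL;
    int mismatches = __popcll(mask);
    if (mismatches <= k)
        atomicAdd(&hits[pat], 1);  // count approximate matches per pattern
}
```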

Development of Real-Time Image Processing System Using GPU

  • 오재홍; 강훈; 이자용
    • 제어로봇시스템학회논문지 / Vol. 11, No. 5 / pp. 393-397 / 2005
  • When a real-time image processing application is implemented on a general-purpose computer, the CPU (Central Processing Unit) is usually heavily loaded, and in many cases the CPU alone cannot meet the real-time requirement at all. Most modern computers are equipped with powerful Graphics Processing Units (GPUs) to accelerate graphics operations, and the computing power of GPUs is growing faster than that of CPUs. If the powerful GPU is applied to general operations beyond pure graphics, processing time can be reduced. In this study, we present techniques that apply the GPU to general operations such as image processing procedures. Our experimental results show that significant speed-up can be achieved by using the GPU.
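
The one-thread-per-pixel mapping at the core of GPU image processing can be sketched as below. Note that this 2005 work predates CUDA and used programmable graphics shaders; the CUDA version here only illustrates the same idea.

```cuda
// One thread per pixel: a 3x3 box blur, the archetypal pixel-wise
// image operation offloaded from the CPU to the GPU.
__global__ void boxBlur3x3(const unsigned char* in, unsigned char* out,
                           int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;
    int sum = 0, cnt = 0;
    // Average the 3x3 neighborhood, clamped at the image border.
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            int nx = x + dx, ny = y + dy;
            if (nx >= 0 && nx < width && ny >= 0 && ny < height) {
                sum += in[ny * width + nx];
                ++cnt;
            }
        }
    out[y * width + x] = (unsigned char)(sum / cnt);
}
```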

Parallel Range Query Processing on R-tree with Graphics Processing Units

  • 유보선; 김현덕; 최원익; 권동섭
    • 한국멀티미디어학회논문지 / Vol. 14, No. 5 / pp. 669-680 / 2011
  • The R-tree is the most widely used index structure in database systems and is very efficient for managing multi-dimensional data. However, as the volume of data that database systems must handle has grown, range query processing on the conventional R-tree has slowed down, mainly because of disk access latency. To solve this problem, many studies have employed buffers or executed queries in parallel over multiple disks and processors. As part of this line of work, parallelization techniques using graphics processing units (GPUs) have recently been studied. GPU-based parallelization can improve execution time through faster computation and fewer disk accesses, but it incurs overheads from memory transfers between the GPU and the CPU and from GPU memory access latency. In this paper, to overcome these overheads and apply GPUs effectively, we propose a technique that parallelizes range queries by using the GPU as a buffer. The buffering algorithm reduces the number of memory transfers and increases the amount of memory that can be accessed concurrently, minimizing memory access latency. In experiments comparing the proposed technique with a conventional index, we observed performance improvements of up to about five times.
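
The data-parallel step of such a scheme can be sketched as follows: leaf entries are laid out flat in GPU memory (playing the role of the buffer) and each thread tests one entry's MBR against the query rectangle. Tree traversal and the paper's buffer-management algorithm are omitted; the flat layout is an assumption for illustration.

```cuda
// Parallel range-query filter: one thread per leaf entry, standard
// rectangle-overlap test, matches compacted into an output array.
struct MBR { float xmin, ymin, xmax, ymax; };

__global__ void rangeQuery(const MBR* entries, int n, MBR q,
                           int* resultIds, int* resultCount) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    const MBR e = entries[i];
    // Two rectangles overlap iff they overlap on both axes.
    bool overlap = e.xmin <= q.xmax && q.xmin <= e.xmax &&
                   e.ymin <= q.ymax && q.ymin <= e.ymax;
    if (overlap) {
        int slot = atomicAdd(resultCount, 1); // claim an output slot
        resultIds[slot] = i;
    }
}
```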

Performance Improvement of Web Service Based on GPGPU and Task Queue

  • Kim, Changsu; Kim, Kyunghwan; Jung, Hoekyung
    • Journal of Information and Communication Convergence Engineering / Vol. 19, No. 4 / pp. 257-262 / 2021
  • Providing web services to users has become expensive in recent times. For better web services, web servers are equipped with high-performance technology, and tools such as general-purpose computing on graphics processing units (GPGPU), artificial intelligence, high-performance computing, and three-dimensional simulation are widely used to deliver a good service experience. However, graphics processing units (GPUs) are geared toward high-speed computation and have limited applicability to general workloads. In this study, we developed a task queue on the GPU to improve the performance of a web service on a multiprocessor and studied how to receive and process user requests in bulk. We propose a GPGPU-based task queue that processes more user requests than a CPU-thread-based GPGPU approach, handling roughly 136% to 233% more GPU threads on the task queue, and we show that the proposed method is effective for web services.
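
A minimal CUDA sketch of a GPU-side task queue of this general kind is shown below: requests are staged in an array, and a fixed pool of GPU threads claims them with an atomic counter until the queue drains. The request handler is a hypothetical stand-in; the paper's actual queue design is not reproduced here.

```cuda
// GPU task-queue sketch: a fixed thread pool pulls tasks off a shared
// counter (work stealing by atomicAdd) until all requests are handled.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void drainQueue(const int* requests, int numRequests,
                           int* nextTask, int* responses) {
    while (true) {
        int t = atomicAdd(nextTask, 1);  // claim the next request
        if (t >= numRequests) return;    // queue empty: thread exits
        // Stand-in "handler": compute something per request.
        responses[t] = requests[t] * 2 + 1;
    }
}

int main() {
    const int n = 10000;
    int* h_req = new int[n];
    for (int i = 0; i < n; ++i) h_req[i] = i;
    int *d_req, *d_next, *d_resp;
    cudaMalloc(&d_req, n * sizeof(int));
    cudaMalloc(&d_resp, n * sizeof(int));
    cudaMalloc(&d_next, sizeof(int));
    cudaMemcpy(d_req, h_req, n * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemset(d_next, 0, sizeof(int));
    // Launch far fewer threads than requests; each thread loops over tasks.
    drainQueue<<<8, 128>>>(d_req, n, d_next, d_resp);
    cudaDeviceSynchronize();
    printf("processed %d requests\n", n);
    cudaFree(d_req); cudaFree(d_resp); cudaFree(d_next);
    delete[] h_req;
    return 0;
}
```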