Title/Summary/Keyword: Graphics Processing Units


Considerations in Designing a Satellite Mission Scheduling Technique Using Graphics Processing Units (Graphics Processing Units 를 활용한 위성 임무스케줄링 기법 고안 시 고려사항)

  • Lee, Su-Jeon;Lee, Byeong-Seon;Kim, Jae-Hun;Jo, Yeong-Min
    • Bulletin of the Korean Space Science Society / 2011.04a / pp.24.2-24.2 / 2011
  • The Chollian satellite was launched on June 27, 2010, and has been successfully carrying out its In-Orbit Test (IOT). To control the Chollian satellite from the ground, ETRI developed the satellite control system, which KARI currently operates. The mission planning system, a part of the satellite control system, combines mission requests for meteorological/ocean imaging, satellite maneuver requests, and various other events into a conflict-free mission schedule, which requires a complex scheduling technique. The Chollian satellite's mission scheduling technique is based on CPU computation; this paper describes the considerations that arise when applying a mission scheduling technique based on Graphics Processing Units (GPU). It also analyzes the advantages and disadvantages of CPU-based and GPU-based mission scheduling techniques.

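The abstract above describes merging imaging requests, maneuver requests, and other events into a conflict-free schedule. As a rough illustration of the kind of data-parallel work a GPU could take over in such a scheduler, the hedged CUDA sketch below checks every pair of candidate time windows for overlap, one pair per thread; the `Window` struct, the `markConflicts` kernel, and the toy data are hypothetical and not taken from the paper.

```cuda
// Minimal sketch (not the paper's algorithm): each GPU thread tests one pair of
// candidate mission time windows for overlap, producing a conflict matrix that a
// CPU-side scheduler could then resolve. Struct and kernel names are hypothetical.
#include <cstdio>
#include <cuda_runtime.h>

struct Window { float start, end; };            // one imaging/maneuver request

__global__ void markConflicts(const Window* w, int n, int* conflict) {
    int i = blockIdx.y * blockDim.y + threadIdx.y;
    int j = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n && j < n && i < j) {
        bool overlap = w[i].start < w[j].end && w[j].start < w[i].end;
        conflict[i * n + j] = overlap ? 1 : 0;  // upper-triangular conflict matrix
    }
}

int main() {
    const int n = 4;
    Window h[n] = {{0, 10}, {5, 15}, {20, 30}, {28, 35}};
    Window* dW; int* dC;
    cudaMalloc(&dW, n * sizeof(Window));
    cudaMalloc(&dC, n * n * sizeof(int));
    cudaMemset(dC, 0, n * n * sizeof(int));
    cudaMemcpy(dW, h, n * sizeof(Window), cudaMemcpyHostToDevice);

    dim3 block(16, 16), grid((n + 15) / 16, (n + 15) / 16);
    markConflicts<<<grid, block>>>(dW, n, dC);

    int hC[n * n];
    cudaMemcpy(hC, dC, sizeof(hC), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i)
        for (int j = i + 1; j < n; ++j)
            if (hC[i * n + j]) printf("windows %d and %d conflict\n", i, j);
    cudaFree(dW); cudaFree(dC);
    return 0;
}
```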

Implementation of IQ/IDCT in H.264/AVC Decoder Using GP-GPU (GP-GPU를 이용한 H.264/AVC 디코더의 IQ/IDCT구현)

  • Jeong, Jun-Mo;Lee, Kwang-Yeob
    • Journal of IKEEE / v.14 no.2 / pp.76-81 / 2010
  • The need for dedicated hardware continues to decrease as mobile CPU performance increases, but there is a limit to a mobile CPU's performance. GP-GPU (General-Purpose computing on Graphics Processing Units) can improve performance without adding other dedicated hardware. This paper presents an implementation of the Inverse Quantization, Inverse DCT, and Color Space Conversion modules of an H.264/AVC decoder using a GP-GPU for mobile environments. The proposed architecture improves performance by approximately 40% when all of its features are used.
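To make the per-block parallelism concrete, here is a hedged CUDA sketch in which one thread dequantizes and inverse-transforms one 4x4 H.264 residual block. The kernel name, the flat `scale` factor standing in for the QP-dependent dequantization tables, and the DC-only test block are illustrative assumptions, not the paper's implementation.

```cuda
// Illustrative sketch only (not the paper's implementation): one CUDA thread
// dequantizes and inverse-transforms one 4x4 residual block, exposing the
// per-block parallelism a GP-GPU offers for H.264/AVC IQ/IDCT. The uniform
// dequant scale is a deliberate simplification of the real QP-dependent tables.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void iqIdct4x4(const short* coeff, short* residual, int numBlocks, int scale) {
    int b = blockIdx.x * blockDim.x + threadIdx.x;
    if (b >= numBlocks) return;
    int w[16];
    for (int i = 0; i < 16; ++i)                       // inverse quantization (simplified)
        w[i] = coeff[b * 16 + i] * scale;

    // H.264 4x4 inverse integer transform: rows, then columns
    for (int r = 0; r < 4; ++r) {
        int* p = &w[r * 4];
        int e0 = p[0] + p[2], e1 = p[0] - p[2];
        int e2 = (p[1] >> 1) - p[3], e3 = p[1] + (p[3] >> 1);
        p[0] = e0 + e3; p[1] = e1 + e2; p[2] = e1 - e2; p[3] = e0 - e3;
    }
    for (int c = 0; c < 4; ++c) {
        int e0 = w[c] + w[8 + c],             e1 = w[c] - w[8 + c];
        int e2 = (w[4 + c] >> 1) - w[12 + c], e3 = w[4 + c] + (w[12 + c] >> 1);
        residual[b * 16 + c]      = (short)((e0 + e3 + 32) >> 6);
        residual[b * 16 + 4 + c]  = (short)((e1 + e2 + 32) >> 6);
        residual[b * 16 + 8 + c]  = (short)((e1 - e2 + 32) >> 6);
        residual[b * 16 + 12 + c] = (short)((e0 - e3 + 32) >> 6);
    }
}

int main() {
    const int nb = 1;
    short h[16] = {64, 0, 0, 0,  0, 0, 0, 0,  0, 0, 0, 0,  0, 0, 0, 0};  // DC-only block
    short *dIn, *dOut, out[16];
    cudaMalloc(&dIn, sizeof(h)); cudaMalloc(&dOut, sizeof(h));
    cudaMemcpy(dIn, h, sizeof(h), cudaMemcpyHostToDevice);
    iqIdct4x4<<<1, 32>>>(dIn, dOut, nb, 16);
    cudaMemcpy(out, dOut, sizeof(out), cudaMemcpyDeviceToHost);
    printf("reconstructed residual[0] = %d\n", out[0]);
    cudaFree(dIn); cudaFree(dOut);
    return 0;
}
```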

High-Speed Implementations of Block Ciphers on Graphics Processing Units Using CUDA Library (GPU용 연산 라이브러리 CUDA를 이용한 블록암호 고속 구현)

  • Yeom, Yong-Jin;Cho, Yong-Kuk
    • Journal of the Korea Institute of Information Security & Cryptology / v.18 no.3 / pp.23-32 / 2008
  • The computing power of graphics processing units (GPU) has already surpassed that of the CPU, and the gap between them is widening. Thus, research on GPGPU, which applies GPUs to general-purpose computation, has become popular and shows great success, especially in the field of parallel data processing. Since the implementation of cryptographic algorithms on GPUs was started by Cook et al. in 2005, improved results using graphics libraries such as OpenGL and DirectX have been published. In this paper, we present techniques and results for implementing block ciphers using the CUDA library announced by NVIDIA in 2007. We also discuss a general method for converting CPU source code of block ciphers to GPU code. On an NVIDIA 8800GTX GPU, the resulting speeds of the block ciphers AES, ARIA, and DES are 4.5 Gbps, 7.0 Gbps, and 2.8 Gbps, respectively, which are faster than those on a CPU.
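The conversion the abstract mentions, from a CPU loop over cipher blocks to GPU code, usually maps one thread to one 128-bit block. The sketch below shows that mapping with a toy XOR/rotate round function as a stand-in; it is not AES, ARIA, or DES, and the kernel and key names are invented for illustration.

```cuda
// Sketch of the general conversion pattern the abstract describes: the CPU loop
// "for each 16-byte block, encrypt it" becomes "one CUDA thread per block".
// The round function here is a toy XOR/rotate placeholder, NOT AES/ARIA/DES;
// a real port would substitute the actual round function and key schedule.
#include <cstdio>
#include <cstdint>
#include <cuda_runtime.h>

__host__ __device__ inline uint32_t rotl(uint32_t x, int r) { return (x << r) | (x >> (32 - r)); }

// Toy 4-round transform on a 128-bit block held as four 32-bit words.
__global__ void encryptBlocks(uint32_t* data, int numBlocks, const uint32_t* roundKeys) {
    int b = blockIdx.x * blockDim.x + threadIdx.x;   // one thread <-> one cipher block
    if (b >= numBlocks) return;
    uint32_t* blk = &data[b * 4];
    for (int r = 0; r < 4; ++r)
        for (int w = 0; w < 4; ++w)
            blk[w] = rotl(blk[w] ^ roundKeys[r * 4 + w], 7 + w);
}

int main() {
    const int numBlocks = 1 << 16;                   // 1 MiB of plaintext
    const int words = numBlocks * 4;
    uint32_t* h = new uint32_t[words];
    for (int i = 0; i < words; ++i) h[i] = i;

    uint32_t hKeys[16];                              // toy round keys
    for (int i = 0; i < 16; ++i) hKeys[i] = 0x9e3779b9u * (i + 1);

    uint32_t *dData, *dKeys;
    cudaMalloc(&dData, words * sizeof(uint32_t));
    cudaMalloc(&dKeys, sizeof(hKeys));
    cudaMemcpy(dData, h, words * sizeof(uint32_t), cudaMemcpyHostToDevice);
    cudaMemcpy(dKeys, hKeys, sizeof(hKeys), cudaMemcpyHostToDevice);

    int threads = 256;
    encryptBlocks<<<(numBlocks + threads - 1) / threads, threads>>>(dData, numBlocks, dKeys);
    cudaMemcpy(h, dData, words * sizeof(uint32_t), cudaMemcpyDeviceToHost);
    printf("first ciphertext word: 0x%08x\n", h[0]);

    cudaFree(dData); cudaFree(dKeys); delete[] h;
    return 0;
}
```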

Design of a Floating Point Unit for 3D Graphics Geometry Engine (3D 그래픽 Geometry Engine을 위한 부동소수점 연산기의 설계)

  • Kim, Myeong Hwm;Oh, Min Seok;Lee, Kwang Yeob;Kim, Won Jong;Cho, Han Jin
    • Journal of the Institute of Electronics Engineers of Korea SD / v.42 no.10 s.340 / pp.55-64 / 2005
  • In this paper, we designed floating-point units to accelerate geometry processing for real-time 3D graphics. The designed floating-point units support the IEEE-754 single-precision format, and on a Xilinx Virtex-2 device we confirmed operation at 100 MHz for the floating-point add/multiply unit, 120 MHz for the floating-point NR (Newton-Raphson) inverse-division unit, 200 MHz for the floating-point power unit, and 120 MHz for the floating-point inverse-square-root unit. Using these floating-point units, we also designed a geometry processor and verified its 3D graphics data processing.
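For readers unfamiliar with the "NR inverse division" mentioned above, the following CUDA sketch shows the Newton-Raphson reciprocal iteration in software form: the divisor is normalized to [0.5, 1), a linear initial estimate is refined three times, and division becomes a reciprocal multiply. This is a behavioral illustration only (the `recipNR` name and test values are assumptions), not the paper's hardware design.

```cuda
// Behavioral sketch of a Newton-Raphson reciprocal: x_{n+1} = x_n * (2 - d * x_n).
// Assumes d > 0; a hardware divider iterates the same recurrence in logic.
#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

__host__ __device__ float recipNR(float d) {
    int e;
    float m = frexpf(d, &e);                        // d = m * 2^e, m in [0.5, 1)
    float x = 48.0f / 17.0f - (32.0f / 17.0f) * m;  // standard linear initial estimate
    for (int i = 0; i < 3; ++i)                     // quadratic convergence: 3 steps suffice
        x = x * (2.0f - m * x);
    return ldexpf(x, -e);                           // 1/d = (1/m) * 2^-e
}

__global__ void divide(const float* a, const float* b, float* q, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) q[i] = a[i] * recipNR(b[i]);         // a / b via reciprocal multiply
}

int main() {
    const int n = 4;
    float ha[n] = {1.0f, 10.0f, 3.14159f, 100.0f};
    float hb[n] = {3.0f, 4.0f, 2.71828f, 7.0f};
    float hq[n];
    float *da, *db, *dq;
    cudaMalloc(&da, n * sizeof(float)); cudaMalloc(&db, n * sizeof(float)); cudaMalloc(&dq, n * sizeof(float));
    cudaMemcpy(da, ha, sizeof(ha), cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, sizeof(hb), cudaMemcpyHostToDevice);
    divide<<<1, 32>>>(da, db, dq, n);
    cudaMemcpy(hq, dq, sizeof(hq), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i) printf("%g / %g = %g\n", ha[i], hb[i], hq[i]);
    cudaFree(da); cudaFree(db); cudaFree(dq);
    return 0;
}
```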

Use of High-performance Graphics Processing Units for Power System Demand Forecasting

  • He, Ting;Meng, Ke;Dong, Zhao-Yang;Oh, Yong-Taek;Xu, Yan
    • Journal of Electrical Engineering and Technology / v.5 no.3 / pp.363-370 / 2010
  • Load forecasting has always been essential to the operation and planning of power systems in deregulated electricity markets. Various methods have been proposed for load forecasting, and the neural network is one of the most widely accepted and used techniques. However, to obtain more accurate results, more information is needed as input variables, resulting in huge computational costs in the learning process. In this paper, to reduce training time in multi-layer perceptron-based short-term load forecasting, a graphics processing unit (GPU)-based computing method is introduced. The proposed approach is tested using the Korea electricity market historical demand data set. Results show that GPU-based computing greatly reduces computational costs.
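The training cost the abstract targets is dominated by dense layer evaluations, which map naturally to the GPU. The hedged sketch below computes one hidden layer of a multi-layer perceptron with one thread per neuron; the layer sizes, weights, and `denseSigmoid` name are illustrative and unrelated to the paper's forecasting model or the Korean market data set.

```cuda
// Minimal sketch of the operation that dominates MLP training and that a GPU
// parallelizes well: one thread per hidden neuron computes its weighted sum
// over the input features and applies a sigmoid. All values are illustrative.
#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

__global__ void denseSigmoid(const float* w, const float* bias, const float* x,
                             float* y, int inDim, int outDim) {
    int j = blockIdx.x * blockDim.x + threadIdx.x;   // one thread <-> one neuron
    if (j >= outDim) return;
    float s = bias[j];
    for (int i = 0; i < inDim; ++i)
        s += w[j * inDim + i] * x[i];                // row-major weight matrix
    y[j] = 1.0f / (1.0f + expf(-s));                 // sigmoid activation
}

int main() {
    const int inDim = 24, outDim = 32;               // e.g. 24 hourly loads -> 32 hidden units
    float hw[outDim * inDim], hb[outDim], hx[inDim], hy[outDim];
    for (int i = 0; i < outDim * inDim; ++i) hw[i] = 0.01f * (i % 7);
    for (int j = 0; j < outDim; ++j) hb[j] = 0.1f;
    for (int i = 0; i < inDim; ++i) hx[i] = 0.5f;

    float *dw, *db, *dx, *dy;
    cudaMalloc(&dw, sizeof(hw)); cudaMalloc(&db, sizeof(hb));
    cudaMalloc(&dx, sizeof(hx)); cudaMalloc(&dy, sizeof(hy));
    cudaMemcpy(dw, hw, sizeof(hw), cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, sizeof(hb), cudaMemcpyHostToDevice);
    cudaMemcpy(dx, hx, sizeof(hx), cudaMemcpyHostToDevice);

    denseSigmoid<<<1, 64>>>(dw, db, dx, dy, inDim, outDim);
    cudaMemcpy(hy, dy, sizeof(hy), cudaMemcpyDeviceToHost);
    printf("hidden unit 0 activation: %f\n", hy[0]);
    cudaFree(dw); cudaFree(db); cudaFree(dx); cudaFree(dy);
    return 0;
}
```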

Parallel Approximate String Matching with k-Mismatches for Multiple Fixed-Length Patterns in DNA Sequences on Graphics Processing Units (GPU을 이용한 다중 고정 길이 패턴을 갖는 DNA 시퀀스에 대한 k-Mismatches에 의한 근사적 병열 스트링 매칭)

  • Ho, ThienLuan;Kim, HyunJin;Oh, SeungRohk
    • The Transactions of The Korean Institute of Electrical Engineers / v.66 no.6 / pp.955-961 / 2017
  • In this paper, we propose a parallel approximate string matching algorithm with k-mismatches for multiple fixed-length patterns (PMASM) in DNA sequences. PMASM is developed from parallel single-pattern approximate string matching algorithms to efficiently calculate the Hamming distances for multiple patterns of a fixed length. In the preprocessing phase of PMASM, all target patterns are binary encoded and stored in a look-up memory. With each input character from the input string, the Hamming distances between a substring and all patterns can be updated at the same time based on the binary encoding information in the look-up memory. Moreover, PMASM adopts graphics processing units (GPUs) to process the data computations in parallel. This paper presents three PMASM implementation methods on GPUs: the thread PMASM, block-thread PMASM, and shared-mem PMASM methods. The shared-mem PMASM method illustrates how to make effective use of the GPU's parallel capacity, and it also exploits special features of the CUDA (Compute Unified Device Architecture) memory structure to optimize performance. In experiments with DNA sequences, the proposed PMASM on a GPU is 385, 77, and 64 times faster than the traditional naive algorithm, the shift-add algorithm, and the single-thread PMASM implementation on a CPU, respectively. With the same NVIDIA GPU model, the performance of the proposed approach is enhanced by up to 44% and 21% compared with the naive and shift-add algorithms.
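As a baseline for the parallelism PMASM exploits, the sketch below assigns one CUDA thread to each alignment position and counts character mismatches against a single fixed-length pattern, keeping positions with at most k mismatches. It deliberately omits the paper's binary encoding, look-up memory, and shared-memory optimizations; the `kMismatch` kernel and the toy DNA strings are assumptions.

```cuda
// Hedged sketch: per-thread k-mismatch check of one alignment position against
// one fixed-length pattern. The paper's PMASM adds binary pattern encoding,
// a look-up memory, and shared-memory tuning that this plain version omits.
#include <cstdio>
#include <cstring>
#include <cuda_runtime.h>

__global__ void kMismatch(const char* text, int textLen, const char* pattern,
                          int patLen, int k, int* hits) {
    int pos = blockIdx.x * blockDim.x + threadIdx.x;  // alignment position in the text
    if (pos > textLen - patLen) return;
    int mismatches = 0;
    for (int i = 0; i < patLen && mismatches <= k; ++i)
        mismatches += (text[pos + i] != pattern[i]);
    hits[pos] = (mismatches <= k) ? 1 : 0;
}

int main() {
    const char* text = "ACGTACGTTACGAACGT";
    const char* pattern = "ACGA";
    const int k = 1;
    int textLen = (int)strlen(text), patLen = (int)strlen(pattern);
    int positions = textLen - patLen + 1;

    char *dText, *dPat; int *dHits;
    cudaMalloc(&dText, textLen); cudaMalloc(&dPat, patLen);
    cudaMalloc(&dHits, positions * sizeof(int));
    cudaMemset(dHits, 0, positions * sizeof(int));
    cudaMemcpy(dText, text, textLen, cudaMemcpyHostToDevice);
    cudaMemcpy(dPat, pattern, patLen, cudaMemcpyHostToDevice);

    kMismatch<<<1, 64>>>(dText, textLen, dPat, patLen, k, dHits);

    int* hHits = new int[positions];
    cudaMemcpy(hHits, dHits, positions * sizeof(int), cudaMemcpyDeviceToHost);
    for (int p = 0; p < positions; ++p)
        if (hHits[p]) printf("match with <= %d mismatches at position %d\n", k, p);
    cudaFree(dText); cudaFree(dPat); cudaFree(dHits); delete[] hHits;
    return 0;
}
```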

Performance Improvement in HTTP Packet Extraction from Network Traffic using GPGPU (GPGPU 를 이용한 네트워크 트래픽에서의 HTTP 패킷 추출 성능 향상)

  • Han, SangWoon;Kim, Hyogon
    • Proceedings of the Korea Information Processing Society Conference / 2011.11a / pp.718-721 / 2011
  • Real-time analysis of HTTP (Hypertext Transfer Protocol) traffic, used to detect or block DDoS (Distributed Denial-of-Service) attacks against web services or the inflow of harmful traffic, is an essential feature of almost every network traffic security solution. However, as the amount of HTTP traffic to be measured in real time grows exponentially, the performance burden of analyzing HTTP traffic packet by packet keeps increasing. Because this burden can no longer be resolved at the application level, most attempts to address it rely on expensive software accelerators or hardware-dependent dedicated appliances. In this paper, drawing on GPGPU (General-Purpose computation on Graphics Processing Units) research that seeks to use the GPUs (Graphics Processing Units) on the graphics cards already installed in most PCs for general-purpose work, we attempt to improve HTTP packet extraction performance from network traffic at the application level using NVIDIA's CUDA (Compute Unified Device Architecture). For the HTTP packet extraction operation alone, the GPU achieved more than 10 times the performance of the CPU.
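A minimal sketch of the per-packet classification step, assuming payloads have already been captured and copied to the GPU in fixed-size slots: each thread tests whether one payload begins with an HTTP request method or status line. The slot size, kernel name, and sample payloads are invented for illustration; the paper's capture path and batching are not shown.

```cuda
// Illustrative sketch (not the paper's system): one CUDA thread classifies one
// packet by testing whether its payload starts with an HTTP method or status line.
// Real traffic capture, variable lengths, and batching are omitted.
#include <cstdio>
#include <cstring>
#include <cuda_runtime.h>

#define SLOT 64   // bytes of payload kept per packet in this sketch

__device__ bool startsWith(const char* p, const char* lit, int n) {
    for (int i = 0; i < n; ++i)
        if (p[i] != lit[i]) return false;
    return true;
}

__global__ void classifyHttp(const char* payloads, int numPackets, int* isHttp) {
    int p = blockIdx.x * blockDim.x + threadIdx.x;   // one thread <-> one packet
    if (p >= numPackets) return;
    const char* d = &payloads[p * SLOT];
    isHttp[p] = startsWith(d, "GET ", 4) || startsWith(d, "POST ", 5) ||
                startsWith(d, "HTTP/", 5);
}

int main() {
    const int n = 3;
    char h[n * SLOT] = {0};
    strcpy(&h[0 * SLOT], "GET /index.html HTTP/1.1");
    strcpy(&h[1 * SLOT], "\x16\x03\x01");            // non-HTTP payload (TLS hello)
    strcpy(&h[2 * SLOT], "HTTP/1.1 200 OK");

    char* dPay; int* dFlag; int flags[n];
    cudaMalloc(&dPay, sizeof(h)); cudaMalloc(&dFlag, n * sizeof(int));
    cudaMemcpy(dPay, h, sizeof(h), cudaMemcpyHostToDevice);
    classifyHttp<<<1, 32>>>(dPay, n, dFlag);
    cudaMemcpy(flags, dFlag, sizeof(flags), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i) printf("packet %d: %s\n", i, flags[i] ? "HTTP" : "other");
    cudaFree(dPay); cudaFree(dFlag);
    return 0;
}
```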

Enhancement of H.264/AVC Encoding Speed and Reduction of CPU Load through Parallel Programming Based on CUDA (CUDA 기반의 병렬 프로그래밍을 통한 H.264/AVC 부호화 속도 향상 및 CPU 부하 경감)

  • Jang, Eun-Been;Ha, Yun-Su
    • Journal of Advanced Marine Engineering and Technology / v.34 no.6 / pp.858-863 / 2010
  • In order to enhance encoding speed in video encoding with H.264/AVC, reducing the time spent on motion estimation, which takes a large portion of the total processing time, is very important. Using the graphics processing unit (GPU) as a coprocessor to assist the central processing unit (CPU) in computing massive data is one way to reduce the processing time. In this paper, we present an efficient block-level parallel algorithm for motion estimation (ME) on a Compute Unified Device Architecture (CUDA) platform developed for general-purpose computation on GPUs. Experiments are carried out to verify the effectiveness of the proposed algorithm.
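Block-level parallel motion estimation, as described above, typically lets each thread evaluate one candidate motion vector. The hedged sketch below computes the sum of absolute differences (SAD) for every candidate in a small search window around one 8x8 block and lets the host pick the minimum; frame size, block size, search range, and the synthetic test frames are assumptions, not the paper's configuration.

```cuda
// Sketch of block-level parallel ME: one thread per candidate motion vector
// computes the SAD of an 8x8 block against the reference frame; the host then
// selects the minimum. Sizes and test data are illustrative only.
#include <cstdio>
#include <climits>
#include <cuda_runtime.h>

#define W 64
#define H 64
#define B 8        // block size
#define R 4        // search range: candidates in [-R, R]

__global__ void sadSearch(const unsigned char* cur, const unsigned char* ref,
                          int bx, int by, int* sad) {
    int dx = (int)threadIdx.x - R;                   // candidate motion vector
    int dy = (int)threadIdx.y - R;
    int rx = bx + dx, ry = by + dy;
    int idx = threadIdx.y * (2 * R + 1) + threadIdx.x;
    if (rx < 0 || ry < 0 || rx + B > W || ry + B > H) { sad[idx] = INT_MAX; return; }
    int s = 0;
    for (int y = 0; y < B; ++y)
        for (int x = 0; x < B; ++x) {
            int d = (int)cur[(by + y) * W + bx + x] - (int)ref[(ry + y) * W + rx + x];
            s += d < 0 ? -d : d;
        }
    sad[idx] = s;
}

int main() {
    unsigned char cur[W * H], ref[W * H];
    for (int i = 0; i < W * H; ++i) ref[i] = (unsigned char)(i % 251);
    // current frame = reference shifted right by 2 pixels, so the best MV should be (-2, 0)
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            cur[y * W + x] = ref[y * W + ((x + W - 2) % W)];

    unsigned char *dCur, *dRef; int *dSad;
    const int cand = (2 * R + 1) * (2 * R + 1);
    cudaMalloc(&dCur, W * H); cudaMalloc(&dRef, W * H); cudaMalloc(&dSad, cand * sizeof(int));
    cudaMemcpy(dCur, cur, W * H, cudaMemcpyHostToDevice);
    cudaMemcpy(dRef, ref, W * H, cudaMemcpyHostToDevice);

    dim3 block(2 * R + 1, 2 * R + 1);
    sadSearch<<<1, block>>>(dCur, dRef, 16, 16, dSad);

    int sad[cand];
    cudaMemcpy(sad, dSad, sizeof(sad), cudaMemcpyDeviceToHost);
    int best = 0;
    for (int i = 1; i < cand; ++i) if (sad[i] < sad[best]) best = i;
    printf("best MV: (%d, %d), SAD = %d\n",
           best % (2 * R + 1) - R, best / (2 * R + 1) - R, sad[best]);
    cudaFree(dCur); cudaFree(dRef); cudaFree(dSad);
    return 0;
}
```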

Implementation of Lattice Reduction-aided Detector using GPU on SDR System (SDR 시스템에서 GPU를 사용한 Lattice Reduction-aided 검출기 구현)

  • Kim, Tae Hyun;Leem, Hyun Seok;Choi, Seung Won
    • Journal of Korea Society of Digital Industry and Information Management / v.7 no.3 / pp.55-61 / 2011
  • This paper presents an implementation of a Lattice Reduction (LR)-aided detector for a Multiple-Input Multiple-Output (MIMO) system using a Graphics Processing Unit (GPU). The GPU is a parallel processor with a large number of Arithmetic Logic Units (ALUs); thus, it can minimize the operation time of the LR algorithm through parallelization using multiple threads on the GPU. Through the implemented LR-aided detector, we verify that it operates much faster than a Maximum Likelihood (ML) detector. The implemented LR-aided detector has been applied to a WiMAX system to show the feasibility of real-time processing. In addition, we demonstrate that the processing time can be reduced at the cost of a 3 dB SNR loss by limiting the repeating loop in the Lenstra-Lenstra-Lovasz (LLL) algorithm, which is frequently used in LR-aided detectors.
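The abstract's idea of bounding the repeating loop in the reduction algorithm can be shown on the two-dimensional special case (Lagrange/Gauss reduction) rather than the full LLL algorithm: the swap/size-reduce loop below stops after `maxIter` passes, trading reduction quality, and hence detection SNR, for bounded processing time. This host-side sketch and its example basis are illustrative assumptions, not the paper's detector.

```cuda
// Host-side sketch of a capped lattice-reduction loop (2D Lagrange/Gauss case,
// standing in for the full LLL algorithm used by LR-aided detectors).
#include <cstdio>
#include <cmath>

struct Vec2 { double x, y; };

static double dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

// Reduce a 2D lattice basis {b1, b2}; stop early after maxIter passes.
void reduceBasis(Vec2& b1, Vec2& b2, int maxIter) {
    for (int it = 0; it < maxIter; ++it) {
        if (dot(b1, b1) > dot(b2, b2)) { Vec2 t = b1; b1 = b2; b2 = t; }  // keep b1 shortest
        double mu = dot(b1, b2) / dot(b1, b1);
        long r = lround(mu);                       // integer size-reduction coefficient
        if (r == 0) break;                         // basis already reduced
        b2.x -= r * b1.x;                          // b2 <- b2 - round(mu) * b1
        b2.y -= r * b1.y;
    }
}

int main() {
    Vec2 b1 = {201.0, 37.0}, b2 = {1648.0, 297.0};  // an ill-conditioned example basis
    reduceBasis(b1, b2, 10);                        // cap the repeating loop at 10 passes
    printf("reduced basis: (%g, %g), (%g, %g)\n", b1.x, b1.y, b2.x, b2.y);
    return 0;
}
```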

A Fully Programmable Shader Processor for Low Power Mobile Devices (저전력 모바일 장치를 위한 완전 프로그램 가능형 쉐이더 프로세서)

  • Jeong, Hyung-Ki;Lee, Joo-Sock;Park, Tae-Ryong;Lee, Kwang-Yeob
    • Journal of IKEEE / v.13 no.2 / pp.253-259 / 2009
  • In this paper, we propose a novel architecture for a general graphics shader processor without dedicated hardware. Mobile devices now require a high-performance graphics processor as well as small size and low power. The proposed shader processor is a GP-GPU (General-Purpose computing on Graphics Processing Units) that executes the whole OpenGL ES 2.0 graphics pipeline using shader instructions. Thanks to this fully programmable capability, it does not require separate dedicated hardware such as a rasterizer, so a fully programmable 3D graphics shader processor can remove much of the dedicated graphics hardware. The chip size of the designed shader processor is about 60% smaller than those of previous processors.

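To illustrate what rasterization done in shader instructions rather than fixed-function hardware can look like, the sketch below is a generic edge-function rasterizer: each CUDA thread evaluates the three edge functions of one triangle at one pixel center and writes a coverage flag. It is not the paper's shader instruction set; the framebuffer size, vertex order, and kernel name are assumptions.

```cuda
// Generic edge-function rasterizer: rasterization expressed as ordinary
// programmable code, one thread per pixel. Purely illustrative.
#include <cstdio>
#include <cuda_runtime.h>

#define W 16
#define H 16

__device__ float edge(float ax, float ay, float bx, float by, float px, float py) {
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);   // signed area test
}

__global__ void rasterize(unsigned char* fb,
                          float x0, float y0, float x1, float y1, float x2, float y2) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= W || y >= H) return;
    float px = x + 0.5f, py = y + 0.5f;                     // pixel center
    float e0 = edge(x0, y0, x1, y1, px, py);
    float e1 = edge(x1, y1, x2, y2, px, py);
    float e2 = edge(x2, y2, x0, y0, px, py);
    bool inside = (e0 >= 0 && e1 >= 0 && e2 >= 0);          // vertex order chosen so interior is positive
    fb[y * W + x] = inside ? 1 : 0;
}

int main() {
    unsigned char fb[W * H], *dFb;
    cudaMalloc(&dFb, W * H);
    dim3 block(8, 8), grid(W / 8, H / 8);
    rasterize<<<grid, block>>>(dFb, 1.0f, 1.0f, 14.0f, 2.0f, 7.0f, 13.0f);
    cudaMemcpy(fb, dFb, W * H, cudaMemcpyDeviceToHost);
    for (int y = 0; y < H; ++y) {                           // ASCII dump of the coverage mask
        for (int x = 0; x < W; ++x) putchar(fb[y * W + x] ? '#' : '.');
        putchar('\n');
    }
    cudaFree(dFb);
    return 0;
}
```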