• Title/Summary/Keyword: General-purpose graphics processing unit


A Study of Depth Estimate using GPGPU in Monocular Image (GPGPU를 이용한 단일 영상에서의 깊이 추정에 관한 연구)

  • Yoo, Tae Hoon;Lee, Gang Seong;Park, Young Soo;Lee, Jong Yong;Lee, Sang Hun
    • Journal of Digital Convergence / v.11 no.12 / pp.345-352 / 2013
  • In this paper, a depth estimation method for monocular images using the GPU (Graphics Processing Unit) is proposed. A monocular image is a 2D image that has lost its 3D depth information through camera projection, and monocular cues are used to recover the depth information lost in that projection. The proposed algorithm combines a variety of cues in an energy function to produce a more general and reliable depth map; however, because the energy function is built from many monocular cues, the processing time is long. Therefore, a depth estimation method using the GPGPU (General-Purpose Graphics Processing Unit) is proposed. The objective effectiveness of the algorithm is shown using PSNR (Peak Signal-to-Noise Ratio), and the processing time is reduced by 61.22%.
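
The abstract does not spell out the energy terms, so the following CUDA sketch only illustrates the parallel pattern it describes: evaluating a per-pixel depth energy from several monocular-cue maps, one thread per pixel. The cue buffers, weights, and function names are illustrative assumptions, not the paper's implementation.

```cuda
#include <cuda_runtime.h>

// Hypothetical per-pixel energy evaluation: each thread scores one pixel by
// summing weighted responses of two monocular-cue maps (names and weights
// are illustrative, not the paper's energy function).
__global__ void depthEnergyKernel(const float* cueA, const float* cueB,
                                  float* energy, int width, int height,
                                  float wA, float wB)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int idx = y * width + x;
    energy[idx] = wA * cueA[idx] + wB * cueB[idx];
}

// Host-side launch: one thread per pixel over a 2D grid.
void launchDepthEnergy(const float* d_cueA, const float* d_cueB,
                       float* d_energy, int width, int height)
{
    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x,
              (height + block.y - 1) / block.y);
    depthEnergyKernel<<<grid, block>>>(d_cueA, d_cueB, d_energy,
                                       width, height, 0.5f, 0.5f);
    cudaDeviceSynchronize();
}
```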

A Study of How to Improve Execution Speed of Grabcut Using GPGPU (GPGPU를 이용한 Grabcut의 수행 속도 개선 방법에 관한 연구)

  • Kim, Ji-Hoon;Park, Young-Soo;Lee, Sang-Hun
    • Journal of Digital Convergence / v.12 no.11 / pp.379-386 / 2014
  • In this paper, a method is proposed to efficiently improve the processing speed of the Grabcut algorithm by processing its data on the GPU (Graphics Processing Unit). Grabcut is an object-extraction algorithm with excellent performance. The existing Grabcut algorithm first separates the image into foreground and background regions and then assigns the pixels of each region to K clusters; this assignment is repeated to gradually improve the segmentation result. The drawback of the algorithm is the time consumed by this repeated clustering. Therefore, a method is proposed that effectively improves the processing speed of Grabcut by executing the repeated operations in parallel using GPGPU (General-Purpose computing on Graphics Processing Units). With the proposed method, the execution time of the algorithm is reduced by about 95.58% on average.
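
As a hedged illustration of the clustering step the abstract says is parallelized, the CUDA kernel below assigns each pixel to the nearest of K color cluster centers, one thread per pixel. The squared-RGB distance and data layout are assumptions; the paper's actual Grabcut implementation is not reproduced here.

```cuda
#include <cuda_runtime.h>
#include <float.h>

// Hypothetical parallel cluster-assignment step: each thread labels one pixel
// with the index of the nearest of K RGB cluster centers (K-means-style).
__global__ void assignClusters(const float3* pixels, const float3* centers,
                               int* labels, int numPixels, int K)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numPixels) return;

    float best = FLT_MAX;
    int bestK = 0;
    for (int k = 0; k < K; ++k) {
        float dx = pixels[i].x - centers[k].x;
        float dy = pixels[i].y - centers[k].y;
        float dz = pixels[i].z - centers[k].z;
        float dist = dx * dx + dy * dy + dz * dz;
        if (dist < best) { best = dist; bestK = k; }
    }
    labels[i] = bestK;
}
```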

Implementation of a 3D Graphics Simulator for GP-GPU (GP-GPU 개발을 위한 3차원 그래픽 시뮬레이터 구현)

  • Yeo, Dong-young;Kim, Woo-young;Jung, Hyung-Ki;Lee, Kwang-Yeob
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2009.10a / pp.337-340 / 2009
  • The performance of the GPU (Graphics Processing Unit), a hardware accelerator for 3D graphics processing, has been improving constantly. Although the GPU was introduced as an efficient way to handle complex graphics applications, its resources are rarely utilized to 100%. The GP-GPU (general-purpose GPU), which supports general-purpose operations on the GPU in addition to graphics operations, is drawing attention because its resources can be distributed and controlled effectively. In this paper, a simulator is implemented that provides a virtual GP-GPU environment and can be used for program design and debugging. It supports a co-design development environment that enables fast design and reliable verification at the same time and can be used to build a 3D graphics display interface.


Enhancement of H.264/AVC Encoding Speed and Reduction of CPU Load through Parallel Programming Based on CUDA (CUDA 기반의 병렬 프로그래밍을 통한 H.264/AVC 부호화 속도 향상 및 CPU 부하 경감)

  • Jang, Eun-Been;Ha, Yun-Su
    • Journal of Advanced Marine Engineering and Technology / v.34 no.6 / pp.858-863 / 2010
  • In order to increase encoding speed in video encoding with H.264/AVC, it is very important to reduce the time spent on motion estimation, which takes a large portion of the total processing time. Using the graphics processing unit (GPU) as a coprocessor that assists the central processing unit (CPU) in computing massive data is one way to reduce the processing time. In this paper, we present an efficient block-level parallel algorithm for motion estimation (ME) on the Compute Unified Device Architecture (CUDA) platform, which was developed for general-purpose computation on GPUs. Experiments are carried out to verify the effectiveness of the proposed algorithm.
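
The paper's kernel design is not given in the abstract, so this is only a minimal CUDA sketch of block-level motion estimation: each thread computes the sum of absolute differences (SAD) for one candidate motion vector of a 16x16 macroblock. Grid/block shapes and boundary handling are simplified assumptions.

```cuda
#include <cuda_runtime.h>

// Hypothetical block-level motion-estimation kernel: launched with
// (2*searchRange+1) blocks of (2*searchRange+1) threads, each thread computes
// the SAD between one 16x16 macroblock of the current frame and one candidate
// position in the reference frame. Boundary checks are omitted for brevity.
__global__ void sadKernel(const unsigned char* cur, const unsigned char* ref,
                          int width, int mbX, int mbY, int searchRange,
                          unsigned int* sadOut)
{
    int dx = (int)threadIdx.x - searchRange;   // candidate offset in x
    int dy = (int)blockIdx.x  - searchRange;   // candidate offset in y
    int candIdx = blockIdx.x * blockDim.x + threadIdx.x;

    unsigned int sad = 0;
    for (int y = 0; y < 16; ++y) {
        for (int x = 0; x < 16; ++x) {
            int c = cur[(mbY + y) * width + (mbX + x)];
            int r = ref[(mbY + dy + y) * width + (mbX + dx + x)];
            int diff = c - r;
            sad += (diff < 0) ? -diff : diff;
        }
    }
    sadOut[candIdx] = sad;   // the minimum is selected afterwards (host or reduction)
}
```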

Introduction to general purpose GPU computing (GPU를 이용한 범용 계산의 소개)

  • Yu, Donghyeon;Lim, Johan
    • Journal of the Korean Data and Information Science Society / v.24 no.5 / pp.1043-1061 / 2013
  • Recent advances in computer technology have introduced massive data, and their analysis has become important. High-performance computing is one of the most essential parts of analyzing massive data. In this paper, we review general-purpose computing on the graphics processing unit and its application to parallel computing, which has been of great interest in the statistics community.
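
As a concrete, hedged illustration of the kind of general-purpose GPU computation the review surveys (not an example taken from the paper), the CUDA program below offloads a simple elementwise operation to the device, the data-parallel pattern underlying many statistical computations.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Illustrative data-parallel kernel: each thread squares one element,
// the kind of elementwise map that GPU-based statistical computing relies on.
__global__ void squareKernel(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * in[i];
}

int main()
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float* h_in  = (float*)malloc(bytes);
    float* h_out = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) h_in[i] = 2.0f;

    float *d_in, *d_out;
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);
    cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);

    squareKernel<<<(n + 255) / 256, 256>>>(d_in, d_out, n);
    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);

    printf("out[0] = %f\n", h_out[0]);   // expect 4.0

    cudaFree(d_in); cudaFree(d_out);
    free(h_in); free(h_out);
    return 0;
}
```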

Development of Real-Time Image Processing System Using GPU (GPU를 이용한 실시간 이미지 프로세싱 시스템)

  • Oh Jae-Hong;Kang Hoon;Lee Ja-Yong
    • Journal of Institute of Control, Robotics and Systems / v.11 no.5 / pp.393-397 / 2005
  • When a real-time image processing application is implemented on a general-purpose computer, the CPU (Central Processing Unit) is usually heavily loaded, and in many cases the CPU alone cannot meet the real-time requirement at all. Most modern computers are equipped with powerful Graphics Processing Units (GPUs) to accelerate graphics operations, and there is a trend for the power of the GPU to outgrow that of the CPU. If we take advantage of the powerful GPU for general operations other than pure graphics operations, the processing time can be reduced. In this study, we present techniques that apply the GPU to general operations such as image processing procedures. Our experimental results show that a significant speed-up can be achieved by using the GPU.
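
This work predates CUDA, so the paper presumably used the programmable graphics pipeline; purely as an illustration of the same idea of offloading a per-pixel image operation to the GPU, here is a minimal modern CUDA sketch of an RGB-to-grayscale conversion.

```cuda
#include <cuda_runtime.h>

// Illustrative image-processing kernel: one thread converts one RGB pixel to
// grayscale. This only demonstrates the general pattern of moving a per-pixel
// operation to the GPU; it is not the implementation described in the paper.
__global__ void rgbToGray(const unsigned char* rgb, unsigned char* gray,
                          int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int i = (y * width + x) * 3;
    float g = 0.299f * rgb[i] + 0.587f * rgb[i + 1] + 0.114f * rgb[i + 2];
    gray[y * width + x] = (unsigned char)g;
}
```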

An Efficient Block Cipher Implementation on Many-Core Graphics Processing Units

  • Lee, Sang-Pil;Kim, Deok-Ho;Yi, Jae-Young;Ro, Won-Woo
    • Journal of Information Processing Systems / v.8 no.1 / pp.159-174 / 2012
  • This paper presents a study on a high-performance design for a block cipher algorithm implemented on modern many-core graphics processing units (GPUs). Recent advances in VLSI technology make it feasible to fabricate multiple processing cores on a single chip and enable general-purpose computation on a GPU (GPGPU). The GPGPU strategy offers significant performance improvements for general-purpose computation and can support a broad variety of applications, including cryptography. We propose an efficient implementation of the encryption/decryption operations of the block cipher algorithm SEED on off-the-shelf NVIDIA many-core graphics processors. In a thorough experiment, we achieved performance capable of supporting a network speed of up to 9.5 Gbps on an NVIDIA GTX285 system (which has 240 processing cores). Our implementation provides up to 4.75 times higher encoding and decoding throughput compared to an Intel 8-core system.
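
The SEED round structure itself is not reproduced here; the sketch below only shows the mapping the abstract implies, one thread encrypting one independent 128-bit block, with a placeholder round function standing in for SEED.

```cuda
#include <cuda_runtime.h>
#include <stdint.h>

// Hypothetical block-cipher parallelization: each thread encrypts one 128-bit
// block independently (ECB/CTR-style mapping). The round function below is a
// placeholder, not SEED; it only shows the one-thread-per-block structure.
__device__ void encryptBlock(uint32_t* block, const uint32_t* roundKeys,
                             int rounds)
{
    for (int r = 0; r < rounds; ++r) {
        block[0] ^= roundKeys[r];          // placeholder round, not SEED
        block[1] += block[0];
        block[2] ^= block[1];
        block[3] += block[2];
    }
}

__global__ void encryptKernel(uint32_t* data, const uint32_t* roundKeys,
                              int rounds, int numBlocks)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < numBlocks)
        encryptBlock(&data[i * 4], roundKeys, rounds);   // 4 x 32 bits = 128-bit block
}
```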

An Implementation of a Video-Equipped Real-Time Fire Detection Algorithm Using GPGPU (GPGPU를 이용한 비디오 기반 실시간 화재감지 알고리즘 구현)

  • Shon, Dong-Koo;Kim, Cheol-Hong;Kim, Jong-Myon
    • Journal of the Korea Society of Computer and Information / v.19 no.8 / pp.1-10 / 2014
  • This paper proposes a parallel implementation of a video-based four-stage fire detection algorithm using a general-purpose graphics processing unit (GPGPU) to support real-time processing of this computationally intensive algorithm. In addition, this paper compares the performance of the GPGPU-based fire detection implementation with that of a CPU implementation to show the effectiveness of the proposed method. In experiments using five fire videos at SXGA (1400×1050) resolution, the proposed GPGPU implementation achieves 6.6x better performance than the CPU implementation, taking 30.53 ms per frame, which satisfies the real-time processing requirement (30 frames per second) of the fire detection algorithm.
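
The four stages of the algorithm are not detailed in the abstract, so the kernel below is only a hypothetical sketch of a plausible first stage: a per-pixel fire-color candidate mask. The RGB thresholds are illustrative values, not the paper's.

```cuda
#include <cuda_runtime.h>

// Hypothetical first-stage kernel: each thread marks one pixel as a fire-color
// candidate using simple RGB rules. The actual four-stage algorithm and its
// thresholds are not given in the abstract; the values here are illustrative.
__global__ void fireColorMask(const unsigned char* rgb, unsigned char* mask,
                              int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int i = (y * width + x) * 3;
    unsigned char r = rgb[i], g = rgb[i + 1], b = rgb[i + 2];

    // Fire-like pixels tend to have R > G > B and a bright red channel.
    mask[y * width + x] = (r > 190 && r > g && g > b) ? 255 : 0;
}
```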

A Study on a Declines in Performance by Memory Copy in CUDA (CUDA의 메모리 복사로 인한 성능 저하 연구)

  • Kang, Jihun;Lee, DaeWon;Kang, InSung;Yu, HeonChang
    • Proceedings of the Korea Information Processing Society Conference / 2013.11a / pp.135-138 / 2013
  • CUDA (Compute Unified Device Architecture), a GPGPU (General-Purpose Graphics Processing Unit) parallel processing system, has been widely used for high-speed computation on computers. To perform computation with CUDA, its characteristics must be understood. CUDA has a host region handled by the CPU (Central Processing Unit) and a device region handled by the GPU (Graphics Processing Unit), and computation proceeds through data copies between these two regions. Because of this structure, input data must be transferred from main memory to GPU memory before the GPU can process it. This processing structure means that, in addition to the computation time, there is a data copy time between main memory and GPU memory, and this extra memory copy time introduces overhead. In this paper, we discuss through experiments how the memory copy time, the number of iterations of the computation, and the complexity of the computation affect overall performance.
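
A minimal sketch (not the paper's benchmark) of how the copy overhead discussed above can be measured: CUDA events separate the host-to-device copy, the kernel execution, and the device-to-host copy.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Trivial kernel standing in for the "computation" whose time is compared
// against the memory copy time.
__global__ void dummyKernel(float* d, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] = d[i] * 2.0f + 1.0f;
}

int main()
{
    const int n = 1 << 24;                  // ~16M floats
    size_t bytes = n * sizeof(float);
    float* h = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    float* d;
    cudaMalloc(&d, bytes);

    cudaEvent_t t0, t1, t2, t3;
    cudaEventCreate(&t0); cudaEventCreate(&t1);
    cudaEventCreate(&t2); cudaEventCreate(&t3);

    cudaEventRecord(t0);
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);   // host -> device copy
    cudaEventRecord(t1);
    dummyKernel<<<(n + 255) / 256, 256>>>(d, n);       // computation
    cudaEventRecord(t2);
    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);   // device -> host copy
    cudaEventRecord(t3);
    cudaEventSynchronize(t3);

    float h2d, kern, d2h;
    cudaEventElapsedTime(&h2d, t0, t1);
    cudaEventElapsedTime(&kern, t1, t2);
    cudaEventElapsedTime(&d2h, t2, t3);
    printf("H2D %.3f ms, kernel %.3f ms, D2H %.3f ms\n", h2d, kern, d2h);

    cudaFree(d); free(h);
    return 0;
}
```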

GPGPU Task Management Technique to Mitigate Performance Degradation of Virtual Machines due to GPU Operation in Cloud Environments (클라우드 환경에서 GPU 연산으로 인한 가상머신의 성능 저하를 완화하는 GPGPU 작업 관리 기법)

  • Kang, Jihun;Gil, Joon-Min
    • KIPS Transactions on Computer and Communication Systems / v.9 no.9 / pp.189-196 / 2020
  • Recently, GPU cloud computing technology, which attaches GPU (Graphics Processing Unit) devices to virtual machines, has been widely used in cloud environments. A GPU device assigned to a virtual machine can perform computations much faster than a CPU through massively parallel processing, which provides many benefits when operating high-performance computing services in various fields in a cloud environment. Although a GPU device can improve the performance of a virtual machine, the virtual machine scheduler, which is based on a virtual machine's CPU usage time, does not take GPU usage time into account, and this affects the performance of other virtual machines. In this paper, we test and analyze the performance degradation of other virtual machines caused by a virtual machine that performs a GPGPU (General-Purpose computing on Graphics Processing Units) task in a direct-path-based GPU virtualization environment, which is often used when assigning GPUs to virtual machines in cloud environments. We then propose a GPGPU task management method for virtual machines to solve this problem.
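
The abstract does not detail the proposed management technique, so the sketch below only shows how the GPU usage time of a single GPGPU task (the quantity the abstract says CPU-time-based VM schedulers ignore) could be measured with CUDA events; the workload and accounting policy are assumptions.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Illustrative GPGPU workload; any kernel launched by the guest would do.
__global__ void workload(float* d, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] = d[i] * 0.5f + 2.0f;
}

// Hypothetical helper: wraps a kernel launch with CUDA events so a
// task-management layer could account for per-task GPU usage time.
float timedLaunch(float* d, int n)
{
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    workload<<<(n + 255) / 256, 256>>>(d, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);   // GPU time consumed by this task
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return ms;
}

int main()
{
    const int n = 1 << 22;
    float* d;
    cudaMalloc(&d, n * sizeof(float));
    float used = timedLaunch(d, n);
    printf("GPU time consumed by this task: %.3f ms\n", used);
    cudaFree(d);
    return 0;
}
```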