• Title/Summary/Keyword: Graphic Processing Unit


Acceleration of the Iterative Physical Optics Using Graphic Processing Unit (GPU를 이용한 반복적 물리 광학법의 가속화에 대한 연구)

  • Lee, Yong-Hee; Chin, Huicheol; Kim, Kyung-Tae
    • The Journal of Korean Institute of Electromagnetic Engineering and Science, v.26 no.11, pp.1012-1019, 2015
  • This paper presents the acceleration of iterative physical optics (IPO) for radar cross section (RCS) analysis using two techniques. To analyze multiple reflections inside a cavity, IPO uses the near-field method, unlike the shooting and bouncing rays method, which uses geometric optics (GO). However, IPO is still far slower than physical optics (PO), so it must be accelerated for practical use. To address this, a graphic processing unit (GPU) is applied to reduce calculation time, and the adaptive iterative physical optics-change rate (AIPO-CR) method is also applied to optimize the number of iterations (a sketch of the GPU parallelization idea follows below).
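The GPU gain in a solver like this typically comes from parallelizing the O(N²) current-update step of IPO, where the field on every surface facet is re-evaluated from the currents on all other facets. The paper does not give implementation details, so the CUDA sketch below only illustrates that structure; the facet layout, the simplified scalar Green's-function interaction, and every name (`Facet`, `update_currents`, `k0`) are assumptions made for illustration, not the authors' code.

```cuda
#include <cuComplex.h>
#include <cuda_runtime.h>

// One surface facet of the cavity mesh (simplified scalar model).
struct Facet {
    float x, y, z;   // facet centre
    float area;      // facet area
};

// One IPO iteration: each thread owns one observation facet and accumulates
// the field radiated onto it by the currents of all other facets.
__global__ void update_currents(const Facet* facets,
                                const cuFloatComplex* current_in,
                                cuFloatComplex* current_out,
                                int n, float k0)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    cuFloatComplex acc = make_cuFloatComplex(0.0f, 0.0f);
    for (int j = 0; j < n; ++j) {
        if (j == i) continue;
        float dx = facets[i].x - facets[j].x;
        float dy = facets[i].y - facets[j].y;
        float dz = facets[i].z - facets[j].z;
        float r  = sqrtf(dx * dx + dy * dy + dz * dz);
        // Scalar free-space Green's function exp(-j*k0*r) / r (illustrative only).
        float s, c;
        sincosf(-k0 * r, &s, &c);
        cuFloatComplex g = make_cuFloatComplex(c / r, s / r);
        acc = cuCaddf(acc, cuCmulf(g, cuCmulf(current_in[j],
                          make_cuFloatComplex(facets[j].area, 0.0f))));
    }
    current_out[i] = acc;   // relaxation/excitation terms omitted for brevity
}
```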

Fast Generation of Digital Hologram Based on Multi-GPU (Multi-GPU 기반의 고속 디지털 홀로그램 생성)

  • Song, Joong-Seok; Park, Jung-Sik; Seo, Young-Ho; Park, Jong-Il
    • Journal of Broadcast Engineering, v.16 no.6, pp.1009-1017, 2011
  • Fast generation of digital holograms is important for real-time holography broadcasting. In this paper, we propose a method that parallelizes the Computer-Generated Holography (CGH) algorithm for digital hologram generation and accelerates it on multiple graphic processing units (Multi-GPU) with the help of the Compute Unified Device Architecture (CUDA) and Open Multi-Processing (OpenMP). In addition, we propose optimization methods such as variable fixation, vectorization, and loop unrolling to make the CGH algorithm even faster. Experimental results show that our method is about 9,700 times faster than a CPU-based implementation (a sketch of the CUDA/OpenMP work split follows below).
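The abstract names CUDA for the per-pixel fringe computation and OpenMP for driving several GPUs at once, but gives no code. The sketch below is one plausible arrangement under those assumptions: each OpenMP thread owns one device and renders a horizontal band of the hologram with a simplified real-valued point-source CGH kernel. All names (`cgh_rows`, `ObjPoint`, the cosine fringe model) are illustrative, and the paper's variable-fixation and loop-unrolling optimizations are omitted.

```cuda
#include <cuda_runtime.h>
#include <omp.h>
#include <vector>
#include <cmath>

struct ObjPoint { float x, y, z, amp; };

// Each thread computes one hologram pixel: sum of point-source fringes
// (simplified real-valued CGH kernel).
__global__ void cgh_rows(const ObjPoint* pts, int n_pts, float* fringe,
                         int width, int row_offset, int rows, float k, float pitch)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= rows) return;

    float hx = x * pitch, hy = (row_offset + y) * pitch;
    float acc = 0.0f;
    for (int p = 0; p < n_pts; ++p) {
        float dx = hx - pts[p].x, dy = hy - pts[p].y;
        float r = sqrtf(dx * dx + dy * dy + pts[p].z * pts[p].z);
        acc += pts[p].amp * cosf(k * r);
    }
    fringe[y * width + x] = acc;
}

// One OpenMP thread per GPU; each device renders its own band of rows.
// 'fringe' must already be sized to width * height.
void generate_hologram_multi_gpu(const std::vector<ObjPoint>& pts,
                                 std::vector<float>& fringe,
                                 int width, int height, float k, float pitch)
{
    int n_gpus = 0;
    cudaGetDeviceCount(&n_gpus);
    if (n_gpus <= 0) return;

    #pragma omp parallel num_threads(n_gpus)
    {
        int dev = omp_get_thread_num();
        cudaSetDevice(dev);
        int rows = height / n_gpus;
        int row_offset = dev * rows;
        if (dev == n_gpus - 1) rows = height - row_offset;  // last band takes the remainder

        ObjPoint* d_pts;  float* d_fringe;
        cudaMalloc(&d_pts, pts.size() * sizeof(ObjPoint));
        cudaMalloc(&d_fringe, (size_t)rows * width * sizeof(float));
        cudaMemcpy(d_pts, pts.data(), pts.size() * sizeof(ObjPoint), cudaMemcpyHostToDevice);

        dim3 block(16, 16), grid((width + 15) / 16, (rows + 15) / 16);
        cgh_rows<<<grid, block>>>(d_pts, (int)pts.size(), d_fringe,
                                  width, row_offset, rows, k, pitch);
        cudaMemcpy(&fringe[(size_t)row_offset * width], d_fringe,
                   (size_t)rows * width * sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(d_pts);  cudaFree(d_fringe);
    }
}
```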

Maritime radar display unit based on PC for safe ship navigation

  • Bae, Jin-Ho; Lee, Chong-Hyun; Hwang, Chang-Ku
    • International Journal of Ocean System Engineering, v.1 no.1, pp.52-59, 2011
  • A prototype radar display unit was implemented using inexpensive off-the-shelf components, including a nonlinear estimation algorithm for target tracking in a clutter environment. Two custom-designed boards, an analog signal processing board and a DSP board, can be plugged into an expansion slot of a personal computer (PC) to form a maritime radar display unit. Our system provides all the functionality specified in International Maritime Organization (IMO) resolution A.422(XI). The analog signal processing board was used for A/D conversion as well as rain and sea clutter suppression. The main functions of the DSP board were scan conversion and video overlay operations. A host PC was used to run the target tracking algorithm in clutter, based on discrete-time Bayes-optimal (nonlinear, non-Gaussian) estimation, and the graphical user interface (GUI) software for the Automatic Radar Plotting Aid (ARPA). The proposed tracking method recursively computes the entire probability density function of the target position and velocity by converting the recursion into linear convolution operations (see the sketch below).
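The tracking method described here propagates a full probability density rather than a point estimate, with the prediction step expressible as a linear convolution. The paper's exact formulation is not reproduced in the abstract, so the following host-side sketch only shows the generic grid-based (point-mass) Bayes recursion that matches that description; the 1-D grid, the function name `bayes_step`, and the kernel/likelihood inputs are assumptions for illustration.

```cuda
#include <vector>

// Grid-based (point-mass) Bayes filter over a 1-D position grid:
// prediction = linear convolution of the previous posterior with the
// motion/process-noise kernel; update = pointwise product with the
// measurement likelihood, then renormalisation.
std::vector<float> bayes_step(const std::vector<float>& posterior,
                              const std::vector<float>& motion_kernel,
                              const std::vector<float>& likelihood)
{
    const int n = (int)posterior.size();
    const int m = (int)motion_kernel.size();
    const int half = m / 2;

    // Prediction: p_pred(x) = sum_u p(x - u) * kernel(u)
    std::vector<float> pred(n, 0.0f);
    for (int x = 0; x < n; ++x)
        for (int u = 0; u < m; ++u) {
            int src = x - (u - half);
            if (src >= 0 && src < n)
                pred[x] += posterior[src] * motion_kernel[u];
        }

    // Update: multiply by the likelihood of the new radar measurement and normalise.
    float norm = 0.0f;
    for (int x = 0; x < n; ++x) { pred[x] *= likelihood[x]; norm += pred[x]; }
    if (norm > 0.0f)
        for (int x = 0; x < n; ++x) pred[x] /= norm;
    return pred;   // full posterior density over the grid, not just a point estimate
}
```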

A Realization of CNN-based FPGA Chip for AI (Artificial Intelligence) Applications (합성곱 신경망 기반의 인공지능 FPGA 칩 구현)

  • Young Yun
    • Proceedings of the Korean Institute of Navigation and Port Research Conference, 2022.11a, pp.388-389, 2022
  • Recently, AI (Artificial Intelligence) has been applied to various technologies such as autonomous driving, robotics, and smart communication. Currently, AI systems are mostly developed in software using TensorFlow, with a GPU (Graphic Processing Unit) as the processing unit. However, when a software-based method running on a GPU is used for AI applications, the internal circuit of the processing unit cannot be changed; if more demanding jobs are required of the AI system, a higher-performance GPU is needed, so the GPU or graphics card must be replaced to perform the jobs. In this work, we developed a CNN-based FPGA (Field Programmable Gate Array) chip to solve this problem.


Optimizing Shared Memory Accesses for GPGPU Computations (GPGPU를 위한 공유 메모리 최적화)

  • Tran, Nhat-Phuong; Lee, Myungho; Hong, Sugwon
    • Proceedings of the Korea Information Processing Society Conference, 2012.11a, pp.197-199, 2012
  • Recently, many general-purpose application programs, in addition to graphics applications, have been parallelized to boost their performance using the excellent floating-point performance of the Graphic Processing Unit (GPU). In order to maximize application performance on GPUs, optimizing the memory hierarchy and the on-chip caches, such as the shared memory, is essential. In this paper, we propose techniques to optimize the use of shared memory and verify their effectiveness with a pattern matching application (a shared-memory sketch follows below).
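The abstract does not show the proposed shared-memory layout, so the kernel below is only a generic illustration of the kind of optimization involved in GPU pattern matching: each block stages its tile of the text, plus a halo, into shared memory so that the overlapping reads of neighbouring threads stay on-chip. The names (`match_kernel`, `d_pattern`, `BLOCK`) and the naive per-position comparison are assumptions, not the authors' technique.

```cuda
#include <cuda_runtime.h>

#define BLOCK 256
#define MAX_PATTERN 32

// Pattern is placed in constant memory by the host via cudaMemcpyToSymbol;
// its length must not exceed MAX_PATTERN.
__constant__ char d_pattern[MAX_PATTERN];

// Each block stages its tile of the text (plus a halo of pat_len - 1 bytes)
// into shared memory so that the overlapping reads made by neighbouring
// threads hit on-chip memory instead of global memory.
__global__ void match_kernel(const char* text, int text_len,
                             int pat_len, int* match_flags)
{
    __shared__ char tile[BLOCK + MAX_PATTERN - 1];

    int gid = blockIdx.x * blockDim.x + threadIdx.x;
    int lid = threadIdx.x;

    // Cooperative load: main tile plus the halo needed by the last threads.
    if (gid < text_len)
        tile[lid] = text[gid];
    if (lid < pat_len - 1 && gid + BLOCK < text_len)
        tile[BLOCK + lid] = text[gid + BLOCK];
    __syncthreads();

    if (gid + pat_len > text_len) return;

    int ok = 1;
    for (int k = 0; k < pat_len; ++k)
        if (tile[lid + k] != d_pattern[k]) { ok = 0; break; }
    match_flags[gid] = ok;   // 1 if the pattern starts at text position gid
}
```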

Introduction to general purpose GPU computing (GPU를 이용한 범용 계산의 소개)

  • Yu, Donghyeon; Lim, Johan
    • Journal of the Korean Data and Information Science Society, v.24 no.5, pp.1043-1061, 2013
  • Recent advances in computer technology have introduced massive data, and their analysis has become important. High-performance computing is one of the most essential parts of analyzing massive data. In this paper, we review general-purpose computing on the graphics processing unit and its application to parallel computing, which has been of great interest in the statistics community (a minimal example follows below).
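As a flavour of the kind of GPGPU usage such a review covers, the sketch below shows the standard pattern in a statistical computation: a data-parallel CUDA kernel evaluates the Gaussian log-likelihood of each observation and the host sums the results. The example, and all names in it, are illustrative and not taken from the paper.

```cuda
#include <cuda_runtime.h>
#include <math.h>
#include <vector>
#include <numeric>
#include <cstdio>

// Each thread evaluates the Gaussian log-likelihood of one observation.
__global__ void gaussian_loglik(const float* x, float* out, int n,
                                float mu, float sigma)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float z = (x[i] - mu) / sigma;
    out[i] = -0.5f * z * z - logf(sigma) - 0.9189385f;  // -0.5 * log(2*pi)
}

int main()
{
    const int n = 1 << 20;
    std::vector<float> x(n, 1.0f), ll(n);   // toy data: all observations equal to 1

    float *d_x, *d_ll;
    cudaMalloc(&d_x, n * sizeof(float));
    cudaMalloc(&d_ll, n * sizeof(float));
    cudaMemcpy(d_x, x.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    gaussian_loglik<<<(n + 255) / 256, 256>>>(d_x, d_ll, n, 0.0f, 1.0f);
    cudaMemcpy(ll.data(), d_ll, n * sizeof(float), cudaMemcpyDeviceToHost);

    double total = std::accumulate(ll.begin(), ll.end(), 0.0);  // host-side reduction
    printf("log-likelihood: %f\n", total);

    cudaFree(d_x); cudaFree(d_ll);
    return 0;
}
```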

A Realization of FPGA-based Image Recognition System (FPGA기반 영상인식 시스템 구현)

  • Young Yun
    • Proceedings of the Korean Institute of Navigation and Port Research Conference, 2022.11a, pp.349-350, 2022
  • Recently, AI (Artificial Intelligence) has been applied to various technologies such as autonomous driving, robotics, and smart communication. Currently, AI systems are mostly developed in software using TensorFlow, with a GPU (Graphic Processing Unit) as the processing unit. In this work, we developed an FPGA-based (Field Programmable Gate Array) AI system and report on an image recognition system that realizes it.


Acceleration for Removing Sea-fog using Graphic Processors and Parallel Processing (그래픽 프로세서를 이용한 병렬연산 기반 해무 제거 고속화)

  • Kim, Young-doo; Kwak, Jae-min; Seo, Young-ho; Choi, Hyun-jun
    • Journal of Advanced Navigation Technology, v.21 no.5, pp.485-490, 2017
  • In this paper, we propose a technique for high-speed removal of sea fog using graphic processors. The technique uses a host processor (CPU) and several graphic processors (GPUs) capable of parallel processing to remove sea fog from the input image. In the removal process, dark channel extraction, maximum brightness channel extraction, and computation of the transmission are performed by the host processor, while refining the transmission with a bilateral filter is performed in parallel on the graphic processors (a sketch of this step follows below). To verify the proposed parallel processing method, a test environment with three NVIDIA GTX 1070 GPUs was constructed. As a result, the processing takes about 140 ms with one graphic processor and about 26 ms using OpenMP and multiple GPGPUs. The proposed graphics-processor-based parallel algorithm can be used for safe navigation, port control, and monitoring systems.
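Of the pipeline described above, only the transmission-refinement step is offloaded to the GPUs, so the sketch below shows a per-pixel bilateral-filter kernel for that step. The kernel, its name `refine_transmission`, the grey-level guidance image, and the parameters are assumptions for illustration; the OpenMP splitting of the image between devices is not repeated here.

```cuda
#include <cuda_runtime.h>
#include <math.h>

// Bilateral refinement of the transmission map t(x, y): each thread smooths
// one pixel with weights that depend on both spatial distance and the
// intensity difference in the guidance (grey) image, so fog-free edges are kept.
__global__ void refine_transmission(const float* t_in, const float* guide,
                                    float* t_out, int w, int h,
                                    int radius, float sigma_s, float sigma_r)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    float centre = guide[y * w + x];
    float acc = 0.0f, wsum = 0.0f;

    for (int dy = -radius; dy <= radius; ++dy)
        for (int dx = -radius; dx <= radius; ++dx) {
            int nx = min(max(x + dx, 0), w - 1);   // clamp to the image border
            int ny = min(max(y + dy, 0), h - 1);
            float ds = (float)(dx * dx + dy * dy);
            float dr = guide[ny * w + nx] - centre;
            float wgt = expf(-ds / (2.0f * sigma_s * sigma_s)
                             - dr * dr / (2.0f * sigma_r * sigma_r));
            acc  += wgt * t_in[ny * w + nx];
            wsum += wgt;
        }
    t_out[y * w + x] = acc / wsum;
}
```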

Real-Time Object Segmentation in Image Sequences (연속 영상 기반 실시간 객체 분할)

  • Kang, Eui-Seon; Yoo, Seung-Hun
    • The KIPS Transactions: Part B, v.18B no.4, pp.173-180, 2011
  • This paper presents an approach to real-time object segmentation on a GPU (Graphics Processing Unit) using CUDA (Compute Unified Device Architecture). Many applications, such as monitoring systems, motion analysis, and object tracking, require real-time processing, and object segmentation is difficult to run in real time on a CPU. NVIDIA provides the CUDA platform for general-purpose parallel computation, extending GPUs beyond fixed-function graphics. In this paper, we use adaptive Gaussian mixture background modeling in the object extraction step and CCL (Connected Component Labeling) for classification (a sketch of the background-modeling kernel follows below). The GPU and CPU implementations are compared on a 2.4 GHz Core 2 Quad processor; the GPU version achieved a speedup of 3x-4x over the CPU version.
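The abstract identifies adaptive Gaussian mixture background modeling as the per-pixel stage that maps naturally onto CUDA. The kernel below is a simplified grey-scale version of that idea, one thread per pixel, written only to illustrate the data-parallel structure; the struct layout, parameter names (`alpha`, `match_thresh`), and the fixed three-component mixture are assumptions, not the authors' implementation (CCL is omitted).

```cuda
#include <cuda_runtime.h>

#define K_GAUSS 3   // Gaussians per pixel

struct Gaussian { float mean, var, weight; };

// Per-pixel adaptive Gaussian mixture background update (grey-scale, simplified):
// each thread owns one pixel, matches the new intensity against its K Gaussians,
// updates the matched component, and flags the pixel as foreground if none matched.
__global__ void gmm_update(const unsigned char* frame, Gaussian* model,
                           unsigned char* fg_mask, int n_pixels,
                           float alpha, float match_thresh)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_pixels) return;

    float x = (float)frame[i];
    Gaussian* g = &model[i * K_GAUSS];
    int matched = -1;

    for (int k = 0; k < K_GAUSS; ++k) {
        float d = x - g[k].mean;
        if (d * d < match_thresh * g[k].var) { matched = k; break; }
    }

    for (int k = 0; k < K_GAUSS; ++k) {
        if (k == matched) {
            float d = x - g[k].mean;
            g[k].weight += alpha * (1.0f - g[k].weight);
            g[k].mean   += alpha * d;
            g[k].var    += alpha * (d * d - g[k].var);
        } else {
            g[k].weight *= (1.0f - alpha);   // decay unmatched components
        }
    }

    fg_mask[i] = (matched < 0) ? 255 : 0;  // foreground if no component matched
}
```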

Control Unit Design and Implementation for SIMD Programmable Unified Shader (SIMD 프로그래머블 통합 셰이더를 위한 제어 유닛 설계 및 구현)

  • Kim, Kyeong-Seob; Lee, Yun-Sub; Yu, Byung-Cheol; Jung, Jin-Ha; Choi, Sang-Bang
    • Journal of the Institute of Electronics Engineers of Korea SD, v.48 no.7, pp.37-47, 2011
  • Photo-realistic, high-quality computer graphics are widely used in various fields, and the shader processor, a key part of a graphics processor, has advanced into the programmable unified shader. However, existing graphics processors are optimized for commercial algorithms, so developing an algorithm that is not based on them requires an independent shader processor. In this paper, we design and implement a control unit that supports high-quality 3D computer graphics on a programmable unified shader processor. The designed control unit was evaluated through functional-level simulation. Hardware resource usage was measured by implementing it directly on a Virtex-4 FPGA, and execution speed was verified with an ASIC library. The evaluation shows that the control unit supports about 1.5 times more instructions than other shader processors with similar control behavior, and, with the number of processing units used in a shader processor, its overall performance improves by about 3.1 GFLOPS compared with the other processors.