• Title/Summary/Keyword: NVIDIA

Fast View Synthesis Using GPGPU (GPGPU를 이용한 고속 영상 합성 기법)

  • Shin, Hong-Chang; Park, Han-Hoon; Park, Jong-Il
    • Journal of Broadcast Engineering / v.13 no.6 / pp.859-874 / 2008
  • In this paper, we develop a fast view synthesis method that generates multiple intermediate views in real time for a 3D display system, given the camera geometry and depth maps of the reference views in advance. The proposed method achieves faster view synthesis than previous GPU-based approaches by performing all of the computations required for view synthesis in parallel on the GPU. Specifically, we use NVIDIA's CUDA to control the GPU device. To increase processing speed, we adapted every step of the view synthesis to the single instruction multiple data (SIMD) structure that is a main feature of CUDA, maximized the use of the high-speed memories on the GPU device, and optimized the implementation. As a result, we could synthesize nine intermediate views of 720×480 pixels within 0.128 seconds.
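
The abstract above gives no code, so here is a minimal sketch of the per-pixel parallel pattern it describes: one CUDA thread per output pixel of an intermediate view. The backward-warping disparity model (a horizontal shift proportional to inverse depth, i.e. a rectified camera setup) and all names such as `synthesize_view` are illustrative assumptions, not the paper's actual implementation.

```cuda
#include <cuda_runtime.h>

// Hypothetical backward-warping kernel: each thread fills one pixel of the
// intermediate view by sampling the reference view at a depth-dependent
// horizontal offset. Assumes rectified cameras and a depth map aligned with
// the target view, a simplification of the paper's full camera geometry.
__global__ void synthesize_view(const uchar3* ref, const float* inv_depth,
                                uchar3* out, int width, int height,
                                float alpha,          // 0..1 position between views
                                float baseline_focal) // baseline * focal length
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int idx = y * width + x;
    // For rectified cameras, disparity is proportional to inverse depth.
    int disparity = (int)(alpha * baseline_focal * inv_depth[idx]);
    int src_x = min(max(x - disparity, 0), width - 1);
    out[idx] = ref[y * width + src_x];
}

// Launch with one thread per pixel, e.g.:
//   dim3 block(16, 16);
//   dim3 grid((width + 15) / 16, (height + 15) / 16);
//   synthesize_view<<<grid, block>>>(d_ref, d_invd, d_out, width, height, 0.5f, bf);
```

All nine intermediate views could be produced by launching such a kernel with nine values of `alpha`, which matches the branch-free, SIMD-friendly structure the abstract emphasizes.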

Implementation of Viterbi Decoder on Massively Parallel GPU for DVB-T Receiver (DVB-T 수신기를 위한 대규모 병렬처리 GPU 기반의 비터비 복호기 구현)

  • Lee, KyuHyung; Lee, Ho-Kyoung; Heo, Seo Weon
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.9 / pp.3-11 / 2013
  • Recently, many studies have applied the massively parallel processing capability of GPUs to the implementation of communication systems. In this paper, we reduce software simulation time by applying a GPU with the sliding-block method to the Viterbi decoder of a DVB-T system, one of the European DTV standards. First, we implement the DVB-T system on a CPU and measure the time it takes to process one OFDM symbol. Second, we implement the Viterbi decoder in software on NVIDIA's massively parallel GPU. In our work, a stream-processing method is applied to reduce the overhead of data transfer between the CPU and GPU, and a coalescing method is used to lower the global memory access time. In addition, the data structures are designed to maximize shared memory usage. Consequently, the proposed Viterbi decoder runs approximately 11 times faster in 2K mode and 60 times faster in 8K mode.
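
As a rough illustration of the stream-processing idea mentioned in this abstract (overlapping CPU-GPU transfers with kernel execution), here is a double-buffered CUDA sketch. The kernel `decode_block` is only a placeholder standing in for the paper's sliding-block Viterbi kernel, and the two-stream layout is an assumption for illustration.

```cuda
#include <cuda_runtime.h>

__global__ void decode_block(const float* llr, unsigned char* bits, int n) {
    // Placeholder for a sliding-block Viterbi kernel; hard decision only.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) bits[i] = llr[i] < 0.0f;
}

// h_llr and h_bits must be pinned host memory (cudaMallocHost) for the
// async copies to actually overlap with kernel execution.
void decode_all(const float* h_llr, unsigned char* h_bits,
                int num_blocks, int block_len)
{
    cudaStream_t stream[2];
    float* d_llr[2];
    unsigned char* d_bits[2];
    for (int s = 0; s < 2; ++s) {
        cudaStreamCreate(&stream[s]);
        cudaMalloc(&d_llr[s], block_len * sizeof(float));
        cudaMalloc(&d_bits[s], block_len);
    }
    for (int b = 0; b < num_blocks; ++b) {
        int s = b % 2;  // alternate streams: block b+1 uploads while block b decodes
        cudaMemcpyAsync(d_llr[s], h_llr + (size_t)b * block_len,
                        block_len * sizeof(float), cudaMemcpyHostToDevice, stream[s]);
        decode_block<<<(block_len + 255) / 256, 256, 0, stream[s]>>>(
            d_llr[s], d_bits[s], block_len);
        cudaMemcpyAsync(h_bits + (size_t)b * block_len, d_bits[s],
                        block_len, cudaMemcpyDeviceToHost, stream[s]);
    }
    cudaDeviceSynchronize();
    for (int s = 0; s < 2; ++s) {
        cudaStreamDestroy(stream[s]);
        cudaFree(d_llr[s]); cudaFree(d_bits[s]);
    }
}
```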

Acceleration for Removing Sea-fog using Graphic Processors and Parallel Processing (그래픽 프로세서를 이용한 병렬연산 기반 해무 제거 고속화)

  • Kim, Young-doo; Kwak, Jae-min; Seo, Young-ho; Choi, Hyun-jun
    • Journal of Advanced Navigation Technology / v.21 no.5 / pp.485-490 / 2017
  • In this paper, we propose a technique for high-speed removal of sea fog using graphics processors. The technique uses a host processor (CPU) and several graphics processors (GPUs) capable of parallel processing to remove sea fog from the input image. In the removal process, the dark channel extraction, the maximum-brightness channel extraction, and the calculation of the transmission map are performed by the host processor, while the refinement of the transmission map with a bilateral filter is performed in parallel on the graphics processors. To verify the proposed parallel processing method, a test environment with three NVIDIA GTX 1070 GPUs was constructed. As a result, processing takes about 140 ms when implemented with one graphics processor and 26 ms when implemented using OpenMP and multiple GPGPUs. The proposed GPU-based parallel processing algorithm can be used for safe navigation, port control, and monitoring systems.
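
A minimal sketch of the OpenMP-plus-multiple-GPU pattern the abstract mentions: one host thread per GPU, each bound with `cudaSetDevice` and filtering its own horizontal stripe of the transmission map. The kernel `refine_transmission` is a placeholder for the paper's bilateral refinement step, and the stripe split ignores the filter halo at stripe borders, a simplification.

```cuda
#include <cuda_runtime.h>
#include <omp.h>

__global__ void refine_transmission(const float* t_in, float* t_out,
                                    int width, int rows) {
    // Placeholder for the bilateral-filter refinement of the transmission map.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < width * rows) t_out[i] = t_in[i];
}

void refine_multi_gpu(const float* h_t, float* h_out, int width, int height) {
    int ngpu = 0;
    cudaGetDeviceCount(&ngpu);              // e.g. 3 on the paper's GTX 1070 setup
    #pragma omp parallel num_threads(ngpu)  // one host thread drives each GPU
    {
        int dev = omp_get_thread_num();
        cudaSetDevice(dev);
        int rows = height / ngpu;           // each GPU takes a horizontal stripe
        int row0 = dev * rows;
        if (dev == ngpu - 1) rows = height - row0;  // last stripe takes remainder
        size_t n = (size_t)width * rows;
        float *d_in, *d_out;
        cudaMalloc(&d_in, n * sizeof(float));
        cudaMalloc(&d_out, n * sizeof(float));
        cudaMemcpy(d_in, h_t + (size_t)row0 * width, n * sizeof(float),
                   cudaMemcpyHostToDevice);
        refine_transmission<<<(unsigned)((n + 255) / 256), 256>>>(d_in, d_out,
                                                                  width, rows);
        cudaMemcpy(h_out + (size_t)row0 * width, d_out, n * sizeof(float),
                   cudaMemcpyDeviceToHost);
        cudaFree(d_in); cudaFree(d_out);
    }
}
```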

A Deep Neural Network Architecture for Real-Time Semantic Segmentation on Embedded Board (임베디드 보드에서 실시간 의미론적 분할을 위한 심층 신경망 구조)

  • Lee, Junyeop; Lee, Youngwan
    • Journal of KIISE / v.45 no.1 / pp.94-98 / 2018
  • We propose Wide Inception ResNet (WIR Net), an optimized neural network architecture, as a real-time semantic segmentation method for autonomous driving. The architecture consists of an encoder that extracts features by applying residual connections and inception modules, and a decoder that increases resolution by using transposed convolutions and low-level feature maps. We also improved performance by applying the ELU activation function, and optimized the network by reducing the number of layers and increasing the number of filters. The performance evaluation used an NVIDIA GeForce GTX 1080 and a TX1 board to assess the class and category IoU on the Cityscapes driving dataset. The experimental results show a class IoU of 53.4 and a category IoU of 81.8, with execution speeds of 17.8 fps and 13.0 fps for 640×360 and 720×480 resolution images, respectively, on the TX1 board.
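
The abstract credits part of the improvement to the ELU activation, f(x) = x for x > 0 and α(eˣ − 1) otherwise; unlike ReLU it keeps a nonzero gradient for negative inputs. A one-kernel CUDA sketch of ELU follows, purely for illustration; the paper's network would normally be built in a deep learning framework rather than from hand-written kernels.

```cuda
#include <cuda_runtime.h>
#include <math.h>

// ELU: identity for positive inputs, saturating exponential for negative ones.
__global__ void elu(const float* x, float* y, int n, float alpha) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float v = x[i];
        y[i] = v > 0.0f ? v : alpha * (expf(v) - 1.0f);
    }
}

// Typical launch: elu<<<(n + 255) / 256, 256>>>(d_x, d_y, n, 1.0f);
```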

Real-Time Algorithm for Relative Position Estimation Between Person and Robot Using a Monocular Camera (영상정보만을 이용한 사람과 로봇간 실시간 상대위치 추정 알고리즘)

  • Lee, Jung Uk; Sun, Ju Young; Won, Mooncheol
    • Transactions of the Korean Society of Mechanical Engineers A / v.37 no.12 / pp.1445-1452 / 2013
  • In this paper, we propose a real-time algorithm for estimating the relative position of a person with respect to a robot (camera) using a monocular camera. The algorithm detects the head and shoulder region of a person using HOG (Histogram of Oriented Gradients) feature vectors and an SVM (Support Vector Machine) classifier. The size and location of the detected region are used to calculate the relative distance and angle between the person and the camera on the robot. To increase the speed of the algorithm, we use a GPU and NVIDIA's CUDA library; the resulting algorithm runs at about 15 Hz. The accuracy of the algorithm is compared with the output of a SICK laser scanner.
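
The distance-and-angle step follows from the pinhole camera model: with an assumed real-world width for the head-shoulder region, distance is focal length times real width over detected pixel width, and the bearing comes from the detection's horizontal offset from the image center. The sketch below is host-side code under those assumptions; the focal length and the 0.5 m shoulder width are illustrative values, not the paper's calibration.

```cuda
#include <math.h>

struct RelPos { float distance_m; float angle_rad; };

// fx: focal length in pixels; cx: principal point x (both from calibration).
// W: assumed real width of the head-shoulder region in meters.
RelPos estimate_position(float box_x_center, float box_width_px,
                         float fx, float cx, float W = 0.5f) {
    RelPos p;
    p.distance_m = fx * W / box_width_px;            // similar triangles
    p.angle_rad  = atanf((box_x_center - cx) / fx);  // bearing from optical axis
    return p;
}
```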

GP-GPU based Parallelization for Urban Terrain Atmospheric Model CFD_NIMR (도시기상모델 CFD_NIMR의 GP-GPU 실행을 위한 병렬 프로그램의 구현)

  • Kim, Youngtae; Park, Hyeja; Choi, Young-Jeen
    • Journal of Internet Computing and Services / v.15 no.2 / pp.41-47 / 2014
  • In this paper, we implemented a CUDA Fortran parallel program to run the CFD_NIMR model, which simulates air diffusion over urban terrain, on GP-GPUs. A GP-GPU is a graphics processing unit in the form of a PCI card, a general-purpose accelerator that performs large volumes of high-speed computation at low cost and low power consumption. The GP-GPU version on an NVIDIA Tesla C1060 ran about 15 times faster than on an Intel Xeon 2.0 GHz CPU. In addition, the GP-GPU program showed efficient performance compared to an MPI parallel program on multiple CPUs. The proposed programming method is expected to be applicable to numerical models with a similar structure.
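
The paper itself is written in CUDA Fortran; as a language-neutral illustration of the per-grid-point parallelism such a diffusion model exposes, here is a CUDA C++ sketch of one explicit time step of a 2D diffusion equation. The simple five-point scheme and all names are assumptions, not CFD_NIMR's actual discretization.

```cuda
#include <cuda_runtime.h>

// One explicit step of c_new = c + (k*dt/h^2) * Laplacian(c), one thread per
// grid point; boundary points are left unchanged for simplicity.
__global__ void diffuse_step(const float* c, float* c_new,
                             int nx, int ny, float k_dt_over_h2) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    if (i <= 0 || j <= 0 || i >= nx - 1 || j >= ny - 1) return;

    int id = j * nx + i;
    float lap = c[id - 1] + c[id + 1] + c[id - nx] + c[id + nx] - 4.0f * c[id];
    c_new[id] = c[id] + k_dt_over_h2 * lap;
}
```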

Design of a High-Performance Mobile GPGPU with SIMT Architecture based on a Small-size Warp Scheduler (작은 크기의 Warp 스케쥴러 기반 SIMT구조 고성능 모바일 GPGPU 설계)

  • Lee, Kwang-Yeob
    • Journal of IKEEE / v.25 no.3 / pp.479-484 / 2021
  • This paper proposes and designs a SIMT-structured GPGPU that achieves high performance with a small number of cores. A GPGPU for mobile devices requires an architecture that increases performance relative to power consumption. To reduce power consumption, the number of cores was decreased; to recover performance, the warp scheduler that manages threads uses a warp size of 4, greatly reduced from the 32 of a typical GPGPU. Reducing the warp size decreases the number of idle pipeline cycles and hides memory latency more efficiently, reducing the miss penalty of cache accesses. The designed GPGPU's computational performance was measured with a test program that includes floating-point operations, and its power consumption was estimated for a 28 nm CMOS process, yielding 104.5 GFlops/Watt. This is about four times better performance per watt than NVIDIA's Tegra K1.
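
The abstract's core argument, that a smaller warp leaves fewer pipeline cycles idle, can be illustrated with a toy host-side model: with a fixed lane budget, warp size 4 yields eight warps instead of one, so a round-robin scheduler far more often finds at least one ready warp to issue. The readiness probability and the whole model are assumptions for illustration only; the paper's scheduler is not modeled here.

```cuda
#include <cstdio>
#include <cstdlib>

// Toy round-robin issue model (runs on the host; no GPU needed). Each warp
// is independently "ready" with probability p each cycle, otherwise stalled
// on memory. A cycle counts as idle when no warp is ready to issue.
int main() {
    const int lanes = 32;         // total SIMT lanes
    const double p_ready = 0.3;   // assumed per-warp readiness probability
    const int cycles = 100000;
    const int sizes[2] = {32, 4}; // conventional warp vs. the paper's warp of 4
    srand(42);
    for (int warp_size : sizes) {
        int warps = lanes / warp_size;
        long idle = 0;
        for (int c = 0; c < cycles; ++c) {
            bool issued = false;
            for (int w = 0; w < warps && !issued; ++w)
                if ((double)rand() / RAND_MAX < p_ready) issued = true;
            if (!issued) ++idle;
        }
        printf("warp size %2d (%d warps): %5.1f%% idle cycles\n",
               warp_size, warps, 100.0 * idle / cycles);
    }
    return 0;
}
```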

Performance Comparison of Join Operations Parallelization by using GPGPU (GPGPU 기반 조인 연산 병렬화 성능 비교)

  • Lee, Jong-Sub; Lee, Sang-Back; Lee, Kyu-Chul
    • Database Research / v.34 no.3 / pp.28-44 / 2018
  • In a database system, the most expensive relational operation is the join. In general, CPU-based join operations use parallel processing with 1 core, or at most 16 cores, which does not improve performance significantly. On the other hand, GPGPU (General-Purpose computing on Graphics Processing Units) allows parallel processing across thousands of processing units, greatly reducing the time required to perform join operations. Our parallelization of the join operations uses NVIDIA's CUDA SDK. In this paper, we implement GPGPU parallelizations of the join operation and compare their performance. The join operations used are Nested Loop Join (NLJ), Sort Merge Join (SMJ), and Hash Join (HJ), and the GPGPU devices used are a TITAN Xp, a GTX 1080 Ti, and a GTX 1080. We measure and compare the performance of the CPU-based and GPGPU-based join operations, and also compare against a previous study of GPGPU-based joins. The experimental results show that the GPGPU-based joins are 6 to 328 times faster than the CPU-based ones.
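
A minimal sketch of how the simplest of the three joins, the nested loop join, maps onto CUDA: one thread per outer tuple, each scanning the inner relation, with `atomicAdd` handing out unique slots in the output buffer. The equi-join-on-integer-keys schema and all names are assumptions for illustration; the paper's NLJ, SMJ, and HJ implementations are more involved.

```cuda
#include <cuda_runtime.h>

// Nested loop equi-join: thread i owns outer tuple i and scans every inner
// tuple. All threads of a warp read the same inner_keys[j] per iteration,
// so the inner scan is served as a broadcast from global memory.
__global__ void nlj_kernel(const int* outer_keys, int n_outer,
                           const int* inner_keys, int n_inner,
                           int2* out, unsigned int* out_count) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_outer) return;
    int key = outer_keys[i];
    for (int j = 0; j < n_inner; ++j) {
        if (inner_keys[j] == key) {
            unsigned int pos = atomicAdd(out_count, 1u);  // reserve output slot
            out[pos] = make_int2(i, j);                   // matching row-id pair
        }
    }
}

// Launch: nlj_kernel<<<(n_outer + 255) / 256, 256>>>(...);
```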

Speed-optimized Implementation of HIGHT Block Cipher Algorithm (HIGHT 블록 암호 알고리즘의 고속화 구현)

  • Baek, Eun-Tae; Lee, Mun-Kyu
    • Journal of the Korea Institute of Information Security & Cryptology / v.22 no.3 / pp.495-504 / 2012
  • This paper presents various speed-optimization techniques for software implementations of the HIGHT block cipher on CPUs and GPUs. For CPU implementations, we considered 32-bit and 64-bit operating systems. After applying the bit-slicing and byte-slicing techniques to HIGHT, the encryption speed reached 1.48 Gbps on an Intel Core i7 920 CPU under a 64-bit operating system, up to 2.4 times faster than the previous implementation. We also implemented HIGHT on an NVIDIA GPU with CUDA and applied various optimization techniques, such as storing the most frequently used data, like the subkeys and the F lookup table, in shared memory, and using coalesced accesses when reading data from global memory. To our knowledge, this is the first result that implements and optimizes HIGHT on a GPU. We verified that the byte-slicing technique guarantees a speed-up of more than 20%, yielding a speed 31 times faster than on a CPU.
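
A sketch of the shared-memory pattern the abstract describes: each thread block cooperatively copies the subkeys and an F lookup table into shared memory once, so every round of every thread then reads them at shared-memory latency, while consecutive threads load consecutive cipher blocks from global memory (coalesced). The table sizes match HIGHT's 128 subkeys and an 8-bit lookup, but the round loop is a placeholder, not HIGHT's actual round function.

```cuda
#include <cuda_runtime.h>
#include <stdint.h>

#define NUM_SUBKEYS 128   // HIGHT's 128 one-byte subkeys
#define F_TABLE_SIZE 256  // an 8-bit -> 8-bit lookup table for F

__global__ void hight_encrypt_blocks(const uint64_t* in, uint64_t* out, int n,
                                     const uint8_t* g_subkeys,
                                     const uint8_t* g_ftab) {
    // Stage the hot read-only data in shared memory once per thread block.
    __shared__ uint8_t sk[NUM_SUBKEYS];
    __shared__ uint8_t ftab[F_TABLE_SIZE];
    for (int k = threadIdx.x; k < NUM_SUBKEYS; k += blockDim.x) sk[k] = g_subkeys[k];
    for (int k = threadIdx.x; k < F_TABLE_SIZE; k += blockDim.x) ftab[k] = g_ftab[k];
    __syncthreads();

    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one 64-bit cipher block
    if (i >= n) return;                             // per thread; loads coalesce
    uint64_t x = in[i];

    // Placeholder rounds: a table-lookup/XOR stand-in, NOT HIGHT's schedule.
    for (int r = 0; r < 32; ++r)
        x ^= (uint64_t)ftab[sk[(r * 4) % NUM_SUBKEYS] ^ (uint8_t)x] << ((r % 8) * 8);

    out[i] = x;
}
```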

Design and Implementation of A Smart Crosswalk System based on Vehicle Detection and Speed Estimation using Deep Learning on Edge Devices (엣지 디바이스에서의 딥러닝 기반 차량 인식 및 속도 추정을 통한 스마트 횡단보도 시스템의 설계 및 구현)

  • Jang, Sun-Hye; Cho, Hee-Eun; Jeong, Jin-Woo
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.4 / pp.467-473 / 2020
  • Recently, the number of traffic accidents in Korea has increased along with the rate of car ownership. In particular, not only vehicle-to-vehicle accidents but also pedestrian accidents near crosswalks are increasing, so more attention to traffic safety around crosswalks is required. In this paper, we propose a system that predicts the safety level around a crosswalk by recognizing approaching vehicles and estimating their speed on NVIDIA Jetson Nano-class edge devices. To this end, various machine learning models are trained on information obtained from deep learning-based vehicle detection to predict the degree of risk according to the speed of an approaching vehicle. Finally, experiments using actual driving images and a web simulation validate the performance and feasibility of the proposed system.
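
A minimal sketch of the speed-estimation idea: given a vehicle's detected box centers in two frames and a pixel-to-meter scale for the monitored road region, speed is displacement over elapsed time. The constant meters-per-pixel assumption and all names are illustrative; a real deployment would calibrate the scale (e.g. via a road homography), and the paper's ML-based risk prediction is not reproduced here.

```cuda
#include <math.h>

struct Detection { float cx, cy; double t_sec; };  // box center + timestamp

// Assumes an approximately constant meters-per-pixel scale in the region
// of interest; returns speed in km/h.
float estimate_speed_kmh(Detection a, Detection b, float meters_per_pixel) {
    float dx = (b.cx - a.cx) * meters_per_pixel;
    float dy = (b.cy - a.cy) * meters_per_pixel;
    double dt = b.t_sec - a.t_sec;
    if (dt <= 0.0) return 0.0f;  // guard against out-of-order timestamps
    return (float)(sqrt((double)dx * dx + (double)dy * dy) / dt * 3.6);
}
```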