• Title/Summary/Keyword: NVIDIA

Search results: 163

A Study on the Improvement of YOLOv7 Inference Speed in Jetson Embedded Platform (Jetson 임베디드 플랫폼에서의 YOLOv7 추론 속도 개선에 관한 연구)

  • Bo-Chan Kang; Dong-Young Yoo
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.154-155 / 2023
  • Since the open-source YOLO (You Only Look Once) object detection algorithm was released, industry has been adopting it on embedded systems, moving away from high-performance computers for efficiency and for use in specialized environments. However, on NVIDIA's Jetson Nano, inference with the PyTorch YOLOv7 deep learning model does not run. Therefore, an optimization process for the limited power, memory, and computing capability is essential. This paper describes the optimization process for deploying deep learning models on NVIDIA's Jetson embedded platforms (Xavier NX, Orin AGX, and Nano), converts PyTorch YOLOv7 models of various sizes to TensorRT on each platform, and measures and compares FPS (frames per second). The measurements show that, for YOLOv7 inference on each embedded platform, TensorRT exhibited about 4.1 times less FPS variability and about 2.25 times higher FPS than PyTorch.
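As a rough illustration of the conversion path described above (the checkpoint name, input resolution, and export settings below are assumptions, not details from the paper), a PyTorch YOLOv7 model is typically exported to ONNX and then built into a TensorRT engine before FPS is measured:

```python
# Sketch of the PyTorch -> ONNX -> TensorRT path; file names and sizes are assumed.
import torch
import tensorrt as trt

# Assumes the official yolov7 checkpoint layout and that the yolov7 repo is importable.
model = torch.load("yolov7.pt", map_location="cpu")["model"].float().eval()
dummy = torch.zeros(1, 3, 640, 640)
torch.onnx.export(model, dummy, "yolov7.onnx", opset_version=12)

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# TensorRT 8.x-style explicit-batch network (the version typically found on Jetson).
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("yolov7.onnx", "rb") as f:
    parser.parse(f.read())
config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)          # FP16 is a common choice on Jetson
engine = builder.build_serialized_network(network, config)
with open("yolov7.engine", "wb") as f:
    f.write(engine)
```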

Design and Implementation of a Framework for Collaboration Systems in the Shipbuilding and Marine Industry (조선해양 설계분야에서 협업시스템을 위한 프레임워크의 설계 및 구현)

  • Yun, Moon-Kyeong; Kim, Hyun-Ju; Park, Min-Gil; Han, Myeong-Ki; Kim, Wan-Kyoo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2015.05a / pp.270-273 / 2015
  • In the shipbuilding and marine industry, engineering and design software has been upgraded from the original 2D schematic-data-based CAD systems to modern 3D drawing-based systems. Because the amount of data used in real time and the data volumes of various engineering models, including graphic data, have increased, problems such as a lack of server resources and improper handling of 3D drawings have arisen. In addition, increasing the number of session connections per server can degrade server performance. Recently, the yards' increasingly sophisticated design capabilities have highlighted the need for an engineering and design system that not only overcomes the network performance issues but also provides an efficient collaborative design environment. This paper presents an overview of a framework for a collaborative engineering design system based on application virtualization (Citrix XenApp 6.5) and 3D graphics acceleration hardware (the NVIDIA GRID K2 solution).


Implementation of Pedestrian Detection and Tracking with GPU at Night-time (GPU를 이용한 야간 보행자 검출과 추적 시스템 구현)

  • Choi, Beom-Joon; Yoon, Byung-Woo; Song, Jong-Kwan; Park, Jangsik
    • Journal of Broadcast Engineering / v.20 no.3 / pp.421-429 / 2015
  • This paper presents an approach for pedestrian detection and tracking with infrared imagery. We used CUDA (Compute Unified Device Architecture), a parallel processing framework, to improve the speed of video-based pedestrian detection and tracking. The detection phase is performed by an AdaBoost algorithm based on Haar-like features, and the AdaBoost classifier is trained with datasets generated from infrared images. After detecting pedestrians with the AdaBoost classifier, we apply a particle filter tracking strategy that adaptively exploits an HSV histogram feature. The proposed approach is implemented on an NVIDIA Jetson TK1 developer board, a full-featured device well suited to software development in a Linux environment. We present the results of parallel processing with the NVIDIA GPU in the CUDA development environment for detection and tracking of pedestrians, and compare the detection and tracking processing times for night-time images on both GPU and CPU. The results show that pedestrian detection and tracking on the GPU is approximately 6 times faster than on the CPU.
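The tracking step mentioned above, a particle filter weighted by HSV histogram similarity, can be sketched as follows. This is a CPU-side NumPy illustration of the weight update only; the window size, histogram binning, and Bhattacharyya-style similarity are assumptions rather than details from the paper:

```python
import numpy as np

def hsv_histogram(patch_hsv, bins=8):
    """Normalized HSV histogram of an image patch of shape (H, W, 3), values in [0, 256)."""
    hist, _ = np.histogramdd(patch_hsv.reshape(-1, 3), bins=(bins, bins, bins),
                             range=((0, 256), (0, 256), (0, 256)))
    return hist.ravel() / max(hist.sum(), 1)

def particle_weights(frame_hsv, particles, target_hist, win=24):
    """Weight each particle (x, y) by Bhattacharyya similarity to the target histogram."""
    h, w = frame_hsv.shape[:2]
    weights = np.zeros(len(particles))
    for i, (x, y) in enumerate(particles.astype(int)):
        x0, y0 = max(x - win, 0), max(y - win, 0)
        patch = frame_hsv[y0:min(y + win, h), x0:min(x + win, w)]
        weights[i] = np.sum(np.sqrt(hsv_histogram(patch) * target_hist))
    return weights / max(weights.sum(), 1e-12)
```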

High-Speed Implementations of Block Ciphers on Graphics Processing Units Using CUDA Library (GPU용 연산 라이브러리 CUDA를 이용한 블록암호 고속 구현)

  • Yeom, Yong-Jin; Cho, Yong-Kuk
    • Journal of the Korea Institute of Information Security & Cryptology / v.18 no.3 / pp.23-32 / 2008
  • The computing power of graphics processing units (GPUs) has already surpassed that of CPUs, and the gap between them is widening. Thus, research on GPGPU, which applies GPUs to general-purpose computation, has become popular and shown great success, especially in the field of parallel data processing. Since the implementation of cryptographic algorithms on GPUs was initiated by Cook et al. in 2005, improved results using graphics libraries such as OpenGL and DirectX have been published. In this paper, we present techniques and results for implementing block ciphers using the CUDA library announced by NVIDIA in 2007. We also discuss a general method for converting CPU source code of block ciphers into GPU code. On an NVIDIA 8800GTX GPU, the resulting speeds of the block ciphers AES, ARIA, and DES are 4.5 Gbps, 7.0 Gbps, and 2.8 Gbps, respectively, which are faster than those on a CPU.
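The parallelization pattern behind such implementations, with each GPU thread encrypting one independent cipher block (as in ECB or CTR mode), can be illustrated with a Numba CUDA kernel. The XOR "round" below is only a placeholder for a real AES/ARIA/DES round function, and using Numba rather than the raw CUDA C of the paper is an assumption made here for brevity:

```python
import numpy as np
from numba import cuda

@cuda.jit
def encrypt_blocks(blocks, key, out):
    """One thread per 16-byte block; the XOR stands in for a real cipher round."""
    i = cuda.grid(1)
    if i < blocks.shape[0]:
        for j in range(16):
            out[i, j] = blocks[i, j] ^ key[j]   # placeholder round, not AES

data = np.random.randint(0, 256, size=(1 << 16, 16), dtype=np.uint8)
key = np.random.randint(0, 256, size=16, dtype=np.uint8)
out = np.zeros_like(data)
threads = 256
grid = (data.shape[0] + threads - 1) // threads
encrypt_blocks[grid, threads](data, key, out)   # Numba copies the arrays to and from the GPU
```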

YOLO Model FPS Enhancement Method for Determining Human Facial Expression based on NVIDIA Jetson TX1 (NVIDIA Jetson TX1 기반의 사람 표정 판별을 위한 YOLO 모델 FPS 향상 방법)

  • Bae, Seung-Ju; Choi, Hyeon-Jun; Jeong, Gu-Min
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.12 no.5 / pp.467-474 / 2019
  • In this paper, we propose a novel method to improve FPS while maintaining the accuracy of a YOLO v2 model on the NVIDIA Jetson TX1. In general, conversion to integer operations or reducing the depth of the network has been used to reduce the amount of computation, but recognition accuracy can deteriorate. Instead, we reduce computation and memory consumption by adjusting filter sizes and fusing computation within the network. The first method is to replace 3×3 filters with 1×1 filters, which reduces the number of parameters to one-ninth. The second method is to reduce computation through CBR (Convolution + Bias + ReLU) fusion, one of TensorRT's inference acceleration functions, and the last method is to reduce memory consumption by integrating repeated layers using TensorRT. In the simulation results, although accuracy decreases by 1% compared to the original YOLO v2 model, FPS improves from 3.9 to 11.
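The one-ninth parameter reduction from swapping 3×3 filters for 1×1 filters is easy to verify in PyTorch; the channel count of 256 below is an arbitrary assumption, not the layer width used in the paper:

```python
import torch.nn as nn

conv3 = nn.Conv2d(256, 256, kernel_size=3, padding=1)  # 256*256*9 weights + 256 biases
conv1 = nn.Conv2d(256, 256, kernel_size=1)             # 256*256*1 weights + 256 biases

p3 = sum(p.numel() for p in conv3.parameters())
p1 = sum(p.numel() for p in conv1.parameters())
print(p3, p1, round(p3 / p1, 2))   # weight count drops by roughly a factor of 9
```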

Implementation of high performance parallel LU factorization program for multi-threads on GPGPUs (GPGPU의 멀티 쓰레드를 활용한 고성능 병렬 LU 분해 프로그램의 구현)

  • Shin, Bong-Hi; Kim, Young-Tae
    • Journal of Internet Computing and Services / v.12 no.3 / pp.131-137 / 2011
  • GPUs were originally designed for graphics processing, and GPGPUs are general-purpose GPUs for numerical computation with high performance and low power consumption. In this paper, we implement a parallel LU factorization program for GPGPUs. In CUDA, the computational environment for NVIDIA GPGPUs, domains are divided into blocks, and multiple threads compute each sub-block simultaneously. In an LU factorization program, the computation order must be carefully arranged because of data dependences. To resolve the data dependency, we propose a parallel LU program for GPGPUs and also explain a parallel reduction algorithm for the partial pivoting of LU factorization. We finally present a performance analysis to show the efficiency of the multi-threaded parallel LU factorization program on GPGPUs.
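For reference, the structure of LU factorization with partial pivoting looks like the NumPy sketch below; on a GPGPU, the argmax pivot search is the step implemented as a parallel reduction, and the trailing-matrix update is what the thread blocks compute in parallel. This is a CPU illustration of the algorithm, not the paper's CUDA code:

```python
import numpy as np

def lu_partial_pivot(A):
    """Right-looking LU with partial pivoting; L and U are stored in place."""
    A = A.astype(float).copy()
    n = A.shape[0]
    piv = np.arange(n)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))       # pivot search: a reduction on the GPU
        A[[k, p]] = A[[p, k]]                     # swap rows k and p
        piv[[k, p]] = piv[[p, k]]
        A[k+1:, k] /= A[k, k]                     # scale the pivot column
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])  # trailing update, data-parallel
    return A, piv
```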

Acceleration of Phase Measuring Profilometry using GPU (GPU를 이용한 위상 측정법의 가속화)

  • Kim, Ho-Joong; Cho, Tai-Hoon
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.12 / pp.2285-2290 / 2017
  • Automation systems have been evolving in many areas of industry in recent years, and the need for height inspection of objects via 3D measurement is gradually increasing. Among the various 3D measurement methods, this paper discusses phase measuring profilometry (PMP). PMP is a method of obtaining the height of an object using the phase values of a fringe pattern. Since PMP is an algorithm requiring a large amount of computation, a method for solving the problem efficiently is needed. In this paper, we propose using NVIDIA's CUDA to solve this problem, along with the pinned memory and streams provided by CUDA. This greatly improves measurement speed while maintaining accuracy. Finally, we demonstrate the performance of the proposed method through experiments.
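The per-pixel arithmetic that makes PMP a good fit for CUDA is the phase computation from phase-shifted fringe images; a standard four-step version is shown below (the number of phase shifts is an assumption here, since the abstract does not state it):

```python
import numpy as np

def wrapped_phase(i0, i1, i2, i3):
    """Wrapped phase from four fringe images shifted by pi/2 each.
    Every pixel is independent, so on a GPU one thread handles one pixel;
    pinned memory and streams can overlap image transfer with this compute."""
    return np.arctan2(i3.astype(float) - i1, i0.astype(float) - i2)
```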

A study on game physics engine focused on real time physics (물리 엔진에 관한 고찰 : 실시간 물리 기술을 중심으로)

  • Ha, You-Jong; Park, Kyoung-Ju
    • Journal of Korea Game Society / v.9 no.5 / pp.43-52 / 2009
  • This paper analyzes four game physics engines in terms of real-time techniques. Real-time physics is technology that simplifies physics-based simulation so it can be applied to real-time applications such as games. Our study includes two commercial physics engines, Havok's Physics SDK and NVIDIA's PhysX SDK, and two open-source projects, the Open Dynamics Engine and the Bullet physics engine. Most of them cover rigid body dynamics, and some include deformable body simulation, fluid simulation, or both. For real-time simulation, they adopt simplified numerical methods, efficient collision detection/response, and parallel processing hardware, i.e., multi-core CPUs, physics processing units (PPUs), or graphics processing units (GPUs).


GPU based Fast Recognition of Artificial Landmark for Mobile Robot (주행로봇을 위한 GPU 기반의 고속 인공표식 인식)

  • Kwon, Oh-Sung; Kim, Young-Kyun; Cho, Young-Wan; Seo, Ki-Sung
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.5 / pp.688-693 / 2010
  • Vision-based object recognition in mobile robots raises many image analysis problems due to surrounding elements in dynamic environments. SURF (Speeded Up Robust Features) is a local feature extraction method whose performance remains stable under disturbances such as lighting, scale change, and rotation. However, it is difficult to process in real time because of its high-dimensional feature vectors. To solve this problem, execution of SURF on a GPU (Graphics Processing Unit) is proposed and implemented using NVIDIA's CUDA. Recognition rates and processing times of SURF on CPU and GPU are compared experimentally while varying robot velocity and image size.
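For comparison purposes, a CPU-side SURF keypoint extraction with OpenCV looks like the snippet below (this assumes an opencv-contrib build with the non-free xfeatures2d module enabled, and a hypothetical image file name); the paper's contribution is moving this extraction onto the GPU with CUDA:

```python
import time
import cv2

img = cv2.imread("landmark.png", cv2.IMREAD_GRAYSCALE)  # hypothetical test image
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

t0 = time.perf_counter()
keypoints, descriptors = surf.detectAndCompute(img, None)  # 64-dim descriptors by default
print(len(keypoints), "keypoints in", time.perf_counter() - t0, "s")
```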

Implementation of Stereo Matching Algorithm using GPU (GPU를 이용한 스테레오 정합 알고리즘의 구현)

  • Choi, Hyun-Jun; Seo, Young-Ho; Kim, Dong-Wook
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.3 / pp.583-588 / 2011
  • In this paper, we propose an adaptive variable-sized matching window method that uses the characteristic points of the image, together with a method to increase the reliability of the cross-consistency check and thereby the correctness of the final disparity image. The proposed adaptive variable-sized window method segments the image using color information and finds the characteristic points inside the window. The proposed algorithm is implemented on a graphics processing unit (GPU). The GPU used in this paper is an NVIDIA GeForce GTX296, programmed with CUDA. The resulting computation is approximately 128 times faster than on a CPU.
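The cross-consistency (left-right) check mentioned above can be sketched as follows; the one-pixel tolerance is an assumed value, and this NumPy version only illustrates the per-pixel logic that a GPU kernel would parallelize:

```python
import numpy as np

def cross_consistency_mask(disp_left, disp_right, tol=1):
    """A left-image pixel x is kept if the right image's disparity at x - d_left(x)
    maps back to (approximately) the same disparity."""
    h, w = disp_left.shape
    xs = np.tile(np.arange(w), (h, 1))
    xr = np.clip(xs - disp_left.astype(int), 0, w - 1)   # matching column in the right image
    back = disp_right[np.arange(h)[:, None], xr]
    return np.abs(disp_left - back) <= tol                # True where the match is consistent
```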