• Title/Summary/Keyword: YOLOv7


YOLOv7 Model Inference Time Complexity Analysis in Different Computing Environments (다양한 컴퓨팅 환경에서 YOLOv7 모델의 추론 시간 복잡도 분석)

  • Park, Chun-Su
    • Journal of the Semiconductor & Display Technology
    • /
    • v.21 no.3
    • /
    • pp.7-11
    • /
    • 2022
  • Object detection technology is one of the main research topics in the field of computer vision and has established itself as an essential base technology for implementing various vision systems. Recent deep neural network (DNN)-based algorithms achieve much higher recognition accuracy than traditional algorithms. However, it is well known that DNN model inference requires relatively high computational power. In this paper, we analyze the inference time complexity of the state-of-the-art object detection architecture YOLOv7 in various environments. Specifically, we compare and analyze the time complexity of four variants of the YOLOv7 model (YOLOv7-tiny, YOLOv7, YOLOv7-X, and YOLOv7-E6) when performing inference on CPU and GPU. Furthermore, we analyze how the time complexity varies when inferring the same models using the PyTorch framework and the ONNX Runtime engine.
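The PyTorch-vs-ONNX Runtime comparison above reduces to timing repeated forward passes under each runtime. A minimal, framework-agnostic timing harness might look like the sketch below; the `run_once` callable, warm-up count, and run count are illustrative assumptions, not the paper's measurement protocol.

```python
import time

def mean_latency_ms(run_once, n_warmup=3, n_runs=20):
    """Average wall-clock latency of one call to run_once, in milliseconds."""
    for _ in range(n_warmup):      # warm-up runs absorb caching/JIT effects
        run_once()
    t0 = time.perf_counter()
    for _ in range(n_runs):
        run_once()
    return (time.perf_counter() - t0) / n_runs * 1e3

# In the paper's setting, run_once would wrap a YOLOv7 forward pass under
# PyTorch or an onnxruntime InferenceSession; a cheap stand-in here:
workload = lambda: sum(i * i for i in range(10_000))
print(f"mean latency: {mean_latency_ms(workload):.3f} ms")
```

Timing the same callable once per runtime (PyTorch, then ONNX Runtime) and once per device (CPU, then GPU) yields directly comparable numbers.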

Evaluating Chest Abnormalities Detection: YOLOv7 and Detection Transformer with CycleGAN Data Augmentation

  • Yoshua Kaleb Purwanto;Suk-Ho Lee;Dae-Ki Kang
    • International journal of advanced smart convergence
    • /
    • v.13 no.2
    • /
    • pp.195-204
    • /
    • 2024
  • In this paper, we investigate the comparative performance of two leading object detection architectures, YOLOv7 and Detection Transformer (DETR), across varying levels of data augmentation using CycleGAN. Our experiments focus on chest scan images within the context of biomedical informatics, specifically targeting the detection of abnormalities. The study reveals that YOLOv7 consistently outperforms DETR across all levels of augmented data, maintaining better performance even with 75% augmented data. Additionally, YOLOv7 demonstrates significantly faster convergence, requiring approximately 30 epochs compared to DETR's 300 epochs. These findings underscore the superiority of YOLOv7 for object detection tasks, especially in scenarios with limited data and when rapid convergence is essential. Our results provide valuable insights for researchers and practitioners in the field of computer vision, highlighting the effectiveness of YOLOv7 and the importance of data augmentation in improving model performance and efficiency.

Evaluation of Robustness of Deep Learning-Based Object Detection Models for Invertebrate Grazers Detection and Monitoring (조식동물 탐지 및 모니터링을 위한 딥러닝 기반 객체 탐지 모델의 강인성 평가)

  • Suho Bak;Heung-Min Kim;Tak-Young Kim;Jae-Young Lim;Seon Woong Jang
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.3
    • /
    • pp.297-309
    • /
    • 2023
  • The degradation of coastal ecosystems and fishery environments is accelerating due to the recent proliferation of invertebrate grazers. To effectively monitor this phenomenon and implement preventive measures, the adoption of remote sensing-based monitoring technology for extensive maritime areas is imperative. In this study, we compared and analyzed the robustness of deep learning-based object detection models for detecting and monitoring invertebrate grazers from underwater videos. We constructed an image dataset targeting seven representative species of invertebrate grazers in the coastal waters of South Korea and trained the deep learning-based object detection models You Only Look Once (YOLO)v7 and YOLOv8 on this dataset. We evaluated the detection performance and speed of a total of six YOLO models (YOLOv7, YOLOv7x, YOLOv8s, YOLOv8m, YOLOv8l, YOLOv8x) and conducted robustness evaluations considering various image distortions that may occur during underwater filming. The evaluation results showed that the YOLOv8 models demonstrated higher detection speeds (approximately 71 to 141 frames per second [FPS]) relative to their number of parameters. In terms of detection performance, the YOLOv8 models (mean average precision [mAP] 0.848 to 0.882) outperformed the YOLOv7 models (mAP 0.847 to 0.850). Regarding robustness, the YOLOv7 models were more robust to shape distortions, while the YOLOv8 models were relatively more robust to color distortions. Therefore, considering that shape distortions occur less frequently in underwater video recordings while color distortions are more frequent in coastal areas, utilizing YOLOv8 models is a valid choice for invertebrate grazer detection and monitoring in coastal waters.
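The robustness evaluation described above amounts to re-running the same detector on synthetically distorted copies of each frame. The abstract does not enumerate the exact distortion suite, so the following sketch uses two assumed, deliberately simple stand-ins: a per-channel gain for color distortion and a box blur for shape/detail distortion.

```python
import numpy as np

def color_shift(img, scale=(1.2, 1.0, 0.8)):
    """Per-channel gain: a simple stand-in for underwater color distortion."""
    out = img.astype(np.float32) * np.asarray(scale, dtype=np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)

def box_blur(img, k=3):
    """k x k mean filter: a simple stand-in for shape/detail distortion."""
    out = img.astype(np.float32)
    pad = k // 2
    padded = np.pad(out, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    acc = np.zeros_like(out)
    for dy in range(k):
        for dx in range(k):
            acc += padded[dy:dy + out.shape[0], dx:dx + out.shape[1]]
    return (acc / (k * k)).astype(np.uint8)

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
# Robustness = how much detection metrics drop on each distorted copy
for name, distorted in [("color", color_shift(frame)), ("blur", box_blur(frame))]:
    print(name, distorted.shape)
```

Comparing each model's mAP on the clean frames against its mAP on each distorted set gives a per-distortion robustness profile.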

Real time 2D/3D Object Detection on Edge Computing for Mobile Robot (모바일 로봇을 위한 엣지 컴퓨팅에서의 실시간 2D/3D 객체인식)

  • Jae-Young Kim;Hyungpil Moon
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.11a
    • /
    • pp.1161-1162
    • /
    • 2023
  • For the autonomous driving of mobile robots, object detection on edge computing that works even in internet-constrained environments is essential. To this end, in this paper we implemented YOLOv7 and Complex_YOLOv4 on an Orin board. Running YOLOv7 on directly collected data yielded an mAP of 0.56 at 133 ms per frame. Running Complex_YOLOv4 on the KITTI dataset yielded an mAP of 0.88 at 236 ms per frame. We confirmed that Complex_YOLOv4 takes longer than YOLOv7 because it predicts more data, but achieves higher accuracy.

A Study on the Improvement of YOLOv7 Inference Speed in Jetson Embedded Platform (Jetson 임베디드 플랫폼에서의 YOLOv7 추론 속도 개선에 관한 연구)

  • Bo-Chan Kang;Dong-Young Yoo
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.11a
    • /
    • pp.154-155
    • /
    • 2023
  • Since the open-source YOLO (You Only Look Once) object detection algorithm was released, industry has been adopting it on embedded systems, moving beyond high-performance computers, for efficiency and for specialized environments. However, on NVIDIA's Jetson Nano, inference with the PyTorch YOLOv7 deep learning model does not run. An optimization process for the limited power, memory, and computing capability is therefore essential. This paper covers the optimization process for deploying deep learning models on NVIDIA's Jetson embedded platforms (Xavier NX, Orin AGX, and Nano), converts PyTorch YOLOv7 models of various sizes to TensorRT on each platform, and measures and compares FPS (frames per second). The measurements show that, for YOLOv7 inference on each embedded platform, TensorRT exhibited about 4.1 times lower FPS variability and about 2.25 times higher FPS than PyTorch.
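The comparison above involves not just mean FPS but also FPS variability between runtimes. A small sketch of that measurement is below; the per-frame callable and run count are placeholders for the actual TensorRT engine execution or PyTorch forward pass on the Jetson board.

```python
import statistics
import time

def fps_samples(run_once, n_frames=30):
    """Per-frame FPS samples; mean and stdev summarize speed and stability."""
    samples = []
    for _ in range(n_frames):
        t0 = time.perf_counter()
        run_once()
        samples.append(1.0 / (time.perf_counter() - t0))
    return samples

# Stand-in workload; in the paper's setting this would be one inference call
# on the embedded platform under either runtime.
samples = fps_samples(lambda: sum(range(50_000)))
print(f"mean FPS: {statistics.mean(samples):.1f}, "
      f"stdev: {statistics.stdev(samples):.1f}")
```

Collecting the same samples under both runtimes lets the mean-FPS ratio and the stdev ratio be reported side by side, as in the abstract's 2.25x and 4.1x figures.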

Detection and classification of Bulky Waste based on YOLOv7 algorithm (YOLOv7 알고리즘 기반 대형폐기물 검출 및 분류)

  • Siung Kim;Junhyeok Go;Jeonghyeon Park;Nammee Moon
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.11a
    • /
    • pp.1215-1217
    • /
    • 2023
  • Manually sorting bulky waste during household disposal and collection is a time-consuming task. In this paper, we compare YOLOv4, YOLOv5, and YOLOv7 models to find the most suitable model for bulky waste detection in real-life use. Before image augmentation, YOLOv7 showed the best performance. We applied augmentation to account for variables such as the angle, position, and time at which the discarder photographs the waste; after augmentation, YOLOv7 again showed the best overall performance among the models, with an F1-score of 93% and an mAP of 96.6%.

A Performance Comparison of Land-Based Floating Debris Detection Based on Deep Learning and Its Field Applications (딥러닝 기반 육상기인 부유쓰레기 탐지 모델 성능 비교 및 현장 적용성 평가)

  • Suho Bak;Seon Woong Jang;Heung-Min Kim;Tak-Young Kim;Geon Hui Ye
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.2
    • /
    • pp.193-205
    • /
    • 2023
  • A large amount of floating debris from land-based sources during heavy rainfall has negative social, economic, and environmental impacts, but monitoring systems for the accumulation areas and amounts of floating debris are lacking. With the recent development of artificial intelligence technology, there is a need to study large areas of water systems quickly and efficiently using drone imagery and deep learning-based object detection models. In this study, we acquired various images in addition to drone images and trained You Only Look Once (YOLO)v5s and the recently developed YOLOv7 and YOLOv8s to compare the performance of each model and propose an efficient detection technique for land-based floating debris. The qualitative performance evaluation showed that all three models detect floating debris well under normal circumstances, but the YOLOv8s model missed or duplicated objects when the image was overexposed or the water surface was highly reflective of sunlight. The quantitative performance evaluation showed that YOLOv7 had the best performance, with a mean Average Precision (intersection over union, IoU 0.5) of 0.940, better than YOLOv5s (0.922) and YOLOv8s (0.922). When distortions were generated in the color and high-frequency components to compare model performance according to data quality, the performance degradation of the YOLOv8s model was the most pronounced, and the YOLOv7 model showed the lowest degradation. This study confirms that the YOLOv7 model is more robust than the YOLOv5s and YOLOv8s models in detecting land-based floating debris. The deep learning-based floating debris detection technique proposed in this study can identify the spatial distribution of floating debris by category, which can contribute to the planning of future cleanup work.
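The quantitative comparison above rests on mean Average Precision at an IoU threshold of 0.5. The core IoU computation for axis-aligned boxes is short enough to sketch; the (x1, y1, x2, y2) corner convention is an assumed one, not stated in the abstract.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

# A detection counts as a true positive at the paper's threshold when its
# IoU with a same-class ground-truth box is at least 0.5.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))
```

The half-overlapping boxes above intersect in a 5x10 region against a 150-unit union, so their IoU is 1/3 and the pair would not match at the 0.5 threshold.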

Influence of Self-driving Data Set Partition on Detection Performance Using YOLOv4 Network (YOLOv4 네트워크를 이용한 자동운전 데이터 분할이 검출성능에 미치는 영향)

  • Wang, Xufei;Chen, Le;Li, Qiutan;Son, Jinku;Ding, Xilong;Song, Jeongyoung
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.20 no.6
    • /
    • pp.157-165
    • /
    • 2020
  • With the development of neural networks and self-driving datasets, partitioning the dataset is one approach to improving a network model's performance in detecting moving objects. In the Darknet framework, the YOLOv4 (You Only Look Once v4) network model was used to train and test on the Udacity dataset. The Udacity dataset was divided into three subsets (training set, validation set, and test set) according to seven different proportions. The K-means++ algorithm was used to perform dimensional clustering of the object boxes for the seven groups. By adjusting the hyperparameters of the YOLOv4 network during training, optimal model parameters were obtained for each of the seven groups. These model parameters were then used to run detection on the seven test sets and compare the results. The experimental results showed that YOLOv4 can effectively detect the large, medium, and small moving objects represented by Truck, Car, and Pedestrian in the Udacity dataset. When the ratio of training set, validation set, and test set is 7:1.5:1.5, the YOLOv4 model achieves its highest detection performance: mAP50 of 80.89%, mAP75 of 47.08%, and a detection speed of 10.56 FPS.
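The 7:1.5:1.5 partition that performed best can be expressed as a simple deterministic split. A minimal sketch follows; the shuffling seed and the helper name are illustrative choices, not taken from the paper.

```python
import random

def split_dataset(items, ratios=(7, 1.5, 1.5), seed=42):
    """Shuffle items and split into train/val/test by the given ratios."""
    items = list(items)
    random.Random(seed).shuffle(items)   # fixed seed for reproducibility
    total = sum(ratios)
    n_train = round(len(items) * ratios[0] / total)
    n_val = round(len(items) * ratios[1] / total)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train, val, test = split_dataset(range(1000))
print(len(train), len(val), len(test))  # 700 150 150
```

Repeating the split with different ratio tuples reproduces the paper's seven-group comparison setup.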

Development of Electronic Device Recognition Model using YOLOv7 (YOLOv7을 이용한 전자소자 인식 모델 개발)

  • Lee, Hee-Won;Ahn, Sangtae
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2022.11a
    • /
    • pp.396-398
    • /
    • 2022
  • Electronic components used in circuit design experiments must be sorted again after each experiment, and because the components are small and come in many types, sorting takes considerable time. To solve this, in this paper we develop a component recognition model using YOLOv7, aiming to reduce the unnecessary labor spent maintaining the laboratory environment.

YOLOv7-based recyclable PET classification system (YOLOv7 기반 순환 가능한 PET 분류시스템)

  • Kim, MinSeung;Lee, SoYeon;Bae, MinJi;Yoon, Tae Jun;Kim, Dae-Young
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2022.11a
    • /
    • pp.495-497
    • /
    • 2022
  • As the COVID-19 situation persists, plastic waste output has increased exponentially every year, while the recycling rate of plastic waste remains remarkably low. To address this, national efforts are under way to separate and collect recyclable PET from other plastic waste. However, large volumes of plastic waste currently arrive at recycling centers mixed with other waste from the moment of collection, requiring additional human resources for sorting. To overcome these limitations, this paper designs and implements a YOLOv7-based recyclable PET classification system that applies the YOLOv7 multi-object detection model to detect objects attached to PET bottles in real time and sort out only the recyclable PET.