• Title/Summary/Keyword: YOLOv2

The Study of Car Detection on the Highway using YOLOv2 and UAVs (YOLOv2와 무인항공기를 이용한 자동차 탐지에 관한 연구)

  • Seo, Chang-Jin
    • The Transactions of the Korean Institute of Electrical Engineers P / v.67 no.1 / pp.42-46 / 2018
  • In this paper, we propose a fast method for detecting cars on the highway by applying YOLOv2 (You Only Look Once version 2) to imagery from UAVs (Unmanned Aerial Vehicles). Our simulation environment used Darknet, OpenCV, CUDA, and a deep learning server (SDX-4185). YOLOv2 is a recently developed object detection algorithm that can detect objects of various scales at high speed. Because the detection process is a single, simple network, the YOLOv2 convolutional network computes class probabilities and predicts the location of each car in one evaluation pass. In our results, we could detect cars on the highway quickly enough to apply the method in real time.
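
  A minimal inference sketch in the spirit of the toolchain above (Darknet weights loaded through OpenCV's DNN module, optionally on CUDA); the file names, thresholds, and the COCO class index for "car" are illustrative assumptions rather than values from the paper:

    # Hedged sketch: one-pass car detection with a pre-trained YOLOv2 model through OpenCV's DNN module.
    # The config/weight/image file names, thresholds, and class index are illustrative assumptions.
    import cv2
    import numpy as np

    net = cv2.dnn.readNetFromDarknet("yolov2.cfg", "yolov2.weights")   # Darknet-format model files (assumed paths)
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)                 # used only if OpenCV was built with CUDA
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

    frame = cv2.imread("highway_frame.jpg")                            # one UAV frame (assumed file name)
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())          # single forward pass over the whole frame

    boxes, confidences = [], []
    for output in outputs:
        for det in output:                                             # det = [cx, cy, bw, bh, objectness, class scores...]
            scores = det[5:]
            class_id = int(np.argmax(scores))
            conf = float(scores[class_id])
            if conf > 0.5 and class_id == 2:                           # index 2 is "car" in the COCO label order (assumption)
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                confidences.append(conf)

    keep = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)              # suppress overlapping boxes
    print(f"{len(keep)} cars detected")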

Fundamental Function Design of Real-Time Unmanned Monitoring System Applying YOLOv5s on NVIDIA TX2™ AI Edge Computing Platform

  • LEE, SI HYUN
    • International journal of advanced smart convergence / v.11 no.2 / pp.22-29 / 2022
  • In this paper, the YOLOv5s (small) object detection model was applied on the NVIDIA TX2™ AI (Artificial Intelligence) edge computing platform in order to design the fundamental function of a real-time unmanned monitoring system that can detect objects in real time. YOLOv5s was chosen for our real-time unmanned monitoring system based on a performance evaluation of object detection algorithms (for example, R-CNN, SSD, RetinaNet, and YOLOv5). In addition, the performance of the four YOLOv5 models (small, medium, large, and xlarge) was compared and evaluated. Based on these results, the YOLOv5s model, which suits the design purpose of this paper, was ported to the NVIDIA TX2™ AI edge computing system, and it was confirmed to operate normally. The real-time unmanned monitoring system designed in this research can be applied to various fields such as security and monitoring systems. Future research will apply NMS (Non-Maximum Suppression) modification, model reconstruction, and parallel programming techniques using CUDA (Compute Unified Device Architecture) to improve object detection speed and performance.
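
  A hedged sketch of comparing the four YOLOv5 variants through the public ultralytics/yolov5 torch.hub entry point, as the abstract's model comparison suggests; the image path and the simple timing loop are assumptions, and the paper's actual TX2 deployment is not reproduced here:

    # Hedged sketch: comparing YOLOv5 model sizes through the public ultralytics/yolov5 torch.hub entry.
    # The image path and the simple timing loop are illustrative; TX2 deployment details are not shown.
    import time
    import torch

    device = "cuda" if torch.cuda.is_available() else "cpu"
    for name in ("yolov5s", "yolov5m", "yolov5l", "yolov5x"):          # small / medium / large / xlarge
        model = torch.hub.load("ultralytics/yolov5", name, pretrained=True).to(device)

        start = time.time()
        results = model("test_frame.jpg")                              # NMS is applied inside the model
        elapsed_ms = (time.time() - start) * 1000

        print(f"{name}: {len(results.xyxy[0])} objects, {elapsed_ms:.1f} ms")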

Real time 2D/3D Object Detection on Edge Computing for Mobile Robot (모바일 로봇을 위한 엣지 컴퓨팅에서의 실시간 2D/3D 객체인식)

  • Jae-Young Kim;Hyungpil Moon
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.1161-1162 / 2023
  • For the autonomous driving of mobile robots, object detection on edge computing that works even in environments with limited internet connectivity is essential. In this paper, we implemented YOLOv7 and Complex_YOLOv4 on an Orin board for this purpose. Running YOLOv7 on data we collected ourselves yielded an mAP of 0.56 at 133 ms per frame. Running Complex_YOLOv4 on the KITTI dataset yielded an mAP of 0.88 at 236 ms per frame. Because Complex_YOLOv4 predicts more data than YOLOv7, it takes more time per frame, but we confirmed that it achieves higher accuracy.

Sorghum Panicle Detection using YOLOv5 based on RGB Image Acquired by UAV System (무인기로 취득한 RGB 영상과 YOLOv5를 이용한 수수 이삭 탐지)

  • Min-Jun, Park;Chan-Seok, Ryu;Ye-Seong, Kang;Hye-Young, Song;Hyun-Chan, Baek;Ki-Su, Park;Eun-Ri, Kim;Jin-Ki, Park;Si-Hyeong, Jang
    • Korean Journal of Agricultural and Forest Meteorology / v.24 no.4 / pp.295-304 / 2022
  • The purpose of this study is to detect sorghum panicles using YOLOv5 based on RGB images acquired by an unmanned aerial vehicle (UAV) system. The high-resolution images acquired with the RGB camera mounted on the UAV on September 2, 2022 were split into 512×512 tiles for YOLOv5 analysis. Sorghum panicles were labeled as bounding boxes in the split images. 2,000 images of 512×512 size were divided at a ratio of 6:2:2 and used to train, validate, and test the YOLOv5 model, respectively. When trained with YOLOv5s, which has the fewest parameters among the YOLOv5 models, sorghum panicles were detected with mAP@50 = 0.845. With YOLOv5m, which has more parameters, sorghum panicles were detected with mAP@50 = 0.844. Although the performance of the two models is similar, YOLOv5s (4 hours 35 minutes) has a shorter training time than YOLOv5m (5 hours 15 minutes). Therefore, in terms of time cost, developing the YOLOv5s model was considered more efficient for detecting sorghum panicles. As an important step toward predicting sorghum yield, a technique for detecting sorghum panicles using high-resolution RGB images and the YOLOv5 model was presented.
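
  A minimal sketch of the 512×512 tiling step described above, assuming a non-overlapping grid and illustrative file names; labeling and the 6:2:2 split would follow on the resulting tiles:

    # Hedged sketch: tiling a high-resolution UAV image into non-overlapping 512x512 crops for YOLOv5.
    # File names and the lack of tile overlap are illustrative assumptions.
    from pathlib import Path
    import cv2

    TILE = 512

    def split_into_tiles(image_path: str, out_dir: str) -> int:
        """Write 512x512 tiles of the input image and return the number written."""
        image = cv2.imread(image_path)
        h, w = image.shape[:2]
        out = Path(out_dir)
        out.mkdir(parents=True, exist_ok=True)

        count = 0
        for y in range(0, h - TILE + 1, TILE):
            for x in range(0, w - TILE + 1, TILE):
                tile = image[y:y + TILE, x:x + TILE]
                cv2.imwrite(str(out / f"{Path(image_path).stem}_{y}_{x}.png"), tile)
                count += 1
        return count

    n = split_into_tiles("uav_rgb_20220902.png", "tiles/")             # assumed input file
    print(f"{n} tiles written; label them, then split 6:2:2 into train/validation/test sets")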

Object Size Prediction based on Statistics Adaptive Linear Regression for Object Detection (객체 검출을 위한 통계치 적응적인 선형 회귀 기반 객체 크기 예측)

  • Kwon, Yonghye;Lee, Jongseok;Sim, Donggyu
    • Journal of Broadcast Engineering / v.26 no.2 / pp.184-196 / 2021
  • This paper proposes a statistics adaptive linear regression-based object size prediction method for object detection. YOLOv2 and YOLOv3, which are typical deep learning-based object detection algorithms, design the last layer of the network using a statistics adaptive exponential regression model to predict object sizes. However, an exponential regression model can propagate a large derivative of the loss function into all parameters of the network because of the properties of the exponential function. We propose a statistics adaptive linear regression layer to ease the gradient exploding problem of the exponential regression model. The proposed statistics adaptive linear regression model is used in the last layer of the network to predict object sizes with statistics estimated from the training dataset. We newly designed a network based on YOLOv3-tiny, and it shows higher performance than YOLOv3-tiny on the UFPR-ALPR dataset.
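
  To make the contrast concrete, the sketch below compares YOLOv2/v3's published exponential box-width parameterization, b_w = p_w * exp(t_w) with anchor width p_w, against a linear form scaled by dataset statistics. The mean/std scaling shown here is an assumption chosen only to illustrate why the exponential's gradient can blow up while a linear layer's cannot; it is not the paper's exact formulation:

    # Hedged sketch: exponential vs. linear box-width parameterization.
    # The exponential form is YOLOv2/v3's published one (b_w = p_w * exp(t_w), p_w = anchor width);
    # the statistics-based linear form is an illustrative assumption, not the paper's exact formula.
    import numpy as np

    anchor_w = 64.0                       # prior (anchor) width in pixels
    t_w = np.linspace(-3.0, 3.0, 7)       # raw network outputs

    exp_widths = anchor_w * np.exp(t_w)   # predicted widths
    exp_grads = anchor_w * np.exp(t_w)    # d(b_w)/d(t_w) grows exponentially with t_w

    mean_w, std_w = 72.0, 18.0            # hypothetical training-set width statistics
    lin_widths = mean_w + std_w * t_w     # linear prediction scaled by dataset statistics
    lin_grads = np.full_like(t_w, std_w)  # constant gradient, so no exponential blow-up

    for t, ew, eg, lw, lg in zip(t_w, exp_widths, exp_grads, lin_widths, lin_grads):
        print(f"t_w={t:+.1f}  exp: b_w={ew:8.1f} grad={eg:8.1f}   linear: b_w={lw:6.1f} grad={lg:5.1f}")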

Development of YOLOv5s and DeepSORT Mixed Neural Network to Improve Fire Detection Performance

  • Jong-Hyun Lee;Sang-Hyun Lee
    • International Journal of Advanced Culture Technology / v.11 no.1 / pp.320-324 / 2023
  • As urbanization accelerates and energy-consuming facilities increase, damage to human life and property due to fire is growing. Therefore, a fire monitoring system capable of quickly detecting a fire is required to reduce the economic loss and human damage caused by fire. In this study, we aim to develop an improved artificial intelligence model that raises the currently low accuracy of fire alarms by combining DeepSORT, which has strengths in object tracking, with the YOLOv5s model. To develop a fire detection model that is faster and more accurate than the existing artificial intelligence model, we selected DeepSORT, a technology that complements and extends SORT, one of the most widely used object-tracking frameworks, combined it with the YOLOv5s model, and compared the mixed model against YOLOv5s alone. As the final result of this paper, the YOLOv5s model achieved an accuracy of 96.3% at 30 frames per second, while the YOLOv5s_DeepSORT mixed model achieved an accuracy of 97.2% at 30 frames per second, 0.9 percentage points higher than YOLOv5s.
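
  A hedged sketch of the detector-plus-tracker chain the abstract describes: YOLOv5s produces per-frame detections and a DeepSORT-style tracker associates them across frames. The tracker call is left as a comment because DeepSORT implementations differ; the video path and hub checkpoint are illustrative assumptions:

    # Hedged sketch: per-frame YOLOv5s detections feeding a DeepSORT-style tracker so that an alarm is
    # raised only for persistent fire/smoke tracks. The tracker call is left as a comment because
    # DeepSORT implementations differ; the video path and hub checkpoint are illustrative assumptions.
    import cv2
    import torch

    model = torch.hub.load("ultralytics/yolov5", "yolov5s")            # fire-tuned weights assumed separately

    def detections_for_frame(frame_bgr):
        """Run YOLOv5s on one frame and return (x1, y1, x2, y2, confidence) tuples."""
        results = model(frame_bgr[:, :, ::-1])                         # BGR -> RGB for the model
        return [tuple(det[:5].tolist()) for det in results.xyxy[0]]

    capture = cv2.VideoCapture("fire_test.mp4")                        # assumed test video
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        dets = detections_for_frame(frame)
        print(f"{len(dets)} candidate regions in this frame")
        # tracker.update(dets) would go here: DeepSORT associates detections across frames with a
        # Kalman-filter motion model plus an appearance embedding, so a one-frame false positive
        # never becomes a confirmed track and never triggers an alarm.
    capture.release()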

Metal Surface Defect Detection and Classification using EfficientNetV2 and YOLOv5 (EfficientNetV2 및 YOLOv5를 사용한 금속 표면 결함 검출 및 분류)

  • Alibek, Esanov;Kim, Kang-Chul
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.4 / pp.577-586 / 2022
  • Detection and classification of steel surface defects are critical for product quality control in the steel industry. However, due to its low accuracy and slow speed, the traditional approach cannot be used effectively in a production line. Current, widely used deep learning-based algorithms still have accuracy problems, and there is room for improvement. This paper proposes a method for steel surface defect detection that combines EfficientNetV2 for image classification with YOLOv5 as an object detector. Shorter training time and high accuracy are advantages of this model. First, the input image is passed to the EfficientNetV2 model, which classifies the defect class and predicts the probability that a defect is present. If the probability of a defect is less than 0.25, the algorithm directly concludes that the sample has no defects. Otherwise, the sample is further input into YOLOv5 to carry out defect detection on the metal surface. Experiments show that the proposed model performs well on the NEU dataset, with an accuracy of 98.3%. At the same time, its average training time is shorter than that of other models.
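
  A minimal sketch of the two-stage gate described above: an EfficientNetV2 classifier predicts the defect probability, and YOLOv5 runs only when that probability is at least 0.25. The weight files, the two-class head, and the preprocessing are assumptions, not the paper's exact setup:

    # Hedged sketch of the two-stage pipeline: an EfficientNetV2 classifier gates a YOLOv5 detector,
    # skipping detection when the predicted defect probability is below 0.25. Weight files, the
    # two-class head, and the preprocessing are assumptions, not the paper's exact setup.
    import torch
    import torchvision
    from torchvision import transforms
    from PIL import Image

    classifier = torchvision.models.efficientnet_v2_s(num_classes=2)   # defect / no-defect head (assumed)
    classifier.load_state_dict(torch.load("effnetv2_neu.pt"))          # hypothetical fine-tuned weights
    classifier.eval()

    detector = torch.hub.load("ultralytics/yolov5", "custom", path="yolov5_neu.pt")  # hypothetical weights

    preprocess = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

    def inspect(image_path: str):
        image = Image.open(image_path).convert("RGB")
        with torch.no_grad():
            logits = classifier(preprocess(image).unsqueeze(0))
            p_defect = torch.softmax(logits, dim=1)[0, 1].item()
        if p_defect < 0.25:                                            # gate described in the abstract
            return "no defect", []
        return "defect", detector(image).xyxy[0].tolist()              # defect boxes from YOLOv5

    print(inspect("steel_sample.jpg"))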

A Performance Comparison of Land-Based Floating Debris Detection Based on Deep Learning and Its Field Applications (딥러닝 기반 육상기인 부유쓰레기 탐지 모델 성능 비교 및 현장 적용성 평가)

  • Suho Bak;Seon Woong Jang;Heung-Min Kim;Tak-Young Kim;Geon Hui Ye
    • Korean Journal of Remote Sensing / v.39 no.2 / pp.193-205 / 2023
  • A large amount of floating debris from land-based sources during heavy rainfall has negative social, economic, and environmental impacts, but monitoring systems for the accumulation areas and amounts of floating debris are lacking. With the recent development of artificial intelligence technology, there is a need to study large water systems quickly and efficiently using drone imagery and deep learning-based object detection models. In this study, we acquired various images in addition to drone images and trained You Only Look Once (YOLO)v5s and the recently released YOLOv7 and YOLOv8s to compare the performance of each model and propose an efficient detection technique for land-based floating debris. The qualitative evaluation showed that all three models detect floating debris well under normal circumstances, but the YOLOv8s model missed or duplicated objects when an image was overexposed or the water surface strongly reflected sunlight. The quantitative evaluation showed that YOLOv7 had the best performance, with a mean Average Precision (intersection over union, IoU 0.5) of 0.940, compared with YOLOv5s (0.922) and YOLOv8s (0.922). When distortions were generated in the color and high-frequency components to compare model performance according to data quality, the performance degradation of the YOLOv8s model was the most obvious, while the YOLOv7 model showed the least degradation. This study confirms that the YOLOv7 model is more robust than the YOLOv5s and YOLOv8s models in detecting land-based floating debris. The deep learning-based floating debris detection technique proposed in this study can identify the spatial distribution of floating debris by category, which can contribute to planning future cleanup work.
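
  A hedged sketch of generating the color and high-frequency distortions used for the robustness comparison; the specific operations (HSV shift, Gaussian blur plus noise) and their strengths are illustrative assumptions rather than the paper's settings:

    # Hedged sketch: color and high-frequency distortions for the robustness comparison.
    # The operations (HSV shift, Gaussian blur plus noise) and strengths are illustrative assumptions.
    import cv2
    import numpy as np

    def color_distort(image, hue_shift=10, sat_scale=1.3):
        """Shift hue and scale saturation in HSV space."""
        hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV).astype(np.float32)
        hsv[..., 0] = (hsv[..., 0] + hue_shift) % 180
        hsv[..., 1] = np.clip(hsv[..., 1] * sat_scale, 0, 255)
        return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

    def high_freq_distort(image, ksize=7, noise_std=8.0):
        """Blur away fine detail, then add noise, perturbing the high-frequency components."""
        blurred = cv2.GaussianBlur(image, (ksize, ksize), 0)
        noise = np.random.normal(0.0, noise_std, image.shape)
        return np.clip(blurred + noise, 0, 255).astype(np.uint8)

    image = cv2.imread("drone_frame.jpg")                              # assumed clean test frame
    cv2.imwrite("drone_frame_color.jpg", color_distort(image))
    cv2.imwrite("drone_frame_highfreq.jpg", high_freq_distort(image))
    # Each detector (YOLOv5s, YOLOv7, YOLOv8s) is then evaluated on the distorted copies and the
    # drop in mAP relative to the clean images is compared.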

Modified YOLOv4S based on Deep learning with Feature Fusion and Spatial Attention (특징 융합과 공간 강조를 적용한 딥러닝 기반의 개선된 YOLOv4S)

  • Hwang, Beom-Yeon;Lee, Sang-Hun;Lee, Seung-Hyun
    • Journal of the Korea Convergence Society / v.12 no.12 / pp.31-37 / 2021
  • This paper proposes a modified YOLOv4S based on feature fusion and spatial attention for detecting small and occluded objects. The conventional YOLOv4S is a lightweight network and lacks feature extraction capability compared to deeper networks. The proposed method first combines feature maps of different scales through feature fusion to enhance semantic and low-level information. In addition, by expanding the receptive field with dilated convolution, the detection accuracy for small and occluded objects was improved. Second, by emphasizing spatial information with spatial attention, the detection accuracy for objects that are occluded by or confused with other objects was improved. The PASCAL VOC and COCO datasets were used for quantitative evaluation of the proposed method. The proposed method improved mAP by 2.7% on the PASCAL VOC dataset and 1.8% on the COCO dataset compared to the conventional YOLOv4S.
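
  A minimal PyTorch sketch of the general kind of module the abstract describes, pairing a dilated convolution (larger receptive field) with a CBAM-style spatial attention map; it illustrates the technique only and is not the paper's exact block or its placement inside YOLOv4S:

    # Hedged sketch: a dilated-convolution + spatial-attention block of the general kind described
    # above (CBAM-style attention map); not the paper's exact module or its placement inside YOLOv4S.
    import torch
    import torch.nn as nn

    class DilatedSpatialAttention(nn.Module):
        def __init__(self, channels: int):
            super().__init__()
            # 3x3 convolution with dilation 2 widens the receptive field at the same output size
            self.dilated = nn.Conv2d(channels, channels, kernel_size=3, padding=2, dilation=2)
            # spatial attention map built from channel-wise average and max pooling
            self.attn = nn.Conv2d(2, 1, kernel_size=7, padding=3)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            feat = self.dilated(x)
            avg_map = feat.mean(dim=1, keepdim=True)
            max_map, _ = feat.max(dim=1, keepdim=True)
            weights = torch.sigmoid(self.attn(torch.cat([avg_map, max_map], dim=1)))
            return feat * weights                                      # emphasize informative spatial locations

    x = torch.randn(1, 64, 52, 52)                                     # a mid-scale feature map (assumed shape)
    print(DilatedSpatialAttention(64)(x).shape)                        # torch.Size([1, 64, 52, 52])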

Real-Time Fire Detection Method Using YOLOv8 (YOLOv8을 이용한 실시간 화재 검출 방법)

  • Tae Hee Lee;Chun-Su Park
    • Journal of the Semiconductor & Display Technology / v.22 no.2 / pp.77-80 / 2023
  • Since fires in uncontrolled environments pose serious risks to society and individuals, many researchers have been investigating technologies for early detection of fires that occur in everyday life. Recently, with the development of deep learning vision technology, research on fire detection models using neural network backbones such as Transformers and Convolutional Neural Networks has been actively conducted. Vision-based fire detection systems can solve many of the problems of physical sensor-based fire detection systems. This paper proposes a fire detection method using the latest YOLOv8, which improves on existing fire detection methods. The proposed method develops a system that detects sparks and smoke in input images by training the YOLOv8 model on a universal fire detection dataset. We also demonstrate the superiority of the proposed method through experiments comparing it with existing methods.
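
  A hedged sketch of fine-tuning and running YOLOv8 with the public ultralytics package, as the abstract's training setup suggests; the dataset YAML, class names, and training settings are placeholders rather than the paper's configuration:

    # Hedged sketch: fine-tuning a pre-trained YOLOv8 checkpoint on a fire/smoke dataset with the
    # public ultralytics package. The dataset YAML, class names, and settings are placeholders.
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")                                         # pre-trained checkpoint
    model.train(data="fire_smoke.yaml", epochs=100, imgsz=640)         # hypothetical dataset config

    results = model("test_scene.jpg")                                  # inference on an assumed test image
    for box in results[0].boxes:
        print(results[0].names[int(box.cls)], float(box.conf))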
