• Title/Summary/Keyword: YOLOv10


Vehicle Detection in Dense Area Using UAV Aerial Images (무인 항공기를 이용한 밀집영역 자동차 탐지)

  • Seo, Chang-Jin
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.3 / pp.693-698 / 2018
  • This paper proposes a vehicle detection method for parking areas using unmanned aerial vehicles (UAVs) and YOLOv2, a recent, well-known, real-time object-detection algorithm. The YOLOv2 convolutional network computes the class probabilities for an entire image in a single pass and simultaneously predicts the locations of the bounding boxes. Because the whole detection process runs in a single network, it is very fast, easy to use, and optimized for detection performance. Sliding-window methods and the region-based convolutional neural network family of detectors generate many region proposals and require too much computation per class, which makes them ill-suited to real-time applications. This research uses the YOLOv2 algorithm to overcome that limitation. With Darknet, OpenCV, and the Compute Unified Device Architecture (CUDA) as open-source components, a deep learning server is used for training and for detecting each car. In the experiments, the algorithm detected cars in a dense area from UAV imagery, reduced the object-detection overhead, and could be applied in real time.
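
As an editorial illustration of the Darknet/OpenCV pipeline this abstract describes, the following minimal Python sketch runs a Darknet-trained YOLO model through OpenCV's DNN module; the config, weight, and image file names are placeholders, not artifacts from the paper.

    # Sketch: YOLO (Darknet) inference with OpenCV DNN on a UAV image.
    # File names (yolov2.cfg, yolov2.weights, uav_parking.jpg) are placeholders.
    import cv2

    net = cv2.dnn.readNetFromDarknet("yolov2.cfg", "yolov2.weights")
    # Requires an OpenCV build with CUDA support; omit these two lines for CPU-only runs.
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

    image = cv2.imread("uav_parking.jpg")
    h, w = image.shape[:2]

    # One-pass evaluation: the whole image goes through the network once.
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    boxes, confidences = [], []
    for output in outputs:
        for det in output:
            scores = det[5:]
            confidence = float(scores.max())
            if confidence > 0.5:                      # keep confident detections only
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                confidences.append(confidence)

    # Non-maximum suppression removes duplicate boxes over the same car.
    keep = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
    print(f"{len(keep)} cars detected")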

Parking Lot Vehicle Counting Using a Deep Convolutional Neural Network (Deep Convolutional Neural Network를 이용한 주차장 차량 계수 시스템)

  • Lim, Kuoy Suong;Kwon, Jang woo
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.17 no.5 / pp.173-187 / 2018
  • This paper proposes a computer vision and deep learning-based technique for a surveillance-camera vehicle-counting system, as one part of a parking lot management system. We applied the You Only Look Once version 2 (YOLOv2) detector and developed a deep convolutional neural network (CNN) based on YOLOv2 with a different architecture and two models. The effectiveness of the proposed architecture is illustrated using the publicly available Udacity self-driving-car dataset. After training and testing, our proposed architecture with the new models obtains 64.30% mean average precision, a better performance compared to the original YOLOv2 architecture, which achieved only 47.89% mean average precision on the detection of cars, trucks, and pedestrians.
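
Several of these abstracts report mean average precision (mAP). As background, the sketch below computes average precision for a single class from confidence-ranked detections flagged as true or false positives; the inputs are illustrative, not data from the paper. mAP is then the mean of the per-class values, e.g. over car, truck, and pedestrian.

    # Sketch: average precision (area under the precision-recall curve) for one class.
    import numpy as np

    def average_precision(is_tp, num_gt):
        """is_tp: detections in descending confidence, True if matched to a ground-truth box;
        num_gt: number of ground-truth boxes for this class."""
        is_tp = np.asarray(is_tp, dtype=bool)
        tp = np.cumsum(is_tp)
        fp = np.cumsum(~is_tp)
        recall = tp / num_gt
        precision = tp / (tp + fp)
        # Precision envelope: at each recall level, use the best precision achievable to its right.
        precision = np.maximum.accumulate(precision[::-1])[::-1]
        # Integrate precision over recall (all-point interpolation).
        recall = np.concatenate(([0.0], recall))
        return float(np.sum((recall[1:] - recall[:-1]) * precision))

    # Illustrative example: 5 detections, 4 ground-truth objects.
    print(average_precision([True, True, False, True, False], num_gt=4))   # 0.6875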

Shipping Container Load State and Accident Risk Detection Techniques Based Deep Learning (딥러닝 기반 컨테이너 적재 정렬 상태 및 사고 위험도 검출 기법)

  • Yeon, Jeong Hum;Seo, Yong Uk;Kim, Sang Woo;Oh, Se Yeong;Jeong, Jun Ho;Park, Jin Hyo;Kim, Sung-Hee;Youn, Joosang
    • KIPS Transactions on Computer and Communication Systems / v.11 no.11 / pp.411-418 / 2022
  • Incorrectly loaded containers can easily be knocked down by strong winds, and container collapse accidents can lead to material damage and paralysis of the port system. In this paper, we propose a deep learning-based technique for detecting the container loading state and the risk of accidents. Using Darknet-based YOLO, the container loading state is identified in real time through the corner castings on the top and bottom of the container, and the manager is notified of the accident risk. We present criteria for classifying container alignment states and select an efficient learning algorithm based on inference speed, classification accuracy, detection accuracy, and FPS on real embedded devices in the same environment. The study found that YOLOv4 had lower inference speed and FPS than YOLOv3, but showed stronger performance in classification accuracy and detection accuracy.
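
The abstract's selection criteria (inference speed, FPS, accuracy) imply a per-model benchmark on the same device. Below is a minimal sketch of such a timing comparison for two Darknet models through OpenCV; the config, weight, and image file names are placeholders, not artifacts of the paper.

    # Sketch: comparing average inference time and FPS of two Darknet detectors
    # (e.g. YOLOv3 vs. YOLOv4) on the same device. File paths are placeholders.
    import time
    import cv2

    def load_detector(cfg, weights):
        net = cv2.dnn.readNetFromDarknet(cfg, weights)
        def detect(frame):
            blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
            net.setInput(blob)
            return net.forward(net.getUnconnectedOutLayersNames())
        return detect

    def benchmark(detect, frames, warmup=10):
        """Return (average seconds per frame, frames per second) for a detection callable."""
        for frame in frames[:warmup]:              # warm-up runs, excluded from timing
            detect(frame)
        start = time.perf_counter()
        for frame in frames:
            detect(frame)
        elapsed = time.perf_counter() - start
        return elapsed / len(frames), len(frames) / elapsed

    frames = [cv2.imread(f"corner_casting_{i:03d}.jpg") for i in range(100)]   # placeholder images
    for name in ("yolov3", "yolov4"):
        detect = load_detector(f"{name}.cfg", f"{name}.weights")
        avg_s, fps = benchmark(detect, frames)
        print(f"{name}: {avg_s * 1000:.1f} ms/frame, {fps:.1f} FPS")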

Development of artificial intelligent system for visual assistance to the Visually Handicapped (시각장애인을 위한 시각 도움 서비스를 제공하는 인공지능 시스템 개발)

  • Oh, Changhyeon;Choi, Gwangyo;Lee, Hoyoung
    • Annual Conference of KIPS / 2021.11a / pp.1290-1293 / 2021
  • Currently, blind people experience considerable inconvenience in their daily lives. To provide a helpful service for the visually impaired, this study developed new smart glasses that monitor the walking environment with real-time object recognition and transmit the information to the wearer. YOLOv4 was used as the artificial intelligence model for object recognition. The objects that should be identified while a visually impaired person is walking were selected, training data were collected for them, and YOLOv4 was retrained. As a result, the accuracy averaged 68% over all objects, but for the essential objects (Person, Bus, Car, Traffic_light, Bicycle, Motorcycle) it was measured at 84%. In the future, it is necessary to secure training data in more varied ways and to conduct CNN training with various parameters using darkflow rather than YOLOv4, in order to perform comparisons in various ways.
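
As a small illustration of restricting alerts to the essential classes named in the abstract, the sketch below filters generic (class, confidence) detections before announcing them; the detection list and threshold are assumptions, not values from the paper.

    # Sketch: keep only the essential classes the abstract lists before alerting the user.
    ESSENTIAL_CLASSES = {"Person", "Bus", "Car", "Traffic_light", "Bicycle", "Motorcycle"}

    def essential_alerts(detections, min_confidence=0.5):
        """detections: iterable of (class_name, confidence) pairs from any detector."""
        return [(name, conf) for name, conf in detections
                if name in ESSENTIAL_CLASSES and conf >= min_confidence]

    # Hypothetical detector output for one camera frame.
    frame_detections = [("Car", 0.91), ("Tree", 0.72), ("Person", 0.64), ("Bus", 0.33)]
    for name, conf in essential_alerts(frame_detections):
        print(f"Warning: {name} ahead ({conf:.0%} confidence)")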

Revolutionizing Traffic Sign Recognition with YOLOv9 and CNNs

  • Muteb Alshammari;Aadil Alshammari
    • International Journal of Computer Science & Network Security / v.24 no.8 / pp.14-20 / 2024
  • Traffic sign recognition is an essential feature of intelligent transportation systems and Advanced Driver Assistance Systems (ADAS), which are necessary for improving road safety and advancing the development of autonomous cars. This research investigates the incorporation of the YOLOv9 model into traffic sign recognition systems, utilizing its sophisticated functionalities such as Programmable Gradient Information (PGI) and the Generalized Efficient Layer Aggregation Network (GELAN) to tackle enduring difficulties in object detection. We employed a publicly accessible dataset obtained from Roboflow, which consisted of 3130 images classified into five distinct categories: speed_40, speed_60, stop, green, and red. The dataset was methodically separated into training (68%), validation (21%), and testing (12%) subsets to ensure a thorough examination. Our comprehensive trials showed that YOLOv9 obtains a mean Average Precision (mAP@0.5) of 0.959, indicating exceptional precision and recall for the majority of traffic sign classes, although there is still potential for improvement in the red traffic sign class. An analysis was conducted on the distribution of instances among the traffic sign categories and the differences in object size within the dataset, to help guarantee that the model performs well in real-world circumstances. The findings validate that YOLOv9 substantially improves the precision and dependability of traffic sign identification, establishing it as a dependable option for deployment in intelligent transportation systems and ADAS. The incorporation of YOLOv9 in real-world traffic sign recognition and classification tasks demonstrates its promise in making roadways safer and more efficient.
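
For reference, a minimal sketch of the 68/21/12 train/validation/test split described in the abstract, applied to a list of image paths; the directory name is a placeholder.

    # Sketch: splitting 3130 images into train (68%), validation (21%), and test (remainder) subsets.
    import random
    from pathlib import Path

    image_paths = sorted(Path("traffic_signs").glob("*.jpg"))   # placeholder directory
    random.seed(42)                                             # reproducible shuffle
    random.shuffle(image_paths)

    n = len(image_paths)
    n_train = int(n * 0.68)
    n_val = int(n * 0.21)

    train = image_paths[:n_train]
    val = image_paths[n_train:n_train + n_val]
    test = image_paths[n_train + n_val:]                        # remainder used for testing
    print(len(train), len(val), len(test))                      # roughly 2128 / 657 / 345 for 3130 images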

A Study of Tram-Pedestrian Collision Prediction Method Using YOLOv5 and Motion Vector (YOLOv5와 모션벡터를 활용한 트램-보행자 충돌 예측 방법 연구)

  • Kim, Young-Min;An, Hyeon-Uk;Jeon, Hee-gyun;Kim, Jin-Pyeong;Jang, Gyu-Jin;Hwang, Hyeon-Chyeol
    • KIPS Transactions on Software and Data Engineering / v.10 no.12 / pp.561-568 / 2021
  • In recent years, autonomous driving technologies have become high-value-added technologies that attract attention in science and industry. For smooth self-driving, it is necessary to accurately detect objects and estimate their movement speed in real time. CNN-based deep learning algorithms and conventional dense optical flow are computationally expensive, making it difficult to detect objects and estimate their speed in real time. In this paper, using a single camera image, fast object detection is performed with the YOLOv5 deep learning algorithm, and the speed of each object is estimated quickly with a local dense optical flow, a modification of conventional dense optical flow restricted to the detected object. Based on this algorithm, we present a system that can predict the collision time and probability, and through this system we intend to contribute to preventing tram accidents.
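
The "local dense optical flow" idea can be sketched as Farneback optical flow computed only inside a detected bounding box rather than over the whole frame, as below; the frame paths and the box are placeholders, and converting pixel flow to a physical speed is left out.

    # Sketch: dense optical flow restricted to a detected object's bounding box (Farneback).
    import cv2

    def local_flow_speed(prev_frame, curr_frame, box):
        """Average pixel displacement per frame inside box = (x, y, w, h)."""
        x, y, w, h = box
        prev_roi = cv2.cvtColor(prev_frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        curr_roi = cv2.cvtColor(curr_frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_roi, curr_roi, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        return float(magnitude.mean())          # pixels per frame; scale by FPS and geometry for m/s

    # Placeholder inputs: two consecutive frames and a pedestrian box from a detector.
    prev_frame = cv2.imread("frame_000.jpg")
    curr_frame = cv2.imread("frame_001.jpg")
    pedestrian_box = (320, 180, 60, 120)         # hypothetical (x, y, w, h) from YOLOv5
    print(f"{local_flow_speed(prev_frame, curr_frame, pedestrian_box):.2f} px/frame")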

A study on Detecting the Safety helmet wearing using YOLOv5-S model and transfer learning

  • Kwak, NaeJoung;Kim, DongJu
    • International Journal of Advanced Culture Technology / v.10 no.1 / pp.302-309 / 2022
  • Occupational safety accidents are caused by various factors, it is difficult to predict when and why they occur, and they are directly related to workers' lives, so interest in safety accidents is increasing every year. Therefore, in order to reduce safety accidents at industrial sites, workers are required to wear personal protective equipment. In this paper, we propose a method to automatically check whether workers are wearing safety helmets, one of the items of protective equipment used in industrial fields. Whether a helmet is worn is detected using YOLOv5, a computer vision-based deep learning object detection algorithm. We applied transfer learning to the YOLOv5s model with different learning rates and numbers of epochs, evaluated the performance, and selected the optimal model. The selected model showed a performance of 0.959 mAP.
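
A rough sketch of sweeping learning rates and epoch counts for YOLOv5s transfer learning with the Ultralytics YOLOv5 repository's train.py; the dataset yaml, the hyperparameter file path, and the specific values are assumptions rather than the paper's settings.

    # Sketch: a small grid over learning rate and epochs for YOLOv5s transfer learning,
    # driving the YOLOv5 repo's train.py. Dataset yaml and paths are placeholders.
    import subprocess
    import yaml
    from pathlib import Path

    BASE_HYP = Path("data/hyps/hyp.scratch-low.yaml")      # default hyperparameters in the repo

    for lr0 in (0.01, 0.001):
        for epochs in (50, 100):
            hyp = yaml.safe_load(BASE_HYP.read_text())
            hyp["lr0"] = lr0                               # initial learning rate
            hyp_path = Path(f"hyp_lr{lr0}.yaml")
            hyp_path.write_text(yaml.safe_dump(hyp))
            subprocess.run([
                "python", "train.py",
                "--weights", "yolov5s.pt",                 # pretrained s model (transfer learning)
                "--data", "helmet.yaml",                   # placeholder dataset definition
                "--hyp", str(hyp_path),
                "--epochs", str(epochs),
                "--img", "640",
                "--name", f"helmet_lr{lr0}_ep{epochs}",
            ], check=True)
    # Each run writes results (including mAP) under runs/train/<name>/ for comparison.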

A Comparative Study on the Object Detection of Deposited Marine Debris (DMD) Using YOLOv5 and YOLOv7 Models (YOLOv5와 YOLOv7 모델을 이용한 해양침적쓰레기 객체탐지 비교평가)

  • Park, Ganghyun;Youn, Youjeong;Kang, Jonggu;Kim, Geunah;Choi, Soyeon;Jang, Seonwoong;Bak, Suho;Gong, Shinwoo;Kwak, Jiwoo;Lee, Yangwon
    • Korean Journal of Remote Sensing / v.38 no.6_2 / pp.1643-1652 / 2022
  • Deposited Marine Debris (DMD) can negatively affect marine ecosystems, fishery resources, and maritime safety, and it is mainly detected by sonar sensors, lifting frames, and divers. Considering the limitations of cost and time, recent efforts combine underwater images and artificial intelligence (AI). We conducted a comparative study of the You Only Look Once Version 5 (YOLOv5) and You Only Look Once Version 7 (YOLOv7) models for detecting DMD in underwater images, aiming at more accurate and efficient management of DMD. For the detection of DMD objects such as glass, metal, fish traps, tires, wood, and plastic, the two models both achieved over 0.85 in terms of Mean Average Precision (mAP@0.5). A more objective evaluation and further improvement of the models are expected with the construction of an extensive image database.

A Study on Falling Detection of Workers in the Underground Utility Tunnel using Dual Deep Learning Techniques (이중 딥러닝 기법을 활용한 지하공동구 작업자의 쓰러짐 검출 연구)

  • Jeongsoo Kim;Sangmi Park;Changhee Hong
    • Journal of the Society of Disaster Information / v.19 no.3 / pp.498-509 / 2023
  • Purpose: This paper proposes a method for detecting the falling of a maintenance worker in an underground utility tunnel by applying deep learning techniques to CCTV video, and evaluates the applicability of the proposed method to worker monitoring in the utility tunnel. Method: Rules were designed to detect a worker's fall using the inference results from pre-trained YOLOv5 and OpenPose models, respectively, and the rules were then applied together to detect worker falls within the utility tunnel. Result: Although worker presence and falls were detected by the proposed model, the inference results depended both on the distance between the worker and the CCTV camera and on the direction of the fall. In addition, the YOLOv5-based fall detection showed superior performance, owing to its lower dependence on distance and fall direction, compared to the OpenPose-based detection. Consequently, the results of fall detection using the integrated dual deep learning model depended on the YOLOv5 detection performance. Conclusion: The proposed hybrid model can detect an abnormal worker in the utility tunnel, but its improvement over the single YOLOv5-based model was negligible due to the severe difference in detection performance between the two deep learning models.
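
To give a flavor of the rule-based fusion the abstract describes (not the paper's actual rules), the sketch below combines a bounding-box cue from a person detector with a keypoint cue from a pose estimator; both inputs and thresholds are hypothetical.

    # Sketch: rule-based fall detection fusing a detector box and pose keypoints.
    # The concrete rules and thresholds are illustrative, not those of the paper.

    def box_rule(box):
        """box = (x, y, w, h) of a detected person; a lying person is wider than tall."""
        _, _, w, h = box
        return w > 1.2 * h

    def pose_rule(keypoints):
        """keypoints = {name: (x, y)}; a fall brings head and hips to similar heights."""
        head_y = keypoints["nose"][1]
        hip_y = (keypoints["left_hip"][1] + keypoints["right_hip"][1]) / 2
        return abs(head_y - hip_y) < 30          # pixels; illustrative threshold

    def fall_detected(box, keypoints):
        # Either cue alone raises an alert; requiring both would trade recall for precision.
        return box_rule(box) or (keypoints is not None and pose_rule(keypoints))

    # Hypothetical per-frame outputs from YOLOv5 (box) and OpenPose (keypoints).
    person_box = (100, 300, 180, 90)
    person_pose = {"nose": (150, 340), "left_hip": (220, 350), "right_hip": (230, 360)}
    print(fall_detected(person_box, person_pose))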

Applicability Evaluation of Deep Learning-Based Object Detection for Coastal Debris Monitoring: A Comparative Study of YOLOv8 and RT-DETR (해안쓰레기 탐지 및 모니터링에 대한 딥러닝 기반 객체 탐지 기술의 적용성 평가: YOLOv8과 RT-DETR을 중심으로)

  • Suho Bak;Heung-Min Kim;Youngmin Kim;Inji Lee;Miso Park;Seungyeol Oh;Tak-Young Kim;Seon Woong Jang
    • Korean Journal of Remote Sensing / v.39 no.6_1 / pp.1195-1210 / 2023
  • Coastal debris has emerged as a salient issue due to its adverse effects on coastal aesthetics, ecological systems, and human health. In pursuit of effective countermeasures, the present study describes the construction of a specialized image dataset for coastal debris detection and a comparative analysis of two prominent real-time object detection algorithms, YOLOv8 and RT-DETR. Rigorous robustness assessments were carried out under varied conditions, subjecting the models to assorted distortions. YOLOv8 achieved a detection accuracy with a mean Average Precision (mAP) value ranging from 0.927 to 0.945 and an operational speed between 65 and 135 Frames Per Second (FPS). Conversely, RT-DETR yielded an mAP value between 0.917 and 0.918 with a detection speed spanning 40 to 53 FPS. While RT-DETR exhibited greater robustness against color distortions, YOLOv8 showed greater resilience under the other evaluation criteria. The implications derived from this investigation are poised to furnish pivotal directives for algorithm selection in the practical deployment of marine debris monitoring systems.
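
One way to run the two detectors side by side is through the Ultralytics package, which exposes both YOLOv8 and RT-DETR interfaces; the sketch below assumes a placeholder dataset yaml and does not reproduce the paper's training setup.

    # Sketch: evaluating YOLOv8 and RT-DETR on the same dataset with the Ultralytics API.
    # "coastal_debris.yaml" is a placeholder dataset definition, not from the paper.
    from ultralytics import YOLO, RTDETR

    models = {
        "YOLOv8": YOLO("yolov8m.pt"),          # pretrained medium YOLOv8
        "RT-DETR": RTDETR("rtdetr-l.pt"),      # pretrained RT-DETR (large)
    }

    for name, model in models.items():
        model.train(data="coastal_debris.yaml", epochs=100, imgsz=640)
        metrics = model.val(data="coastal_debris.yaml", split="test")
        print(name, "mAP@0.5 =", round(metrics.box.map50, 3))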