• Title/Summary/Keyword: YOLOv5 Model


A Study on the Elevator System Using Real-time Object Detection Technology YOLOv5 (실시간 객체 검출 기술 YOLOv5를 이용한 스마트 엘리베이터 시스템에 관한 연구)

  • Sun-Been Park;Yu-Jeong Jeong;Da-Eun Lee;Tae-Kook Kim
    • Journal of Internet of Things and Convergence / v.10 no.2 / pp.103-108 / 2024
  • In this paper, a smart elevator system was studied using real-time object detection technology based on YOLO (You Only Look Once) v5. When an external elevator button is pressed, the YOLOv5 model analyzes the camera video to determine whether anyone is waiting, and if it determines that no one is, the button press is automatically canceled. The study introduces an effective way to combine object detection via YOLOv5 with MQTT (Message Queuing Telemetry Transport), a communication protocol widely used in the Internet of Things, and uses this combination to implement a smart elevator system that determines in real time whether passengers are waiting. The proposed system can take over the role of CCTV (closed-circuit television) while reducing unnecessary power consumption, and is therefore expected to help address safety and security issues.
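
A minimal sketch of the idea described above: detections from the lobby camera are reduced to a cancel/keep decision for the call button, which would then be sent over MQTT. The model loading and MQTT calls are kept behind a guard and commented; the topic name, broker, and thresholds are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch: cancel the elevator call when no person is detected.
# Only the decision logic runs as-is; YOLOv5 inference and MQTT publishing
# (assumed dependencies: torch + ultralytics/yolov5, paho-mqtt) are commented.

def should_cancel_call(detections, conf_threshold=0.5):
    """detections: list of (class_name, confidence) pairs from a detector.
    Returns True when no person is detected above the confidence threshold."""
    people = [c for name, c in detections if name == "person" and c >= conf_threshold]
    return len(people) == 0

if __name__ == "__main__":
    # import torch, paho.mqtt.client as mqtt
    # model = torch.hub.load("ultralytics/yolov5", "yolov5s")
    # results = model(frame)  # frame: image captured when the button is pressed
    # detections = [(model.names[int(cls)], float(conf))
    #               for *_, conf, cls in results.xyxy[0]]
    detections = [("person", 0.82), ("backpack", 0.64)]  # stand-in detector output
    if should_cancel_call(detections):
        pass  # client.publish("elevator/cancel", "1")  # illustrative MQTT topic
```

The MQTT side would simply publish the decision to the elevator controller, which keeps the camera node and the controller decoupled.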

A deep learning approach to permanent tooth germ detection on pediatric panoramic radiographs

  • Kaya, Emine;Gunec, Huseyin Gurkan;Aydin, Kader Cesur;Urkmez, Elif Seyda;Duranay, Recep;Ates, Hasan Fehmi
    • Imaging Science in Dentistry / v.52 no.3 / pp.275-281 / 2022
  • Purpose: The aim of this study was to assess the performance of a deep learning system for permanent tooth germ detection on pediatric panoramic radiographs. Materials and Methods: In total, 4518 anonymized panoramic radiographs of children between 5 and 13 years of age were collected. YOLOv4, a convolutional neural network (CNN)-based object detection model, was used to automatically detect permanent tooth germs. Panoramic images of children annotated in LabelImg were used to train and test the YOLOv4 algorithm. True-positive, false-positive, and false-negative rates were calculated, and a confusion matrix was used to evaluate the performance of the model. Results: The YOLOv4 model, which detected permanent tooth germs on pediatric panoramic radiographs, provided an average precision of 94.16% and an F1 value of 0.90, indicating a high level of performance. The average YOLOv4 inference time was 90 ms. Conclusion: The detection of permanent tooth germs on pediatric panoramic X-rays using a deep learning-based approach may facilitate the early diagnosis of tooth deficiency or supernumerary teeth and help dental practitioners find more accurate treatment options while saving time and effort.
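
The evaluation above derives precision, recall, and F1 from the true-positive, false-positive, and false-negative counts of the confusion matrix. A short sketch of that computation, with made-up counts (not the paper's actual values):

```python
# Precision, recall, and F1 from detection confusion-matrix counts.
# The counts below are illustrative, not the study's real numbers.

def detection_metrics(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

p, r, f1 = detection_metrics(tp=90, fp=10, fn=10)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.9 0.9 0.9
```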

Development of Disabled Parking System Using Deep Learning Model (딥러닝 모델을 적용한 장애인 주차구역 단속시스템의 개발)

  • Lee, Jiwon;Lee, Dongjin;Jang, Jongwook;Jang, Sungjin
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.175-177 / 2021
  • A disabled parking area is a parking facility reserved for people with mobility impairments and secures a safe pedestrian passage for them. However, owing to a lack of social awareness of these areas, their proper use is hindered, and violations such as illegal parking and parking obstruction increase every year. In this study, we therefore propose a system that uses YOLOv5, a deep learning object detection model, to enforce against illegal parking in disabled parking areas and to reduce parking obstruction within these spaces.

Recyclable Objects Detection via Bounding Box CutMix and Standardized Distance-based IoU (Bounding Box CutMix와 표준화 거리 기반의 IoU를 통한 재활용품 탐지)

  • Lee, Haejin;Jung, Heechul
    • IEMEK Journal of Embedded Systems and Applications / v.17 no.5 / pp.289-296 / 2022
  • In this paper, we developed a deep learning-based recyclable object detection model. The model is based on YOLOv5, a one-stage detector, and detects and classifies recyclable objects into 7 categories: paper, carton, can, glass, PET, plastic, and vinyl. We propose two methods to solve problems that arise while training recyclable object detection models. Bounding Box CutMix addresses the problem of training images containing no objects, which occurs with Mosaic, the data augmentation used in YOLOv5. Standardized Distance-based IoU replaces DIoU with a normalization factor that is not affected by the center-point distance of the bounding boxes. The recyclable object detection model achieved a final mAP of 0.91978 with Bounding Box CutMix and 0.91149 with Standardized Distance-based IoU.
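
For context, a sketch of the standard DIoU that the paper's Standardized Distance-based IoU modifies: DIoU subtracts from IoU a penalty d²/c², where d is the distance between box centers and c is the diagonal of the smallest enclosing box. The paper replaces this distance-dependent denominator with its own normalization factor, which is not reproduced here.

```python
# Standard DIoU between two axis-aligned boxes (x1, y1, x2, y2).
# The Standardized Distance-based IoU of the paper swaps the c^2
# denominator for a normalization factor independent of center distance.

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def diou(a, b):
    cx_a, cy_a = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
    cx_b, cy_b = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
    d2 = (cx_a - cx_b) ** 2 + (cy_a - cy_b) ** 2   # squared center distance
    ex1, ey1 = min(a[0], b[0]), min(a[1], b[1])    # smallest enclosing box
    ex2, ey2 = max(a[2], b[2]), max(a[3], b[3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2       # squared enclosing diagonal
    return iou(a, b) - d2 / c2

box = (0, 0, 2, 2)
assert diou(box, box) == 1.0                       # identical boxes: no penalty
assert diou(box, (1, 1, 3, 3)) < iou(box, (1, 1, 3, 3))  # offset centers penalized
```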

A Study on the Fallen Patient Detection Model in Indoor Hospital Using YOLOv5 (YOLOv5를 이용한 병원 내부환경에서의 환자 낙상 탐지모델에 관한 연구)

  • Hong, Sang-Hoon;Bae, Hyun-Jae
    • Proceedings of the Korean Society of Computer Information Conference / 2022.07a / pp.93-94 / 2022
  • Recently, the aging of society has rapidly emerged as a serious social problem, and the proportion of people admitted to hospitals is higher than before. Patients with limited mobility or insufficient muscle strength have a reduced ability to move on their own, and a fall can lead to injury or, in fatal cases, death. However, there is a limit to how many fall accidents within a hospital the nursing staff alone can detect. Moreover, although research on patient fall detection has been conducted continuously, studies on fall detection in indoor hospital environments remain scarce. Therefore, in this paper, to detect falls in an indoor hospital environment, a patient fall detection model was built and evaluated by training a YOLOv5 model on data collected in actual hospital rooms.

A Study on the Model for Determining Strawberry Disease Using YOLOv5 (YOLOv5를 이용한 딸기 병해 판별 모델 연구)

  • Jinhwan Yang;Hyungsik Joo;Bokyung Shin;Jinsuk Bang
    • Proceedings of the Korea Information Processing Society Conference / 2023.05a / pp.709-710 / 2023
  • Recently, the sustainability of agriculture has been threatened by a shrinking agricultural workforce caused by the deepening aging of the farming population. In greenhouse cultivation, the predominant form of domestic farming, disease can cause cascading damage, so early diagnosis of disease is needed to increase agricultural productivity. In this paper, for early diagnosis of and response to disease, we built a strawberry disease diagnosis model using YOLOv5 and ran experiments while varying the dataset and the training details. The experiments showed that enlarging the dataset and increasing the number of epochs affect model performance, but once a threshold is reached they no longer help improve it. Meanwhile, the best-performing trained model achieved an F1 score of 0.98 and an mAP of 0.99, making it possible to diagnose strawberry disease with high accuracy.

Ship Detection from SAR Images Using YOLO: Model Constructions and Accuracy Characteristics According to Polarization (YOLO를 이용한 SAR 영상의 선박 객체 탐지: 편파별 모델 구성과 정확도 특성 분석)

  • Yungyo Im;Youjeong Youn;Jonggu Kang;Seoyeon Kim;Yemin Jeong;Soyeon Choi;Youngmin Seo;Yangwon Lee
    • Korean Journal of Remote Sensing / v.39 no.5_3 / pp.997-1008 / 2023
  • Ship detection at sea can be performed in various ways. In particular, satellites provide wide-area surveillance, and Synthetic Aperture Radar (SAR) imagery can be used day and night in all weather conditions. To propose an efficient ship detection method from SAR images, this study applied the You Only Look Once Version 5 (YOLOv5) model to Sentinel-1 images and analyzed the difference between individual and integrated models, as well as the accuracy characteristics by polarization. YOLOv5s, which has fewer and lighter parameters, and YOLOv5x, which has more parameters but higher accuracy, were used for performance tests (1) with each polarization (HH, HV, VH, and VV) separately and (2) with images from all polarizations combined. A total of 19,582 images were used in the experiments. All four experiments showed very similar and high accuracy of 0.977 ≤ AP@0.5 ≤ 0.998. This result suggests that a polarization-integrated model using lightweight YOLO models can be the most effective for real-time system deployment. Moreover, if other SAR images, such as Capella and ICEYE, are included in addition to Sentinel-1 images, a more flexible and accurate ship detection model can be built.
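
The AP@0.5 figures above rest on a standard matching rule: a predicted box counts as a true positive when it overlaps a still-unmatched ground-truth box with IoU ≥ 0.5. A compact sketch of that rule, with made-up boxes and scores for illustration:

```python
# Greedy IoU >= 0.5 matching used when computing AP@0.5.
# Boxes are (x1, y1, x2, y2); predictions carry a confidence score.

def box_iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def match_at_05(predictions, ground_truth):
    """predictions: (score, box) pairs; returns (tp, fp) at IoU 0.5."""
    matched, tp, fp = set(), 0, 0
    for score, box in sorted(predictions, reverse=True):   # high score first
        best = max((i for i in range(len(ground_truth)) if i not in matched),
                   key=lambda i: box_iou(box, ground_truth[i]), default=None)
        if best is not None and box_iou(box, ground_truth[best]) >= 0.5:
            matched.add(best); tp += 1
        else:
            fp += 1
    return tp, fp

gts = [(0, 0, 10, 10), (20, 20, 30, 30)]
preds = [(0.9, (1, 1, 10, 10)), (0.8, (50, 50, 60, 60))]
print(match_at_05(preds, gts))  # (1, 1): one ship matched, one false alarm
```

Sweeping a confidence threshold over these (tp, fp) counts yields the precision-recall curve from which AP@0.5 is computed.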

Realtime Detection of Benthic Marine Invertebrates from Underwater Images: A Comparison between YOLO and Transformer Models (수중영상을 이용한 저서성 해양무척추동물의 실시간 객체 탐지: YOLO 모델과 Transformer 모델의 비교평가)

  • Ganghyun Park;Suho Bak;Seonwoong Jang;Shinwoo Gong;Jiwoo Kwak;Yangwon Lee
    • Korean Journal of Remote Sensing / v.39 no.5_3 / pp.909-919 / 2023
  • Benthic marine invertebrates, the invertebrates living on the ocean floor, are an essential component of the marine ecosystem, but excessive proliferation of invertebrate grazers or harmful organisms can damage the coastal fishery ecosystem. In this study, we compared and evaluated You Only Look Once Version 7 (YOLOv7), the most widely used deep learning model for real-time object detection, and Detection Transformer (DETR), a transformer-based model, using underwater images of benthic marine invertebrates on the coasts of South Korea. YOLOv7 showed a mean average precision at 0.5 (mAP@0.5) of 0.899, and DETR showed an mAP@0.5 of 0.862, which implies that YOLOv7 is more appropriate for detecting objects of various sizes. This is because YOLOv7 generates bounding boxes at multiple scales, which helps detect small objects. Both models had a processing speed of more than 30 frames per second (FPS), so real-time object detection from images provided by divers and underwater drones is expected to be possible. The proposed method can be used to prevent and restore damage to coastal fishery ecosystems, for example by controlling invertebrate grazers and creating sea forests to prevent ocean desertification.

Real-Time Comprehensive Assistance for Visually Impaired Navigation

  • Amal Al-Shahrani;Amjad Alghamdi;Areej Alqurashi;Raghad Alzahrani;Nuha imam
    • International Journal of Computer Science & Network Security / v.24 no.5 / pp.1-10 / 2024
  • Individuals with visual impairments face numerous challenges in their daily lives, with navigating streets and public spaces being particularly daunting. The inability to identify safe crossing locations and to assess the feasibility of crossing significantly restricts their mobility and independence. Globally, an estimated 285 million people suffer from visual impairment, with 39 million categorized as blind and 246 million as visually impaired, according to the World Health Organization. In Saudi Arabia alone, there are approximately 159,000 blind individuals, according to unofficial statistics. The profound impact of visual impairments on daily activities underscores the urgent need for solutions that improve mobility and enhance safety. This study addresses this pressing issue by leveraging computer vision and deep learning techniques to enhance object detection capabilities. Two models were trained to detect objects: one focused on street-crossing obstacles, and the other on general object search. The first model was trained on a dataset comprising 5283 images of road obstacles and traffic signals, annotated to create a labeled dataset, using the YOLOv8 and YOLOv5 models, with YOLOv5 achieving a satisfactory accuracy of 84%. The second model was trained on the COCO dataset using YOLOv5, yielding an accuracy of 94%. By improving object detection capabilities through advanced technology, this research seeks to empower individuals with visual impairments, enhancing their mobility, independence, and overall quality of life.

A Scene-Specific Object Detection System Utilizing the Advantages of Fixed-Location Cameras

  • Jin Ho Lee;In Su Kim;Hector Acosta;Hyeong Bok Kim;Seung Won Lee;Soon Ki Jung
    • Journal of information and communication convergence engineering / v.21 no.4 / pp.329-336 / 2023
  • This paper introduces an edge AI-based scene-specific object detection system for long-term traffic management, focusing on analyzing congestion and movement via cameras. It aims to balance fast processing and accuracy in traffic flow data analysis using edge computing. We adapt the YOLOv5 model, with four heads, to a scene-specific model that utilizes the fixed camera's scene-specific properties. This model selectively detects objects based on scale by blocking nodes, ensuring only objects of certain sizes are identified. A decision module then selects the most suitable object detector for each scene, enhancing inference speed without significant accuracy loss, as demonstrated in our experiments.
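
The decision module described above can be pictured as mapping the object-size statistics a fixed camera accumulates to a scale-restricted detector variant. The sketch below is hypothetical: the size thresholds, variant names, and median-based rule are illustrative assumptions, not values from the paper.

```python
# Hypothetical decision module: choose a detector variant from the
# heights (in pixels) of objects a fixed camera typically observes.
# Thresholds and variant names are illustrative assumptions.

def pick_detector(object_heights_px, frame_height=1080):
    """Choose a scale-restricted detector from observed object heights."""
    if not object_heights_px:
        return "full"                       # no statistics yet: keep all heads
    rel = sorted(h / frame_height for h in object_heights_px)
    median = rel[len(rel) // 2]
    if median < 0.05:
        return "small-objects"              # keep only the fine-scale head
    if median < 0.20:
        return "medium-objects"
    return "large-objects"

print(pick_detector([30, 40, 45]))          # distant traffic: small-objects
```

Restricting a scene's detector to the heads matching its typical object scale is what lets inference speed improve without a significant accuracy loss on that scene.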