• Title/Summary/Keyword: YOLOv5s

A system for automatically generating activity photos of infants based on facial recognition in a multi-camera environment (다중 카메라 환경에서의 안면인식 기반의 영유아 활동 사진 자동 생성 시스템)

  • Jung-seok Lee;Kyu-ho Lee;Kun-hee Kim;Chang-hun Choi;Kyoung-ro Park;Ho-joun Son;Hongseok Yoo
    • Proceedings of the Korean Society of Computer Information Conference / 2023.07a / pp.481-483 / 2023
  • In this paper, we developed a system that automatically generates activity photos of infants based on facial recognition in a multi-camera environment. The developed system can prevent safety accidents caused by inattentive childcare while photos are taken for daily report notes at daycare centers. The system operates in two parts: a mobile collector and a classification server. The mobile collector uses a Raspberry Pi, captures roughly one photo per second, and stores the photos in a shared folder via SAMBA. The classification server uses YOLOv5 to detect and classify faces. Facial expressions in the classified photos are analyzed with OpenCV and TensorFlow-Keras, and only smiling photos to be sent to parents are kept; the remaining photos are moved to /dev/null and deleted.
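
As a rough illustration of the classification-server pipeline described in this abstract, the sketch below detects faces with a YOLOv5 model and keeps only photos containing a smiling face; the weight files, folder path, and smile threshold are hypothetical placeholders, not details taken from the paper.

```python
# Minimal sketch of the server-side pipeline (not the authors' code).
# "face_yolov5.pt" and "smile_cnn.h5" are hypothetical fine-tuned models.
import os
import cv2
import numpy as np
import torch
from tensorflow import keras

detector = torch.hub.load("ultralytics/yolov5", "custom", path="face_yolov5.pt")
smile_model = keras.models.load_model("smile_cnn.h5")

def keep_if_smiling(image_path: str) -> bool:
    """Return True (keep photo) if any detected face is classified as smiling."""
    img = cv2.imread(image_path)
    results = detector(img[..., ::-1])            # YOLOv5 expects RGB
    for *box, conf, cls in results.xyxy[0].tolist():
        x1, y1, x2, y2 = map(int, box)
        face = cv2.resize(img[y1:y2, x1:x2], (64, 64)) / 255.0
        if smile_model.predict(face[np.newaxis], verbose=0)[0][0] > 0.5:
            return True
    return False

share = "/mnt/samba_share"                        # hypothetical mounted SAMBA folder
for name in os.listdir(share):
    if not name.lower().endswith((".jpg", ".png")):
        continue
    path = os.path.join(share, name)
    if not keep_if_smiling(path):
        os.remove(path)                           # discard non-smiling photos
```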

Analysis of Floating Population in Schools Using Open Source Hardware and Deep Learning-Based Object Detection Algorithm (오픈소스 하드웨어와 딥러닝 기반 객체 탐지 알고리즘을 활용한 교내 유동인구 분석)

  • Kim, Bo-Ram;Im, Yun-Gyo;Shin, Sil;Lee, Jin-Hyeok;Chu, Sung-Won;Kim, Na-Kyeong;Park, Mi-So;Yoon, Hong-Joo
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.1 / pp.91-98 / 2022
  • In this study, a floating population survey and analysis of Pukyong National University was conducted using Raspberry Pi, an open-source hardware platform, and deep learning-based object detection algorithms. After collecting images with the Raspberry Pi, person detection on the collected images was performed using the ImageAI implementation of YOLOv3 and the YOLOv5 model, and Haar-like feature and HOG models were used for comparative accuracy analysis. The analysis showed that the smallest floating population occurred on the school's anniversary holiday. In general, the floating population at the entrance was larger than that at the exit, and both were found to be strongly affected by the school's anniversary and events.
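
The person-counting step could look roughly like the sketch below, which assumes the pretrained COCO YOLOv5s model (where class 0 is "person") and a hypothetical image folder; it is not the authors' code.

```python
# Count person detections per collected frame with a pretrained COCO YOLOv5s model.
import glob
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

def count_people(image_path: str) -> int:
    detections = model(image_path).xyxy[0]        # rows: (x1, y1, x2, y2, conf, class)
    return int((detections[:, 5] == 0).sum())     # class 0 is "person" in COCO

counts = {p: count_people(p) for p in glob.glob("gate_images/*.jpg")}  # hypothetical folder
print(sum(counts.values()), "person detections across", len(counts), "frames")
```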

Object Detection for the Visually Impaired in a Voice Guidance System (시각장애인을 위한 보행 안내 시스템의 객체 인식)

  • Soo-Yeon Son;Eunho-Jeong;Hyon Hee Kim
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.1206-1207 / 2023
  • Restricted mobility makes independent living difficult for visually impaired people and has a significant impact on their safety. This paper presents a method that uses YOLOv5 (You Only Look Once version 5) to support safe walking. The proposed method recognizes pedestrians and moving objects such as cars, bicycles, and electric kickboards in real time and notifies the visually impaired user, which is expected to assist safe walking.
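
One way to sketch the detect-and-announce loop is shown below; the pretrained COCO YOLOv5s model and the pyttsx3 text-to-speech engine are assumptions for illustration (the paper does not specify its guidance output), and the target classes are limited to labels available in COCO.

```python
# Detect hazardous object classes in webcam frames and announce them by voice.
import cv2
import pyttsx3
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
tts = pyttsx3.init()
TARGETS = {"person", "car", "bicycle", "motorcycle", "bus", "truck"}

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame[..., ::-1])             # BGR -> RGB
    names = {results.names[int(c)] for c in results.xyxy[0][:, 5].tolist()}
    hazards = names & TARGETS
    if hazards:
        tts.say("Caution: " + ", ".join(sorted(hazards)) + " ahead")
        tts.runAndWait()
```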

Secure Self-Driving Car System Resistant to the Adversarial Evasion Attacks (적대적 회피 공격에 대응하는 안전한 자율주행 자동차 시스템)

  • Seungyeol Lee;Hyunro Lee;Jaecheol Ha
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.6 / pp.907-917 / 2023
  • Recently, self-driving cars have applied deep learning technology to advanced driver assistance systems to provide convenience to drivers, but deep learning technology has been shown to be vulnerable to adversarial evasion attacks. In this paper, we performed five adversarial evasion attacks, including MI-FGSM (Momentum Iterative Fast Gradient Sign Method), against the object detection algorithm YOLOv5 (You Only Look Once), and measured object detection performance in terms of mAP (mean Average Precision). In particular, we present a method that applies morphology operations to remove noise and extract boundaries so that YOLO can detect objects normally. Experimental analysis shows that when an adversarial attack was performed, YOLO's mAP dropped by at least 7.9%, whereas YOLO with the proposed method detected objects with up to 87.3% mAP.
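
A minimal sketch of a morphology-based input-cleaning step in the spirit of this defense is given below; the opening/closing operations and 3x3 kernel are illustrative choices rather than the authors' exact configuration.

```python
# Clean a (possibly adversarially perturbed) image with morphological opening
# and closing before running YOLOv5 detection.
import cv2
import numpy as np
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

def detect_with_morphology(bgr_image: np.ndarray):
    cleaned = cv2.morphologyEx(bgr_image, cv2.MORPH_OPEN, kernel)   # suppress bright speckle noise
    cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_CLOSE, kernel)    # restore object boundaries
    return model(cleaned[..., ::-1])                                # run YOLOv5 on the RGB image

adv = cv2.imread("adversarial_example.jpg")       # hypothetical attacked image
print(detect_with_morphology(adv).pandas().xyxy[0])
```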

Research on Improving the Performance of YOLO-Based Object Detection Models for Smoke and Flames from Different Materials (다양한 재료에서 발생되는 연기 및 불꽃에 대한 YOLO 기반 객체 탐지 모델 성능 개선에 관한 연구)

  • Heejun Kwon;Bohee Lee;Haiyoung Jung
    • Journal of the Korean Institute of Electrical and Electronic Material Engineers / v.37 no.3 / pp.261-273 / 2024
  • This paper is an experimental study on improving the detection of smoke and flames from different materials with YOLO. Images of fires occurring in various materials were collected from open datasets, and experiments were conducted by varying the main preprocessing factors that affect the performance of a fire object detection model, such as bounding-box versus polygon annotation and data augmentation of the collected open image dataset. To evaluate model performance, precision, recall, F1 score, mAP, and FPS were calculated for each condition, and the models were compared on these values. We also analyzed how the preprocessing method changed model performance in order to derive the conditions with the greatest impact on improving a fire object detection model. The experimental results showed that, for the fire object detection model based on YOLOv5s 6.0, data augmentation that changes the color of the flame, such as saturation, brightness, and exposure, is most effective in improving performance. The real-time fire object detection model developed in this study can be applied to equipment such as existing CCTV and is expected to contribute to minimizing fire damage by enabling early detection of fires occurring in various materials.
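
The kind of colour augmentation highlighted in the results (saturation, brightness, exposure) can be sketched as a simple HSV jitter, as below; the gain ranges are illustrative, and YOLOv5 itself provides comparable augmentation through its hsv_s / hsv_v hyperparameters.

```python
# Randomly jitter saturation and value (brightness/exposure) of a training image.
import cv2
import numpy as np

def hsv_jitter(bgr: np.ndarray, s_gain: float = 0.7, v_gain: float = 0.4) -> np.ndarray:
    """Scale saturation and value by random factors in [1 - gain, 1 + gain]."""
    r = 1 + np.random.uniform(-1, 1, 2) * [s_gain, v_gain]
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] = np.clip(hsv[..., 1] * r[0], 0, 255)   # saturation
    hsv[..., 2] = np.clip(hsv[..., 2] * r[1], 0, 255)   # value (brightness/exposure)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

augmented = hsv_jitter(cv2.imread("flame_sample.jpg"))  # hypothetical dataset image
```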

A Study on the Artificial Intelligence-Based Soybean Growth Analysis Method (인공지능 기반 콩 생장분석 방법 연구)

  • Moon-Seok Jeon;Yeongtae Kim;Yuseok Jeong;Hyojun Bae;Chaewon Lee;Song Lim Kim;Inchan Choi
    • Journal of Korea Society of Industrial Information Systems / v.28 no.5 / pp.1-14 / 2023
  • Soybeans are one of the world's top five staple crops and a major source of plant-based protein. Because they are susceptible to climate change, which can significantly impact grain production, the National Agricultural Science Institute is conducting research on crop phenotypes through growth analysis of various soybean varieties. While the process of capturing growth-progression photos of soybeans is automated, the verification, recording, and analysis of growth stages are currently done manually. In this paper, we designed and trained a YOLOv5s model to detect soybean leaf objects in image data of soybean plants and a Convolutional Neural Network (CNN) model to judge the unfolding status of the detected leaves. We combined these two models and implemented an algorithm that distinguishes leaf layers based on the coordinates of the detected soybean leaves. As a result, we developed a program that takes time-series data of soybeans as input and performs growth analysis. The program can accurately determine the growth stages of soybeans up to the second or third compound leaf.
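
The layer-distinguishing step based on leaf coordinates could be sketched roughly as below, assuming leaf bounding boxes from the YOLOv5s detector are already available; the vertical-gap threshold and box format are illustrative assumptions.

```python
# Group detected leaf boxes into vertical layers by the gap between their y-centres.
from typing import List, Tuple

Box = Tuple[float, float, float, float]   # (x1, y1, x2, y2) in pixels

def group_into_layers(leaf_boxes: List[Box], gap: float = 60.0) -> List[List[Box]]:
    """Cluster leaf boxes into layers: a new layer starts when the y-centre gap exceeds `gap`."""
    boxes = sorted(leaf_boxes, key=lambda b: (b[1] + b[3]) / 2)
    layers: List[List[Box]] = []
    for box in boxes:
        yc = (box[1] + box[3]) / 2
        if layers and yc - (layers[-1][-1][1] + layers[-1][-1][3]) / 2 <= gap:
            layers[-1].append(box)
        else:
            layers.append([box])
    return layers

# Example: three leaves at two heights -> two layers
print(len(group_into_layers([(10, 40, 60, 90), (80, 50, 130, 95), (20, 200, 70, 250)])))
```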

A Study on Deep learning-based crop surface inspection automation system (딥러닝 기반 농작물 표면 검사 자동화 시스템 연구)

  • Kim, W.J.;Kim, S.B.;Kim, M.J.;Kim, M.J.;Kim, S.H.
    • Proceedings of the Korea Information Processing Society Conference / 2022.11a / pp.758-760 / 2022
  • This study designs a machine that automates the existing visual sorting process using YOLOv5, a machine learning-based object detection model. We built a conveyor mechanism that performs image capture and sorting together with a sorting program, and classified apple quality into three grades by inspecting the entire surface. As a result, the quality of the input apples was successfully classified.
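
A rough sketch of such a sorting loop is shown below, assuming a YOLOv5 model fine-tuned with three apple-quality classes; the weight file, class order, and bin routing are hypothetical.

```python
# Grade apples on a conveyor from camera frames using a fine-tuned YOLOv5 model.
import cv2
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="apple_grades.pt")  # hypothetical weights
GRADES = {0: "grade A", 1: "grade B", 2: "grade C"}

cap = cv2.VideoCapture(0)                          # conveyor camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    det = model(frame[..., ::-1]).xyxy[0]          # (x1, y1, x2, y2, conf, class)
    if len(det):
        cls = int(det[det[:, 4].argmax(), 5])      # take the most confident detection
        print("Route apple to bin:", GRADES.get(cls, "unknown"))
```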

Study On Masked Face Detection And Recognition using transfer learning

  • Kwak, NaeJoung;Kim, DongJu
    • International Journal of Advanced Culture Technology / v.10 no.1 / pp.294-301 / 2022
  • COVID-19 is a crisis with numerous casualties. The World Health Organization (WHO) declared the use of masks an essential safety measure during the COVID-19 pandemic, so whether a mask is worn is an important issue when entering and exiting public places and institutions. However, masks make face recognition very difficult because parts of the face are hidden, and face identification and identity verification in access systems therefore become difficult. In this paper, we propose a system that detects masked faces using transfer learning of YOLOv5s and recognizes the user using transfer learning of FaceNet. Transfer learning is performed while varying the learning rate, number of epochs, and batch size; the results are evaluated, and the best model is selected as the representative model. We confirmed that the proposed model performs well at detecting and recognizing masked faces.
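
The recognition stage could be sketched as an embedding comparison like the one below, using facenet-pytorch as one possible FaceNet implementation; the enrolled-embedding dictionary and similarity threshold are illustrative, and the paper fine-tunes its own models rather than using these defaults.

```python
# Identify a masked-face crop by comparing its FaceNet embedding with enrolled users.
import torch
from facenet_pytorch import InceptionResnetV1

embedder = InceptionResnetV1(pretrained="vggface2").eval()

def identify(face_tensor: torch.Tensor, enrolled: dict, threshold: float = 0.7) -> str:
    """face_tensor: normalized 3x160x160 crop from the masked-face detector (assumed)."""
    with torch.no_grad():
        emb = embedder(face_tensor.unsqueeze(0))[0]
    best_name, best_sim = "unknown", threshold
    for name, ref in enrolled.items():             # enrolled: {user name: reference embedding}
        sim = torch.nn.functional.cosine_similarity(emb, ref, dim=0).item()
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name
```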

A Study on Image Preprocessing Methods for Automatic Detection of Ship Corrosion Based on Deep Learning (딥러닝 기반 선박 부식 자동 검출을 위한 이미지 전처리 방안 연구)

  • Yun, Gwang-ho;Oh, Sang-jin;Shin, Sung-chul
    • Journal of the Korean Society of Industry Convergence / v.25 no.4_2 / pp.573-586 / 2022
  • Corrosion can cause dangerous and expensive damage and failures of ship hulls and equipment, so vessels must be maintained through periodic corrosion inspections. During visual inspection, many corrosion locations are inaccessible for many reasons, especially from a safety point of view, and the subjective judgment of inspectors is another issue of visual inspection. Many studies have attempted to automate visual inspection. In this study, we propose image preprocessing methods based on image patch segmentation and thresholding. YOLOv5 was used as the object detection model after the image preprocessing. Finally, corrosion detection performance using the proposed method was evaluated to be improved in terms of mean average precision.
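
A minimal sketch of patch segmentation plus thresholding as a preprocessing step is shown below; the 256-pixel patch size and the use of Otsu thresholding are assumptions for illustration, not the paper's exact settings.

```python
# Split an inspection image into fixed-size patches and threshold each one.
import cv2
import numpy as np

def preprocess_patches(bgr: np.ndarray, patch: int = 256):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    patches = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            tile = gray[y:y + patch, x:x + patch]
            _, mask = cv2.threshold(tile, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            patches.append(((x, y), mask))         # keep the patch origin for mapping back
    return patches

hull = cv2.imread("hull_image.jpg")                # hypothetical inspection photo
print(len(preprocess_patches(hull)), "thresholded patches")
```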

A study on accident prevention AI system based on estimation of bus passengers' intentions (시내버스 승하차 의도분석 기반 사고방지 AI 시스템 연구)

  • Seonghwan Park;Sunoh Byun;Junghoon Park
    • Smart Media Journal / v.12 no.11 / pp.57-66 / 2023
  • In this paper, we present a study on an AI-based system that utilizes the CCTV system within city buses to predict the intentions of boarding and alighting passengers, with the aim of preventing accidents. The proposed system employs the YOLOv7 Pose model to detect passengers and an LSTM model to predict the intentions of tracked passengers. The system can be installed on the bus's CCTV terminals, allowing real-time visual confirmation of passengers' intentions throughout driving, and provides alerts to the driver, mitigating potential accidents during boarding and alighting. Test results show accuracy rates of 0.81 for analyzing boarding intentions and 0.79 for predicting alighting intentions onboard. To ensure real-time performance, we verified that at least 5 frames per second can be analyzed in a GPU environment. This algorithm enhances the safety of passenger transitions during bus operation. In the future, with improved hardware specifications and abundant data collection, the system can be expanded to various safety-related metrics. The algorithm is anticipated to play a pivotal role in ensuring safety when autonomous driving becomes commercialized, and it could also be applied to other modes of public transportation, such as subways and other forms of mass transit, contributing to the overall safety of public transportation systems.
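
The intention-prediction stage could be sketched as an LSTM over per-frame pose keypoints, as below; the feature layout (17 COCO keypoints from YOLOv7 Pose, x and y per keypoint), layer sizes, and sequence length are illustrative assumptions.

```python
# Classify a tracked passenger's intention from a sequence of pose keypoints.
import torch
from torch import nn

class IntentionLSTM(nn.Module):
    def __init__(self, n_features: int = 34, hidden: int = 64, n_classes: int = 2):
        super().__init__()                         # n_classes: e.g. intends to alight vs. stays seated
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, keypoint_seq: torch.Tensor) -> torch.Tensor:
        _, (h, _) = self.lstm(keypoint_seq)        # keypoint_seq: (batch, frames, 34)
        return self.head(h[-1])                    # logits per intention class

model = IntentionLSTM()
dummy = torch.randn(1, 30, 34)                     # 30 tracked frames of one passenger
print(torch.softmax(model(dummy), dim=1))
```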