• Title/Summary/Keyword: yolo

Search Results: 393

A Study on Vehicle License Plate Recognition System through Fake License Plate Generator in YOLOv5 (YOLOv5에서 가상 번호판 생성을 통한 차량 번호판 인식 시스템에 관한 연구)

  • Ha, Sang-Hyun;Jeong, Seok Chan;Jeon, Young-Joon;Jang, Mun-Seok
    • Journal of the Korean Society of Industry Convergence / v.24 no.6_2 / pp.699-706 / 2021
  • Existing license plate recognition systems rely on optical character recognition, but recent studies have proposed deep learning methods because OCR suffers from image-quality problems and misrecognition of Korean characters. Deep learning requires large amounts of data, yet license plate images are difficult to collect because of the Personal Information Protection Act, and labeling the location of each plate is time-consuming. To solve this problem, this paper generates five types of plates with a virtual Korean license plate generation program that follows the notice of the Ministry of Land, Infrastructure and Transport. The generated plates are composited onto the plate regions of collectable vehicle images to construct 10,147 training samples for deep learning. The training data assign license plates, Korean characters, and digits to individual classes and are learned with YOLOv5. Because the proposed method recognizes letters and digits individually, plates can still be read even if the plate standard changes or the number of characters increases, as long as the font stays the same. Experiments yielded an accuracy of 96.82%, and the method applies not only to the learned plates but also to new plate types such as newly issued and eco-friendly plates.
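
A rough idea of the per-character reading step described above, sorting individual character detections from left to right and concatenating their class labels, is sketched below. The detection tuple format and class names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: assemble a plate string from per-character YOLO detections.
# Detection tuples (label, confidence, x_center, y_center) are an assumed format,
# not the format used in the paper.

def read_plate(detections, conf_thresh=0.5):
    """Sort character detections left-to-right and concatenate their labels."""
    chars = [d for d in detections
             if d[0] != "plate" and d[1] >= conf_thresh]   # keep characters only
    chars.sort(key=lambda d: d[2])                          # left-to-right order
    return "".join(d[0] for d in chars)

if __name__ == "__main__":
    # Hypothetical detections for a plate reading "12가3456"
    dets = [
        ("plate", 0.98, 0.50, 0.50),
        ("1", 0.95, 0.10, 0.52), ("2", 0.94, 0.20, 0.52),
        ("가", 0.91, 0.32, 0.52),
        ("3", 0.96, 0.45, 0.52), ("4", 0.93, 0.55, 0.52),
        ("5", 0.92, 0.65, 0.52), ("6", 0.90, 0.75, 0.52),
    ]
    print(read_plate(dets))  # -> 12가3456
```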

Updating Obstacle Information Using Object Detection in Street-View Images (스트리트뷰 영상의 객체탐지를 활용한 보행 장애물 정보 갱신)

  • Park, Seula;Song, Ahram
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.39 no.6 / pp.599-607 / 2021
  • Street-view images, which are omnidirectional scenes centered on a specific location on the road, can provide various obstacle information for pedestrians. Pedestrian network data for navigation services should reflect up-to-date obstacle information to ensure the mobility of pedestrians, including people with disabilities. In this study, an object detection model was trained on street-view images with a deep learning algorithm to detect bollards, a major obstacle in Seoul. A process was also proposed for updating the presence and number of bollards as obstacle attributes of crosswalk nodes through spatial matching between the detected bollards and the pedestrian nodes; missing crosswalk information can be updated concurrently by the same process. The proposed approach is well suited to crowdsourced data because a model trained on street-view images can also be applied to photos taken with a smartphone while walking. With additional training on other obstacles captured in street-view images, the approach is expected to enable efficient updating of obstacle information on the road.
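
The spatial-matching step, attaching bollard presence and count attributes to nearby crosswalk nodes, might look roughly like the following sketch; the node fields, planar coordinates, and the 10 m threshold are assumptions made for illustration.

```python
import math

# Minimal sketch of the spatial-matching idea: count detected bollards that fall
# within a distance threshold of each crosswalk node. Coordinates, node fields,
# and the threshold are illustrative assumptions.

def match_bollards_to_nodes(nodes, bollard_points, max_dist_m=10.0):
    """Attach bollard presence/count attributes to each crosswalk node."""
    for node in nodes:
        count = sum(
            1 for (bx, by) in bollard_points
            if math.hypot(bx - node["x"], by - node["y"]) <= max_dist_m
        )
        node["bollard_count"] = count
        node["has_bollard"] = count > 0
    return nodes

if __name__ == "__main__":
    nodes = [{"id": "N1", "x": 0.0, "y": 0.0}, {"id": "N2", "x": 50.0, "y": 0.0}]
    bollards = [(2.0, 1.0), (3.5, -1.2), (48.0, 60.0)]
    for n in match_bollards_to_nodes(nodes, bollards):
        print(n)
```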

A deep learning-based approach for feeding behavior recognition of weanling pigs

  • Kim, MinJu;Choi, YoHan;Lee, Jeong-nam;Sa, SooJin;Cho, Hyun-chong
    • Journal of Animal Science and Technology / v.63 no.6 / pp.1453-1463 / 2021
  • Feeding is the most important behavior that represents the health and welfare of weanling pigs. Early detection of feed refusal is crucial for controlling disease in its initial stages and for detecting empty feeders so that feed can be added in a timely manner. This paper proposes a real-time, deep-learning-based technique for the detection and recognition of small pigs. The proposed model focuses on detecting pigs at a feeder in a feeding position. Conventional methods detect pigs first and then classify them into different behavior gestures; in contrast, the proposed method combines these two tasks into a single process that detects only feeding behavior, which increases detection speed. Considering the significant differences between pig behaviors at different sizes, adaptive adjustments are introduced into a you-only-look-once (YOLO) model, including an angle optimization strategy between the head and body for detecting a head in a feeder. According to the experimental results, the method detects feeding behavior and screens out non-feeding positions with average precision (AP) of 95.66%, 94.22%, and 96.56% at an intersection-over-union (IoU) threshold of 0.5 for YOLOv3, YOLOv4, and a model with an additional layer and the proposed activation function, respectively. Drinking behavior was detected with AP of 86.86%, 89.16%, and 86.41% at the same IoU threshold for YOLOv3, YOLOv4, and the model with the proposed activation function, respectively. In terms of detection and classification, the results demonstrate that the proposed method yields higher precision and recall than conventional methods.
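
The AP figures above are reported at an IoU threshold of 0.5. A minimal IoU helper for axis-aligned boxes, assuming an (x1, y1, x2, y2) pixel format, is sketched below.

```python
# Minimal sketch of the IoU check underlying the AP@0.5 figures above.
# Boxes are assumed to be (x1, y1, x2, y2) in pixels; this is not the paper's code.

def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

if __name__ == "__main__":
    pred = (100, 100, 200, 220)
    truth = (110, 105, 210, 230)
    score = iou(pred, truth)
    print(f"IoU = {score:.3f}", "-> match" if score >= 0.5 else "-> miss")
```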

Worker Collision Safety Management System using Object Detection (객체 탐지를 활용한 근로자 충돌 안전관리 시스템)

  • Lee, Taejun;Kim, Seongjae;Hwang, Chul-Hyun;Jung, Hoekyung
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.9 / pp.1259-1265 / 2022
  • Recently, AI, big data, and IoT technologies have been used in various safety solutions, such as fire detection and the detection of gas or hazardous substances, to prevent accidents. According to the occupational accident statistics published by the Ministry of Employment and Labor in 2021, the accident rate, the number of injured, and the number of deaths all increased compared with 2020. In this paper, following the dataset construction guidelines provided by the National Information Society Agency (NIA), a dataset was collected directly in the field and trained with YOLOv4 to propose a collision-risk detection system based on object detection. The accuracy of detecting violations of the dangerous-situation rules was 88% indoors and 92% outdoors. This system is expected to make it possible to analyze safety accidents at industrial sites in advance and to apply the results to research on intelligent platforms.
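
One plausible form of a dangerous-situation rule on top of YOLO detections, flagging a worker whose bounding box comes too close to a detected machine, is sketched below; the class names and the pixel threshold are assumptions, not the rules used in the paper.

```python
# Minimal sketch of a collision-risk rule on YOLO detections.
# Class names ("worker", "forklift") and the pixel threshold are illustrative
# assumptions, not the rule set used in the paper.

def box_center(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def collision_alerts(detections, hazard_classes=("forklift", "truck"), min_dist_px=80.0):
    """Return (worker_box, hazard_box) pairs whose centers are closer than the threshold."""
    workers = [d["box"] for d in detections if d["cls"] == "worker"]
    hazards = [d["box"] for d in detections if d["cls"] in hazard_classes]
    alerts = []
    for w in workers:
        wx, wy = box_center(w)
        for h in hazards:
            hx, hy = box_center(h)
            if ((wx - hx) ** 2 + (wy - hy) ** 2) ** 0.5 < min_dist_px:
                alerts.append((w, h))
    return alerts

if __name__ == "__main__":
    dets = [
        {"cls": "worker",   "box": (100, 200, 150, 350)},
        {"cls": "forklift", "box": (160, 180, 320, 360)},
    ]
    print(collision_alerts(dets))
```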

SHOMY: Detection of Small Hazardous Objects using the You Only Look Once Algorithm

  • Kim, Eunchan;Lee, Jinyoung;Jo, Hyunjik;Na, Kwangtek;Moon, Eunsook;Gweon, Gahgene;Yoo, Byungjoon;Kyung, Yeunwoong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.8 / pp.2688-2703 / 2022
  • Research on the advanced detection of harmful objects in airport cargo for passenger safety against terrorism has increased recently. However, because the associated studies focus primarily on detecting relatively large objects, research on small objects is lacking and detection performance for small objects has remained considerably low. Here, we verified the limitations of existing object detection research and developed a new model, the Small Hazardous Object detection enhanced and reconstructed Model (SHOMY), based on the You Only Look Once version 5 (YOLOv5) algorithm, to overcome these limitations. We also examined the performance of the proposed model through different experiments based on YOLOv5, a recently released object detection model. The detection performance of our model improved by 0.3 in mean average precision (mAP) and by 1.1 in mAP (.5:.95) relative to the YOLOv5 baseline. The proposed model is especially useful for detecting small objects of different types in overlapping environments where objects of different sizes are densely packed. The contributions of the study are the reconstructed layers of SHOMY and the fact that no data preprocessing is required, which enables immediate industrial application without performance degradation.
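
Reporting performance separately for small objects presupposes a size criterion. The sketch below buckets boxes by area using the common COCO cut-offs (32² and 96² pixels), which may differ from the criterion actually used in the study.

```python
# Minimal sketch: bucket ground-truth boxes by area so small-object performance can
# be reported separately. The 32^2 / 96^2 pixel cut-offs follow the common COCO
# convention and are not necessarily the criterion used in the paper.

def size_bucket(box, small=32 ** 2, medium=96 ** 2):
    x1, y1, x2, y2 = box
    area = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    if area < small:
        return "small"
    return "medium" if area < medium else "large"

if __name__ == "__main__":
    boxes = [(10, 10, 30, 30), (0, 0, 80, 80), (0, 0, 200, 150)]
    for b in boxes:
        print(b, "->", size_bucket(b))
```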

A climbing movement detection system through efficient cow behavior recognition based on YOLOX and OC-SORT (YOLOX와 OC-SORT 기반의 효율적인 소 행동 인식을 통한 승가 운동 감지시스템)

  • LI YU;NamHo Kim
    • Smart Media Journal / v.12 no.7 / pp.18-26 / 2023
  • In this study, we propose a cow behavior recognition system based on YOLOX and OC-SORT. YOLOX detects targets in real time and provides information on cow location and behavior. The OC-SORT module tracks the cows in the video and assigns unique IDs. A quantitative analysis module then analyzes the behavior and location information of the cows. Experimental results show that the system achieves high accuracy and precision in target detection and tracking. The average precision (AP) of YOLOX was 82.2%, the average recall (AR) was 85.5%, the number of parameters was 54.15M, and the computational cost was 194.16 GFLOPs. OC-SORT maintained high-precision real-time target tracking in complex environments and under occlusion. By analyzing changes in cow movement and the frequency of mounting behavior, the system can help discern the estrus behavior of cows more accurately.
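
Once YOLOX detections are associated to track IDs by OC-SORT, the quantitative analysis reduces to per-cow counting. A minimal sketch of counting mounting events per track ID is shown below; the consecutive-frame rule is an illustrative assumption, not the paper's criterion.

```python
from collections import defaultdict

# Minimal sketch of the quantitative-analysis step: given per-frame (track_id,
# behavior) labels coming out of a YOLOX + OC-SORT pipeline, count mounting events
# per cow. Requiring a few consecutive frames before counting an event is an
# illustrative assumption.

def count_mounting_events(frames, min_consecutive=3):
    """frames: list of dicts {track_id: behavior}, one dict per video frame."""
    streak = defaultdict(int)     # consecutive "mounting" frames per cow
    events = defaultdict(int)     # counted mounting events per cow
    for frame in frames:
        for cow_id, behavior in frame.items():
            if behavior == "mounting":
                streak[cow_id] += 1
                if streak[cow_id] == min_consecutive:   # count once per streak
                    events[cow_id] += 1
            else:
                streak[cow_id] = 0
    return dict(events)

if __name__ == "__main__":
    frames = [{1: "standing", 2: "mounting"}] + [{1: "mounting", 2: "mounting"}] * 4
    print(count_mounting_events(frames))   # e.g. {2: 1, 1: 1}
```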

Research on black ice detection using IoT sensors - Building a demonstration infrastructure - (IoT 센서를 이용한 블랙아이스 탐지에 관한 연구 - 실증 인프라 구축 -)

  • Min Woo Son;Byun Hyun Lee;Byung Sik Kim
    • Proceedings of the Korea Water Resources Association Conference / 2023.05a / pp.263-263 / 2023
  • Black ice is hard to distinguish by eye and causes many traffic accidents. According to the Traffic Accident Analysis System of the Korea Transport Institute, 122 people were killed in traffic accidents caused by frost/road icing and 40 by accumulated snow over the five years from 2017 to 2021, indicating that black ice is more dangerous than snow. Previous studies have predicted black ice formation from conditions such as atmospheric pressure and cold-air accumulation, but such meteorological models have difficulty detecting black ice under varied weather conditions, for example when snow thaws and refreezes due to the large diurnal temperature range of the spring thaw season. More recently, sensors based on image classification and deep learning models (e.g., YOLO) have been proposed, but these methods require sufficient computing resources and are not fast at detecting black ice, leaving a potential weakness for braking at the entry of a black-ice section. Therefore, using the fact that frost or freezing rain, the main causes of black ice, requires the ambient air to be at or below the dew point and the road surface temperature and dew point to be below freezing, this study presents a mobile black-ice estimation device built on an IoT sensor module that uses the road surface temperature and the dew point temperature calculated with the Magnus equation. The device computes the possibility of frost from the road surface temperature and the dew point temperature derived from atmospheric pressure, air temperature, and humidity, and determines whether conditions for freezing rain exist from the air temperature and road surface temperature. The results enable black-ice estimation and weather-information production at the same time; by transmitting the estimates to an integrated collection server, we aim to build a system that warns drivers early of black-ice risk sections ahead, together with the infrastructure to manage it, and thereby reduce skidding accidents on icy roads.
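
A minimal sketch of the dew-point and icing logic described above follows. The Magnus coefficients (17.62, 243.12 °C) are one common choice, and the tolerance and the freezing-rain rule are assumptions; the exact thresholds in the actual device may differ.

```python
import math

# Minimal sketch of the dew-point / frost / freezing-rain logic described above.
# Magnus coefficients (17.62, 243.12 degC) are one common choice; the tolerance and
# the freezing-rain rule are assumptions, and the device's thresholds may differ.

A, B = 17.62, 243.12  # Magnus coefficients for water vapour, temperatures in degC

def dew_point_c(temp_c, rel_humidity_pct):
    """Dew point (degC) from air temperature and relative humidity (Magnus formula)."""
    gamma = math.log(rel_humidity_pct / 100.0) + A * temp_c / (B + temp_c)
    return B * gamma / (A - gamma)

def black_ice_risk(air_temp_c, rel_humidity_pct, road_temp_c):
    """Flag frost and freezing-rain conditions on the road surface."""
    td = dew_point_c(air_temp_c, rel_humidity_pct)
    # Frost: near-saturated air (air temp at or below dew point, 0.5 degC tolerance
    # assumed) with dew point and road surface below freezing, as stated above.
    frost = (air_temp_c - td) <= 0.5 and td <= 0.0 and road_temp_c <= 0.0
    # Freezing rain (assumed rule): air above freezing while the road stays frozen.
    freezing_rain = air_temp_c > 0.0 and road_temp_c <= 0.0
    return {"dew_point_c": round(td, 2), "frost": frost, "freezing_rain": freezing_rain}

if __name__ == "__main__":
    print(black_ice_risk(air_temp_c=-1.0, rel_humidity_pct=98.0, road_temp_c=-2.5))
    # e.g. {'dew_point_c': -1.28, 'frost': True, 'freezing_rain': False}
```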

A Study of AI-based Monitoring Techniques for Land-based Debris in Stream (AI기반 하천 부유쓰레기 모니터링 기술 연구)

  • Kyungsu Lee;Haein Yoon;Jonghwa Won;Sang Hwa Jung
    • Proceedings of the Korea Water Resources Association Conference / 2023.05a / pp.137-137 / 2023
  • Marine debris not only degrades the aesthetic value of the coast but also causes social and environmental problems such as ecosystem destruction and fishery damage from ghost fishing. More than 70% of it originates on land, and unlike overseas cases, where plastics and other waste dominate, domestic marine debris includes a large amount of vegetation. Given the limits of existing marine debris estimates for the various kinds of floating debris and the need to make debris collection in rivers and estuaries more efficient, effective measures are needed to prevent floating debris from entering the ocean. To make collection of floating debris captured at river barriers more efficient before it reaches the sea and to build sustainable marine debris data, this study applied AI-based techniques: object detection for analyzing the composition of floating debris and semantic segmentation for estimating the collected amount. To gather data similar to real conditions, training data were collected in various river environments (a still-water tank, a small stream, and a steep-slope channel) under experimental conditions such as turbidity (algae, suspended sediment), light level, debris shape, vegetation content, weather (small stream), and flow velocity (steep-slope channel), with the types of floating debris selected based on marine debris classification criteria and statistics. Labeling (bounding box, polygon) was performed according to the learning objective, and for each analysis technique the model was refined through transfer learning in the order of Phase 1 (still-water tank), Phase 2 (small stream), and Phase 3 (steep-slope channel). For composition analysis, YOLOv4 was used with a train/test split of 9:1, and training and evaluation were compared using the mAP and loss values at each iteration; the test-set mAP for each debris class increased as the model was refined across the training phases. For collected-amount analysis, U-Net was used with a train/test/validation split of 8.5:1:0.5, and the IoU (intersection over union), F1-score, and loss values per epoch were compared; both qualitative and quantitative evaluations showed the best performance in Phase 3. Further analysis of the various influencing factors in river environments to identify the main factors, together with hyperparameter optimization, is expected to improve the model and increase its applicability.
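
The IoU and F1-score metrics used to evaluate the U-Net collected-amount model can be sketched for binary masks as below; 0/1 NumPy arrays are an assumed representation.

```python
import numpy as np

# Minimal sketch of the IoU and F1 metrics used to evaluate the U-Net
# segmentation results; binary 0/1 NumPy masks are an assumed representation.

def iou_score(pred, truth):
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union > 0 else 1.0

def f1_score(pred, truth):
    tp = np.logical_and(pred == 1, truth == 1).sum()
    fp = np.logical_and(pred == 1, truth == 0).sum()
    fn = np.logical_and(pred == 0, truth == 1).sum()
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom > 0 else 1.0

if __name__ == "__main__":
    truth = np.zeros((4, 4), dtype=int)
    truth[1:3, 1:3] = 1                      # 4 debris pixels
    pred = np.zeros((4, 4), dtype=int)
    pred[1:3, 1:4] = 1                       # 6 predicted pixels, 4 overlapping
    print(f"IoU = {iou_score(pred, truth):.3f}, F1 = {f1_score(pred, truth):.3f}")
```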

Object Part Detection-based Manipulation with an Anthropomorphic Robot Hand Via Human Demonstration Augmented Deep Reinforcement Learning (행동 복제 강화학습 및 딥러닝 사물 부분 검출 기술에 기반한 사람형 로봇손의 사물 조작)

  • Oh, Ji Heon;Ryu, Ga Hyun;Park, Na Hyeon;Anazco, Edwin Valarezo;Lopez, Patricio Rivera;Won, Da Seul;Jeong, Jin Gyun;Chang, Yun Jung;Kim, Tae-Seong
    • Proceedings of the Korea Information Processing Society Conference / 2020.11a / pp.854-857 / 2020
  • Recently, behavior cloning deep reinforcement learning (DRL) research has been under way to develop object-manipulation intelligence for anthropomorphic robot hands. To alleviate the learning difficulties of anthropomorphic robot hands, which have many degrees of freedom (DOF), human demonstration augmented (DA) reinforcement learning through behavior cloning can be used to learn human-like object manipulation. However, for meaningful grasping during manipulation, a method of recognizing and grasping a specific part of the object is essential. In this study, we apply the deep learning YOLO technique to recognize a specific part of an object and propose a deep learning method that applies DA-DRL to grasp that part; the method is verified by recognizing and grasping the handle parts of two types of objects (a hammer and a knife). The proposed learning method should be useful in fields that require interacting with humans or using tools according to their intended purpose.
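
One way a YOLO part detection could feed the grasp policy, turning the most confident handle detection into an image-space grasp target, is sketched below; the detection format and class names are illustrative assumptions, not the interface used in the paper.

```python
# Minimal sketch: pick the most confident "handle" detection and use its center as
# the grasp target fed to the manipulation policy. The detection format and class
# names are illustrative assumptions, not the interface used in the paper.

def grasp_target(detections, part="handle", conf_thresh=0.5):
    """Return the (x, y) image-space center of the best detection of the target part."""
    candidates = [d for d in detections if d["cls"] == part and d["conf"] >= conf_thresh]
    if not candidates:
        return None
    best = max(candidates, key=lambda d: d["conf"])
    x1, y1, x2, y2 = best["box"]
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

if __name__ == "__main__":
    dets = [
        {"cls": "hammer_head", "conf": 0.93, "box": (40, 30, 120, 90)},
        {"cls": "handle",      "conf": 0.88, "box": (110, 80, 150, 220)},
    ]
    print(grasp_target(dets))   # -> (130.0, 150.0)
```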

Automatic identification and analysis of multi-object cattle rumination based on computer vision

  • Yueming Wang;Tiantian Chen;Baoshan Li;Qi Li
    • Journal of Animal Science and Technology / v.65 no.3 / pp.519-534 / 2023
  • Rumination in cattle is closely related to their health, which makes automatic monitoring of rumination an important part of smart pasture operations. However, manual monitoring of cattle rumination is laborious, and wearable sensors are often harmful to the animals. We therefore propose a computer vision-based method to automatically identify rumination in multiple cattle and to calculate the rumination time and number of chews for each cow. The heads of the cattle in the video were first tracked with a multi-object tracking algorithm that combined the You Only Look Once (YOLO) algorithm with the kernelized correlation filter (KCF). Images of each cow's head were saved at a fixed size and numbered. A rumination recognition algorithm was then constructed with parameters obtained using the frame difference method, and rumination time and number of chews were calculated. The rumination recognition algorithm was applied to the head images of each cow to automatically detect rumination in multiple cattle. To verify the feasibility of this method, the algorithm was tested on multi-object cattle rumination videos and the results were compared with those produced by human observation. The experimental results showed an average error of 5.902% in rumination time and 8.126% in the number of chews. Rumination identification and the calculation of rumination information are performed automatically by computer with no manual intervention, providing a new contactless rumination identification method for multiple cattle and technical support for smart pastures.
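
The frame-difference step behind the chew counting can be sketched as follows: measure how much each head crop changes between consecutive frames and count rising crossings of a motion threshold as chews. The synthetic frames and thresholds are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

# Minimal sketch of the frame-difference idea behind chew counting: measure how much
# each head crop changes from the previous one, then count rising crossings of a
# motion threshold as chews. The thresholds and synthetic data are assumptions.

def motion_signal(frames):
    """Mean absolute difference between consecutive grayscale frames."""
    return [float(np.mean(np.abs(frames[i].astype(float) - frames[i - 1].astype(float))))
            for i in range(1, len(frames))]

def count_chews(frames, threshold=10.0):
    """Count rising threshold crossings of the motion signal."""
    chews, above = 0, False
    for value in motion_signal(frames):
        if value >= threshold and not above:
            chews += 1
        above = value >= threshold
    return chews

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    still = [rng.integers(100, 105, size=(32, 32)) for _ in range(3)]
    moving = [rng.integers(0, 255, size=(32, 32)) for _ in range(2)]
    frames = still + moving + still + moving   # two bursts of jaw motion
    print(count_chews(frames))                 # -> 2
```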