• Title/Summary/Keyword: yolo


Assessment of the Object Detection Ability of Interproximal Caries on Primary Teeth in Periapical Radiographs Using Deep Learning Algorithms (유치의 치근단 방사선 사진에서 딥 러닝 알고리즘을 이용한 모델의 인접면 우식증 객체 탐지 능력의 평가)

  • Hongju Jeon;Seonmi Kim;Namki Choi
    • Journal of the Korean Academy of Pediatric Dentistry / v.50 no.3 / pp.263-276 / 2023
  • The purpose of this study was to evaluate the performance of a You Only Look Once (YOLO) model for object detection of proximal caries in periapical radiographs of children. A total of 2,016 periapical radiographs of primary dentition were selected from the M6 database as training material, of which 1,143 were labeled for proximal caries by an experienced dentist using an annotation tool. After the annotations were converted into a training dataset, YOLO was trained on it as a single convolutional neural network (CNN) model. Accuracy, recall, specificity, precision, negative predictive value (NPV), F1-score, the Precision-Recall curve, and AP (area under the Precision-Recall curve) were calculated to evaluate the object detection model's performance on the 187-image test dataset. The CNN-based object detection model performed well in detecting proximal caries, with a diagnostic accuracy of 0.95, a recall of 0.94, a specificity of 0.97, a precision of 0.82, an NPV of 0.96, and an F1-score of 0.81. The AP was 0.83. This model could be a valuable tool for dentists in detecting carious lesions in periapical radiographs.
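
All of the reported metrics derive from the detector's confusion-matrix counts. As a minimal sketch (not the authors' code), with hypothetical true/false positive and negative counts:

```python
# Minimal sketch: the reported metrics computed from confusion-matrix
# counts. tp, fp, tn, fn below are hypothetical, not the study's data.

def detection_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    recall      = tp / (tp + fn)            # sensitivity
    specificity = tn / (tn + fp)
    precision   = tp / (tp + fp)
    npv         = tn / (tn + fn)            # negative predictive value
    f1          = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "recall": recall, "specificity": specificity,
            "precision": precision, "NPV": npv, "F1": f1}

print(detection_metrics(tp=94, fp=21, tn=485, fn=6))  # illustrative numbers only
```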

Development of surface detection model for dried semi-finished product of Kimbukak using deep learning (딥러닝 기반 김부각 건조 반제품 표면 검출 모델 개발)

  • Tae Hyong Kim;Ki Hyun Kwon;Ah-Na Kim
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.17 no.4 / pp.205-212 / 2024
  • This study developed a deep learning model that distinguishes the front (with garnish) and back (without garnish) surfaces of the dried semi-finished product (dried bukak), for a screening operation performed before a robot's vacuum gripper transfers the dried bukak to the oil heater. For model training and verification, RGB images of the front and back surfaces of 400 dried bukak were obtained and preprocessed. YOLO-v5 was used as the base structure of the deep learning model. Region and surface-information labeling and data augmentation techniques were applied to the acquired images. mAP, mIoU, accuracy, recall, precision, and F1-score were selected as parameters to evaluate the performance of the developed YOLO-v5-based surface detection model. The mAP and mIoU were 0.98 and 0.96 on the front surface and 1.00 and 0.95 on the back surface, respectively. Binary classification over the two front/back classes achieved an accuracy of 98.5%, recall of 98.3%, precision of 98.6%, and F1-score of 98.4%. As a result, the developed model can classify the surface information of the dried bukak from RGB images, and it can be used to develop a robot-automated system for the surface detection process before deep frying.
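
For illustration only, a hedged sketch of how a trained YOLOv5 model might be queried for a front/back decision. The weight file `bukak_surface.pt` and the class names are assumptions; the paper's trained model is not public.

```python
# Sketch under assumptions: load custom YOLOv5 weights via torch.hub and
# take the most confident detection's class as the surface label.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="bukak_surface.pt")  # hypothetical weights

def classify_surface(image_path: str) -> str:
    results = model(image_path)
    det = results.pandas().xyxy[0]          # detections for the one input image
    if det.empty:
        return "unknown"
    best = det.sort_values("confidence", ascending=False).iloc[0]
    return best["name"]                     # e.g. "front" or "back" (assumed classes)

print(classify_surface("dried_bukak_001.jpg"))  # hypothetical input image
```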

Implementation and Verification of Deep Learning-based Automatic Object Tracking and Handy Motion Control Drone System (심층학습 기반의 자동 객체 추적 및 핸디 모션 제어 드론 시스템 구현 및 검증)

  • Kim, Youngsoo;Lee, Junbeom;Lee, Chanyoung;Jeon, Hyeri;Kim, Seungpil
    • IEMEK Journal of Embedded Systems and Applications / v.16 no.5 / pp.163-169 / 2021
  • In this paper, we implemented a deep learning-based automatic object tracking and handy-motion-control drone system and analyzed its performance. The drone system automatically detects and tracks targets by analyzing images from the drone's camera with deep learning algorithms consisting of YOLO, MobileNet, and DeepSORT. These deep learning-based detection and tracking algorithms achieve both higher target detection accuracy and higher processing speed than the conventional color-based algorithm, CAMShift. In addition, to make it easy to control the drone by hand from the ground control station, we classified handy motions and generated flight control commands through motion recognition using the YOLO algorithm. We confirmed that this deep learning-based target tracking and handy-motion-control system tracks the target stably and allows the drone to be controlled easily.
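
The motion-control idea reduces to mapping each recognized motion class to a flight command. A minimal sketch; the class names and command strings below are purely illustrative, not the authors' protocol.

```python
# Illustrative sketch: recognized hand-motion class -> flight command.
# All names here are assumptions for demonstration.

MOTION_TO_COMMAND = {
    "palm_up":     "ASCEND",
    "palm_down":   "DESCEND",
    "point_left":  "YAW_LEFT",
    "point_right": "YAW_RIGHT",
    "fist":        "HOVER",
}

def to_command(motion_class: str) -> str:
    # unrecognized motions fall back to hovering as a safe default
    return MOTION_TO_COMMAND.get(motion_class, "HOVER")

print(to_command("palm_up"))   # ASCEND
print(to_command("wave"))      # HOVER (safe default)
```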

A Study on the Motion Object Detection Method for Autonomous Driving (자율주행을 위한 동적 객체 인식 방법에 관한 연구)

  • Park, Seung-Jun;Park, Sang-Bae;Kim, Jung-Ha
    • Journal of the Korean Society of Industry Convergence / v.24 no.5 / pp.547-553 / 2021
  • Dynamic object recognition is an important task for autonomous vehicles. Since dynamic objects pose a higher collision risk than static objects, the ego vehicle's trajectory should be planned to match the future state of the moving elements in the scene. Temporal information such as optical flow can be used to recognize movement. Existing optical flow calculations rely only on camera sensors and are prone to errors in low-light conditions. To improve recognition performance in low-light environments, we therefore applied a normalization filter and a gamma-value correction function to the input images. Applying this low-light enhancement algorithm yielded more accurate detection of vehicles' bounding boxes. The results confirm that image preprocessing plays an important role in object recognition with deep learning using YOLO.
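
Normalization plus gamma correction is a standard low-light enhancement. The sketch below shows one common OpenCV implementation; the gamma value is an assumed parameter, not the paper's setting.

```python
# Sketch of low-light preprocessing of the kind described above:
# min-max normalization followed by gamma correction via a lookup table.
import cv2
import numpy as np

def enhance_low_light(image: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    # stretch intensities to the full 0-255 range
    normalized = cv2.normalize(image, None, 0, 255, cv2.NORM_MINMAX)
    # gamma < 1 brightens dark regions
    table = ((np.arange(256) / 255.0) ** gamma * 255).astype("uint8")
    return cv2.LUT(normalized, table)

frame = cv2.imread("night_road.jpg")              # hypothetical input frame
enhanced = enhance_low_light(frame, gamma=0.5)
cv2.imwrite("night_road_enhanced.jpg", enhanced)
```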

Accurate Pig Detection for Video Monitoring Environment (비디오 모니터링 환경에서 정확한 돼지 탐지)

  • Ahn, Hanse;Son, Seungwook;Yu, Seunghyun;Suh, Yooil;Son, Junhyung;Lee, Sejun;Chung, Yongwha;Park, Daihee
    • Journal of Korea Multimedia Society / v.24 no.7 / pp.890-902 / 2021
  • Although object detection accuracy on still images has improved significantly with advances in deep learning, object detection on video data remains a challenging problem due to the real-time requirement and the accuracy drop caused by occlusion. In this research, we propose a pig detection method for a video monitoring environment. First, from video data obtained with a tilted-down-view camera, we determine motion based on the average size of each pig at each location in the training data, and we extract key frames based on this motion information. For each key frame, we then apply YOLO, which is known to offer a superior trade-off between accuracy and execution speed among deep learning-based object detectors, to obtain the pigs' bounding boxes. Finally, we merge the bounding boxes between consecutive key frames to reduce false positives and false negatives. Based on experimental results with a video dataset obtained from a pig farm, we confirmed that the pigs could be detected with an accuracy of 97% at a processing speed of 37 fps.
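
The final merging step can be illustrated with a simple IoU-based rule. This is a sketch under assumptions, not the authors' exact merging criterion: a detection is kept only if it overlaps a detection in the neighboring key frame, which suppresses one-frame false positives.

```python
# Sketch: confirm detections across consecutive key frames by IoU overlap.
def iou(a, b):
    # boxes as (x1, y1, x2, y2)
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def confirm_boxes(prev_boxes, curr_boxes, thr=0.5):
    # keep a current box only if some previous-frame box overlaps it enough
    return [c for c in curr_boxes if any(iou(c, p) >= thr for p in prev_boxes)]

prev = [(10, 10, 50, 40)]
curr = [(12, 11, 52, 42), (200, 200, 240, 230)]   # second box has no match
print(confirm_boxes(prev, curr))                   # keeps only the first box
```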

Smart AGV based on Object Recognition and Task Scheduling (객체인식과 작업 스케줄링 기반 스마트 AGV)

  • Lee, Se-Hoon;Bak, Tae-Yeong;Choi, Kyu-Hyun;So, Won-Bin
    • Proceedings of the Korean Society of Computer Information Conference / 2019.07a / pp.251-252 / 2019
  • In this paper, we propose an AGV with higher safety than conventional AGVs and efficient operation based on task scheduling. The AGV recognizes other AGVs with the YOLO object recognition algorithm and automatically moves into a shelter. Using the marker recognition algorithm ar_markers, it determines whether a location is a storage area or a production process and stops at each marker; when the marker corresponding to a shelter is recognized and another AGV is detected, it enters the shelter. All logs can be checked on a Spring-based web page via Mobius, and task scheduling commands are also issued from the web page. The task schedule applies the traveling salesman and Bellman-Ford algorithms, derives optimal values using DQN, one of the reinforcement learning algorithms, and stores them in a database so that the AGV can move accordingly. This paper shows that an AGV using YOLO, markers, and the web is superior to conventional AGVs in that it is lighter and does not require large facilities.
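
Of the scheduling algorithms mentioned, Bellman-Ford is the simplest to illustrate. A minimal sketch on a hypothetical route graph (not the paper's DQN-based scheduler):

```python
# Minimal Bellman-Ford sketch: shortest travel costs from a start node
# on an AGV route graph. The graph below is hypothetical.
def bellman_ford(edges, num_nodes, source):
    INF = float("inf")
    dist = [INF] * num_nodes
    dist[source] = 0
    for _ in range(num_nodes - 1):          # relax all edges |V|-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

# hypothetical route graph: (from, to, cost)
edges = [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 1), (2, 3, 5)]
print(bellman_ford(edges, num_nodes=4, source=0))  # [0, 3, 1, 4]
```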


Object Recognition in 360° Streaming Video (360° 스트리밍 영상에서의 객체 인식 연구)

  • Yun, Jeongrok;Chun, Sungkuk;Kim, Hoemin;Kim, Un Yong
    • Proceedings of the Korean Society of Computer Information Conference / 2019.07a / pp.317-318 / 2019
  • As interest grows in spatial-information-based immersive content, represented by virtual and augmented reality, research on intelligent spatial recognition technologies such as object recognition is being actively conducted. In particular, with the development of video visualization devices such as HMDs and the advent of 5G communication technology, the foundation for transmitting, receiving, and visualizing large-scale video in real time has been established, increasing the need for research on high-degree-of-freedom content such as 360° streaming video processing. However, most research on deep learning-based object recognition, a representative area of intelligent video processing, deals with ordinary planar images, and research on panorama images, in particular 360° streaming video, is scarce. This paper describes a method for object recognition in 360° streaming video using deep learning. To this end, training data for deep learning are acquired from 360° camera footage, and training is performed with YOLO (You Only Look Once), which enables real-time object recognition. The experimental results show object recognition results on 360° video using the training data, as well as results according to the number of training iterations.
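
One straightforward way to apply a planar-image detector to an equirectangular frame is to split it into overlapping horizontal crops. The sketch below assumes a hypothetical `detect` function standing in for YOLO inference, and is not necessarily the authors' approach.

```python
# Sketch under assumptions: run a planar detector over overlapping crops
# of an equirectangular 360-degree frame, then map boxes back.
import numpy as np

def detect(crop: np.ndarray) -> list:
    return []  # placeholder for a real YOLO inference call

def detect_on_panorama(frame: np.ndarray, num_crops: int = 4, overlap: float = 0.25):
    h, w = frame.shape[:2]
    crop_w = w // num_crops
    step = int(crop_w * (1 - overlap))
    detections = []
    for x in range(0, w - crop_w + 1, step):
        for box in detect(frame[:, x:x + crop_w]):
            # shift crop-local x coordinates back to panorama coordinates
            detections.append((box[0] + x, box[1], box[2] + x, box[3]))
    return detections

panorama = np.zeros((960, 1920, 3), dtype=np.uint8)  # dummy equirectangular frame
print(len(detect_on_panorama(panorama)))
```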


Comparison and Verification of Deep Learning Models for Automatic Recognition of Pills (알약 자동 인식을 위한 딥러닝 모델간 비교 및 검증)

  • Yi, GyeongYun;Kim, YoungJae;Kim, SeongTae;Kim, HyoEun;Kim, KwangGi
    • Journal of Korea Multimedia Society / v.22 no.3 / pp.349-356 / 2019
  • When a prescription changes in a hospital depending on a patient's condition, pharmacists manually classify the returned pills that the patient has not taken. There are hundreds of kinds of pills to classify, and because the work is manual, mistakes can occur, which can lead to medical accidents. In this study, we compared YOLO, Faster R-CNN, and RetinaNet for classifying and detecting pills. The data consisted of 10 classes with 100 images per class. To evaluate the performance of each model, we used cross-validation. The YOLO model had a sensitivity of 91.05% and 0.0507 FPs/image; Faster R-CNN had a sensitivity of 99.6% and 0.0089 FPs/image; RetinaNet showed a sensitivity of 98.31% and 0.0119 FPs/image. Faster R-CNN showed the best performance among the three models tested. Thus, the most appropriate model for classifying pills among the three is Faster R-CNN, with the most accurate detection and classification results and the lowest FPs/image.
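
The two reported metrics are easy to state precisely. A minimal sketch, assuming detections have already been matched to ground truth per image:

```python
# Sketch: sensitivity = matched ground truths / all ground truths;
# FPs/image = unmatched detections averaged over the image set.
def sensitivity_and_fps(per_image_stats):
    # per_image_stats: list of (true_positives, false_positives, ground_truths)
    tp = sum(s[0] for s in per_image_stats)
    fp = sum(s[1] for s in per_image_stats)
    gt = sum(s[2] for s in per_image_stats)
    return tp / gt, fp / len(per_image_stats)

stats = [(9, 0, 10), (10, 1, 10), (10, 0, 10)]   # illustrative numbers only
sens, fps = sensitivity_and_fps(stats)
print(f"sensitivity={sens:.2%}, FPs/image={fps:.4f}")
```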

A Computer-Aided Diagnosis of Brain Tumors Using a Fine-Tuned YOLO-based Model with Transfer Learning

  • Montalbo, Francis Jesmar P.
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.12 / pp.4816-4834 / 2020
  • This paper proposes transfer learning and fine-tuning techniques for a deep learning model to detect three distinct brain tumors in Magnetic Resonance Imaging (MRI) scans. In this work, the recent YOLOv4 model was trained on a collection of 3,064 T1-weighted Contrast-Enhanced (CE)-MRI scans that were pre-processed and labeled for the task. The partial 29-layer YOLOv4-Tiny was trained and fine-tuned to work optimally and run efficiently on most platforms with reliable performance. With the help of transfer learning, the model had initial leverage to train faster with pre-trained weights from the COCO dataset, generating a robust set of features required for brain tumor detection. The results yielded the highest mean average precision of 93.14%, a precision of 90.34%, a recall of 88.58%, and an F1-score of 89.45%, outperforming previous versions of the YOLO detection models and other studies that used bounding-box detection for the same task, such as Faster R-CNN. In conclusion, YOLOv4-Tiny can detect brain tumors automatically and efficiently at a rapid pace with proper fine-tuning and transfer learning. This work mainly contributes to assisting medical experts in the diagnostic process for brain tumors.
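
The paper itself uses the Darknet YOLOv4-Tiny toolchain, so the following is an analogy only: the transfer-learning idea (pre-trained weights, frozen backbone, task-specific head) sketched in PyTorch with a generic classifier and an assumed three tumor classes.

```python
# Illustrative transfer-learning sketch (not the paper's Darknet setup):
# start from pre-trained weights and train only the new head.
import torch
import torchvision

model = torchvision.models.resnet18(weights="IMAGENET1K_V1")  # pre-trained backbone

# freeze all pre-trained parameters ...
for param in model.parameters():
    param.requires_grad = False

# ... then replace and train only the task-specific head (3 classes assumed)
model.fc = torch.nn.Linear(model.fc.in_features, 3)
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
print(sum(p.numel() for p in model.parameters() if p.requires_grad))  # trainable params
```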

IoT based Wearable Smart Safety Equipment using Image Processing (영상 처리를 이용한 IoT 기반 웨어러블 스마트 안전장비)

  • Hong, Hyungi;Kim, Sang Yul;Park, Jae Wan;Gil, Hyun Bin;Chung, Mokdong
    • IEMEK Journal of Embedded Systems and Applications / v.17 no.3 / pp.167-175 / 2022
  • With the recent expansion of electric kickboard and bicycle sharing services, more and more people use them. In addition, the rapid growth of the delivery business due to COVID-19 has significantly increased the use of two-wheeled vehicles and personal mobility devices. As the accident rate has risen, the rules for two-wheeled vehicles have been revised to make helmets mandatory for kickboards and single-person transportation and to prohibit riding without a driver's license. In this paper, we propose wearable smart safety equipment, called SafetyHelmet, that can help enforce the helmet-wearing duty and lower the accident rate through communication between helmets and mobile devices. To make this possible, we propose a safe-driving assistance function that notifies the driver when an object that interferes with driving, such as a person or another vehicle, is detected by the YOLO v5 object detection algorithm. The system is thus intended to provide safer driving assistance by reducing failures to identify dangers while driving single-person transportation.
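
The alerting logic can be sketched as a filter over YOLO v5 detections. The hazard classes and the bounding-box-area proximity proxy below are assumptions, not the paper's rule.

```python
# Sketch under assumptions: alert when a hazard-class detection appears
# close, using bounding-box area as a crude proximity proxy.
HAZARD_CLASSES = {"person", "car", "bus", "truck", "bicycle", "motorcycle"}
AREA_THRESHOLD = 0.10   # assumed fraction of the frame; tune per camera

def should_alert(detections, frame_w, frame_h):
    # detections: list of (class_name, x1, y1, x2, y2)
    for name, x1, y1, x2, y2 in detections:
        area_ratio = ((x2 - x1) * (y2 - y1)) / (frame_w * frame_h)
        if name in HAZARD_CLASSES and area_ratio >= AREA_THRESHOLD:
            return True
    return False

dets = [("person", 100, 80, 500, 620)]
print(should_alert(dets, frame_w=1280, frame_h=720))  # True: large, nearby person
```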