• Title/Summary/Keyword: YOLOv2


Performance Evaluation of Object Detection Deep Learning Model for Paralichthys olivaceus Disease Symptoms Classification (넙치 질병 증상 분류를 위한 객체 탐지 딥러닝 모델 성능 평가)

  • Kyung won Cho;Ran Baik;Jong Ho Jeong;Chan Jin Kim;Han Suk Choi;Seok Won Jung;Hyun Seung Son
    • Smart Media Journal
    • /
    • v.12 no.10
    • /
    • pp.71-84
    • /
    • 2023
  • Paralichthys olivaceus (olive flounder) accounts for more than half of Korea's aquaculture industry. However, about 25-30% of the total stock raised over the course of a year is lost to disease, which severely harms the profitability of fish farms. For the economic growth of Paralichthys olivaceus farms, it is necessary to diagnose disease symptoms quickly and accurately by automating disease diagnosis. In this study, we create training data using innovative data collection methods, data refinement algorithms, and dataset partitioning techniques, and compare the Paralichthys olivaceus disease symptom detection performance of four object detection deep learning models (YOLOv8, Swin, ViTDet, MViTv2). The experimental findings indicate that the YOLOv8 model is superior in terms of mean average precision (mAP) and estimated processing time (ETA). If the performance of the AI model proposed in this study is verified, Paralichthys olivaceus farms will be able to diagnose disease symptoms in real time, and rapid preventive measures based on the diagnosis results are expected to greatly improve farm productivity.
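As a concrete illustration of the comparison workflow, the following is a minimal sketch of fine-tuning and validating a YOLOv8 detector with the Ultralytics API, the kind of setup such a study compares; the dataset configuration file name and hyperparameters are assumptions, not the authors' actual settings.

```python
# Hypothetical sketch: fine-tune a pretrained YOLOv8 detector on a custom
# disease dataset, then read the mAP used for model comparison.
# "flounder_disease.yaml" is a placeholder dataset config, not the authors'.
from ultralytics import YOLO

model = YOLO("yolov8m.pt")                     # pretrained COCO weights
model.train(data="flounder_disease.yaml",      # hypothetical dataset config
            epochs=100, imgsz=640)
metrics = model.val()                          # evaluate on the val split
print(f"mAP50-95: {metrics.box.map:.3f}")      # the comparison metric (mAP)
```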

Transformer and Spatial Pyramid Pooling based YOLO network for Object Detection (객체 검출을 위한 트랜스포머와 공간 피라미드 풀링 기반의 YOLO 네트워크)

  • Kwon, Oh-Jun;Jeong, Je-Chang
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • fall
    • /
    • pp.113-116
    • /
    • 2021
  • Deep learning based object detection methods generally extract features from an input image through a convolutional neural network (CNN) and perform detection on those features. Recently, the Transformer, which achieved breakthrough performance in natural language processing, has proven competitive in computer vision tasks such as image classification and object detection. In this paper, we propose a one-stage object detection network that improves the CSP block of YOLOv4-CSP. Based on the Transformer's multi-head attention and a CSP-style spatial pyramid pooling (SPP) operation, the improved CSP block aids feature learning in the network's backbone and neck. Evaluated on the MS COCO test-dev2017 dataset, the proposed network improves detection accuracy by 2.7% in average precision (AP) over YOLOv4s-mish, a lightweight version of YOLOv4-CSP.
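To make the two ingredients concrete, here is a minimal PyTorch sketch of a block combining multi-head self-attention over flattened feature maps with SPP-style pooled concatenation; the channel size, head count, and pooling kernels (5/9/13, as in common YOLO SPP variants) are illustrative assumptions, not the authors' exact CSP configuration.

```python
# Sketch of attention + SPP inside one block; not the paper's exact design.
import torch
import torch.nn as nn

class AttentionSPPBlock(nn.Module):
    def __init__(self, c: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(c, heads, batch_first=True)
        self.pools = nn.ModuleList(
            [nn.MaxPool2d(k, stride=1, padding=k // 2) for k in (5, 9, 13)])
        self.fuse = nn.Conv2d(c * 4, c, kernel_size=1)  # identity + 3 pools

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)        # (B, H*W, C) token sequence
        attn_out, _ = self.attn(seq, seq, seq)    # multi-head self-attention
        x = attn_out.transpose(1, 2).reshape(b, c, h, w)
        spp = torch.cat([x] + [p(x) for p in self.pools], dim=1)
        return self.fuse(spp)                     # fuse back to C channels

x = torch.randn(1, 256, 20, 20)
print(AttentionSPPBlock(256)(x).shape)            # torch.Size([1, 256, 20, 20])
```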


Detection of User Behavior Using Real-Time User Joints and YOLOv3 (실시간 사용자 관절과 YOLOv3를 이용한 사용자 행동 검출)

  • Oh, Ye-Jun;Kim, Sang-Joon;Choi, Hee-Jo;Park, Goo-Man
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2021.06a
    • /
    • pp.228-231
    • /
    • 2021
  • Recognizing people's behavior and movement has applications in many fields: analyzing behavior to anticipate needs and provide personalized content, or predicting actions to prevent crime and violence, among others. However, there are limits to predicting a person's behavior from movement and current position alone. In this paper, we built a system that recognizes user behavior in real time by combining the joint information provided by Kinect v2 with YOLOv3.
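As a hypothetical sketch of the fusion idea, the snippet below pairs skeleton joints (such as those from Kinect v2) with object detections to assign a coarse behavior label; the hand-inside-object-box rule and all names are illustrative assumptions, not the authors' actual recognition logic.

```python
# Illustrative fusion of joints and detections; not the paper's method.
from typing import Dict, List, Tuple

Box = Tuple[float, float, float, float]  # x1, y1, x2, y2 in image pixels

def point_in_box(pt: Tuple[float, float], box: Box) -> bool:
    x, y = pt
    x1, y1, x2, y2 = box
    return x1 <= x <= x2 and y1 <= y <= y2

def label_behavior(joints: Dict[str, Tuple[float, float]],
                   detections: List[Tuple[str, Box]]) -> str:
    """Return a coarse behavior label from joints plus detected objects."""
    for name, box in detections:
        if any(point_in_box(joints[j], box)
               for j in ("hand_left", "hand_right") if j in joints):
            return f"interacting_with_{name}"
    return "idle"

joints = {"hand_right": (410.0, 260.0), "head": (380.0, 120.0)}
detections = [("cup", (390.0, 240.0, 450.0, 300.0))]
print(label_behavior(joints, detections))  # interacting_with_cup
```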


Development of a Deep Learning Algorithm for Small Object Detection in Real-Time (실시간 기반 매우 작은 객체 탐지를 위한 딥러닝 알고리즘 개발)

  • Wooseong Yeo;Meeyoung Park
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.27 no.4_2
    • /
    • pp.1001-1007
    • /
    • 2024
  • Recent deep learning algorithms for real-time object detection play a crucial role in applications such as autonomous driving, traffic monitoring, health care, and water quality monitoring. Object size, and small objects in particular, significantly impacts the accuracy of detection models, and data containing small objects can lead to underfitting. Therefore, this study developed a deep learning model capable of quickly detecting small objects to provide more accurate predictions. The RE-SOD (Residual block based Small Object Detector) developed in this research enhances detection performance for small objects by using RGB separation preprocessing and residual blocks. The model achieved an accuracy of 1.0 in image classification and an mAP50-95 score of 0.944 in object detection. The performance of this model was validated by comparing it with real-time detection models such as YOLOv5, YOLOv7, and YOLOv8.
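The abstract names two components, RGB separation preprocessing and residual blocks; the sketch below shows one plausible reading of each in PyTorch. How RE-SOD actually wires them together is not described in the abstract, so this is an assumption for illustration only.

```python
# Illustrative reading of "RGB separation" and a residual block; the actual
# RE-SOD architecture is not specified in the abstract.
import torch
import torch.nn as nn

def rgb_separation(img: torch.Tensor) -> list:
    """Split a (B, 3, H, W) image into three single-channel planes."""
    return [img[:, c:c + 1] for c in range(3)]

class ResidualBlock(nn.Module):
    def __init__(self, c: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c, c, 3, padding=1), nn.BatchNorm2d(c), nn.ReLU(),
            nn.Conv2d(c, c, 3, padding=1), nn.BatchNorm2d(c))
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.body(x) + x)   # skip connection

img = torch.randn(1, 3, 64, 64)
planes = rgb_separation(img)
print(len(planes), planes[0].shape)          # 3 torch.Size([1, 1, 64, 64])
print(ResidualBlock(16)(torch.randn(1, 16, 64, 64)).shape)
```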

Robust Motorbike License Plate Detection and Recognition using Image Warping based on YOLOv2 (YOLOv2 기반의 영상 워핑을 이용한 강인한 오토바이 번호판 검출 및 인식)

  • Dang, Xuan Truong;Kim, Eung Tae
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2019.06a
    • /
    • pp.17-20
    • /
    • 2019
  • Automatic license plate recognition (ALPR) is a technology required in many applications, such as intelligent transportation systems and video surveillance. Most research has addressed license plate detection and recognition for cars, and work targeting motorbikes is very scarce. On a car, the plate is located at the front or rear center of the vehicle, and its background is usually a single, less complex color. Motorbikes, however, are parked on kick stands and therefore lean at various angles, which makes recognizing the letters and digits on the plate far more complex. In this paper, to improve character recognition accuracy on a dataset of motorbikes parked at various angles, a two-stage YOLOv2 algorithm first detects the motorbike region and then the license plate region within it. To raise the recognition rate, the size and number of anchor boxes were adjusted to motorbike characteristics. After the tilted plate is detected, an image warping algorithm is applied. In simulations, the proposed method achieved a plate recognition rate of 80.23%, compared with 47.74% for the conventional method. Overall, anchor boxes tailored to motorbike plate characteristics and image warping allowed the proposed method to improve character recognition for motorbike plates at various tilt angles.
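The deskewing step can be illustrated with standard OpenCV perspective warping: given the four corners of a tilted plate (hard-coded below for illustration; in the paper they would come from the detection stage), the region is warped to an upright rectangle before character recognition.

```python
# Generic perspective deskew with OpenCV; corner values are placeholders.
import cv2
import numpy as np

def warp_plate(image: np.ndarray, corners: np.ndarray,
               out_w: int = 200, out_h: int = 100) -> np.ndarray:
    """Perspective-warp a quadrilateral plate region to a fixed rectangle."""
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    M = cv2.getPerspectiveTransform(np.float32(corners), dst)
    return cv2.warpPerspective(image, M, (out_w, out_h))

frame = np.zeros((480, 640, 3), dtype=np.uint8)             # placeholder image
corners = [[120, 200], [300, 180], [310, 270], [130, 295]]  # TL, TR, BR, BL
plate = warp_plate(frame, np.array(corners))
print(plate.shape)                                          # (100, 200, 3)
```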


Recognition of dog's front face using deep learning and machine learning (딥러닝 및 기계학습 활용 반려견 얼굴 정면판별 방법)

  • Kim, Jong-Bok;Jang, Dong-Hwa;Yang, Kayoung;Kwon, Kyeong-Seok;Kim, Jung-Kon;Lee, Joon-Whoan
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.12
    • /
    • pp.1-9
    • /
    • 2020
  • As the number of pet dogs rapidly increases, the number of abandoned and lost dogs is also growing. In Korea, animal registration has been in force since 2014, but the registration rate is not high owing to safety and effectiveness issues, and biometrics is attracting attention as an alternative. To increase the recognition rate of biometrics, it is necessary to collect facial biometric images in as consistent a form as possible. This paper proposes a method to determine whether a dog is facing front in a real-time video. The proposed method detects the dog's eyes and nose using deep learning and extracts five types of face-orientation information from the relative sizes and positions of the detected features. A machine learning classifier then determines whether the dog is facing front. We used 2,000 dog images for training, validation, and testing. YOLOv3 and YOLOv4 were used to detect the eyes and nose, and a Multi-layer Perceptron (MLP), Random Forest (RF), and Support Vector Machine (SVM) were used as classifiers. When YOLOv4 and the RF classifier were used with all five types of the proposed face-orientation information, the recognition rate was best, at 95.25%, and we found that real-time processing is possible.
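As an illustration of the classification stage, the sketch below derives geometric features from detected eye and nose boxes and trains a scikit-learn Random Forest; the five features shown are plausible stand-ins, since the paper's exact five directional features are not specified in the abstract.

```python
# Illustrative geometric features + RF classifier; features are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def face_features(left_eye, right_eye, nose):
    """Each landmark is (cx, cy, w, h) from the detector."""
    eye_dist = right_eye[0] - left_eye[0]        # horizontal eye spacing
    eye_dy = right_eye[1] - left_eye[1]          # eye tilt
    mid_x = (left_eye[0] + right_eye[0]) / 2
    return np.array([
        eye_dist,                                # wider when frontal
        eye_dy,
        nose[0] - mid_x,                         # nose offset from eye midline
        nose[1] - left_eye[1],                   # vertical nose drop
        left_eye[2] / max(right_eye[2], 1e-6),   # eye-size ratio
    ])

# Toy training data: 1 = frontal, 0 = not frontal.
X = np.stack([
    face_features((100, 100, 30, 20), (160, 101, 30, 20), (130, 140, 25, 25)),
    face_features((100, 100, 18, 20), (120, 96, 30, 20), (95, 140, 25, 25)),
])
y = np.array([1, 0])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X))                            # [1 0]
```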

Comparison of Deep Learning Based Pose Detection Models to Detect Fall of Workers in Underground Utility Tunnels (딥러닝 자세 추정 모델을 이용한 지하공동구 다중 작업자 낙상 검출 모델 비교)

  • Jeongsoo Kim
    • Journal of the Society of Disaster Information
    • /
    • v.20 no.2
    • /
    • pp.302-314
    • /
    • 2024
  • Purpose: This study proposes a fall detection model based on a top-down deep learning pose estimation model to automatically determine falls of multiple workers in an underground utility tunnel, and evaluates the performance of the proposed model. Method: A model is presented that combines fall discrimination rules with the results inferred by YOLOv8-pose, one of the top-down pose estimation models, and its metrics are evaluated on images of two or fewer workers standing or fallen in the tunnel. The same process is conducted for a bottom-up pose estimation model (OpenPose). In addition, because the models' fall inference depends on worker detection by YOLOv8-pose and OpenPose, metrics were investigated not only for fall detection but also for person detection. Result: For worker detection, the YOLOv8-pose and OpenPose models achieved F1-scores of 0.88 and 0.71, respectively. For fall detection, however, the metrics deteriorated to 0.71 and 0.23. The poorer results of the OpenPose-based model stemmed from partially detected worker bodies and from cases where workers were detected but not correctly separated from one another. Conclusion: With respect to joint recognition and the separation of workers, top-down pose estimation models are the more effective way to detect worker falls in underground utility tunnels.
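A minimal sketch of the top-down pipeline, assuming the Ultralytics YOLOv8-pose API: run pose inference per frame and apply a simple per-person rule. The width-greater-than-height bounding-box criterion below is only an illustrative fall rule; the paper's actual discrimination rules are not given in the abstract.

```python
# Illustrative fall rule on YOLOv8-pose detections; not the paper's rules.
from ultralytics import YOLO

model = YOLO("yolov8n-pose.pt")          # pretrained pose model

def detect_falls(frame):
    """Return one (is_fallen, box) pair per detected person."""
    result = model(frame)[0]
    out = []
    for box in result.boxes.xyxy.tolist():       # [x1, y1, x2, y2]
        x1, y1, x2, y2 = box
        is_fallen = (x2 - x1) > (y2 - y1)        # lying roughly horizontal
        out.append((is_fallen, box))
    return out

# Usage (the image path is a placeholder):
# print(detect_falls("tunnel_frame.jpg"))
```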

Towards Efficient Aquaculture Monitoring: Ground-Based Camera Implementation for Real-Time Fish Detection and Tracking with YOLOv7 and SORT (효율적인 양식 모니터링을 향하여: YOLOv7 및 SORT를 사용한 실시간 물고기 감지 및 추적을 위한 지상 기반 카메라 구현)

  • TaeKyoung Roh;Sang-Hyun Ha;KiHwan Kim;Young-Jin Kang;Seok Chan Jeong
    • The Journal of Bigdata
    • /
    • v.8 no.2
    • /
    • pp.73-82
    • /
    • 2023
  • With 78% of current fisheries workers being elderly, there is a pressing need to address labor shortages. Consequently, active research on smart aquaculture technologies, centered on object detection and tracking algorithms, is underway. These technologies allow for fish size analysis and behavior pattern forecasting, facilitating the development of real-time monitoring and automated systems. Our study utilized video data from cameras installed outside aquaculture facilities and implemented fish detection and tracking algorithms, aiming to avoid the high maintenance costs caused by underwater conditions and camera corrosion from ammonia and pH levels. We evaluated the performance of a real-time system using YOLOv7 for fish detection and the SORT algorithm for movement tracking. The YOLOv7 results demonstrated a trade-off between recall and precision, minimizing false detections from lighting, water currents, and shadows. Effective tracking was confirmed through re-identification. This research holds promise for enhancing smart aquaculture's operational efficiency and improving fishery facility management.
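The detect-then-track loop can be sketched as follows, using the reference SORT implementation (sort.py from the widely used abewley/sort repository); detect_fish() is a placeholder standing in for YOLOv7 inference and returns an (N, 5) array of [x1, y1, x2, y2, score].

```python
# Sketch of per-frame detection feeding SORT; detector is a placeholder.
import numpy as np
from sort import Sort   # assumes sort.py from abewley/sort is on the path

tracker = Sort(max_age=5, min_hits=3)

def detect_fish(frame) -> np.ndarray:
    """Placeholder for the YOLOv7 detector; returns (N, 5) boxes+scores."""
    return np.array([[100, 120, 180, 160, 0.9]])

def track(frames):
    for frame in frames:
        dets = detect_fish(frame)
        tracks = tracker.update(dets)    # rows: [x1, y1, x2, y2, track_id]
        for x1, y1, x2, y2, tid in tracks:
            print(f"fish {int(tid)} at ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
```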

Detection of Active Fire Objects from Drone Images Using YOLOv7x Model (드론영상과 YOLOv7x 모델을 이용한 활성산불 객체탐지)

  • Park, Ganghyun;Kang, Jonggu;Choi, Soyeon;Youn, Youjeong;Kim, Geunah;Lee, Yangwon
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_2
    • /
    • pp.1737-1741
    • /
    • 2022
  • Active fire monitoring using high-resolution drone images and deep learning technologies is still at an initial stage and requires various approaches to research and development. This letter examined the detection of active fire objects using You Only Look Once version 7 (YOLOv7), a state-of-the-art (SOTA) model that has rarely been used for fire detection with drone images. Our experiments showed better performance than previous works in terms of multiple quantitative measures. The proposed method can be applied to continuous monitoring of wide areas, integrated with the further development of new technologies.

Computer Vision-Based Car Accident Detection using YOLOv8 (YOLO v8을 활용한 컴퓨터 비전 기반 교통사고 탐지)

  • Marwa Chacha Andrea;Choong Kwon Lee;Yang Sok Kim;Mi Jin Noh;Sang Il Moon;Jae Ho Shin
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.29 no.1
    • /
    • pp.91-105
    • /
    • 2024
  • Car accidents result from collisions between vehicles, leading to vehicle damage as well as personal and material losses. This study developed a vehicle accident detection model based on 2,550 image frames extracted from CCTV-captured car accident videos uploaded to YouTube. To preprocess the data, bounding boxes were annotated using roboflow.com, and the dataset was augmented by flipping images at various angles. The You Only Look Once version 8 (YOLOv8) model was employed for training, achieving an average accuracy of 0.954 in accident detection. The proposed model holds practical significance by facilitating prompt alarm transmission in emergency situations. Furthermore, it contributes to research on developing an effective and efficient mechanism for vehicle accident detection that can be utilized on devices such as smartphones. Future research aims to refine the detection capabilities by integrating additional data, including sound.
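A minimal sketch of the described training setup, assuming the Ultralytics YOLOv8 API: fine-tune on annotated accident frames with flip augmentation. The dataset YAML name and hyperparameter values are placeholders, not the authors' settings; fliplr and flipud are standard Ultralytics augmentation arguments.

```python
# Hedged training sketch; dataset config and values are placeholders.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.train(
    data="car_accidents.yaml",   # hypothetical Roboflow-exported dataset config
    epochs=50,
    imgsz=640,
    fliplr=0.5,                  # horizontal flip probability
    flipud=0.1,                  # vertical flip probability
)
metrics = model.val()
print(metrics.box.map50)         # mAP@0.5 on the validation split
```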