• Title/Summary/Keyword: Object Detection Algorithm (객체 탐지 알고리즘)

A study on the development of an automatic detection algorithm for trees suspected of being damaged by forest pests (산림병해충 피해의심목 자동탐지 알고리즘 개발 연구)

  • Hoo-Dong, LEE;Seong-Hee, LEE;Young-Jin, LEE
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.25 no.4
    • /
    • pp.151-162
    • /
    • 2022
  • Recently, forests in Korea have accumulated damage from continuous forest disasters, and the need for technologies to monitor forest management is increasing. Because the affected areas cover large and rugged terrain, technologies using drones, artificial intelligence, and big data are being studied. In this study, a standard dataset was constructed to develop an algorithm that automatically detects trees suspected of being damaged by forest pests using deep learning and drone imagery. In experiments with YOLO-family object detection models, YOLOv4-P7 showed the highest recall of 69.69% and precision of 69.15%. It was confirmed that YOLOv4-P7 should be used as the automatic detection model for trees suspected of forest pest damage, considering that the detection target is an orthoimage with a very large image size.
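
The abstract emphasizes that the detection target is an orthoimage far larger than a typical network input. Below is a minimal sketch of how such an image is commonly split into overlapping tiles before a YOLO-style detector is applied; the tile size, overlap, and the `detect()` stub are assumptions for illustration, not details from the paper.

```python
# Tiled inference over a large orthoimage (illustrative sketch).
import numpy as np

TILE = 1280      # tile size in pixels (assumed)
OVERLAP = 128    # overlap so trees on tile borders are not missed (assumed)

def detect(tile: np.ndarray):
    """Placeholder for a trained YOLOv4-P7 (or similar) forward pass."""
    return []  # list of (x, y, w, h, score) in tile coordinates

def detect_orthoimage(image: np.ndarray):
    h, w = image.shape[:2]
    detections = []
    step = TILE - OVERLAP
    for top in range(0, h, step):
        for left in range(0, w, step):
            tile = image[top:top + TILE, left:left + TILE]
            for (x, y, bw, bh, score) in detect(tile):
                # shift tile-local boxes back into orthoimage coordinates
                detections.append((x + left, y + top, bw, bh, score))
    return detections  # a global NMS pass would normally follow
```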

Obstacle Detection and Recognition System for Autonomous Driving Vehicle (자율주행차를 위한 장애물 탐지 및 인식 시스템)

  • Han, Ju-Chan;Koo, Bon-Cheol;Cheoi, Kyung-Joo
    • Journal of Convergence for Information Technology
    • /
    • v.7 no.6
    • /
    • pp.229-235
    • /
    • 2017
  • In recent years, research on detecting and recognizing objects from large amounts of data has been actively carried out. In this paper, we propose a system that extracts objects considered to be obstacles in road-driving images and classifies them as car, person, or motorcycle. Objects were extracted using optical flow, taking into account the direction and magnitude of their motion, and the extracted objects were recognized using AlexNet, a CNN (Convolutional Neural Network) recognition model. For the experiments, various road scenes were collected with a dashboard camera (black box). The results showed an object extraction accuracy of 92% and an object recognition accuracy of 96%.
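
A rough sketch of the extraction step described above, assuming OpenCV dense optical flow and a pretrained CNN classifier; the video filename, motion threshold, and the commented-out `classifier` call are placeholders, not artifacts from the paper.

```python
# Extract moving-object candidates with dense optical flow, then crop them
# for a CNN classifier (car / person / motorcycle).
import cv2
import numpy as np

cap = cv2.VideoCapture("blackbox.mp4")          # assumed input video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # keep pixels whose motion magnitude suggests a moving object
    mask = (mag > 2.0).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 400:                          # discard tiny regions
            continue
        crop = frame[y:y + h, x:x + w]
        # label = classifier(crop)  # AlexNet-style CNN classification
    prev_gray = gray
```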

Abnormal behavior prediction system based on companion animal behavior analysis (반려동물 행동 분석 기반 이상행동 예측 시스템)

  • Shin, Minchan;Moon, Nammee
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2021.05a
    • /
    • pp.487-490
    • /
    • 2021
  • As the companion animal industry has recently grown, research on analyzing companion animal behavior has been under way. Building on this, this paper proposes a system that predicts abnormal behavior through companion animal behavior analysis. The system collects video data of the animal from CCTV and detects the animal as an object using YOLOv4 (You Only Look Once version 4). To analyze behavior, joint coordinate information is extracted from the detected object with the DeepLabCut deep learning algorithm. The extracted joint coordinates are matched against the animal's typical behaviors and used as input to a DNN (Deep Neural Networks) model that predicts abnormal behavior. Through this process, the animal's overall behavior is analyzed and abnormal behavior is predicted. By analyzing behavior and predicting abnormalities, the system could also be applied to companion animal healthcare services.
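
A minimal sketch of the final stage described above: a DNN that takes a window of joint coordinates (such as DeepLabCut keypoints) and outputs a normal/abnormal prediction. The number of joints, window length, layer sizes, and class count are assumptions, not values from the paper.

```python
# DNN over joint-coordinate windows (illustrative sketch, PyTorch).
import torch
import torch.nn as nn

NUM_JOINTS = 10      # assumed number of tracked keypoints
WINDOW = 30          # assumed number of frames per sample
NUM_CLASSES = 2      # normal / abnormal

class BehaviorDNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                              # (x, y) per joint per frame
            nn.Linear(NUM_JOINTS * 2 * WINDOW, 256),
            nn.ReLU(),
            nn.Linear(256, 64),
            nn.ReLU(),
            nn.Linear(64, NUM_CLASSES),
        )

    def forward(self, joints):                         # joints: (batch, WINDOW, NUM_JOINTS, 2)
        return self.net(joints)

model = BehaviorDNN()
sample = torch.randn(1, WINDOW, NUM_JOINTS, 2)         # dummy joint trajectory
logits = model(sample)                                 # class scores for the window
```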

Safety helmet wearing detection and notification system for construction site (공사현장 안전모 미착용 감지 및 알림 시스템)

  • Joong-Geun Seok;Mu-gyeong Gong;Min-Seok Kim;Dong-hyeon Heo;Jae-won Koo;Tae-jin Yun
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2024.01a
    • /
    • pp.291-292
    • /
    • 2024
  • Construction accounts for most of the fatal industrial accidents in Korea, and falls account for 42.9% of the causes of death. To prevent such fatalities, it is therefore important that workers wear the safety equipment that protects their lives. In this paper, we developed a system that uses the YOLO v4 and YOLO v4-TINY object detection algorithms together with OpenCV image processing to detect, in real-time video, people who are not wearing safety helmets and to notify the site manager. By detecting and warning about workers without helmets in real time from on-site cameras, the system can contribute to worker safety at construction sites.
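
A sketch of the detect-and-notify loop, assuming a YOLOv4-tiny model trained with "helmet" / "no_helmet" classes and loaded through OpenCV's DNN module; the cfg/weights file names, class order, and the `notify()` hook are placeholders, not details from the paper.

```python
# Real-time helmet check from a camera stream (illustrative sketch).
import cv2

net = cv2.dnn.readNetFromDarknet("yolov4-tiny-helmet.cfg",
                                 "yolov4-tiny-helmet.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

CLASSES = ["helmet", "no_helmet"]               # assumed class order

def notify(frame, box):
    """Placeholder: push an alert (message, dashboard event, etc.) to the manager."""
    print("worker without helmet at", box)

cap = cv2.VideoCapture(0)                       # on-site camera stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    class_ids, scores, boxes = model.detect(frame, confThreshold=0.5)
    for cid, score, box in zip(class_ids, scores, boxes):
        if CLASSES[int(cid)] == "no_helmet":
            notify(frame, box)
```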

Development of Object Detection Algorithm Using Laser Sensor for Intelligent Excavation Work (자동화 굴삭기 작업을 위한 레이저 센서의 장애물 탐지 알고리즘 개발)

  • Soh, Ji-Yune;Kim, Min-Woong;Lee, Jun-Bok;Han, Choong-Hee
    • Proceedings of the Korean Institute of Construction Engineering and Management
    • /
    • 2008.11a
    • /
    • pp.364-367
    • /
    • 2008
  • Earthwork is a very equipment-intensive task, and research related to automated excavation has been conducted. Securing safety is a key issue for an automated excavation system, so this paper focuses on how to improve safety for semi- or fully-automated backhoe excavation. The primary objective of this research is to develop an object detection algorithm for an automated safety system in excavation work. To that end, diverse sensing technologies were investigated and analyzed in terms of function, durability, and reliability, and their performance was verified through several tests. The authors developed an object detection algorithm and a user-interface program based on a laser sensor. The results of this study form the basis for developing an automated object detection system.
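
The abstract does not give the algorithm's details, so the following is only a minimal sketch of a range-threshold obstacle check over a 2D laser scan, assuming the scan arrives as (angle, distance) readings; the safety radius and work sector are illustrative values.

```python
# Flag scan points inside a safety zone in front of the excavator (sketch).
import math

SAFETY_RADIUS_M = 3.0          # warn/stop if anything is closer than this (assumed)
WORK_SECTOR_DEG = (-60, 60)    # sector in front of the bucket (assumed)

def detect_obstacles(scan):
    """scan: iterable of (angle_deg, distance_m) -> list of obstacle (x, y) points."""
    obstacles = []
    for angle, dist in scan:
        if WORK_SECTOR_DEG[0] <= angle <= WORK_SECTOR_DEG[1] and dist < SAFETY_RADIUS_M:
            x = dist * math.cos(math.radians(angle))
            y = dist * math.sin(math.radians(angle))
            obstacles.append((x, y))
    return obstacles

# Example: a reading 2.1 m away at 10 degrees is flagged; one at 8.5 m is not.
print(detect_obstacles([(10.0, 2.1), (45.0, 8.5)]))
```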

Real Time Hornet Classification System Based on Deep Learning (딥러닝을 이용한 실시간 말벌 분류 시스템)

  • Jeong, Yunju;Lee, Yeung-Hak;Ansari, Israfil;Lee, Cheol-Hee
    • Journal of IKEEE
    • /
    • v.24 no.4
    • /
    • pp.1141-1147
    • /
    • 2020
  • Hornet species are so similar in shape that they are difficult for non-experts to classify, and because the insects are small and move fast, detecting and classifying the species in real time is even harder. In this paper, we developed a system that classifies hornet species in real time with a deep learning algorithm using bounding boxes. To minimize the background area included in the bounding box when labeling the training images, we propose a method of selecting only the head and body of the hornet. We also experimentally compare existing bounding-box-based object recognition algorithms to find the one best able to detect hornets in real time and classify their species. As a result of the experiments, when the mish function was applied as the activation function of the convolution layers and the hornet images were tested with a YOLOv4 model that has a Spatial Attention Module (SAM) applied before the object detection block, the average precision was 97.89% and the average recall was 98.69%.
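
A brief sketch of the two components named in the abstract, the mish activation and a spatial attention module applied to a feature map. This follows the common CBAM-style formulation of spatial attention and is an illustration under that assumption, not the paper's exact layer configuration.

```python
# mish activation and a spatial attention module (illustrative sketch, PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

def mish(x):
    # mish(x) = x * tanh(softplus(x))
    return x * torch.tanh(F.softplus(x))

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                       # x: (batch, C, H, W)
        avg_map = torch.mean(x, dim=1, keepdim=True)
        max_map, _ = torch.max(x, dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn                         # re-weight each spatial position

features = torch.randn(1, 64, 52, 52)           # dummy backbone feature map
out = SpatialAttention()(mish(features))
```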

Performance Improvement of Pedestrian Detection using a GM-PHD Filter (GM-PHD 필터를 이용한 보행자 탐지 성능 향상 방법)

  • Lee, Yeon-Jun;Seo, Seung-Woo
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.12
    • /
    • pp.150-157
    • /
    • 2015
  • Pedestrian detection has been widely researched as one of the key technologies for autonomous vehicles and accident prevention. Pedestrian detection methods fall into two categories, camera-based and LIDAR-based. LIDAR-based methods offer a wide field of view and insensitivity to illumination changes, which camera-based methods lack. However, 3D LIDAR has several problems, such as insufficient resolution for detecting distant pedestrians and a drop in detection rate in complex situations due to segmentation errors and occlusion. In this paper, two methods using a GM-PHD filter are proposed to improve the performance of 3D LIDAR-based pedestrian detection algorithms. The first improves detection performance and object resolution by automatically accumulating points from previous frames onto current objects. The second further enhances the detection results by applying a GM-PHD filter modified to handle poorly classified multiple targets. A quantitative evaluation with road-environment data acquired during autonomous driving shows that the proposed methods substantially increase the performance of existing pedestrian detection algorithms.
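
A rough sketch of the first idea described above: accumulating LIDAR points from the previous frame onto the current frame using the ego-motion between the two scans, so sparse distant objects gain resolution. The 2D rigid transform here is a simplification of the full 3D case, and the motion values are illustrative.

```python
# Accumulate last frame's points into the current sensor frame (sketch).
import numpy as np

def accumulate_points(prev_points, curr_points, dx, dy, dyaw):
    """prev_points, curr_points: (N, 2) arrays in each frame's sensor coordinates;
    dx, dy, dyaw: ego-motion from the previous frame to the current frame."""
    R = np.array([[np.cos(dyaw), -np.sin(dyaw)],
                  [np.sin(dyaw),  np.cos(dyaw)]])
    # p_curr = R^T (p_prev - t): express last frame's points in the current frame
    prev_in_curr = (prev_points - np.array([dx, dy])) @ R
    return np.vstack([curr_points, prev_in_curr])

prev = np.random.rand(50, 2) * 20
curr = np.random.rand(40, 2) * 20
dense = accumulate_points(prev, curr, dx=0.5, dy=0.0, dyaw=0.01)
```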

Assessment of the Object Detection Ability of Interproximal Caries on Primary Teeth in Periapical Radiographs Using Deep Learning Algorithms (유치의 치근단 방사선 사진에서 딥 러닝 알고리즘을 이용한 모델의 인접면 우식증 객체 탐지 능력의 평가)

  • Hongju Jeon;Seonmi Kim;Namki Choi
    • Journal of the Korean Academy of Pediatric Dentistry
    • /
    • v.50 no.3
    • /
    • pp.263-276
    • /
    • 2023
  • The purpose of this study was to evaluate the performance of a model using You Only Look Once (YOLO) for object detection of proximal caries in periapical radiographs of children. A total of 2016 periapical radiographs of primary dentition were selected from the M6 database as the learning material, of which 1143 were labeled as containing proximal caries by an experienced dentist using an annotation tool. After converting the annotations into a training dataset, YOLO was trained on it using a single convolutional neural network (CNN) model. Accuracy, recall, specificity, precision, negative predictive value (NPV), F1-score, the Precision-Recall curve, and AP (area under that curve) were calculated on the 187-image test dataset to evaluate the object detection model's performance. The results showed that the CNN-based object detection model performed well in detecting proximal caries, with a diagnostic accuracy of 0.95, recall of 0.94, specificity of 0.97, precision of 0.82, NPV of 0.96, and F1-score of 0.81. The AP was 0.83. This model could be a valuable tool for dentists in detecting carious lesions in periapical radiographs.
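
For reference, the reported evaluation metrics are standard functions of the confusion matrix. The sketch below computes them from illustrative TP/FP/FN/TN counts, which are not the counts from the paper's test set.

```python
# Diagnostic metrics from a confusion matrix (illustrative counts).
def diagnostic_metrics(tp, fp, fn, tn):
    accuracy    = (tp + tn) / (tp + fp + fn + tn)
    recall      = tp / (tp + fn)            # sensitivity
    specificity = tn / (tn + fp)
    precision   = tp / (tp + fp)            # positive predictive value
    npv         = tn / (tn + fn)            # negative predictive value
    f1          = 2 * precision * recall / (precision + recall)
    return dict(accuracy=accuracy, recall=recall, specificity=specificity,
                precision=precision, npv=npv, f1=f1)

print(diagnostic_metrics(tp=85, fp=18, fn=5, tn=300))   # example numbers only
```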

Deep Learning-based Vehicle Anomaly Detection using Road CCTV Data (도로 CCTV 데이터를 활용한 딥러닝 기반 차량 이상 감지)

  • Shin, Dong-Hoon;Baek, Ji-Won;Park, Roy C.;Chung, Kyungyong
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.2
    • /
    • pp.1-6
    • /
    • 2021
  • In modern society, traffic problems are growing as vehicle ownership increases. In particular, the incidence of highway traffic accidents is low, but their fatality rate is high. Technologies for detecting vehicle anomalies are therefore being studied, among them deep learning-based vehicle anomaly detection, which identifies abnormalities such as a vehicle stopped due to an accident or engine failure and makes it possible to respond quickly to the driver's location when an abnormality occurs on the road. In this study, we propose deep learning-based vehicle anomaly detection using road CCTV data. The proposed method first preprocesses the road CCTV data with the MOG2 background extraction algorithm to separate the background from the foreground. The foreground corresponds to vehicles with displacement, whereas a vehicle with an abnormality on the road shows no displacement and is therefore judged as background. Objects are then detected in the extracted background image using YOLOv4, and any vehicle found there is determined to be abnormal.
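
A minimal sketch of the preprocessing hand-off described above: OpenCV's MOG2 subtractor separates moving vehicles (foreground) from the static background, and the background image is then passed to a detector so stopped, potentially anomalous vehicles can be found. The video filename and the commented-out detector call are placeholders.

```python
# MOG2 background extraction feeding an object detector (illustrative sketch).
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

cap = cv2.VideoCapture("road_cctv.mp4")          # assumed CCTV clip
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)            # moving vehicles -> foreground
    background = subtractor.getBackgroundImage() # stopped vehicles remain here
    # boxes = yolo_v4_detect(background)         # any vehicle detected in the
    #                                            # background image is flagged as anomalous
```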

Updating Obstacle Information Using Object Detection in Street-View Images (스트리트뷰 영상의 객체탐지를 활용한 보행 장애물 정보 갱신)

  • Park, Seula;Song, Ahram
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.39 no.6
    • /
    • pp.599-607
    • /
    • 2021
  • Street-view images, which are omnidirectional scenes centered on specific locations along a road, can provide various kinds of obstacle information for pedestrians. Pedestrian network data for navigation services should reflect up-to-date obstacle information to ensure the mobility of pedestrians, including people with disabilities. In this study, an object detection model was trained on bollards, a major obstacle in Seoul, using street-view images and a deep learning algorithm. A process was also proposed for updating the presence and number of bollards as obstacle attributes of crosswalk nodes through spatial matching between the detected bollards and the pedestrian nodes; missing crosswalk information can be updated concurrently by the same process. The approach is well suited to crowdsourced data, since the model trained on street-view images can also be applied to photos taken with a smartphone while walking. With additional training on various obstacles captured in street-view images, efficient updating of information about obstacles on the road is expected.
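
A minimal sketch of the spatial-matching step under simple assumptions: each detected bollard position is assigned to the nearest crosswalk node within a search radius, and that node's bollard count is updated. The coordinates, node identifiers, and radius are illustrative, not values from the paper.

```python
# Match detected bollards to the nearest pedestrian (crosswalk) node (sketch).
import math

def match_bollards_to_nodes(bollards, nodes, max_dist_m=10.0):
    """bollards: list of (x, y); nodes: dict node_id -> (x, y).
    Returns node_id -> number of bollards matched to that node."""
    counts = {node_id: 0 for node_id in nodes}
    for bx, by in bollards:
        best_id, best_d = None, max_dist_m
        for node_id, (nx, ny) in nodes.items():
            d = math.hypot(bx - nx, by - ny)
            if d < best_d:
                best_id, best_d = node_id, d
        if best_id is not None:
            counts[best_id] += 1
    return counts

nodes = {"crosswalk_1": (0.0, 0.0), "crosswalk_2": (25.0, 5.0)}
bollards = [(1.2, 0.5), (2.0, -0.8), (24.0, 6.0)]
print(match_bollards_to_nodes(bollards, nodes))
```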