• Title/Summary/Keyword: YOLOV2

Search results: 89

High-Resolution Mapping Techniques for Coastal Debris Using YOLOv8 and Unmanned Aerial Vehicle (YOLOv8과 무인항공기를 활용한 고해상도 해안쓰레기 매핑)

  • Suho Bak;Heung-Min Kim;Youngmin Kim;Inji Lee;Miso Park;Tak-Young Kim;Seon Woong Jang
    • Korean Journal of Remote Sensing / v.40 no.2 / pp.151-166 / 2024
  • Coastal debris presents a significant environmental threat globally. This research sought to improve coastal debris monitoring by employing deep learning and remote sensing technologies. To achieve this, a comprehensive image dataset covering 11 primary types of coastal debris in South Korea was built, and an object detection approach based on the You Only Look Once version 8 (YOLOv8) model was implemented, yielding a protocol for the real-time detection and analysis of debris. Drone imagery was collected over Sinja Island, situated at the estuary of the Nakdong River, and analyzed with a custom YOLOv8-based analysis program to identify type-specific hotspots of coastal debris. These mapping and analysis methodologies are expected to be useful for coastal debris management.
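
A minimal sketch of the workflow the abstract describes (run a trained YOLOv8 detector over UAV image tiles and aggregate detections per class to locate hotspots). The weight file, tile directory, and confidence threshold below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: tally per-class YOLOv8 detections over drone image tiles.
from collections import Counter
from pathlib import Path

from ultralytics import YOLO  # pip install ultralytics

model = YOLO("coastal_debris_yolov8.pt")   # hypothetical trained weights
tile_dir = Path("drone_tiles")             # hypothetical UAV orthophoto tiles

class_counts = Counter()
for tile in sorted(tile_dir.glob("*.jpg")):
    result = model.predict(source=str(tile), conf=0.25, verbose=False)[0]
    for cls_id in result.boxes.cls.tolist():
        class_counts[model.names[int(cls_id)]] += 1

# Per-class totals approximate type-specific hotspot intensity.
for name, count in class_counts.most_common():
    print(f"{name}: {count}")
```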

YOLOv5 based Anomaly Detection for Subway Safety Management Using Dilated Convolution

  • Nusrat Jahan Tahira;Ju-Ryong Park;Seung-Jin Lim;Jang-Sik Park
    • Journal of the Korean Society of Industry Convergence / v.26 no.2_1 / pp.217-223 / 2023
  • With the rapid advancement of technology, the need for research fields in which it can be applied is also increasing. One of the most researched topics in computer vision is object detection, which has been widely implemented in fields including healthcare, video surveillance, and education. The main goal of object detection is to identify and categorize all the objects in a target environment, drawing on techniques such as image processing and pattern recognition. Anomaly detection is a subtask of object detection; anomalies can be found in various scenarios, for example in crowded places such as subway stations. An abnormal event can be regarded as a deviation from the conventional scene. Since abnormal events do not occur frequently, the distribution of normal and abnormal events is thoroughly imbalanced. In terms of public safety, abnormal events should be avoided, and therefore immediate action needs to be taken; when abnormal events occur, real-time detection is required to protect people's safety. To address these problems, we propose a modified YOLOv5 object detection algorithm that incorporates dilated convolutional layers; it achieved 97% mAP@50, outperforming five other YOLOv5 variants. In addition, we created a simple mobile application that makes the abnormal event detection available on mobile phones.
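
A minimal sketch of the kind of dilated 3x3 convolution block that could be substituted into a YOLOv5 backbone or neck, as the abstract describes. The exact layers the authors replaced are not given, so the channel sizes, dilation rate, and SiLU activation here are illustrative assumptions.

```python
# Hedged sketch: a dilated 3x3 conv block; dilation widens the receptive
# field without adding parameters, and padding = dilation keeps spatial size.
import torch
import torch.nn as nn

class DilatedConvBlock(nn.Module):
    def __init__(self, c_in: int, c_out: int, dilation: int = 2):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, kernel_size=3,
                              padding=dilation, dilation=dilation, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()  # YOLOv5's default activation

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.conv(x)))

x = torch.randn(1, 64, 80, 80)
print(DilatedConvBlock(64, 64)(x).shape)  # torch.Size([1, 64, 80, 80])
```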

A Study on Biomass Estimation Technique of Invertebrate Grazers Using Multi-object Tracking Model Based on Deep Learning (딥러닝 기반 다중 객체 추적 모델을 활용한 조식성 무척추동물 현존량 추정 기법 연구)

  • Bak, Suho;Kim, Heung-Min;Lee, Heeone;Han, Jeong-Ik;Kim, Tak-Young;Lim, Jae-Young;Jang, Seon Woong
    • Korean Journal of Remote Sensing / v.38 no.3 / pp.237-250 / 2022
  • In this study, we propose a method to estimate the biomass of invertebrate grazers from videos recorded by underwater drones, using a multi-object tracking model based on deep learning. To detect invertebrate grazers by class, we used YOLOv5 (You Only Look Once version 5); for biomass estimation we used DeepSORT (Deep Simple Online and Real-time Tracking). The performance of each model was evaluated on a workstation with a GPU accelerator. YOLOv5 achieved a mean Average Precision (mAP) of 0.9 or higher, and the combination of the YOLOv5s model and the DeepSORT algorithm ran at about 59 fps on 4K-resolution video. When the proposed method was applied in the field, biomass tended to be overestimated by about 28%, but the error was low compared to biomass estimation using an object detection model alone. A follow-up study is needed to improve accuracy in cases where frame images go out of focus continuously or the underwater drone turns rapidly. Should these issues be resolved, the method can be utilized to produce decision-support data for controlling and monitoring invertebrate grazers.
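
A minimal sketch of the detect-then-track pipeline described above, pairing a YOLOv5 detector with DeepSORT and counting unique track IDs per class. It assumes the public torch.hub YOLOv5s weights and the third-party deep-sort-realtime package rather than the authors' trained model, classes, and survey video.

```python
# Hedged sketch: count unique tracked individuals per class across a video.
import cv2
import torch
from collections import defaultdict
from deep_sort_realtime.deepsort_tracker import DeepSort  # pip install deep-sort-realtime

model = torch.hub.load("ultralytics/yolov5", "yolov5s")  # stand-in detector
tracker = DeepSort(max_age=30)
counted_ids = defaultdict(set)  # class name -> set of track IDs seen

cap = cv2.VideoCapture("underwater_survey.mp4")  # hypothetical survey video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    det = model(frame[..., ::-1]).xyxy[0]  # [x1, y1, x2, y2, conf, cls]
    raw = [([float(x1), float(y1), float(x2 - x1), float(y2 - y1)], float(c), int(k))
           for x1, y1, x2, y2, c, k in det.tolist()]
    for trk in tracker.update_tracks(raw, frame=frame):
        if trk.is_confirmed():
            counted_ids[model.names[int(trk.get_det_class())]].add(trk.track_id)
cap.release()

for name, ids in counted_ids.items():
    print(f"{name}: {len(ids)} individuals tracked")
```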

A Research on Cylindrical Pill Bottle Recognition with YOLOv8 and ORB

  • Dae-Hyun Kim;Hyo Hyun Choi
    • Journal of the Korea Society of Computer and Information / v.29 no.2 / pp.13-20 / 2024
  • This paper introduces a method for generating model images that can identify specific cylindrical medicine containers in videos and investigates data collection techniques. Previous research separated object detection from specific-object recognition, making it difficult to apply automated image stitching; a significant issue was that the coordinate-based object detection method included extraneous information from outside the object area during the stitching process. To overcome these challenges, this study applies the newly released YOLOv8 (You Only Look Once) segmentation technique to video of vertically rotating pill bottles and employs the ORB (Oriented FAST and Rotated BRIEF) feature matching algorithm to automate model image generation. The findings demonstrate that applying segmentation improves recognition accuracy when identifying specific pill bottles, and the model images created with the feature matching algorithm could accurately identify them.
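
A minimal sketch of the segmentation-plus-ORB idea: a YOLOv8 segmentation mask isolates the bottle region before ORB descriptors are matched against a stored model image. The weight file, image paths, and match threshold are assumptions, and the sketch presumes at least one bottle instance is segmented in the frame.

```python
# Hedged sketch: mask out the background with YOLOv8 segmentation, then
# match ORB descriptors of the isolated bottle against a model image.
import cv2
import numpy as np
from ultralytics import YOLO

seg_model = YOLO("yolov8n-seg.pt")            # generic segmentation weights
frame = cv2.imread("bottle_frame.jpg")        # hypothetical rotating-bottle frame
model_img = cv2.imread("bottle_model.jpg", cv2.IMREAD_GRAYSCALE)

res = seg_model.predict(frame, verbose=False)[0]
mask = (res.masks.data[0].cpu().numpy() * 255).astype(np.uint8)  # first instance
mask = cv2.resize(mask, (frame.shape[1], frame.shape[0]))
isolated = cv2.bitwise_and(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), mask)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(isolated, None)
kp2, des2 = orb.detectAndCompute(model_img, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
good = [m for m in matches if m.distance < 40]   # illustrative threshold
print(f"{len(good)} strong matches")  # high counts suggest the same bottle
```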

Ship Detection from SAR Images Using YOLO: Model Constructions and Accuracy Characteristics According to Polarization (YOLO를 이용한 SAR 영상의 선박 객체 탐지: 편파별 모델 구성과 정확도 특성 분석)

  • Yungyo Im;Youjeong Youn;Jonggu Kang;Seoyeon Kim;Yemin Jeong;Soyeon Choi;Youngmin Seo;Yangwon Lee
    • Korean Journal of Remote Sensing / v.39 no.5_3 / pp.997-1008 / 2023
  • Ship detection at sea can be performed in various ways. In particular, satellites can provide wide-area surveillance, and Synthetic Aperture Radar (SAR) imagery can be utilized day and night and in all weather conditions. To propose an efficient ship detection method from SAR images, this study applied the You Only Look Once version 5 (YOLOv5) model to Sentinel-1 images and analyzed the difference between individual and integrated models as well as the accuracy characteristics of each polarization. YOLOv5s, which has fewer and lighter parameters, and YOLOv5x, which has more parameters but higher accuracy, were used for performance tests (1) on each polarization separately (HH, HV, VH, and VV) and (2) on images from all polarizations combined; 19,582 images were used in the experiments. All four experiments showed very similar and high accuracy, with 0.977 ≤ AP@0.5 ≤ 0.998. This result suggests that a polarization-integrated model using a lightweight YOLO variant can be the most effective choice for real-time system deployment. If other SAR images, such as Capella and ICEYE, are included in addition to Sentinel-1 images, a more flexible and accurate ship detection model can be built.
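
A minimal sketch of how the per-polarization and polarization-integrated experiments could be organized as YOLOv5 dataset configurations. The directory layout and file names are assumptions; the paper does not describe how its image chips are stored.

```python
# Hedged sketch: write one YOLOv5 data YAML per experiment
# (individual polarizations vs. all polarizations combined).
from pathlib import Path
import yaml  # pip install pyyaml

root = Path("sar_chips")                 # hypothetical Sentinel-1 chip folder
pols = ["HH", "HV", "VH", "VV"]

experiments = {p: [root / p / "images"] for p in pols}      # individual models
experiments["ALL"] = [root / p / "images" for p in pols]    # integrated model

for name, dirs in experiments.items():
    cfg = {
        "train": [str(d / "train") for d in dirs],
        "val": [str(d / "val") for d in dirs],
        "nc": 1,
        "names": ["ship"],
    }
    Path(f"ship_{name}.yaml").write_text(yaml.safe_dump(cfg))
    # each YAML would then be passed to YOLOv5 training as its data config
```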

Pine Wilt Disease Detection Based on Deep Learning Using an Unmanned Aerial Vehicle (무인항공기를 이용한 딥러닝 기반의 소나무재선충병 감염목 탐지)

  • Lim, Eon Taek;Do, Myung Sik
    • KSCE Journal of Civil and Environmental Engineering Research / v.41 no.3 / pp.317-325 / 2021
  • Pine wilt disease first appeared in Busan in 1998; it is a serious disease that causes enormous damage to pine trees. The Korean government enacted a special law on the control of pine wilt disease in 2005, which controls and prohibits the movement of pine trees in affected areas. However, existing forecasting and control methods face physical and economic challenges in containing a disease that breaks out simultaneously and rapidly across mountainous terrain. In this study, the authors present a deep learning object recognition and prediction method based on imagery acquired with an unmanned aerial vehicle (UAV) to effectively detect trees suspected of being infected with pine wilt disease. To observe the disease, an orthomosaic was produced from image data acquired through aerial shots. As a result, 198 damaged trees were identified, compared with 84 damaged trees identified by field surveys that excluded inaccessible steep slopes and cliffs. Analysis using image segmentation (SegNet) and image detection (YOLOv2) obtained performance values of 0.57 and 0.77, respectively.
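
A minimal sketch of one practical step implied by the abstract: slicing a large UAV orthomosaic into overlapping tiles so a detector such as YOLOv2 or SegNet can be run at native resolution. Tile size and overlap are illustrative, not values reported in the paper.

```python
# Hedged sketch: cut an orthomosaic into overlapping tiles for inference.
import cv2

TILE, OVERLAP = 1024, 128
mosaic = cv2.imread("orthomosaic.tif")      # hypothetical UAV orthomosaic
h, w = mosaic.shape[:2]
step = TILE - OVERLAP

tiles = []
for y in range(0, max(h - OVERLAP, 1), step):
    for x in range(0, max(w - OVERLAP, 1), step):
        tiles.append(((x, y), mosaic[y:y + TILE, x:x + TILE]))
print(f"{len(tiles)} tiles ready for inference")
# Detections made in tile coordinates are shifted back by (x, y) to map them
# onto the orthomosaic and avoid double counting trees in the overlap regions.
```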

Transfer learning in a deep convolutional neural network for implant fixture classification: A pilot study

  • Kim, Hak-Sun;Ha, Eun-Gyu;Kim, Young Hyun;Jeon, Kug Jin;Lee, Chena;Han, Sang-Sun
    • Imaging Science in Dentistry / v.52 no.2 / pp.219-224 / 2022
  • Purpose: This study aimed to evaluate the performance of transfer learning in a deep convolutional neural network for classifying implant fixtures. Materials and Methods: Periapical radiographs of implant fixtures obtained using the Superline (Dentium Co. Ltd., Seoul, Korea), TS III (Osstem Implant Co. Ltd., Seoul, Korea), and Bone Level Implant (Institut Straumann AG, Basel, Switzerland) systems were selected from patients who underwent dental implant treatment. All 355 implant fixtures comprised the total dataset and were annotated with the name of the system. The total dataset was split into a training dataset and a test dataset at a ratio of 8 to 2. YOLOv3 (You Only Look Once version 3, available at https://pjreddie.com/darknet/yolo/), a deep convolutional neural network pretrained with a large image dataset of objects, was used to train the model to classify fixtures in periapical images, in a process called transfer learning. The network was trained with the training dataset for 100, 200, and 300 epochs. Using the test dataset, its performance was evaluated in terms of sensitivity, specificity, and accuracy. Results: When YOLOv3 was trained for 200 epochs, the sensitivity, specificity, accuracy, and confidence score were the highest for all systems, with overall results of 94.4%, 97.9%, 96.7%, and 0.75, respectively. The network showed the best performance in classifying Bone Level Implant fixtures, with 100.0% sensitivity, specificity, and accuracy. Conclusion: Through transfer learning, high performance could be achieved with YOLOv3, even using a small amount of data.
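
A minimal sketch of the one-vs-rest sensitivity, specificity, and accuracy computation used to score each fixture system. The label arrays below are placeholders, not the study's data.

```python
# Hedged sketch: per-class metrics from a one-vs-rest confusion matrix.
import numpy as np

def one_vs_rest_metrics(y_true, y_pred, positive):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == positive) & (y_true == positive))
    tn = np.sum((y_pred != positive) & (y_true != positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    fn = np.sum((y_pred != positive) & (y_true == positive))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

# Placeholder predictions for the three systems, for illustration only.
y_true = ["Superline", "TSIII", "BoneLevel", "TSIII", "Superline"]
y_pred = ["Superline", "TSIII", "BoneLevel", "Superline", "Superline"]
for system in ("Superline", "TSIII", "BoneLevel"):
    print(system, one_vs_rest_metrics(y_true, y_pred, system))
```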

Implementation of Deep Learning-based Label Inspection System Applicable to Edge Computing Environments (엣지 컴퓨팅 환경에서 적용 가능한 딥러닝 기반 라벨 검사 시스템 구현)

  • Bae, Ju-Won;Han, Byung-Gil
    • IEMEK Journal of Embedded Systems and Applications / v.17 no.2 / pp.77-83 / 2022
  • In this paper, a two-stage object detection approach is proposed to implement a deep learning-based label inspection system in edge computing environments. Since the label printed on a product during the production process contains important information about the product, it is important to verify that the label information is correct. The proposed system uses a lightweight deep learning model that can be deployed on low-performance edge computing devices, and the two-stage object detection approach is applied to compensate for its relatively low accuracy. The proposed two-stage approach consists of two object detection networks: a Label Area Detection Network, which finds the label area in the product image, and a Character Detection Network, which detects the words within that area. Using this approach, characters can be detected precisely even with lightweight deep learning models. The SF-YOLO model applied in the proposed system is a YOLO-based lightweight object detection network designed for edge computing devices. It showed up to 2 times faster processing and a considerable improvement in accuracy compared to other YOLO-based lightweight models such as YOLOv3-tiny and YOLOv4-tiny. Also, since its computational cost is low, it can easily be applied in edge computing environments.
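
A minimal sketch of the two-stage cascade: a first network proposes the label area, and its crop is passed to a second network for character detection. Public YOLOv5s weights stand in for the authors' SF-YOLO networks, which are not publicly released; image paths are hypothetical.

```python
# Hedged sketch: stage 1 finds label regions, stage 2 detects characters
# inside each cropped label region.
import cv2
import torch

label_net = torch.hub.load("ultralytics/yolov5", "yolov5s")   # stage 1 stand-in
char_net = torch.hub.load("ultralytics/yolov5", "yolov5s")    # stage 2 stand-in

image = cv2.imread("product.jpg")[..., ::-1]    # hypothetical product photo (BGR -> RGB)
label_boxes = label_net(image).xyxy[0]          # [x1, y1, x2, y2, conf, cls]

for x1, y1, x2, y2, conf, _ in label_boxes.tolist():
    crop = image[int(y1):int(y2), int(x1):int(x2)]   # label area only
    chars = char_net(crop).xyxy[0]
    # Character boxes are in crop coordinates; offset by (x1, y1) if positions
    # must be reported in the original image frame.
    print(f"label at ({x1:.0f},{y1:.0f}): {len(chars)} character candidates")
```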

A Study on Vehicle License Plate Recognition System through Fake License Plate Generator in YOLOv5 (YOLOv5에서 가상 번호판 생성을 통한 차량 번호판 인식 시스템에 관한 연구)

  • Ha, Sang-Hyun;Jeong, Seok Chan;Jeon, Young-Joon;Jang, Mun-Seok
    • Journal of the Korean Society of Industry Convergence / v.24 no.6_2 / pp.699-706 / 2021
  • Existing license plate recognition systems use optical character recognition, but recent studies have proposed deep learning methods because of problems with image quality and misrecognition of Korean characters. Deep learning requires a large amount of data, yet license plate images are not easy to collect because of the Personal Information Protection Act, and the labeling work needed to mark the location of each plate is also time-consuming. To solve this problem, in this paper five types of license plates were created with a virtual Korean license plate generation program, following the notice of the Ministry of Land, Infrastructure and Transport. The generated plates were then composited onto the license plate regions of collectable vehicle images to construct 10,147 training samples for deep learning. The training data assign license plates, Korean characters, and numbers to individual classes and are learned with YOLOv5. Since the proposed method recognizes letters and numbers individually, it can still work when the license plate standard changes or the number of characters increases, as long as the font does not change. In the experiments, an accuracy of 96.82% was obtained, and the method can be applied not only to the learned plates but also to new plate types such as newly issued and eco-friendly license plates.
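
A minimal sketch of the data synthesis step: a generated plate image is composited onto the plate region of a real vehicle photo and a YOLO-format label line is written for it. The file names, plate coordinates, and class index are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch: paste a synthetic plate into a vehicle photo and emit a
# YOLO-format annotation for the plate class.
import cv2

vehicle = cv2.imread("vehicle.jpg")            # collected vehicle image (hypothetical)
fake_plate = cv2.imread("fake_plate.png")      # output of the plate generator (hypothetical)
x, y, w, h = 420, 560, 260, 55                 # assumed plate region in the photo

vehicle[y:y + h, x:x + w] = cv2.resize(fake_plate, (w, h))
cv2.imwrite("composite.jpg", vehicle)

# YOLO label: class x_center y_center width height, all normalized to [0, 1]
H, W = vehicle.shape[:2]
plate_class = 0
line = f"{plate_class} {(x + w / 2) / W:.6f} {(y + h / 2) / H:.6f} {w / W:.6f} {h / H:.6f}\n"
with open("composite.txt", "w") as f:
    f.write(line)   # character-level boxes would be appended the same way
```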

Structural live load surveys by deep learning

  • Li, Yang;Chen, Jun
    • Smart Structures and Systems / v.30 no.2 / pp.145-157 / 2022
  • The design of safe and economical structures depends on reliable live load values obtained from load surveys. Live load surveys are traditionally conducted by randomly selecting rooms and weighing each item on site, a method with low efficiency, high cost, and long cycle times. This paper proposes a deep learning-based method, combined with Internet big data, for performing live load surveys. The proposed method utilizes multi-source heterogeneous data, such as images, voice, and product identification, to obtain the live load without weighing each item, through object detection, web crawling, and speech recognition. Indoor object and face detection models are first developed by fine-tuning the YOLOv3 algorithm to detect target objects and to count the number of people in a room, respectively; each detection model is evaluated on an independent test set. Web crawler frameworks with keyword and image retrieval are then established to extract the weight information of detected objects from Internet big data. The live load in a room is derived by combining the weights and numbers of items and people. To verify the feasibility of the proposed method, a live load survey was carried out for a meeting room. The results show that, compared with the traditional method of sampling and weighing, the proposed method performs efficient and convenient live load surveys and represents a new load research paradigm.
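
A minimal sketch of the final aggregation step: detected object counts and the occupant count are combined with unit weights retrieved from the web to give a room's live load per unit area. All numbers below are placeholders, not survey data, and the unit-weight assumption for occupants is illustrative.

```python
# Hedged sketch: combine detection counts and crawled unit weights into a
# live load estimate for one room.
detected_counts = {"chair": 12, "table": 3, "monitor": 6}        # from object detection
unit_weight_kg = {"chair": 6.5, "table": 28.0, "monitor": 4.2}   # from web crawler
people = 8                 # from face detection
person_weight_kg = 65.0    # assumed average occupant weight
room_area_m2 = 42.0        # assumed room floor area

total_kg = people * person_weight_kg + sum(
    detected_counts[k] * unit_weight_kg[k] for k in detected_counts
)
live_load_kn_per_m2 = total_kg * 9.81 / 1000.0 / room_area_m2
print(f"estimated live load: {live_load_kn_per_m2:.2f} kN/m^2")
```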