• Title/Summary/Keyword: Deep learning-based object recognition (딥러닝 기반 물체 인식)


Development of Deep Learning Structure for Defective Pixel Detection of Next-Generation Smart LED Display Board using Imaging Device (영상장치를 이용한 차세대 스마트 LED 전광판의 불량픽셀 검출을 위한 딥러닝 구조 개발)

  • Sun-Gu Lee;Tae-Yoon Lee;Seung-Ho Lee
• Journal of IKEEE / v.27 no.3 / pp.345-349 / 2023
  • In this paper, we propose a deep learning structure for detecting defective pixels on next-generation smart LED display boards using an imaging device. The technique combines imaging devices with deep learning to detect defects in outdoor LED billboards automatically, aiming at effective billboard management and the resolution of various errors and issues. The research proceeds in three stages. First, planarized images of the billboard are calibrated to remove the background completely and preprocessed to generate a training dataset. Second, this dataset is used to train an object recognition network composed of a Backbone and a Head: the Backbone uses CSP-Darknet to extract feature maps, and the Head performs object detection on those feature maps. Throughout training, the network is adjusted to align the confidence score with the Intersection over Union (IoU) error. Third, the trained model automatically detects defective pixels on actual outdoor LED billboards. In accredited measurement experiments, the proposed method detected 100% of the defective pixels on real LED billboards, confirming improved efficiency in managing and maintaining them. These findings are expected to advance the management of LED billboards significantly.
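The abstract mentions aligning the confidence score with the Intersection over Union (IoU) error during training. As a minimal illustration of the IoU metric itself (the function name and corner-coordinate box format are illustrative assumptions, not details from the paper):

```python
def iou(box_a, box_b):
    """Intersection over Union for two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

An IoU of 1.0 means a perfect overlap with the ground-truth box; detectors in the YOLO family typically use such a measure both in the training loss and when filtering predictions.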

DECODE: A Novel Method of DEep CNN-based Object DEtection using Chirps Emission and Echo Signals in Indoor Environment (실내 환경에서 Chirp Emission과 Echo Signal을 이용한 심층신경망 기반 객체 감지 기법)

  • Nam, Hyunsoo;Jeong, Jongpil
• The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.3 / pp.59-66 / 2021
  • Humans recognize surrounding objects mainly through sight and hearing among the five senses (sight, hearing, smell, touch, taste), yet most recent object recognition research focuses on analyzing image sensor data. In this paper, various chirp audio signals are emitted into the observation space, the echoes are collected through a two-channel receiving sensor and converted into spectral images, and an object recognition experiment in 3D space is conducted using a deep learning based image learning algorithm. The experiment was carried out not under the ideal conditions of an anechoic chamber but in an ordinary indoor environment with noise and reverberation, and object recognition through echoes estimated object positions with 83% accuracy. In addition, by mapping the inference results onto the observation space and outputting them as a learned 3D spatial sound signal, visual information could be conveyed through sound. This suggests that object recognition research should exploit various echo information alongside image information, and that the technology could be applied to augmented reality through 3D sound.
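A minimal sketch of the signal path the abstract describes: emitting a linear chirp and converting a received echo into a spectral image via a short-time Fourier transform. The function names, window choice, and all parameters here are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def chirp(fs, dur, f0, f1):
    """Linear chirp: instantaneous frequency sweeps f0 -> f1 over dur seconds."""
    t = np.arange(int(fs * dur)) / fs
    k = (f1 - f0) / dur  # sweep rate in Hz per second
    return np.sin(2 * np.pi * (f0 * t + 0.5 * k * t**2))

def spectrogram(x, n_fft=256, hop=128):
    """Magnitude spectrogram via a Hann-windowed STFT, shape (freq, time)."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1)).T
```

The resulting 2D magnitude array can be treated as an image and fed to a CNN, which is the general approach the abstract describes for recognizing objects from echoes.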

Detection and Classification for Low-altitude Micro Drone with MFCC and CNN (MFCC와 CNN을 이용한 저고도 초소형 무인기 탐지 및 분류에 대한 연구)

  • Shin, Kyeongsik;Yoo, Sinwoo;Oh, Hyukjun
• Journal of the Korea Institute of Information and Communication Engineering / v.24 no.3 / pp.364-370 / 2020
  • This paper addresses the detection and classification of micro-sized aircraft flying at low altitude. We propose a deep learning based method that detects and identifies them efficiently from the sounds they emit, using MFCCs as sound features and a CNN as the detector and classifier. We show that each micro drone has its own distinguishable MFCC signature and confirm that a CNN can serve as the detector and classifier even though drone sound is a time-related sequence. Time-related features are typically handled with RNNs, but we demonstrate that when the MFCC features contain enough frames to capture the temporal information, a CNN can classify them. With this approach, we achieved a high detection and classification ratio at low computational cost on a dataset of four different drone sounds. This paper therefore presents a simple and effective detection and classification method for micro-sized aircraft.
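The MFCC features the paper relies on are built from triangular filters spaced evenly on the mel scale. A sketch of that filterbank construction follows; the parameter values are my own assumptions (the abstract does not specify them), and full MFCCs would additionally take the log of the filterbank energies and apply a DCT:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, fs):
    """Triangular filters spaced evenly on the mel scale, applied to an
    rfft power spectrum of length n_fft // 2 + 1."""
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2), n_filters + 2)
    hz_pts = mel_to_hz(mel_pts)
    bins = np.floor((n_fft + 1) * hz_pts / fs).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):          # rising edge of the triangle
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):          # falling edge of the triangle
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb
```

Stacking enough consecutive MFCC frames yields a 2D coefficient-by-time array, which is what lets a CNN, rather than an RNN, capture the temporal structure the paper discusses.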

Metal Surface Defect Detection and Classification using EfficientNetV2 and YOLOv5 (EfficientNetV2 및 YOLOv5를 사용한 금속 표면 결함 검출 및 분류)

  • Alibek, Esanov;Kim, Kang-Chul
• The Journal of the Korea institute of electronic communication sciences / v.17 no.4 / pp.577-586 / 2022
  • Detection and classification of steel surface defects are critical for product quality control in the steel industry, but the traditional approach is too inaccurate and slow to be used effectively on a production line, and the widely used deep learning based algorithms still leave room for improvement in accuracy. This paper proposes a steel surface defect detection method that combines EfficientNetV2 for image classification with YOLOv5 as an object detector, offering shorter training time and high accuracy. First, the input image is passed to the EfficientNetV2 model, which classifies the defect class and predicts the probability that a defect is present. If that probability is below 0.25, the algorithm concludes that the sample has no defects; otherwise, the sample is passed to YOLOv5 to complete defect detection on the metal surface. Experiments show that the proposed model performs well on the NEU dataset, with an accuracy of 98.3%, while its average training time is shorter than that of other models.
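The two-stage decision logic described in the abstract, a classifier gating a detector at a 0.25 probability threshold, can be sketched as follows. Here `classifier` and `detector` stand in for the trained EfficientNetV2 and YOLOv5 models; the interface and return format are assumptions for illustration, not the paper's actual API:

```python
def inspect(image, classifier, detector, threshold=0.25):
    """Two-stage cascade: a classifier gates a detector.
    classifier(image) -> (defect_probability, predicted_class)
    detector(image)   -> list of defect bounding boxes
    Both callables are placeholders for trained models."""
    defect_prob, defect_class = classifier(image)
    if defect_prob < threshold:
        # Below the 0.25 gate: declare the sample defect-free and skip detection.
        return {"defective": False, "boxes": []}
    # Only gated-in samples pay the cost of running the detector.
    boxes = detector(image)
    return {"defective": True, "class": defect_class, "boxes": boxes}
```

The design rationale is that the cheap classifier filters out defect-free samples early, so the more expensive detector only runs on images that likely contain defects.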

A Study on the Compensation Methods of Object Recognition Errors for Using Intelligent Recognition Model in Sports Games (스포츠 경기에서 지능인식모델을 이용하기 위한 대상체 인식오류 보상방법에 관한 연구)

  • Han, Junsu;Kim, Jongwon
• Journal of the Korea Academia-Industrial cooperation Society / v.22 no.5 / pp.537-542 / 2021
  • This paper studies how to improve the recognition of fast-moving objects with the YOLO (You Only Look Once) deep learning recognition model in image-based object recognition applications, and how to collect semantic data through image processing. Two recognition errors were identified in the model: objects going unrecognized because of the gap between the camera's frame rate and the object's moving speed, and misrecognition caused by similar objects in the environment adjacent to the target. To minimize these errors, the proposed data collection method compensates for unrecognized and misrecognized objects, applying vision processing techniques to the error sources observed in images acquired from sports (tennis games), which represent realistic similar environments. Research on the collection methods and processing structures improved the effectiveness of secondary data collection. By applying the proposed data collection method, ordinary people can therefore collect and manage data to improve their health and athletic performance in the sports and health industry through simple smartphone camera footage.
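One simple form of the compensation the abstract describes, filling in frames where a fast-moving object went unrecognized, is linear interpolation between the nearest detected positions. This is an illustrative sketch of that idea only, not the paper's actual method:

```python
def fill_missed_detections(track):
    """track: per-frame (x, y) detections, with None where the detector
    missed the fast-moving object. Interior gaps are filled by linear
    interpolation between the nearest detected frames."""
    filled = list(track)
    known = [i for i, p in enumerate(track) if p is not None]
    for a, b in zip(known, known[1:]):
        for i in range(a + 1, b):
            t = (i - a) / (b - a)  # fractional position within the gap
            filled[i] = (track[a][0] + t * (track[b][0] - track[a][0]),
                         track[a][1] + t * (track[b][1] - track[a][1]))
    return filled
```

Interpolated positions like these can serve as the compensated "secondary data" when the detector's frame-by-frame output has holes, though real ball trajectories may warrant a physics-aware model rather than straight lines.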