• Title/Abstract/Keywords: single-image detection

Search results: 358

단일 영상에서 효과적인 피부색 검출을 위한 2단계 적응적 피부색 모델 (2-Stage Adaptive Skin Color Model for Effective Skin Color Segmentation in a Single Image)

  • 도준형;김근호;김종열
    • 한국HCI학회:학술대회논문집 / 한국HCI학회 2009년도 학술대회 / pp.193-196 / 2009
  • To extract skin-color regions from a single image, many existing methods use a single fixed skin-color model. However, because the distribution of skin color in an image varies with the characteristics of the image, detecting skin color with such methods can result in a low detection rate or a high false-positive rate. A method that can adaptively extract skin-color regions according to the characteristics of the image is therefore needed. In this paper, we propose an algorithm that modifies the skin-color model in a two-stage process according to the characteristics of the image, achieving both a high detection rate and a low false-positive rate under various illumination and environmental conditions.
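
As a rough illustration of the two-stage idea in the abstract above, the sketch below first applies a broad, fixed Cr/Cb skin range and then refits a Gaussian model to the pixels that pass the first stage. The range values, the Mahalanobis threshold, and the sample-count fallback are illustrative assumptions, not the paper's parameters.

```python
# Hedged sketch: a generic two-stage adaptive skin-color segmentation in YCrCb.
# The fixed Cr/Cb range and Mahalanobis threshold are illustrative values only.
import cv2
import numpy as np

def adaptive_skin_mask(bgr, mahal_thresh=2.5):
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    cr, cb = ycrcb[..., 1], ycrcb[..., 2]

    # Stage 1: coarse detection with a broad, fixed skin-color range.
    coarse = (cr > 133) & (cr < 173) & (cb > 77) & (cb < 127)
    if coarse.sum() < 100:            # too few samples to adapt; fall back
        return coarse.astype(np.uint8) * 255

    # Stage 2: refit a Gaussian to the pixels accepted in stage 1,
    # then re-threshold the whole image with the image-specific model.
    samples = np.stack([cr[coarse], cb[coarse]], axis=1)
    mean = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False) + 1e-6 * np.eye(2)
    inv_cov = np.linalg.inv(cov)
    diff = np.stack([cr, cb], axis=-1) - mean
    mahal = np.einsum('...i,ij,...j->...', diff, inv_cov, diff)
    refined = mahal < mahal_thresh ** 2
    return refined.astype(np.uint8) * 255

# mask = adaptive_skin_mask(cv2.imread("face.jpg"))
```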


Physical interpretation of concrete crack images from feature estimation and classification

  • Koh, Eunbyul;Jin, Seung-Seop;Kim, Robin Eunju
    • Smart Structures and Systems / Vol. 30 No. 4 / pp.385-395 / 2022
  • Detecting cracks on a concrete structure is crucial for structural maintenance, a crack being an indicator of possible damage. Conventional crack detection methods, which include visual inspection and non-destructive equipment, are typically limited to a small region and require time-consuming processes. Recently, to reduce human intervention in inspections, researchers have pursued computer vision-based crack analyses. One class is filter-based methods, which transform the image to detect crack edges effectively. The other class uses deep-learning algorithms; convolutional neural networks, for example, have shown high precision in identifying cracks in an image. However, when the objective is to classify not only the existence of a crack but also its type, only a few studies have been reported, limiting practical use. The presented study therefore develops an image processing procedure that detects cracks and classifies the crack type: whether the image contains crazing, a single crack, or multiple cracks. The properties and steps of the algorithm were developed using field-obtained images, and the algorithm was then validated on an additional 227 images from an open database. On the test datasets, the proposed algorithm showed an average accuracy of 92.8%. In summary, the developed algorithm can precisely classify crazing-type images, while some single-crack images may be misclassified as multiple cracks, yielding conservative results. These results show the potential of vision-based technologies for providing crack information with reduced human intervention.
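
The abstract does not give the classification rules, so the sketch below shows only a generic rule-of-thumb crack-type classifier over a binary crack mask using connected-component counting. The area and component thresholds are hypothetical, and the paper's actual procedure will differ.

```python
# Hedged sketch: rule-based classification into crazing / single / multiple cracks
# from a binary crack mask. Thresholds are illustrative, not the paper's values.
import cv2
import numpy as np

def classify_crack(mask, min_area=50, crazing_components=10):
    mask = (mask > 0).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    # Ignore the background label 0 and tiny specks.
    areas = stats[1:, cv2.CC_STAT_AREA]
    significant = int((areas >= min_area).sum())

    if significant == 0:
        return "no crack"
    if significant >= crazing_components:
        return "crazing"        # dense network of short cracks
    if significant == 1:
        return "single crack"
    return "multiple cracks"

# label = classify_crack(cv2.imread("crack_mask.png", cv2.IMREAD_GRAYSCALE))
```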

Ensemble of Convolution Neural Networks for Driver Smartphone Usage Detection Using Multiple Cameras

  • Zhang, Ziyi;Kang, Bo-Yeong
    • Journal of information and communication convergence engineering / Vol. 18 No. 2 / pp.75-81 / 2020
  • Approximately 1.3 million people die from traffic accidents each year, and smartphone usage while driving is one of the main causes of such accidents. Therefore, detection of smartphone usage by drivers has become an important part of distracted driving detection. Previous studies have used single-camera methods to collect driver images. However, smartphone usage detection with a single camera can fail if the driver occludes the phone. In this paper, we present a driver smartphone usage detection system that uses multiple cameras to collect driver images from different perspectives, and then processes these images with an ensemble of convolutional neural networks. The ensemble comprises three individual convolutional neural networks with a simple voting system. Each network covers a distinct image perspective, and the voting mechanism selects the final classification. Experimental results verified that the proposed method avoids the limitations observed in single-camera methods and achieved 98.96% accuracy on our dataset.
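
A minimal sketch of the voting step described above, assuming each camera's network outputs a class-probability vector; the tie-breaking rule (averaging probabilities) is an added assumption, not taken from the paper.

```python
# Hedged sketch: majority voting over per-camera classifier outputs.
import numpy as np

def ensemble_vote(per_camera_probs):
    """per_camera_probs: list of (num_classes,) probability vectors, one per camera."""
    votes = [int(np.argmax(p)) for p in per_camera_probs]
    # Majority vote; fall back to averaged probabilities on a tie.
    counts = np.bincount(votes)
    top = np.flatnonzero(counts == counts.max())
    if len(top) == 1:
        return int(top[0])
    return int(np.argmax(np.mean(per_camera_probs, axis=0)))

# Example: three cameras, hypothetical classes {0: "no phone", 1: "phone in use"}.
print(ensemble_vote([np.array([0.7, 0.3]),
                     np.array([0.2, 0.8]),
                     np.array([0.1, 0.9])]))   # -> 1
```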

단일 영상에서 안개 제거 방법을 이용한 객체 검출 알고리즘 개선 (Enhancement of Object Detection using Haze Removal Approach in Single Image)

  • 안효창;이용환
    • 반도체디스플레이기술학회지 / Vol. 17 No. 2 / pp.76-80 / 2018
  • In recent years, with the development of automobile technology, smart systems that assist safe driving have been developed. Cameras are installed on the front, rear, and both sides of the vehicle to detect and warn of collision risks and hazards. Beyond simple black-box recording via cameras, intelligent systems that combine various computer vision technologies are being developed. However, most related studies have been optimized for laboratory-like environments that do not take environmental factors such as weather into account. In this paper, we propose a method to detect objects by restoring visibility in images degraded by weather factors such as fog. First, quality degradation such as fog is detected in a single image, and the image is restored and improved using a median filter. An adaptive feature extraction method then removes unnecessary elements such as noise from the improved image, so that objects are recognized using only the necessary features. Experiments show that more feature points are extracted from the region of interest after the image has been improved.
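
A simplified sketch of the restore-then-extract pipeline described above, using a median-filtered brightness map as a crude haze estimate and ORB as a stand-in feature detector. The transmission formula, omega, and kernel size are illustrative assumptions rather than the paper's method.

```python
# Hedged sketch: lightweight dehazing followed by feature extraction.
import cv2
import numpy as np

def dehaze_and_detect(bgr, omega=0.85, ksize=15):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # Rough atmospheric light: brightest pixel value in the image.
    airlight = float(gray.max())
    # Median-filtered brightness as a crude haze (veil) estimate.
    veil = cv2.medianBlur(gray, ksize).astype(np.float32)
    transmission = np.clip(1.0 - omega * veil / airlight, 0.1, 1.0)
    restored = (bgr.astype(np.float32) - airlight) / transmission[..., None] + airlight
    restored = np.clip(restored, 0, 255).astype(np.uint8)

    # Feature extraction on the restored image (ORB as a stand-in detector).
    keypoints = cv2.ORB_create(nfeatures=1000).detect(
        cv2.cvtColor(restored, cv2.COLOR_BGR2GRAY), None)
    return restored, keypoints

# restored, kps = dehaze_and_detect(cv2.imread("foggy_road.jpg"))
```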

Deep-Learning Based Real-time Fire Detection Using Object Tracking Algorithm

  • Park, Jonghyuk;Park, Dohyun;Hyun, Donghwan;Na, Youmin;Lee, Soo-Hong
    • 한국컴퓨터정보학회논문지 / Vol. 27 No. 1 / pp.1-8 / 2022
  • In this paper, we propose a CCTV image-based fire detection system that combines the YOLOv4 model, which enables real-time object detection, with object tracking based on the DeepSORT algorithm. The fire detection model was trained on 10,800 training images and validated on a separate test set of 1,000 images. The detected fire regions were then tracked with the DeepSORT algorithm, which increased the fire detection rate within a single image and the consistency of detection across video frames. The fire detection speed for a single frame or single image was confirmed to be under 0.1 seconds, enabling real-time detection. Because the proposed AI fire detection system is more stable and faster than existing fire detection systems, applying it at fire scenes is expected to allow early discovery of fires, enabling a rapid response and suppression at the ignition stage.
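
A minimal sketch of tracking-by-detection for keeping fire detections stable across frames. A greedy IoU matcher stands in for DeepSORT, and detect_fire is a hypothetical placeholder for the trained YOLOv4 model; nothing here reproduces the paper's implementation.

```python
# Hedged sketch: keep detections alive across frames by greedy IoU matching.
import itertools

def iou(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

class SimpleTracker:
    """Assigns stable track IDs to boxes that overlap a box from the previous frame."""
    def __init__(self, iou_thresh=0.3):
        self.iou_thresh = iou_thresh
        self.tracks = {}            # track_id -> last box
        self._ids = itertools.count()

    def update(self, boxes):
        assigned = {}
        for box in boxes:
            best_id, best_iou = None, self.iou_thresh
            for tid, prev in self.tracks.items():
                score = iou(box, prev)
                if score > best_iou and tid not in assigned:
                    best_id, best_iou = tid, score
            tid = best_id if best_id is not None else next(self._ids)
            assigned[tid] = box
        self.tracks = assigned
        return assigned

# tracker = SimpleTracker()
# for frame in video_frames:           # hypothetical frame source
#     boxes = detect_fire(frame)       # hypothetical YOLOv4 wrapper
#     tracked = tracker.update(boxes)
```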

Manhole Cover Detection from Natural Scene Based on Imaging Environment Perception

  • Liu, Haoting;Yan, Beibei;Wang, Wei;Li, Xin;Guo, Zhenhui
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13 No. 10 / pp.5095-5111 / 2019
  • A multi-rotor Unmanned Aerial Vehicle (UAV) system is developed to solve the manhole cover detection problem for infrastructure maintenance in the suburbs of big cities. A visible light sensor is employed to collect ground image data, and a series of image processing and machine learning methods are used to detect the manhole cover. First, image enhancement is employed to improve the imaging effect of the visible light camera. An imaging environment perception method is used to increase computational robustness: blind Image Quality Evaluation Metrics (IQEMs) are used to perceive the imaging environment and select images with high imaging definition for the subsequent computation. Because of its excellent processing effect, adaptive Multiple Scale Retinex (MSR) is used to enhance the imaging quality. Second, the Single Shot multi-box Detector (SSD) method is utilized to identify the manhole cover for its stable processing effect. Third, the spatial coordinates of the manhole cover are estimated from the ground image. Practical applications have verified the outdoor environment adaptability of the proposed algorithm and the target detection correctness of the proposed system. The detection accuracy can reach 99% and the positioning accuracy is about 0.7 meters.
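
A basic Multi-Scale Retinex step of the kind the abstract mentions for image enhancement; the scales and output stretching are common defaults, not necessarily the settings used in the paper, and the SSD detection stage is only indicated in a comment.

```python
# Hedged sketch: Multi-Scale Retinex (MSR) enhancement before detection.
import cv2
import numpy as np

def multi_scale_retinex(bgr, sigmas=(15, 80, 250)):
    img = bgr.astype(np.float32) + 1.0          # avoid log(0)
    msr = np.zeros_like(img)
    for sigma in sigmas:
        blurred = cv2.GaussianBlur(img, (0, 0), sigma)
        msr += (np.log(img) - np.log(blurred)) / len(sigmas)
    # Stretch the retinex output back to the displayable 0-255 range.
    msr = (msr - msr.min()) / (msr.max() - msr.min() + 1e-6)
    return (msr * 255).astype(np.uint8)

# enhanced = multi_scale_retinex(cv2.imread("aerial_ground.jpg"))
# The enhanced image would then be fed to an SSD detector trained on manhole covers.
```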

New Blind Steganalysis Framework Combining Image Retrieval and Outlier Detection

  • Wu, Yunda;Zhang, Tao;Hou, Xiaodan;Xu, Chen
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 10 No. 12 / pp.5643-5656 / 2016
  • The detection accuracy of steganalysis depends on many factors, including the embedding algorithm, the payload size, the steganalysis feature space, and the properties of the cover source. In practice, the cover source mismatch (CSM) problem has been recognized as the single most important factor negatively affecting performance. To address this problem, we propose a new framework for blind, universal steganalysis which uses traditional steganalysis features. First, cover images with the same statistical properties are retrieved from a reference image database as aided samples. The test image and its aided samples form a whole test set. Then, by assuming that most of the aided samples are innocent, we conduct outlier detection on the test set to judge whether the test image is cover or stego. In this way, the framework removes the need for training and hence does not suffer from cover source mismatch. Because it performs anomaly detection rather than classification, the method is totally unsupervised. The results of our study show that this framework performs better than a one-class support vector machine and than an outlier detector that does not use the image retrieval process.
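
A toy version of the judge-by-outlier idea: the test image's feature vector is compared against the features of retrieved cover samples, and a Mahalanobis-distance rule flags outliers. The feature representation and threshold are assumptions for illustration, not the paper's detector.

```python
# Hedged sketch: flag a test image as stego if its feature vector is an outlier
# among the retrieved "aided" cover samples.
import numpy as np

def is_stego(test_feature, aided_features, thresh=3.0):
    """aided_features: (n_samples, n_dims) features of retrieved similar cover images."""
    mean = aided_features.mean(axis=0)
    cov = np.cov(aided_features, rowvar=False) + 1e-6 * np.eye(aided_features.shape[1])
    diff = test_feature - mean
    mahal = float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))
    return mahal > thresh     # far from the cover cluster -> likely stego

# rng = np.random.default_rng(0)
# covers = rng.normal(size=(200, 8))
# print(is_stego(covers[0], covers[1:]))           # typically False
# print(is_stego(covers[0] + 5.0, covers[1:]))     # typically True
```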

Co-saliency Detection Based on Superpixel Matching and Cellular Automata

  • Zhang, Zhaofeng;Wu, Zemin;Jiang, Qingzhu;Du, Lin;Hu, Lei
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 11 No. 5 / pp.2576-2589 / 2017
  • Co-saliency detection is the task of detecting the same or similar objects across multiple scenes and has become an important preprocessing step for multi-scene image processing. However, existing methods lack efficient ways to match similar areas across different images. In addition, they are confined to single-image detection, without a unified framework to calculate co-saliency. In this paper, we propose a novel model called Superpixel Matching-Cellular Automata (SMCA). We use the Hausdorff distance between adjacent superpixel sets instead of single superpixels, since the feature matching accuracy of a single superpixel is poor. We further introduce Cellular Automata to exploit the intrinsic relevance of similar regions through interactions with neighbors across scenes. Extensive evaluations show that the SMCA model achieves leading performance compared to state-of-the-art methods in both efficiency and accuracy.
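
A small sketch of the superpixel-set matching idea, using the symmetric Hausdorff distance between feature sets of adjacent superpixels. The per-superpixel features (e.g., mean color) are assumed, and the cellular-automata refinement stage is not shown.

```python
# Hedged sketch: match superpixel regions across images by Hausdorff distance.
import numpy as np

def hausdorff(set_a, set_b):
    """Symmetric Hausdorff distance between (n, d) and (m, d) feature sets."""
    d = np.linalg.norm(set_a[:, None, :] - set_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def best_match(region_feats, candidate_regions):
    """Index and distance of the candidate region closest to region_feats."""
    dists = [hausdorff(region_feats, c) for c in candidate_regions]
    return int(np.argmin(dists)), float(min(dists))

# region = np.random.rand(12, 3)                        # 12 superpixels, mean-color features
# candidates = [np.random.rand(10, 3) for _ in range(5)]
# print(best_match(region, candidates))
```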

Aerial Object Detection and Tracking based on Fusion of Vision and Lidar Sensors using Kalman Filter for UAV

  • Park, Cheonman;Lee, Seongbong;Kim, Hyeji;Lee, Dongjin
    • International journal of advanced smart convergence / Vol. 9 No. 3 / pp.232-238 / 2020
  • In this paper, we study an aerial object detection and position estimation algorithm for the safety of UAVs that fly beyond visual line of sight (BVLOS). We use a vision sensor and LiDAR to detect objects: a CNN-based YOLOv2 architecture detects objects in the 2D image, and a clustering method detects objects in the point cloud data acquired from the LiDAR. When a single sensor is used, the detection rate can be degraded in specific situations depending on the characteristics of that sensor, and if the result of a single-sensor detection algorithm is absent or false, the detection accuracy needs to be complemented. To complement the accuracy of detection based on a single sensor, we use a Kalman filter and fuse the results of the individual sensors to improve detection accuracy. We estimate the 3D position of the object using the pixel position of the object and the distance measured by the LiDAR. We verified the performance of the proposed fusion algorithm through simulation using the Gazebo simulator.
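
A minimal constant-position Kalman filter that fuses a noisier camera position estimate with a more precise LiDAR estimate, in the spirit of the fusion step described above; the noise variances and motion model are illustrative assumptions.

```python
# Hedged sketch: fusing two position measurements with a simple Kalman filter.
import numpy as np

class PositionKF:
    def __init__(self, dim=3, process_var=0.5):
        self.x = np.zeros(dim)                  # estimated 3D position
        self.P = np.eye(dim) * 1e3              # large initial uncertainty
        self.Q = np.eye(dim) * process_var      # process noise

    def predict(self):
        self.P = self.P + self.Q                # position assumed roughly constant

    def update(self, z, meas_var):
        """z: measured position from one sensor; meas_var: its noise variance."""
        R = np.eye(len(z)) * meas_var
        K = self.P @ np.linalg.inv(self.P + R)  # Kalman gain
        self.x = self.x + K @ (np.asarray(z) - self.x)
        self.P = (np.eye(len(z)) - K) @ self.P

kf = PositionKF()
kf.predict()
kf.update([10.2, 4.9, 30.5], meas_var=4.0)      # camera estimate (noisier)
kf.update([10.0, 5.0, 30.0], meas_var=0.5)      # LiDAR estimate (more precise)
print(kf.x)
```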

Performance Improvement of Classifier by Combining Disjunctive Normal Form features

  • Min, Hyeon-Gyu;Kang, Dong-Joong
    • International Journal of Internet, Broadcasting and Communication / Vol. 10 No. 4 / pp.50-64 / 2018
  • This paper describes a visual object detection approach utilizing ensemble-based machine learning. Object detection methods employing 1D features have the benefit of fast calculation speed. However, for real images with complex backgrounds, detection accuracy and performance are degraded. In this paper, we propose an ensemble learning algorithm that combines a 1D feature classifier and a 2D DNF (Disjunctive Normal Form) classifier to improve object detection performance in a single input image. To improve computing efficiency and accuracy, we also propose a feature selection method that reduces computing time, together with an ensemble algorithm that combines the 1D features and 2D DNF features. In the verification experiments, we selected the Haar-like feature as the 1D image descriptor and demonstrated the performance of the algorithm on several datasets, such as face and vehicle datasets.
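
A loose sketch of combining a 1D classifier score with a DNF (an OR of ANDs) over thresholded features, illustrating the ensemble idea; the clause encoding, weights, and decision threshold are invented for illustration and do not reflect the paper's actual classifier.

```python
# Hedged sketch: weighted combination of a 1D classifier score and a DNF decision.
import numpy as np

def dnf_score(features, clauses):
    """clauses: list of clauses; each clause is a list of (feature_index, threshold,
    direction) literals. A clause fires when all its literals hold; the DNF fires
    when any clause fires."""
    for clause in clauses:
        if all((features[i] > t) if d > 0 else (features[i] <= t)
               for i, t, d in clause):
            return 1.0
    return 0.0

def ensemble_decision(features, weak_1d_score, clauses, alpha=0.5):
    # Weighted combination of the 1D classifier score and the 2D DNF decision.
    return alpha * weak_1d_score + (1 - alpha) * dnf_score(features, clauses) > 0.5

# feats = np.array([0.8, 0.1, 0.6])
# clauses = [[(0, 0.5, +1), (2, 0.4, +1)], [(1, 0.7, +1)]]
# print(ensemble_decision(feats, weak_1d_score=0.6, clauses=clauses))
```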