• Title/Summary/Keyword: Visual detection

Search results: 871

적응적 이진화를 이용하여 빛의 변화에 강인한 영상거리계를 통한 위치 추정 (Robust Visual Odometry System for Illumination Variations Using Adaptive Thresholding)

  • 황요섭;유호윤;이장명
    • 제어로봇시스템학회논문지, Vol. 22, No. 9, pp. 738-744, 2016
  • In this paper, a robust visual odometry system is proposed and implemented for environments with dynamic illumination. Visual odometry estimates the distance to objects from stereo images. Because image quality depends heavily on illumination, achieving highly accurate and stable estimation is difficult, which is a major weakness of visual odometry. To address the poor feature detection caused by illumination variations, the method determines an optimal threshold for image binarization and uses an adaptive threshold during feature detection. The direction of each feature point and the magnitude of its (non-uniform) motion vector are used as features, and the reliability of the detected features is further improved with the RANSAC algorithm. The position of a mobile robot is then estimated from the feature points. Experimental results demonstrate that the proposed approach is superior under illumination variations.
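The abstract outlines this pipeline but not its implementation details, so the following is only a minimal sketch under assumptions: OpenCV's adaptiveThreshold stands in for the paper's adaptive binarization, ORB is an assumed feature detector, and RANSAC is applied through a homography fit to discard inconsistent motion vectors; the block size, offset constant, and reprojection threshold are illustrative.

```python
# Illustrative sketch only: the paper does not publish code, so the threshold
# policy, the feature detector (ORB), and the RANSAC model are assumptions.
import cv2
import numpy as np

def detect_features_adaptive(gray):
    """Binarize with a locally adaptive threshold before feature detection,
    so feature extraction is less sensitive to global illumination changes."""
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 31, 5)
    orb = cv2.ORB_create(nfeatures=1000)
    keypoints, descriptors = orb.detectAndCompute(binary, None)
    return keypoints, descriptors

def filter_matches_ransac(kp1, kp2, matches):
    """Reject matches whose motion vectors are inconsistent, using RANSAC on a homography."""
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    if inlier_mask is None:
        return []
    return [m for m, ok in zip(matches, inlier_mask.ravel()) if ok]
```

In a full stereo visual odometry system the surviving matches would then feed triangulation and pose estimation, which this sketch omits.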

강구조물 용접이음부 외부결함의 자동검출 알고리즘 (An Image Processing Algorithm for a Visual Weld Defects Detection on Weld Joint in Steel Structure)

  • 서원찬;이동욱
    • 한국강구조학회 논문집, Vol. 11, No. 1 (Serial No. 38), pp. 1-11, 1999
  • In this paper, an image processing algorithm for the automatic detection of external defects in weld joints of steel structures is developed to ensure high weld-joint quality during fabrication and construction. Using four input images obtained through an appropriate arrangement of the optical system, the developed algorithm is shown to detect external weld-joint defects that conventional techniques could not. Test specimens containing external weld defects were fabricated, and experiments confirmed the usefulness of the developed algorithm. The classification results for the detected external weld defects were also compared with the results of visual inspection.

지능 영상 감시를 위한 흑백 영상 데이터에서의 효과적인 이동 투영 음영 제거 (An Effective Moving Cast Shadow Removal in Gray Level Video for Intelligent Visual Surveillance)

  • 응웬탄빈;정선태;조성원
    • 한국멀티미디어학회논문지, Vol. 17, No. 4, pp. 420-432, 2014
  • In detecting moving objects from video sequences, an essential process for intelligent visual surveillance, the cast shadows accompanying moving objects differ from the background, so they are easily extracted as foreground object blobs, which causes errors in the localization, segmentation, tracking, and classification of objects. Most previous work on moving cast shadow detection and removal relies on color information about objects and scenes. In this paper, we propose a novel cast shadow removal method for moving objects in gray-level video data for visual surveillance applications. The proposed method exploits observations about edge patterns in the shadow region of the current frame and the corresponding region of the background scene: a Laplacian edge detector is applied to the blob regions in the current frame and to the corresponding regions in the background scene, and the product of the two responses separates moving-object pixels from the remaining blob pixels in the foreground mask. The minimal rectangular regions containing all pixels classified as moving-object pixels are then extracted. The method is simple but proves very effective in practice for adaptive Gaussian mixture model-based object detection in intelligent visual surveillance applications, as verified through experiments.
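A minimal sketch of the described edge-product test is given below, following the abstract's description literally: OpenCV's MOG2 subtractor is assumed for the adaptive Gaussian mixture model, and the threshold on the product of the Laplacian responses is arbitrary; the paper's exact decision rule and parameters may differ.

```python
# Illustrative sketch: Laplacian edge responses of the current frame and of the
# background model are multiplied inside the foreground mask, and the product is
# thresholded to keep moving-object pixels. The threshold and MOG2 settings are
# assumptions, not the authors' exact configuration.
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def moving_object_boxes(gray_frame, product_threshold=50.0):
    fg_mask = subtractor.apply(gray_frame)            # foreground blobs (objects + cast shadows)
    background = subtractor.getBackgroundImage()      # current background estimate
    if background is None:
        return []
    if background.ndim == 3:
        background = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)

    edges_frame = np.abs(cv2.Laplacian(gray_frame, cv2.CV_32F, ksize=3))
    edges_bg = np.abs(cv2.Laplacian(background, cv2.CV_32F, ksize=3))

    # Product of the two edge responses, evaluated only on foreground blob pixels.
    product = edges_frame * edges_bg
    object_mask = ((product > product_threshold) & (fg_mask > 0)).astype(np.uint8) * 255

    # Minimal rectangles containing the pixels classified as moving-object pixels.
    contours, _ = cv2.findContours(object_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]
```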

Region of Interest Detection Based on Visual Attention and Threshold Segmentation in High Spatial Resolution Remote Sensing Images

  • Zhang, Libao;Li, Hao
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 7, No. 8, pp. 1843-1859, 2013
  • The continuous increase in the spatial resolution of remote sensing images brings great challenges to image analysis and processing. Traditional prior knowledge-based region detection and target recognition algorithms for high-resolution remote sensing images generally employ a global search, which results in prohibitive computational complexity. In this paper, a more efficient region of interest (ROI) detection algorithm based on visual attention and threshold segmentation (VA-TS) is proposed, in which a visual attention mechanism is used to avoid applying image segmentation and feature detection to the entire image. The input image is subsampled to reduce the amount of data, and the discrete moment transform (DMT) feature is extracted to provide a finer description of the edges. The feature maps are combined with weights determined by the number of "strong points" and "salient points". A threshold segmentation strategy is employed to obtain more accurate ROI shape information with very low computational complexity. Experimental statistics show that the proposed algorithm is computationally efficient and provides more visually accurate detection results. The calculation time is only about 0.7% of that of the traditional Itti model.
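The DMT feature and the exact weighting scheme are not spelled out in the abstract, so the sketch below substitutes simple gradient-magnitude and local-contrast maps, combines them with fixed weights, and uses Otsu thresholding for the segmentation step; all of these choices are illustrative assumptions.

```python
# Minimal sketch of a visual-attention + threshold-segmentation ROI detector.
# Gradient-magnitude and intensity-contrast maps stand in for the paper's DMT
# feature; the fixed weights and Otsu segmentation are assumptions.
import cv2
import numpy as np

def detect_roi(image_bgr, subsample=4):
    # 1) Subsample to reduce the amount of data to process.
    small = cv2.resize(image_bgr, None, fx=1.0 / subsample, fy=1.0 / subsample)
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY).astype(np.float32)

    # 2) Feature maps: edge strength and local intensity contrast.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    edge_map = cv2.magnitude(gx, gy)
    contrast_map = np.abs(gray - cv2.GaussianBlur(gray, (31, 31), 0))

    # 3) Weighted combination into a single saliency map (weights are illustrative).
    def normalize(m):
        return (m - m.min()) / (m.max() - m.min() + 1e-6)
    saliency = 0.6 * normalize(edge_map) + 0.4 * normalize(contrast_map)

    # 4) Threshold segmentation (Otsu) and ROI extraction as bounding boxes,
    #    scaled back to the original image resolution.
    binary = cv2.threshold((saliency * 255).astype(np.uint8), 0, 255,
                           cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [tuple(v * subsample for v in cv2.boundingRect(c)) for c in contours]
```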

Robust Face Detection Based on Knowledge-Directed Specification of Bottom-Up Saliency

  • Lee, Yu-Bu;Lee, Suk-Han
    • ETRI Journal, Vol. 33, No. 4, pp. 600-610, 2011
  • This paper presents a novel approach to face detection that localizes faces as goal-specific saliencies in a scene, using the framework of the selective visual attention of a human with a particular goal in mind. The proposed approach aims at achieving human-like robustness as well as efficiency in face detection under large scene variations. The key is to establish how specific knowledge relevant to the goal interacts with the bottom-up processing of external visual stimuli for saliency detection. We propose directly incorporating the goal-related knowledge into the specification and/or modification of the internal process of a general bottom-up saliency detection framework. More specifically, prior knowledge of the human face, such as its size, skin color, and shape, is directly used to set the window size and color signature for computing the center of difference, as well as to modify the importance weights, as a means of transforming the framework into goal-specific saliency detection. The experimental evaluation shows that the proposed method reaches a detection rate of 93.4% with a false positive rate of 7.1%, indicating robustness against wide variations in scale and rotation.
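As a rough illustration of biasing bottom-up saliency with face-specific prior knowledge, the sketch below re-weights a center-surround contrast map with a skin-color likelihood computed from a commonly used YCrCb range; the window size, color bounds, and mixing weights are assumptions and not the authors' settings.

```python
# Sketch of biasing a bottom-up saliency map with face-specific prior knowledge
# (skin color and expected face size). The YCrCb skin range, window size, and
# combination weights are illustrative assumptions.
import cv2
import numpy as np

def face_saliency(image_bgr, face_window=51):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)

    # Bottom-up component: center-surround style contrast at roughly face scale.
    surround = cv2.blur(gray, (face_window, face_window))
    bottom_up = np.abs(gray - surround)
    bottom_up /= bottom_up.max() + 1e-6

    # Top-down component: skin-color likelihood in YCrCb (a commonly used range).
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127)).astype(np.float32) / 255.0
    skin = cv2.blur(skin, (face_window, face_window))  # smooth to face-sized regions

    # Goal-specific saliency: bottom-up contrast re-weighted by the skin prior.
    return bottom_up * (0.2 + 0.8 * skin)
```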

Motion Detection Model Based on PCNN

  • Yoshida, Minoru;Tanaka, Masaru;Kurita, Takio
    • 대한전자공학회:학술대회논문집, 2002 ITC-CSCC, Vol. 1, pp. 273-276, 2002
  • The Pulse-Coupled Neural Network (PCNN), which can explain the synchronous bursting of neurons in the cat visual cortex, is a fundamental model for biomimetic vision. The PCNN is a kind of pulse-coded neural network model. To gain a deeper understanding of visual information processing, it is important to simulate the visual system with such biologically plausible neural network models. In this paper, we construct a motion detection model based on the PCNN together with receptive field models of neurons in the lateral geniculate nucleus and the primary visual cortex. It is then shown that this motion detection model can effectively detect movements and the direction of motion.
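For reference, a minimal NumPy implementation of a standard PCNN layer is sketched below; the time constants, gains, and 3x3 linking kernel are conventional illustrative values, and the LGN/V1 receptive-field stages of the paper's full model are omitted.

```python
# Minimal NumPy sketch of a standard pulse-coupled neural network (PCNN) layer.
# Parameter values are illustrative defaults, not the paper's settings.
import numpy as np
from scipy.ndimage import convolve

def pcnn(stimulus, steps=20, beta=0.2,
         alpha_f=0.1, alpha_l=1.0, alpha_t=0.5,
         v_f=0.5, v_l=0.2, v_t=20.0):
    """Run a PCNN on a normalized 2-D stimulus and return the pulse maps over time."""
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])
    F = np.zeros_like(stimulus)   # feeding input
    L = np.zeros_like(stimulus)   # linking input
    Y = np.zeros_like(stimulus)   # pulse output
    T = np.ones_like(stimulus)    # dynamic threshold
    pulses = []
    for _ in range(steps):
        link = convolve(Y, kernel, mode="constant")
        F = np.exp(-alpha_f) * F + v_f * link + stimulus
        L = np.exp(-alpha_l) * L + v_l * link
        U = F * (1.0 + beta * L)              # internal activity
        Y = (U > T).astype(float)             # neurons fire when activity exceeds threshold
        T = np.exp(-alpha_t) * T + v_t * Y    # firing raises the threshold (refractoriness)
        pulses.append(Y.copy())
    return pulses

# Motion can then be detected by comparing the pulse maps of consecutive frames,
# e.g. np.abs(pulses_now[-1] - pulses_prev[-1]) highlights regions that changed.
```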

연령증가에 따른 신호탐지능력의 변화 -시.청각을 중심으로- (Changes in Signal Detection Ability with Aging: Focusing on Vision and Hearing)

  • 이용태;신승헌
    • 대한인간공학회:학술대회논문집, 1996 Fall Conference, pp. 206-215, 1996
  • Recently, the proportion of the aged in Korea has been increasing, as in other advanced countries, and this is regarded as a major social problem. In this study, we therefore investigated visual and auditory signal detection performance to evaluate the vocational aptitude of middle- and old-aged workers. Signal detection performance decreased as workers became older, with large individual differences. Because performance in the visual task decreased more rapidly and more severely than in the auditory task, middle- and old-aged workers may not be able to properly carry out visual inspection and precision tasks; performance in the visual task was also related to that in the auditory task. The parameters used in this study are expected to be useful for evaluating a worker's aptitude.
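The abstract does not name the performance parameters it uses; a standard way to quantify signal detection performance is the sensitivity index d' = Z(hit rate) - Z(false-alarm rate), computed in the assumed sketch below.

```python
# Illustrative computation of the signal detection theory sensitivity index d'.
# The abstract does not specify its exact parameters, so this is an assumed,
# standard way of quantifying visual/auditory signal detection performance.
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = Z(hit rate) - Z(false-alarm rate), with a small correction so that
    rates of exactly 0 or 1 do not break the inverse normal transform."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Example: hypothetical counts for one worker in a visual inspection task.
print(d_prime(hits=38, misses=12, false_alarms=9, correct_rejections=41))
```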

지능형 화재 학습 및 탐지 시스템 (An Intelligent Fire Learning and Detection System)

  • 최경주
    • 한국멀티미디어학회논문지, Vol. 18, No. 3, pp. 359-367, 2015
  • In this paper, we propose an intelligent fire learning and detection system that uses a hybrid human visual attention mechanism. The fire learning system generates learned data from a learning process over fire and smoke images: the learning features are selected from among the many features extracted by a bottom-up human visual attention mechanism, and the learned data are formed by computing the mean and standard deviation of those features. The fire detection system then detects fire by comparing the features of an input image against the learned data generated by the learning system.
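A minimal sketch of this learn-then-detect scheme is given below; the per-image HSV statistics used as features and the z-score decision band are assumptions standing in for the attention-selected features of the paper.

```python
# Minimal sketch of the learn-then-detect scheme: learn per-feature mean and
# standard deviation from fire/smoke images, then flag an input image whose
# features fall within a z-score band. Features and threshold are illustrative.
import cv2
import numpy as np

def extract_features(image_bgr):
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    return np.array([hsv[..., 0].mean(), hsv[..., 1].mean(), hsv[..., 2].mean()])

def learn(fire_images):
    feats = np.stack([extract_features(img) for img in fire_images])
    return feats.mean(axis=0), feats.std(axis=0) + 1e-6   # learned data: mean and std

def detect(image_bgr, learned, z_threshold=2.0):
    mean, std = learned
    z = np.abs((extract_features(image_bgr) - mean) / std)
    return bool((z < z_threshold).all())   # within the learned band -> fire-like
```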

비행로봇의 항공 영상 온라인 학습을 통한 지상로봇 검출 및 추적 (UGR Detection and Tracking in Aerial Images from UFR for Remote Control)

  • 김승훈;정일균
    • 로봇학회논문지, Vol. 10, No. 2, pp. 104-111, 2015
  • In this paper, we propose visual information that gives a tele-operator a highly maneuverable system. The visual information is a bird's-eye-view image from a UFR (Unmanned Flying Robot) showing the area around a UGR (Unmanned Ground Robot). A UGR detection and tracking method is needed so that the UFR can always follow the UGR. The proposed system uses the TLD (Tracking-Learning-Detection) method to rapidly and robustly estimate the motion of the newly detected UGR between consecutive frames, and the TLD system trains an on-line UGR detector for the tracked UGR. An extended Kalman filter is used to enhance the performance of the tracker. As a result, the tele-operator is provided with visual information for convenient control.
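A rough sketch of this tracking loop is given below, using the TLD tracker from OpenCV's contrib "legacy" module and a linear constant-velocity Kalman filter in place of the paper's extended Kalman filter; the noise covariances and the fallback logic are illustrative assumptions.

```python
# Sketch of tracking a UGR in aerial video with OpenCV's TLD tracker plus a
# Kalman filter smoothing the box center. A linear constant-velocity filter is
# a simplification of the paper's extended Kalman filter; parameters are illustrative.
import cv2
import numpy as np

def track_ugr(video_path, initial_box):
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    tracker = cv2.legacy.TrackerTLD_create()
    tracker.init(frame, initial_box)                 # initial_box = (x, y, w, h)

    kf = cv2.KalmanFilter(4, 2)                      # state: [cx, cy, vx, vy], measurement: [cx, cy]
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        prediction = kf.predict()
        found, box = tracker.update(frame)
        if found:
            x, y, w, h = box
            measurement = np.array([[x + w / 2], [y + h / 2]], np.float32)
            kf.correct(measurement)
            center = measurement.ravel()
        else:
            center = prediction[:2].ravel()          # fall back to the predicted position
        yield center                                 # smoothed UGR center for the operator view
    cap.release()
```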

품질 검사자의 외관검사 검출력 향상방안에 관한 연구 (A Study on the Improvement of Human Operators' Performance in Detection of External Defects in Visual Inspection)

  • 한성재;함동한
    • 대한안전경영과학회지, Vol. 21, No. 4, pp. 67-74, 2019
  • Visual inspection is regarded as one of the critical activities for quality control in a manufacturing company, so it is important to improve the performance of detecting a defective part or product. There are three possible working modes for visual inspection: fully automatic (by automatic machines), fully manual (by human operators), and semi-automatic (by collaboration between human operators and automatic machines). Most current studies on visual inspection focus on improving automatic detection performance by developing better automatic machines using computer vision technologies. However, there is still a range of situations in which human operators must conduct visual inspection with or without automatic machines, and in these situations the operators' performance is critical to successful quality control. Visual inspection of components assembled into a mobile camera module is one such situation. This study investigates human performance issues in visual inspection of these components, paying particular attention to human errors. To this end, an Abstraction Hierarchy-based work domain modeling method was applied to examine a range of direct and indirect factors related to human errors, and their relationships, in the visual inspection of the components. Although this study was conducted in the context of manufacturing mobile camera modules, the proposed method can easily be generalized to other industries.