• Title/Summary/Keyword: Vision sensing

3D Environment Perception using Stereo Infrared Light Sources and a Camera (스테레오 적외선 조명 및 단일카메라를 이용한 3차원 환경인지)

  • Lee, Soo-Yong;Song, Jae-Bok
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.5
    • /
    • pp.519-524
    • /
    • 2009
  • This paper describes a new sensor system for 3D environment perception using stereo structured infrared light sources and a camera. Environment and obstacle sensing is the key issue for mobile robot localization and navigation. Laser scanners and infrared scanners cover 180° and are accurate, but they are too expensive, and because they use rotating light beams, their range measurements are constrained to a plane. 3D measurements are much more useful in many ways for obstacle detection, map building and localization. Stereo vision is a very common way of obtaining depth information about a 3D environment; however, it requires that correspondences be clearly identified, and it also depends heavily on the lighting conditions of the environment. Instead of a stereo camera, a monocular camera and two projected infrared light sources are used in order to reduce the effects of ambient light while obtaining the 3D depth map. Modeling of the projected light pattern enabled precise estimation of the range. Two successive captures of the image, with left and then right infrared light projection, provide several benefits, including a wider depth-measurement area, higher spatial resolution and visibility perception.
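
The range recovery described here is ordinary structured-light triangulation. A minimal sketch, assuming a pinhole camera, a known camera-to-projector baseline, and an already-detected shift of the infrared pattern (all names and numbers below are illustrative, not from the paper):

```python
import numpy as np

def depth_from_pattern_offset(pixel_offset, focal_px, baseline_m):
    """Triangulate depth from the observed shift of a projected IR pattern.

    pixel_offset : shift (pixels) between expected and observed pattern position
    focal_px     : camera focal length in pixels
    baseline_m   : camera-to-projector baseline in meters
    """
    pixel_offset = np.asarray(pixel_offset, dtype=float)
    with np.errstate(divide="ignore"):
        # Zero shift means the point is effectively at infinity.
        return np.where(pixel_offset > 0,
                        focal_px * baseline_m / pixel_offset,
                        np.inf)

# Example: a 12 px shift with f = 600 px and a 10 cm baseline -> 5.0 m.
print(depth_from_pattern_offset(12.0, 600.0, 0.10))
```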

Adaptive Processing for Feature Extraction: Application of Two-Dimensional Gabor Function

  • Lee, Dong-Cheon
    • Korean Journal of Remote Sensing
    • /
    • v.17 no.4
    • /
    • pp.319-334
    • /
    • 2001
  • Extracting primitives from imagery plays an important role in visual information processing, since primitives provide useful information about the characteristics of objects and patterns. The human visual system utilizes features without difficulty for image interpretation, scene analysis and object recognition, but extracting and analyzing features computationally are difficult tasks. The ultimate goal of digital image processing is to extract information and reconstruct objects automatically. The objective of this study is to develop a robust method toward that goal. In this study, an adaptive strategy was developed by implementing Gabor filters in order to extract feature information and to segment images. Gabor filters are conceived as hypothetical structures of the retinal receptive fields in the human visual system, so it is possible to develop a method that resembles the performance of human visual perception by using them. A method to compute appropriate parameters of the Gabor filters without human visual inspection is proposed. The entire framework is based on the theory of human visual perception. Digital images were used to evaluate the performance of the proposed strategy. The results show that the proposed adaptive approach improves the performance of Gabor filters for feature extraction and segmentation.
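
The 2D Gabor function itself is standard, so a small sketch can make its parameterization concrete. A minimal numpy implementation (the paper's adaptive parameter-selection step is not reproduced; the parameter values below are illustrative):

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, wavelength, gamma=1.0, psi=0.0):
    """2D Gabor kernel: a sinusoidal carrier under a Gaussian envelope.

    theta      : filter orientation in radians
    wavelength : wavelength of the sinusoidal carrier in pixels
    gamma      : spatial aspect ratio of the Gaussian envelope
    psi        : phase offset of the carrier
    """
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate image coordinates into the filter's orientation.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_t / wavelength + psi)
    return envelope * carrier

# A small bank over four orientations, as one might use for segmentation.
bank = [gabor_kernel(21, sigma=4.0, theta=t, wavelength=8.0)
        for t in np.linspace(0, np.pi, 4, endpoint=False)]
```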

A Study on Extraction Depth Information Using a Non-parallel Axis Image (사각영상을 이용한 물체의 고도정보 추출에 관한 연구)

  • 이우영;엄기문;박찬응;이쾌희
    • Korean Journal of Remote Sensing
    • /
    • v.9 no.2
    • /
    • pp.7-19
    • /
    • 1993
  • In stereo vision, when two parallel-axis images are used, only a small portion of the object is contained, the B/H (baseline-to-height) ratio is limited by the size of the object, and the depth information is inaccurate. To overcome these difficulties, we take a non-parallel-axis image that is rotated by θ about the y-axis and match it against the other, parallel-axis image. The epipolar lines of the non-parallel-axis image are not the same as those of the parallel-axis image, so the two images cannot be matched directly. In this paper, we geometrically transform the non-parallel-axis image using the camera parameters so that its epipolar lines are aligned in parallel. NCC (normalized cross correlation) is used as the matching measure, an area-based matching technique is used to find correspondences, and a 9×9 window size, chosen experimentally, is used. The focal length, which is necessary to obtain the depth information of a given object, is calculated with the least-squares method from the CCD camera characteristics and lens properties. Finally, we select 30 test points on a given object whose elevation varies up to 150 mm, calculate their heights, and find that the height RMS error is 7.9 mm.
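
The matching step combines NCC with the experimentally chosen 9×9 window. A minimal sketch of that area-based search along a rectified epipolar line (bounds handling is simplified; the names are illustrative):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation of two equal-sized windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a**2).sum() * (b**2).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_along_epipolar(left, right, row, col, half=4, max_disp=40):
    """Return the disparity maximizing NCC along a rectified epipolar line.

    half=4 gives the 9x9 window reported in the paper.
    """
    ref = left[row - half:row + half + 1, col - half:col + half + 1]
    scores = []
    for d in range(max_disp + 1):
        c = col - d
        if c - half < 0:
            break
        cand = right[row - half:row + half + 1, c - half:c + half + 1]
        scores.append(ncc(ref, cand))
    best = int(np.argmax(scores))
    return best, scores[best]
```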

Single Pixel Compressive Camera for Fast Video Acquisition using Spatial Cluster Regularization

  • Peng, Yang;Liu, Yu;Lu, Kuiyan;Zhang, Maojun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.11
    • /
    • pp.5481-5495
    • /
    • 2018
  • Single-pixel imaging technology has been developed for years; however, video acquisition with a single-pixel camera is not a well-studied problem in computer vision. This work proposes a new scheme for a single-pixel camera to acquire video data and a new regularization for a robust signal-recovery algorithm. The method establishes a single-pixel video compressive-sensing scheme that reconstructs video clips in the spatial domain by recovering the differences between consecutive frames. Unlike traditional data-acquisition methods that work in a transform domain, the proposed scheme reconstructs the video frames directly in the spatial domain. At the same time, a new regularization called spatial cluster is introduced to improve the performance of signal reconstruction. The regularization derives from the observation that the nonzero coefficients tend to be clustered in the difference between consecutive video frames. We implemented an experimental platform to illustrate the effectiveness of the proposed algorithm. Numerous experiments show good performance of video acquisition and frame reconstruction on the single-pixel camera.
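
The recovery step solves a sparse reconstruction of the frame difference. As a stand-in for the paper's spatial-cluster-regularized solver, here is a minimal sketch using plain ISTA with an l1 prior (the cluster regularization itself is not reproduced; the dimensions are illustrative):

```python
import numpy as np

def ista_l1(Phi, y, lam=0.05, iters=200):
    """Recover a sparse x from y = Phi @ x by iterative soft-thresholding.

    Plain l1 regularization; the paper's method adds a spatial-cluster prior
    on top of a comparable recovery loop.
    """
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2    # 1 / Lipschitz constant
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        z = x - step * (Phi.T @ (Phi @ x - y))            # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # shrinkage
    return x

# Example: measure a sparse frame difference with random single-pixel patterns.
rng = np.random.default_rng(0)
n, m = 256, 80                                    # pixels, measurements
diff = np.zeros(n)
diff[rng.choice(n, 5, replace=False)] = 1.0       # sparse frame difference
Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # measurement patterns
recovered = ista_l1(Phi, Phi @ diff)
```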

A Study on a Dual Electromagnetic Sensor System for Weld Seam Tracking of I-Butt Joints

  • Kim, J.-W.;Shin, J.-H.
    • International Journal of Korean Welding Society
    • /
    • v.2 no.2
    • /
    • pp.51-56
    • /
    • 2002
  • Weld seam tracking systems for arc welding use various kinds of sensors, such as arc sensors, vision sensors, laser displacement sensors and so on. Among the variety of sensors available, the electromagnetic sensor is one of the most useful, especially in sheet-metal butt-joint arc welding, primarily because it is hardly affected by the intense arc light and fume generated during welding, or by the surface condition of the weldments. In this study, a dual electromagnetic sensor, which utilizes the variation of the current induced in the sensing coil due to the eddy-current variation of the metal near the sensor, was developed for arc welding of sheet-metal I-butt joints. The dual electromagnetic sensor thus detects the offset displacement of the weld line from the center of the sensor head even when there is no clearance in the joint. A set of design variables of the sensor was determined for maximum sensing capability through repeated experiments. Seam tracking is performed by correcting the position of the sensor by the amount of the offset displacement every sampling period. The experimental results showed that the developed sensor has excellent weld-seam detection capability when the sensor-to-workpiece distance is less than about 5 mm, and that the system has excellent seam-tracking ability for I-butt joints of sheet metal.
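
The tracking rule itself is a simple feedback loop: each sampling period, the measured offset is applied as a lateral correction. A minimal sketch (the sensor interface and gain are assumptions, not from the paper):

```python
def track_seam(read_offset_mm, move_lateral_mm, periods, gain=1.0):
    """Correct the torch position by the measured weld-line offset
    once every sampling period (a simple proportional correction)."""
    for _ in range(periods):
        offset = read_offset_mm()           # signed offset from sensor center
        move_lateral_mm(-gain * offset)     # steer back toward the weld line

# Usage with stand-in callables (a real system would talk to the hardware):
readings = iter([1.2, 0.5, 0.1, 0.0])
track_seam(lambda: next(readings),
           lambda dx: print(f"lateral correction {dx:+.2f} mm"), periods=4)
```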

3-D vision sensor for arc welding industrial robot system with coordinated motion

  • Shigehiru, Yoshimitsu;Kasagami, Fumio;Ishimatsu, Takakazu
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1992.10b
    • /
    • pp.382-387
    • /
    • 1992
  • In order to obtain the desired arc-welding performance, we had already developed an arc-welding robot system that enables coordinated motion of dual-arm robots. In this system one robot arm holds the welding target as a positioning device, and the other robot moves the welding torch. For such a dual-arm robot system, the positioning accuracy of the robots is an important problem, since conventional industrial robots do not have sufficient absolute positioning accuracy. To cope with this problem, our robot system employed the teaching-playback method, in which absolute errors are compensated by the operator's visual feedback. With this system, ideal arc welding that considers the posture of the welding target and the direction of gravity became possible. However, another problem remained even after we developed an original teaching method for dual-arm robots with coordinated motion: manual teaching is tedious, since it requires fine movements with intense attention. Therefore, we developed a 3-dimensional vision-guided robot control method for our welding robot system with coordinated motion. In this paper we present the 3-dimensional vision sensor that guides our arc-welding robot system with coordinated motion. The sensing device is compactly designed and is mounted on the tip of the arc-welding robot. The sensor detects the 3-dimensional shape of the groove on the target workpiece that needs to be welded, and the welding robot is controlled to trace the groove accurately. The principle of the 3-dimensional measurement is based on the slit-ray projection method; to realize it, two laser slit-ray projectors and one CCD TV camera are compactly mounted. Careful image processing enables 3-dimensional data processing without suffering from disturbance light. The 3-dimensional information of the target groove is combined with rough teaching data given by the operator in advance; therefore, the teaching tasks are simplified.
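
Slit-ray projection recovers a 3D point by intersecting the camera ray through an imaged slit pixel with the known laser plane. A minimal sketch of that triangulation (the intrinsics and plane parameters below are illustrative):

```python
import numpy as np

def slit_ray_point(pixel, K, plane_n, plane_d):
    """Triangulate the 3D point where a camera ray meets the slit-ray plane.

    pixel   : (u, v) image coordinates of a point on the imaged slit
    K       : 3x3 camera intrinsic matrix
    plane_n : unit normal of the laser plane in camera coordinates
    plane_d : plane offset, so points X on the plane satisfy n . X = d
    """
    ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    t = plane_d / (plane_n @ ray)   # the ray is X = t * ray from the camera
    return t * ray

K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0,   0.0,   1.0]])
n = np.array([0.0, -0.707, 0.707])           # example laser plane orientation
print(slit_ray_point((350, 200), K, n, 0.2))
```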

Computer Vision Based Measurement, Error Analysis and Calibration (컴퓨터 시각(視覺)에 의거한 측정기술(測定技術) 및 측정오차(測定誤差)의 분석(分析)과 보정(補正))

  • Hwang, H.;Lee, C.H.
    • Journal of Biosystems Engineering
    • /
    • v.17 no.1
    • /
    • pp.65-78
    • /
    • 1992
  • When using a computer vision system for measurement, the geometrically distorted input image usually restricts the site and size of the measuring window. A geometrically distorted image, caused by the image sensing and processing hardware, degrades the accuracy of the visual measurement and prohibits arbitrary selection of the measuring scope. Therefore, image calibration is inevitable to improve measuring accuracy. A calibration process is usually done in four steps: measurement, modeling, parameter estimation, and compensation. In this paper, an efficient error-calibration technique for a geometrically distorted input image was developed using a neural network. After calibrating a unit pixel, the distorted image was compensated by training a CMLAN (Cerebellar Model Linear Associator Network) without modeling the behavior of any system element. The input/output training pairs for the network were obtained by processing the image of a devised sample pattern. The generalization property of the network successfully compensates the distortion errors of untrained, arbitrary pixel points in the image space. The error convergence of the trained network with respect to the network control parameters is also presented. The compensated image produced by the network was then post-processed using a simple DDA (Digital Differential Analyzer) to avoid pixel disconnectivity. The compensation effect was verified using geometric primitives of known size. A way to extract a real-scaled geometric quantity of the object directly from the 8-directional chain coding was also devised and coded. Since the developed calibration algorithm requires no knowledge of modeling system elements or estimating parameters, it can be applied simply to any image processing system. Furthermore, it efficiently enhances measurement accuracy and allows arbitrary sizing and locating of the measuring window. The applied and developed algorithms were coded in a menu-driven way using MS C Ver. 6.0, PC VISION PLUS library functions, and VGA graphics functions.
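
The compensation idea, learning the distorted-to-true pixel mapping from sampled pattern pairs and generalizing it to untrained pixels, can be sketched with a scattered-data interpolator standing in for the trained CMLAN (the calibration pairs below are invented for illustration):

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Calibration pairs from the imaged sample pattern: distorted -> true pixel.
# (A real pattern yields a dense grid of such pairs.)
distorted = np.array([[10, 10], [100, 12], [12, 100], [105, 104]], float)
true_pos = np.array([[8, 9], [98, 10], [10, 98], [100, 100]], float)

# The interpolator generalizes the correction to untrained pixel points,
# playing the role of the trained network's generalization property.
compensate = LinearNDInterpolator(distorted, true_pos)

print(compensate([[55.0, 55.0]]))  # compensated coordinates of one pixel
```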

Analysis of 3D Motion Recognition using Meta-analysis for Interaction (기존 3차원 인터랙션 동작인식 기술 현황 파악을 위한 메타분석)

  • Kim, Yong-Woo;Whang, Min-Cheol;Kim, Jong-Hwa;Woo, Jin-Cheol;Kim, Chi-Jung;Kim, Ji-Hye
    • Journal of the Ergonomics Society of Korea
    • /
    • v.29 no.6
    • /
    • pp.925-932
    • /
    • 2010
  • Most research in the field of three-dimensional interaction has reported different accuracies depending on sensing, mode and method, and implementations of interaction have lacked consistency across application fields. Therefore, this study suggests research trends in three-dimensional interaction using meta-analysis. Searching related keywords in databases yielded 153 domestic papers and 188 international papers covering three-dimensional interaction. Analytical coding tables selected 18 domestic papers and 28 international papers for analysis. Frequency analysis was carried out on the method of action, element, number and accuracy, and accuracy was then verified by the effect size in the meta-analysis. As a result, the effect size of sensor-based methods was higher than that of vision-based methods, but the effect size was small, at 0.02. The effect size of vision-based methods using hand motion was higher than that of sensor-based methods using hand motion. Therefore, implementing three-dimensional interaction with sensor-based methods, or with vision-based methods using hand motions, is more efficient. This study is significant as a comprehensive analysis of three-dimensional motion recognition for interaction, and it suggests application directions for three-dimensional interaction.
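
The accuracy verification rests on standardized effect sizes. A minimal sketch of computing and pooling Cohen's d across studies (a generic fixed-effect weighting; the numbers are illustrative, not the paper's data):

```python
import numpy as np

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference between two groups (Cohen's d)."""
    pooled_sd = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                        / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def pooled_effect(ds, ns):
    """Fixed-effect pooled size: sample-size-weighted mean of d values."""
    ds, ns = np.asarray(ds, float), np.asarray(ns, float)
    return float((ds * ns).sum() / ns.sum())

# Illustrative numbers only, not the paper's data.
print(pooled_effect([0.05, -0.02, 0.01], [24, 18, 30]))
```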

Image Recognition Using Colored-hear Transformation Based On Human Synesthesia (인간의 공감각에 기반을 둔 색청변환을 이용한 영상 인식)

  • Shin, Seong-Yoon;Moon, Hyung-Yoon;Pyo, Seong-Bae
    • Journal of the Korea Society of Computer and Information
    • /
    • v.13 no.2
    • /
    • pp.135-141
    • /
    • 2008
  • In this paper, we propose colored-hearing recognition, which exploits a distinguishing feature of synesthesia, the sharing of vision with the sense of hearing, for human sensing. We investigated how the structure of objects recognized through visual analysis with a camera can influence perception, and studied how to let blind persons experience an impression similar to actually seeing the object. First, object boundaries are detected in the image data representing a specific scene. Then, four specific features are extracted from the picture: the object location relative to the image focus, the impression of the average color, the distance information of each object, and the object area. Finally, these features are mapped to auditory factors, which are used by blind persons to perceive the scene. The proposed colored-hearing transformation provides fast and detailed perception and can transmit sensory information simultaneously. We obtained good results when applying this concept to image recognition for blind persons.
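
The core step is mapping per-object visual features to auditory factors. A minimal sketch of one such mapping (the specific feature-to-sound assignments below are assumptions for illustration, not the paper's exact transformation):

```python
import colorsys

def features_to_sound(avg_rgb, x_center, area, img_width):
    """Map the visual features of one detected object to auditory factors:
    hue -> pitch, brightness -> loudness, horizontal position -> stereo pan,
    object area -> note duration."""
    r, g, b = (c / 255.0 for c in avg_rgb)
    hue, lightness, _ = colorsys.rgb_to_hls(r, g, b)
    return {
        "pitch_hz": 220.0 + 660.0 * hue,          # low hue -> low pitch
        "loudness": lightness,                    # 0 (dark) .. 1 (bright)
        "pan": 2.0 * x_center / img_width - 1.0,  # -1 left .. +1 right
        "duration_s": 0.2 + min(area / 10000.0, 1.0),
    }

print(features_to_sound((200, 80, 40), x_center=160, area=2500, img_width=320))
```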

A hierarchical semantic segmentation framework for computer vision-based bridge damage detection

• Jingxiao Liu;Yujie Wei;Bingqing Chen;Hae Young Noh
    • Smart Structures and Systems
    • /
    • v.31 no.4
    • /
    • pp.325-334
    • /
    • 2023
  • Computer vision-based damage detection enables non-contact, efficient and low-cost bridge health monitoring, reducing the need for labor-intensive manual inspection or for a large number of on-site sensing instruments. By leveraging recent semantic segmentation approaches, we can detect regions of critical structural components and identify damage at the pixel level in images. However, existing methods perform poorly when detecting small and thin damages (e.g., cracks), and the problem is exacerbated by imbalanced samples. To this end, we incorporate domain knowledge to introduce a hierarchical semantic segmentation framework that imposes a hierarchical semantic relationship between component categories and damage types. For instance, certain types of concrete cracks are only present on bridge columns, so the non-column region may be masked out when detecting such damages. In this way, the damage detection model focuses on extracting features from relevant structural components and avoids extracting them from irrelevant regions. We also utilize multi-scale augmentation to preserve the contextual information of each image without losing the ability to handle small and/or thin damages. In addition, our framework employs importance sampling, in which images with rare components are sampled more often, to address sample imbalance. We evaluated our framework on a public synthetic dataset that consists of 2,000 railway bridges. Our framework achieves a 0.836 mean intersection over union (IoU) for structural component segmentation and a 0.483 mean IoU for damage segmentation. These results are improvements of 5% and 18% in total for the structural component segmentation and damage segmentation tasks, respectively, compared to the best-performing baseline model.
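
Two pieces of this pipeline are easy to make concrete: masking damage predictions by the relevant component region, and the mean-IoU metric used for evaluation. A minimal sketch (array names are illustrative, not from the paper's code):

```python
import numpy as np

def mask_damage_by_component(damage_prob, component_mask):
    """Suppress damage scores outside the relevant component region,
    e.g., zero out 'column crack' scores off the column mask."""
    return damage_prob * component_mask

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across the classes present in the labels."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```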