• Title/Summary/Keyword: Structured Light Image


Robust Depth Measurement Using Dynamic Programming Technique on the Structured-Light Image (구조화 조명 영상에 Dynamic Programming을 사용한 신뢰도 높은 거리 측정 방법)

  • Wang, Shi;Kim, Hyong-Suk;Lin, Chun-Shin;Chen, Hong-Xin;Lin, Hai-Ping
    • Journal of Internet Computing and Services / v.9 no.3 / pp.69-77 / 2008
  • An algorithm for tracking the trace of structured light is proposed to obtain depth information accurately. The technique is based on the fact that the pixel location of the light in an image has a unique association with the object depth. However, the projected light is sometimes dim or invisible due to absorption and reflection at the object surface. A dynamic programming approach is proposed to solve this problem. In this paper, the mathematics necessary for implementing the algorithm is presented, and the projected laser light is tracked using a dynamic programming technique. An advantage is that the trace retains its integrity even when many parts of the laser beam are dim or invisible. Experimental results, as well as the 3-D restoration, are reported.

  • PDF
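The core of such a stripe-tracking scheme can be sketched in a few lines: treat each image column as a stage of a dynamic program, use negative pixel brightness as the per-pixel cost, and penalize row jumps between adjacent columns so the recovered trace stays continuous across dim or invisible stretches. This is an illustrative reconstruction, not the paper's exact formulation; the linear jump penalty and the brightness-based cost are assumptions.

```python
import numpy as np

def track_stripe_dp(intensity, jump_cost=1.0):
    """Track a laser stripe across image columns with dynamic programming.

    intensity: 2-D array (rows x cols); the stripe is a bright curve,
    roughly one pixel per column, possibly dim or absent in places.
    Returns the row index of the stripe in each column.
    """
    rows, cols = intensity.shape
    cost = -intensity.astype(float)          # bright pixels are cheap
    acc = cost[:, 0].copy()                  # accumulated cost up to column 0
    back = np.zeros((rows, cols), dtype=int)
    row_idx = np.arange(rows)
    for c in range(1, cols):
        # trans[i, j]: cost of arriving at row i from row j of the previous
        # column; large row jumps are penalised linearly
        trans = acc[None, :] + jump_cost * np.abs(row_idx[:, None] - row_idx[None, :])
        back[:, c] = np.argmin(trans, axis=1)
        acc = trans[row_idx, back[:, c]] + cost[:, c]
    # backtrack from the cheapest final row
    path = np.empty(cols, dtype=int)
    path[-1] = int(np.argmin(acc))
    for c in range(cols - 1, 0, -1):
        path[c - 1] = back[path[c], c]
    return path
```

Because the smoothness term bridges columns where the stripe is dim or missing, the recovered path stays on the stripe even through gaps, which is the property the abstract emphasizes.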

A Real-time Compact Structured-light based Range Sensing System

  • Hong, Byung-Joo;Park, Chan-Oh;Seo, Nam-Seok;Cho, Jun-Dong
    • JSTS:Journal of Semiconductor Technology and Science / v.12 no.2 / pp.193-202 / 2012
  • In this paper, we propose a new approach to a compact range-sensing system for real-time robot applications. Instead of using an off-the-shelf camera and projector, we devise a compact system with a CMOS image sensor and a DMD (Digital Micro-mirror Device) that yields smaller dimensions (168 × 50 × 60 mm) and lighter weight (500 g). We also realize one-chip, hard-wired processing of the structured-light projection and of the range computation, by exploiting correspondences between the CMOS image sensor and the DMD. This application-specific processing is implemented on an FPGA in real time. Our range acquisition system performs 30 times faster than the same implementation in software. We also devise an efficient methodology to identify the proper light intensity, to enhance the quality of the range sensing and minimize the decoding error. Our experimental results show that the total error is reduced by 16% compared to the average case.
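The depth-from-correspondence principle behind such projector-camera systems is plain triangulation: once a projected column is matched to an observed image column, depth follows from the disparity. A minimal sketch with illustrative numbers (not the paper's calibration values):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Triangulated depth for a projector-camera pair.

    f_px: focal length in pixels; baseline_m: projector-camera baseline
    in metres; disparity_px: column shift between the projected code
    and its observed image position.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

# illustrative numbers: 600 px focal length, 10 cm baseline,
# 40 px disparity
z = depth_from_disparity(600.0, 0.10, 40.0)   # depth in metres
```

Because depth is inversely proportional to disparity, decoding errors in the correspondence translate directly into range errors, which is why the paper tunes the projected light intensity to minimize them.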

3D Environment Perception using Stereo Infrared Light Sources and a Camera (스테레오 적외선 조명 및 단일카메라를 이용한 3차원 환경인지)

  • Lee, Soo-Yong;Song, Jae-Bok
    • Journal of Institute of Control, Robotics and Systems / v.15 no.5 / pp.519-524 / 2009
  • This paper describes a new sensor system for 3D environment perception using stereo structured infrared light sources and a camera. Environment and obstacle sensing is the key issue for mobile robot localization and navigation. Laser scanners and infrared scanners cover 180° and are accurate, but too expensive. Such sensors use rotating light beams, so the range measurements are constrained to a plane. 3D measurements are much more useful in many ways for obstacle detection, map building, and localization. Stereo vision is a very common way of obtaining depth information about a 3D environment. However, it requires that correspondences be clearly identified, and it also depends heavily on the lighting conditions of the environment. Instead of a stereo camera, a monocular camera and two projected infrared light sources are used, in order to reduce the effects of ambient light while obtaining a 3D depth map. Modeling of the projected light pattern enables precise estimation of the range. Two successive captures of the image, with left and right infrared light projection respectively, provide several benefits, including a wider area of depth measurement, higher spatial resolution, and visibility perception.

Coordinate Measuring Technique based on Optical Triangulation using the Two Images (두장의 사진을 이용한 광삼각법 삼차원측정)

  • 양주웅;이호재
    • Proceedings of the Korean Society of Precision Engineering Conference / 2000.11a / pp.76-80 / 2000
  • This paper describes a coordinate measuring technique based on optical triangulation using two images. To overcome the drawback of structured-light systems, which measure coordinates point by point, the light source is replaced by a CCD camera, whose pixels are considered virtual light sources. The overall geometry, including the two camera images, is modeled, and from this geometry a formula for calculating the 3D coordinates of a specified point is derived. In short, the ray from a virtual light source is reflected at the measuring point, and the corresponding image point is formed in the other image. Simulation results verify the validity of the formula. This method enables multiple points to be detected from the photographs.

  • PDF

A Study on Automatic Seam Tracking using Vision Sensor (비전센서를 이용한 자동추적장치에 관한 연구)

  • 전진환;조택동;양상민
    • Proceedings of the Korean Society of Precision Engineering Conference / 1995.10a / pp.1105-1109 / 1995
  • A CCD camera, part of the vision system, was used to realize an automatic seam-tracking system, and the 3-D information needed to generate the torch path was obtained using a laser slit beam. To extract the laser stripe and obtain the welding-specific point, an adaptive Hough transformation was used. Although the basic Hough transformation takes too much time to process images on-line, it tends to be robust to noise such as spatter. For that reason, it was complemented with the adaptive Hough transformation to gain on-line processing capability for scanning a welding-specific point. The dead zone, where sensing of the weld line is impossible, is eliminated by rotating the camera about an axis centered at the welding torch. The camera angle is controlled so as to acquire the minimum image data needed for sensing the weld line; hence the image-processing time is reduced. A fuzzy controller is adopted to control the camera angle.

  • PDF
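The basic (rho, theta) Hough transform that the abstract builds on can be sketched as follows: each edge point votes for every line passing through it, so a dominant line such as the laser stripe accumulates many votes while isolated spatter-like noise points do not. The adaptive variant in the paper would restrict the (rho, theta) search to a window around the previous frame's result; the accumulator resolution below is an assumption.

```python
import numpy as np

def hough_line(points, rho_res=1.0, n_theta=180):
    """Standard (rho, theta) Hough transform over 2-D points.

    Returns the (rho, theta) of the most-voted line:
    rho = x*cos(theta) + y*sin(theta).
    """
    pts = np.asarray(points, dtype=float)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    max_rho = np.hypot(pts[:, 0].max(), pts[:, 1].max()) + 1.0
    n_rho = int(2 * max_rho / rho_res) + 1
    acc = np.zeros((n_rho, n_theta), dtype=int)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in pts:
        rho = x * cos_t + y * sin_t              # one rho per theta
        idx = np.round((rho + max_rho) / rho_res).astype(int)
        acc[idx, np.arange(n_theta)] += 1        # vote
    r_i, t_i = np.unravel_index(np.argmax(acc), acc.shape)
    return r_i * rho_res - max_rho, thetas[t_i]
```

Voting makes the estimate robust to spatter, but the full parameter sweep is exactly the cost the adaptive version avoids by searching only near the previous detection.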

A Study on Depth Information Acquisition Improved by Gradual Pixel Bundling Method at TOF Image Sensor

  • Kwon, Soon Chul;Chae, Ho Byung;Lee, Sung Jin;Son, Kwang Chul;Lee, Seung Hyun
    • International Journal of Internet, Broadcasting and Communication / v.7 no.1 / pp.15-19 / 2015
  • The depth information of an image is used in a variety of applications, including 2D/3D conversion, multi-view extraction, modeling, depth keying, etc. There are various methods of acquiring depth information: using a stereo camera, using a time-of-flight (TOF) depth camera, using 3D modeling software, using a 3D scanner, and using structured light, as in Microsoft's Kinect. In particular, a TOF depth camera measures distance using infrared light, and the TOF sensor depends on the optical sensitivity of the image sensor (CCD/CMOS). Thus, existing image sensors must obtain the infrared image by bundling several pixels, which reduces the resolution of the image. This paper proposes a method of acquiring a high-resolution image through gradual area movement while acquiring low-resolution images through the pixel-bundling method. With this method, one can acquire image information with improved illumination sensitivity (lux) and resolution, without increasing the performance of the image sensor, since the gradual pixel-bundling algorithm resolves the low-illumination problem without giving up image resolution.
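Pixel bundling and the gradual-shift idea can be illustrated as follows: summing k × k blocks boosts the collected signal at the cost of resolution, while repeating the binning at every sub-block offset yields k² shifted low-resolution frames that together sample the scene at the original grid spacing. This is a simplified sketch of the idea, not the paper's algorithm; the averaging-based recombination is an assumption.

```python
import numpy as np

def bin_pixels(img, k, dy=0, dx=0):
    """Sum k x k pixel blocks starting at offset (dy, dx): pixel
    bundling boosts the collected infrared signal per sample."""
    h, w = img.shape
    h2, w2 = (h - dy) // k, (w - dx) // k
    v = img[dy:dy + h2 * k, dx:dx + w2 * k]
    return v.reshape(h2, k, w2, k).sum(axis=(1, 3))

def gradual_bundling(img, k):
    """Accumulate binned frames taken at every sub-block offset and
    spread each block sum back over its window, so the k*k shifted
    low-resolution frames jointly cover the full-resolution grid."""
    h, w = img.shape
    out = np.zeros((h, w))
    cnt = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            low = bin_pixels(img, k, dy, dx)
            h2, w2 = low.shape
            out[dy:dy + h2 * k, dx:dx + w2 * k] += np.repeat(
                np.repeat(low, k, axis=0), k, axis=1) / (k * k)
            cnt[dy:dy + h2 * k, dx:dx + w2 * k] += 1
    return out / np.maximum(cnt, 1)
```

Each binned sample collects k² times the light of a single pixel, and shifting the binning window by one pixel per frame restores per-pixel sampling positions over the sequence.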

Application of 3-D Scanner to Analysis of Functional Instability of the Ankle

  • Han, Cheng-Chun;Kubo, Masakazu;Matsusaka, Nobuou;Ishimatsu, Takakazu
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2003.10a / pp.1971-1975 / 2003
  • This paper describes a technique that analyzes the functional instability of the ankle using a three-dimensional scanner. The technique is based on the structured-light pattern projection method, performed using one digital still camera and one LCD projector. This system can be realized easily and at low cost, and the measurement results are highly accurate, with a measurement error of about 0.2 mm or less. Using this technique, the three-dimensional posture of the leg and foot of the target person is measured and analyzed.

  • PDF

Hard calibration of a structured light for the Euclidian reconstruction (3차원 복원을 위한 구조적 조명 보정방법)

  • 신동조;양성우;김재희
    • Proceedings of the IEEK Conference / 2003.11a / pp.183-186 / 2003
  • A vision sensor should be calibrated before a Euclidean shape reconstruction can be inferred. Point-to-point calibration, also referred to as hard calibration, estimates the calibration parameters by means of a set of 3D-to-2D point pairs. We propose a new method for determining the set of 3D-to-2D pairs for structured-light hard calibration. The pairs are determined simply from the epipolar geometry between the camera image plane and the projector plane, together with a projector-calibrating grid pattern. The projector calibration is divided into two stages: a world 3D data acquisition stage and a corresponding 2D data acquisition stage. After the 3D data points are derived using the cross ratio, the corresponding 2D point in the projector plane can be determined from the fundamental matrix and the horizontal grid ID of the projector-calibrating pattern. Euclidean reconstruction can then be achieved by linear triangulation, and experimental results from simulation are presented.

  • PDF
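The linear triangulation step mentioned above is commonly implemented as a direct linear transform (DLT): each view contributes two homogeneous linear constraints on the 3D point, and the smallest singular vector of the stacked system gives the solution. A minimal sketch (the projection matrices in the usage example are illustrative, not the paper's calibration):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover the 3-D point whose
    projections through the 3x4 matrices P1, P2 are the image points
    x1, x2 = (u, v). In a structured-light system, P2 plays the role
    of the calibrated projector."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],   # u1 * (row 3) - (row 1) = 0
        x1[1] * P1[2] - P1[1],   # v1 * (row 3) - (row 2) = 0
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                   # null vector of A (up to scale)
    return X[:3] / X[3]          # dehomogenize
```

With a calibrated camera at the origin and a second device translated along x, a consistent pair of observations recovers the 3D point exactly; with noisy data the SVD gives the algebraic least-squares solution.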

Neural Network Based Camera Calibration and 2-D Range Finding (신경회로망을 이용한 카메라 교정과 2차원 거리 측정에 관한 연구)

  • 정우태;고국원;조형석
    • Proceedings of the Korean Society of Precision Engineering Conference / 1994.10a / pp.510-514 / 1994
  • This paper deals with the application of a neural network to camera calibration with a wide-angle lens and to 2-D range finding. A wide-angle lens has the advantage of a wide viewing angle for mobile environment recognition and robot eye-in-hand systems, but it suffers from severe radial distortion. A multilayer neural network is used to calibrate the camera, taking lens distortion into account, and is trained by the error back-propagation method. The MLP maps between the camera image plane and the plane made by the structured light. In the experiments, calibration of the camera was performed with a calibration chart printed by a laser printer at 300 d.p.i. resolution. A high-distortion lens, a COSMICAR 4.2 mm, was used to test whether the neural network could effectively compensate for the camera distortion. The 2-D range of several objects was measured with a laser range-finding system composed of a camera, a frame grabber, and laser structured light, and the performance of the range-finding system was evaluated through experiments and analysis of the results.

  • PDF
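A minimal version of such an MLP calibration, trained by error back-propagation, might look like the following. The synthetic barrel-distortion model, network size, and learning rate are illustrative assumptions, not the paper's values: the network learns to map distorted image coordinates back to the undistorted plane.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic data: distorted image coords -> undistorted plane coords,
# mimicking radial (barrel) distortion of a wide-angle lens
xy = rng.uniform(-1, 1, size=(200, 2))
r2 = (xy ** 2).sum(axis=1, keepdims=True)
distorted = xy * (1.0 + 0.3 * r2)     # assumed forward distortion model
X, Y = distorted, xy                  # the network learns the inverse

# one-hidden-layer MLP trained by error back-propagation
H, lr = 16, 0.01
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 2)); b2 = np.zeros(2)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, out0 = forward(X)
loss0 = ((out0 - Y) ** 2).mean()
for _ in range(3000):
    h, out = forward(X)
    g = 2 * (out - Y) / len(X)        # gradient of squared error
    gW2, gb2 = h.T @ g, g.sum(0)
    gh = (g @ W2.T) * (1 - h ** 2)    # back-propagate through tanh
    gW1, gb1 = X.T @ gh, gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
_, out1 = forward(X)
loss1 = ((out1 - Y) ** 2).mean()
```

The same scheme extends to the paper's setting by replacing the synthetic pairs with measured chart points, so the network absorbs the lens distortion without an explicit distortion model.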

Bayesian Sensor Fusion of Monocular Vision and Laser Structured Light Sensor for Robust Localization of a Mobile Robot (이동 로봇의 강인 위치 추정을 위한 단안 비젼 센서와 레이저 구조광 센서의 베이시안 센서융합)

  • Kim, Min-Young;Ahn, Sang-Tae;Cho, Hyung-Suck
    • Journal of Institute of Control, Robotics and Systems / v.16 no.4 / pp.381-390 / 2010
  • This paper describes a procedure for map-based localization of mobile robots using a sensor-fusion technique in structured environments. Combining various sensors with different characteristics and limited sensing capability is advantageous, as the sensors complement and cooperate with each other to obtain better information about the environment. In this paper, for robust self-localization of a mobile robot equipped with a monocular camera and a laser structured-light sensor, environmental information acquired from the two sensors is combined and fused by a Bayesian sensor-fusion technique based on a probabilistic reliability function for each sensor, predefined through experiments. For self-localization using the monocular vision, the robot uses image features consisting of vertical edge lines from the input camera images as natural landmark points. With the laser structured-light sensor, it uses geometric features composed of corners and planes as natural landmark shapes, extracted from range data at a constant height above the navigation floor. Although each feature group alone is sometimes sufficient to localize the robot, all features from the two sensors are used and fused simultaneously for reliable localization under various environmental conditions. To verify the advantage of multi-sensor fusion, a series of experiments is performed, and the experimental results are discussed in detail.
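The fusion step described above can be sketched as a discrete Bayesian update over candidate poses, with each sensor's predefined reliability entering as an exponent that down-weights a less trustworthy sensor. This is a simplified illustration of Bayesian sensor fusion, not the paper's exact reliability function:

```python
import numpy as np

def bayes_fuse(prior, lik_vision, lik_laser, rel_vision=1.0, rel_laser=1.0):
    """Fuse two sensors' likelihoods over a grid of candidate poses.

    prior, lik_vision, lik_laser: arrays over the same pose candidates.
    rel_* are reliability exponents: rel < 1 softens a sensor's vote,
    rel = 0 ignores it entirely.
    """
    post = prior * lik_vision ** rel_vision * lik_laser ** rel_laser
    return post / post.sum()       # normalize to a proper distribution
```

With both sensors trusted, a strong laser vote can override the vision sensor; setting the laser reliability to zero falls back to vision-only localization, which mirrors how the reliability functions arbitrate between the two feature groups.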