• Title/Summary/Keyword: stereo sensor

Design of ToF-Stereo Fusion Sensor System for 3D Spatial Scanning (3차원 공간 스캔을 위한 ToF-Stereo 융합 센서 시스템 설계)

  • Yun Ju Lee;Sun Kook Yoo
    • Smart Media Journal / v.12 no.9 / pp.134-141 / 2023
  • In this paper, we propose a ToF-Stereo fusion sensor system for 3D spatial scanning that increases the recognition rate of 3D objects, guarantees object-detection quality, and is robust to the environment. The system fuses the sensing values of a ToF sensor and a stereo RGB sensor, so that even if one sensor fails, the other can continue to detect objects. Because the quality of the ToF sensor and the stereo RGB sensor varies with sensing distance, sensing resolution, light reflectivity, and illuminance, a module is included that adjusts each sensor's operation based on a reliability estimate. The system combines the two sensing values, estimates their reliability, and adjusts the sensors accordingly before fusing the measurements, thereby improving the quality of the 3D spatial scan.
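
The abstract describes the fusion principle without implementation detail; the following is a minimal sketch of reliability-weighted fusion of two depth maps, where the per-pixel reliability maps and the weighting scheme are illustrative assumptions rather than the authors' method.

```python
import numpy as np

def fuse_depth(tof_depth, stereo_depth, tof_rel, stereo_rel, eps=1e-6):
    """Fuse two depth maps with per-pixel reliability weights.

    tof_rel / stereo_rel are reliability maps in [0, 1]; where one sensor
    fails (reliability ~ 0) the other sensor's value dominates, mirroring
    the claim that detection continues when one sensor drops out.
    """
    w_sum = tof_rel + stereo_rel + eps
    return (tof_rel * tof_depth + stereo_rel * stereo_depth) / w_sum

# Toy example: 4x4 depth maps in metres, stereo unreliable in one corner.
tof = np.full((4, 4), 2.0)
stereo = np.full((4, 4), 2.2)
tof_rel = np.full((4, 4), 0.8)
stereo_rel = np.full((4, 4), 0.6)
stereo_rel[:2, :2] = 0.0          # e.g. textureless region: stereo drops out
print(fuse_depth(tof, stereo, tof_rel, stereo_rel))
```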

Comparison of Single-Sensor Stereo Model and Dual-Sensor Stereo Model with High-Resolution Satellite Imagery (고해상도 위성영상에서의 동종센서 스테레오 모델과 이종센서 스테레오 모델의 비교)

  • Jeong, Jaehoon
    • Korean Journal of Remote Sensing / v.31 no.5 / pp.421-432 / 2015
  • There are significant differences in geometric properties and stereo model accuracy between single-sensor stereo, which uses two images acquired by the stereo acquisition mechanism of a single sensor, and dual-sensor stereo, which combines two images taken by different sensors. This paper compares the two types of stereo pairs thoroughly. For the experiments, two single-sensor stereo pairs and four dual-sensor stereo pairs were constructed from SPOT-5 and KOMPSAT-2 stereo images covering the same area. While the two single-sensor stereos had stable geometry, the dual-sensor stereos produced two stable and two unstable geometries; the unstable geometry in particular degraded the stereo model accuracy of the dual-sensor pairs. The two types of stereo pairs were also compared under stable geometry. Overall, single-sensor stereos performed better than dual-sensor stereos for vertical mapping, whereas dual-sensor stereos were more accurate for horizontal mapping. The paper relates the differences between the two types of stereo pairs to their geometric properties and positioning accuracies, suggesting important considerations for handling satellite stereo images, particularly dual-satellite stereo pairs.
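
The abstract ties accuracy to pair geometry but does not give a metric; one common diagnostic for stereo pair geometry, not taken from the paper, is the convergence angle between the two look directions, sketched here with made-up look vectors.

```python
import numpy as np

def convergence_angle_deg(look_vec_a, look_vec_b):
    """Angle between two sensor look vectors in degrees.

    A very small or very large convergence angle is one common indicator of
    the kind of weak pair geometry associated with unstable stereo models.
    """
    a = look_vec_a / np.linalg.norm(look_vec_a)
    b = look_vec_b / np.linalg.norm(look_vec_b)
    return np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))

# Illustrative look directions only (not values from the paper).
print(convergence_angle_deg(np.array([0.2, 0.0, -1.0]),
                            np.array([-0.3, 0.1, -1.0])))
```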

Refinements of Multi-sensor based 3D Reconstruction using a Multi-sensor Fusion Disparity Map (다중센서 융합 상이 지도를 통한 다중센서 기반 3차원 복원 결과 개선)

  • Kim, Si-Jong;An, Kwang-Ho;Sung, Chang-Hun;Chung, Myung-Jin
    • The Journal of Korea Robotics Society / v.4 no.4 / pp.298-304 / 2009
  • This paper describes an algorithm that improves a 3D reconstruction result using a multi-sensor fusion disparity map. LRF (Laser Range Finder) 3D points can be projected onto image pixel coordinates using the camera-LRF extrinsic calibration matrices (Φ, Δ) and the camera calibration matrix (K). An LRF disparity map is generated by interpolating the projected LRF points. In the stereo reconstruction, invalid points caused by repeated patterns and textureless regions are compensated using the LRF disparity map; the disparity map resulting from this compensation is the multi-sensor fusion disparity map, which is then used to refine the multi-sensor 3D reconstruction based on stereo vision and the LRF. The refinement algorithm is presented in four parts: virtual LRF stereo image generation, LRF disparity map generation, multi-sensor fusion disparity map generation, and the 3D reconstruction process. It has been tested with synchronized stereo image pairs and LRF 3D scan data.
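
As a rough illustration of the projection and disparity-conversion steps mentioned above (the calibration values are invented, and a generic (R, t) stands in for the paper's (Φ, Δ)):

```python
import numpy as np

def project_lrf_points(points_lrf, R, t, K):
    """Project LRF 3D points into pixel coordinates with extrinsics (R, t)
    and intrinsics K (illustrative stand-ins for the paper's calibration)."""
    pts_cam = R @ points_lrf.T + t.reshape(3, 1)   # LRF frame -> camera frame
    uvw = K @ pts_cam                              # camera frame -> image plane
    uv = uvw[:2] / uvw[2]                          # perspective division
    return uv.T, pts_cam[2]                        # pixel coords, depths

def depth_to_disparity(depth, focal_px, baseline_m):
    """Pinhole relation d = f * B / Z, used to make projected LRF depths
    comparable with the stereo disparity map."""
    return focal_px * baseline_m / depth

# Illustrative numbers only (not from the paper).
K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.array([0.0, -0.1, 0.0])
pts = np.array([[0.5, 0.2, 4.0], [-0.3, 0.1, 6.0]])
uv, z = project_lrf_points(pts, R, t, K)
print(uv)
print(depth_to_disparity(z, focal_px=700.0, baseline_m=0.12))
```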


Measurement of GMAW Bead Geometry Using Biprism Stereo Vision Sensor (바이프리즘 스테레오 시각 센서를 이용한 GMA 용접 비드의 3차원 형상 측정)

  • 이지혜;이두현;유중돈
    • Journal of Welding and Joining / v.19 no.2 / pp.200-207 / 2001
  • The three-dimensional bead profile in GMAW was measured using a biprism stereo vision sensor consisting of an optical filter, a biprism, and a CCD camera. Since a single CCD camera is used, this system has several advantages over a conventional two-camera stereo vision system, such as finding corresponding points along the same horizontal scanline. In this work, the biprism stereo vision sensor was designed for GMAW, and a linear calibration method was proposed to determine the prism and camera parameters. Image processing techniques were employed to find corresponding points along the pool boundary: the iso-intensity contour corresponding to the pool boundary was found at the pixel level, and a filter-based matching algorithm was used to refine the corresponding points to subpixel accuracy. Predicted bead dimensions were in broad agreement with the measured results under spray-mode and humping-bead conditions.
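
A minimal sketch of the virtual-stereo triangulation implied by the biprism setup; the focal length, virtual baseline, and pixel positions are illustrative assumptions, and the paper's linear calibration is not reproduced.

```python
def biprism_depth(x_left_px, x_right_px, focal_px, virtual_baseline_m):
    """Depth from the horizontal offset between matched points in the left and
    right half-images formed by the biprism, treated as a virtual stereo pair.

    Because both virtual views share one CCD, matches lie on the same
    scanline, so only horizontal pixel positions are needed.
    """
    disparity_px = x_left_px - x_right_px
    return focal_px * virtual_baseline_m / disparity_px

# Illustrative values only; the real prism/camera parameters would come from
# the paper's linear calibration.
print(biprism_depth(x_left_px=452.0, x_right_px=332.0,
                    focal_px=1200.0, virtual_baseline_m=0.03))
```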


An Obstacle Detection and Avoidance Method for Mobile Robot Using a Stereo Camera Combined with a Laser Slit

  • Kim, Chul-Ho;Lee, Tai-Gun;Park, Sung-Kee;Kim, Jai-Hie
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2003.10a / pp.871-875 / 2003
  • Detecting and avoiding obstacles is one of the important tasks in mobile robot navigation. In a real environment, when a mobile robot encounters dynamic obstacles, it must detect and avoid them simultaneously to keep its body safe. In previous systems, the vision sensor has been used as either a passive sensor or an active sensor. This paper proposes a new obstacle detection algorithm that uses a stereo camera as both a passive and an active sensor. Our system estimates the distances to obstacles by both passive correspondence and active correspondence using a laser slit. The system operates in three steps. First, a far-off obstacle is detected from the disparity obtained by stereo correspondence. Next, a close obstacle is detected from the laser slit beam projected into the same stereo image. Finally, we implement an obstacle avoidance algorithm, adopting a modified Dynamic Window Approach (DWA), using the acquired obstacle distances.
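
A compact sketch of the passive/active distance selection described above; the pinhole stereo relation, the near-range threshold, and all numbers are assumptions.

```python
def obstacle_distance(disparity_px, slit_range_m, focal_px, baseline_m,
                      near_limit_m=1.0):
    """Pick a distance source in the spirit of the abstract: stereo disparity
    for far obstacles, the laser-slit (active) range when the obstacle is
    close. Model and thresholds are illustrative assumptions."""
    stereo_range = (focal_px * baseline_m / disparity_px
                    if disparity_px > 0 else float("inf"))
    if slit_range_m is not None and slit_range_m < near_limit_m:
        return slit_range_m    # close obstacle: trust the active measurement
    return stereo_range        # far obstacle: trust passive correspondence

print(obstacle_distance(disparity_px=4.0, slit_range_m=0.6,
                        focal_px=600.0, baseline_m=0.1))
```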


Fast Stereo Image Processing Method for Obstacle Detection of AGV System (AGV 시스템의 장애물 검출을 위한 고속 스테레오 영상처리 기법)

  • 전성재;조연상;박흥식
    • Proceedings of the Korean Society of Precision Engineering Conference / 2004.10a / pp.454-457 / 2004
  • An AGV for FMS must be able to detect obstacles. Many studies have addressed this, and recently ultrasonic sensors have been used for the purpose. However, a new method is needed because ultrasonic sensors suffer from factory noise, directional error, and poor estimation of obstacle size. We therefore study a fast stereo vision system that can provide richer obstacle information for an intelligent AGV system. For this, a simulated AGV system was built with two front-mounted CCD cameras to capture stereo images, and a thresholding step based on color information (intensity and chromaticity) and a structural stereo matching method were implemented.
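
A small sketch of the color-based thresholding step that precedes matching; the threshold values and the chromaticity measure are assumptions, not the authors' choices.

```python
import numpy as np

def obstacle_mask(rgb, intensity_min=60, chroma_min=0.08):
    """Segment candidate obstacle pixels by intensity and chromaticity before
    matching, so stereo correspondence only runs on a small pixel subset."""
    rgb = rgb.astype(np.float32)
    intensity = rgb.mean(axis=2)
    # Chromaticity here: how far the colour is from pure grey, normalised.
    chroma = (rgb.max(axis=2) - rgb.min(axis=2)) / (rgb.sum(axis=2) + 1e-6)
    return (intensity > intensity_min) & (chroma > chroma_min)

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1:3, 1:3] = (180, 60, 40)     # a coloured "obstacle" patch on a dark floor
print(obstacle_mask(img))
```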


Development of a Stereo Vision Sensor-based Volume Measurement and Cutting Location Estimation Algorithm for Portion Cutting (포션커팅을 위한 스테레오 비전 센서 기반 부피 측정 및 절단 위치 추정 알고리즘 개발)

  • Ho Jin Kim;Seung Hyun Jeong
    • IEMEK Journal of Embedded Systems and Applications / v.19 no.5 / pp.219-225 / 2024
  • In this study, an algorithm was developed to measure the volume of meat products passing along the conveyor line of a portion cutter using a stereo vision sensor and to calculate the cutting positions needed to cut them into equal-weight portions. Previously, three or more laser profile sensors were used for this purpose; in this study, the volume was measured using four stereo vision sensors, and the accuracy of the developed algorithm was verified to confirm the applicability of the technique. The technique consists of stereo correction, scanning and outlier removal, and cutting-position calculation procedures. A comparison between the volume measured with the developed algorithm and measurements from an accurate 3D scanner confirmed an accuracy of 91%. Additionally, for a 50 g target weight, where the cutting-position calculation is most demanding, the cutting positions were computed in about 2.98 seconds, further confirming the applicability of the developed technique.
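
A minimal sketch of the cutting-position idea: accumulate slice weights along the scan and cut whenever the target portion weight is reached. The density, slice step, and cross-section values are illustrative assumptions.

```python
import numpy as np

def cut_positions(cross_section_areas_cm2, slice_step_cm, density_g_cm3,
                  target_weight_g):
    """Walk along the scanned product, accumulating slice weights
    (area x step x density), and emit a cut position every time the
    accumulated weight reaches the target portion weight."""
    cuts, acc = [], 0.0
    for i, area in enumerate(cross_section_areas_cm2):
        acc += area * slice_step_cm * density_g_cm3
        if acc >= target_weight_g:
            cuts.append((i + 1) * slice_step_cm)   # cut after this slice (cm)
            acc = 0.0
    return cuts

# A tapered piece of meat scanned in 0.5 cm slices (illustrative data).
areas = np.linspace(30.0, 20.0, 40)                # cm^2 per slice
print(cut_positions(areas, slice_step_cm=0.5, density_g_cm3=1.05,
                    target_weight_g=50.0))
```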

RPC-based epipolar image resampling of Kompsat-2 across-track stereos (RPC를 기반으로 한 아리랑 2호 에피폴라 영상제작)

  • Oh, Jae-Hong;Lee, Hyo-Seong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.29 no.2 / pp.157-164 / 2011
  • As high-resolution satellite images have enabled large-scale topographic mapping and monitoring on a global scale, with short revisit times, agile sensor orientation, and large swath widths, many countries are making efforts to secure satellite image information. In Korea, KOMPSAT-2 (KOrea Multi-Purpose SATellite-2) was launched on July 28, 2006 with high specifications. These satellites have stereo image acquisition capability for 3D mapping and monitoring. To handle stereo images efficiently, for example for stereo display and monitoring, an accurate epipolar image generation process is a prerequisite. However, the process has been highly limited by the complexity of the epipolar geometry of pushbroom sensors. Recently, a piecewise approach to generating epipolar images using RPCs was developed and tested on in-track IKONOS stereo images. In this paper, the piecewise approach was tested on KOMPSAT-2 across-track stereo images to see how accurately KOMPSAT-2 epipolar images can be generated for 3D geospatial applications. In the experiment, two across-track stereo sets from three KOMPSAT-2 images of different dates were tested using RPCs as the sensor model. The results showed that y-parallax at the one-pixel level was achieved for manually measured tie points.
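
A schematic sketch of the piecewise epipolar idea (project an image point to the ground at two trial heights, then into the other image); the two callables are toy stand-ins, not real RPC evaluations.

```python
def epipolar_segment(pt_left, img_to_ground_left, ground_to_img_right,
                     h_low_m=0.0, h_high_m=500.0):
    """Piecewise epipolar idea: project one left-image point to the ground at
    two trial heights with the left sensor's inverse model, then map both
    ground points into the right image with the right sensor's forward model;
    the resulting segment locally approximates the epipolar curve."""
    g_low = img_to_ground_left(pt_left, h_low_m)
    g_high = img_to_ground_left(pt_left, h_high_m)
    return ground_to_img_right(*g_low), ground_to_img_right(*g_high)

# Toy stand-ins (NOT real RPCs) just to make the sketch executable.
img_to_ground = lambda pt, h: (127.0 + pt[0] * 1e-5, 37.0 + pt[1] * 1e-5, h)
ground_to_img = lambda lon, lat, h: ((lon - 127.0) * 1e5 + 0.02 * h,
                                     (lat - 37.0) * 1e5)
print(epipolar_segment((100.0, 200.0), img_to_ground, ground_to_img))
```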

A Sensor Module Overcoming Thick Smoke through Investigation of Fire Characteristics (화재 특성 고찰을 통한 농연 극복 센서 모듈)

  • Cho, Min-Young;Shin, Dong-In;Jun, Sewoong
    • The Journal of Korea Robotics Society / v.13 no.4 / pp.237-247 / 2018
  • In this paper, we describe a sensor module that monitors the fire environment, based on an analysis of fire characteristics. We analyzed the smoke characteristics of indoor fires. Six different environments were defined according to the type of smoke and flame, and the sensors usable in each environment were combined. Based on this analysis, the sensors were selected from the perspective of a firefighter. The sensor module consists of an RGB camera, an infrared camera, and a radar. It is designed with minimal weight so that it fits on the robot, and the sensor enclosure is designed to protect against the radiant heat of the fire scene. We propose a single-camera mode, a thermal stereo mode, a data fusion mode, and a radar mode that can be used depending on the fire scene. Thermal stereo was effectively refined using the image segmentation algorithm SLIC (Simple Linear Iterative Clustering). To reproduce fire scenes, three fire test environments were built and each sensor was verified.
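
One plausible reading of the SLIC-based refinement, sketched with scikit-image: snap each superpixel to the median disparity of its member pixels. The segment count and data are assumptions, not the paper's settings.

```python
import numpy as np
from skimage.segmentation import slic

def refine_disparity_with_slic(image, disparity, n_segments=200):
    """Refine a noisy (thermal) stereo disparity map by replacing each SLIC
    superpixel with the median disparity of its member pixels."""
    labels = slic(image, n_segments=n_segments, compactness=10)
    refined = disparity.copy()
    for lbl in np.unique(labels):
        mask = labels == lbl
        refined[mask] = np.median(disparity[mask])
    return refined

# Toy 32x32 example: random image, noisy disparity around two levels.
rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))
disp = np.where(np.arange(32)[None, :] < 16, 10.0, 20.0) \
       + rng.normal(0, 1, (32, 32))
print(refine_disparity_with_slic(img, disp).round(1))
```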

Fusion System of Time-of-Flight Sensor and Stereo Cameras Considering Single Photon Avalanche Diode and Convolutional Neural Network (SPAD과 CNN의 특성을 반영한 ToF 센서와 스테레오 카메라 융합 시스템)

  • Kim, Dong Yeop;Lee, Jae Min;Jun, Sewoong
    • The Journal of Korea Robotics Society / v.13 no.4 / pp.230-236 / 2018
  • 3D depth perception has played an important role in robotics, and many sensing methods have been proposed for it. As a photodetector for 3D sensing, the single-photon avalanche diode (SPAD) has been suggested because of its sensitivity and accuracy. We have investigated applying a SPAD chip in our fusion system of a time-of-flight (ToF) sensor and a stereo camera. Our goal is to upsample the SPAD resolution using an RGB stereo camera. Currently, we have a 64 x 32 resolution SPAD ToF sensor, whereas higher-resolution depth sensors such as Kinect V2 and Cube-Eye exist. This may be a weak point of our system, but we exploit the resolution gap instead. A convolutional neural network (CNN) is designed to upsample our low-resolution depth map, using data from the higher-resolution depth sensor as label data. The CNN-upsampled depth and the stereo camera depth are then fused using the semi-global matching (SGM) algorithm. We propose a simplified fusion method designed for the embedded system.
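
The abstract outlines the pipeline (CNN upsampling of the SPAD depth map, then SGM-based fusion with stereo) without architectural details; a minimal, hypothetical upsampling network in PyTorch, with assumed layer sizes and scale factor, might look like this.

```python
import torch
import torch.nn as nn

class DepthUpsampler(nn.Module):
    """Minimal stand-in for an upsampling CNN: learn to turn a 64 x 32 SPAD
    depth map into a higher-resolution map, supervised by a high-resolution
    depth sensor (layer sizes and scale factor are assumptions)."""
    def __init__(self, scale=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=scale, mode="bilinear",
                        align_corners=False),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = DepthUpsampler(scale=4)
spad_depth = torch.rand(1, 1, 32, 64)   # batch, channel, 32 x 64 SPAD map
high_res = model(spad_depth)            # -> 1 x 1 x 128 x 256
print(high_res.shape)
# The upsampled map would then be fused with the stereo depth, e.g. inside a
# semi-global-matching style cost aggregation, as the abstract describes.
```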