• Title/Summary/Keyword: Stereo vision sensor


3D Environment Perception using Stereo Infrared Light Sources and a Camera (스테레오 적외선 조명 및 단일카메라를 이용한 3차원 환경인지)

  • Lee, Soo-Yong;Song, Jae-Bok
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.5
    • /
    • pp.519-524
    • /
    • 2009
  • This paper describes a new sensor system for 3D environment perception using stereo structured infrared light sources and a camera. Environment and obstacle sensing is the key issue for mobile robot localization and navigation. Laser scanners and infrared scanners cover $180^{\circ}$ and are accurate, but they are too expensive. Because those sensors use rotating light beams, their range measurements are constrained to a plane. 3D measurements are much more useful for obstacle detection, map building, and localization. Stereo vision is a very common way of getting depth information about a 3D environment, but it requires that correspondences be clearly identified, and it also depends heavily on the lighting conditions of the environment. Instead of a stereo camera, a monocular camera and two projected infrared light sources are used to reduce the effects of ambient light while acquiring a 3D depth map. Modeling of the projected light pattern enables precise estimation of the range. Two successive captures of the image, with left and right infrared light projection, provide several benefits, including a wider depth-measurement area, higher spatial resolution, and visibility perception.
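
In the simplest case, the range model behind such a projected-light system reduces to the same triangulation as stereo: a spot projected from a source offset by a baseline appears shifted in the image in proportion to its depth. A minimal sketch, with focal length, baseline, and offset as illustrative values rather than the paper's calibration:

```python
import numpy as np

def depth_from_offset(focal_px, baseline_m, offset_px):
    """Triangulated depth of a projected light spot.

    focal_px   : camera focal length in pixels (illustrative value)
    baseline_m : camera-to-source baseline in metres (illustrative value)
    offset_px  : observed pixel offset of the spot in the image
    """
    return focal_px * baseline_m / np.asarray(offset_px, dtype=float)

# A spot shifted by 40 px with f = 800 px and b = 0.1 m lies 2 m away.
z = depth_from_offset(800.0, 0.1, 40.0)
```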

Performance Analysis of Vision-based Positioning Assistance Algorithm (비전 기반 측위 보조 알고리즘의 성능 분석)

  • Park, Jong Soo;Lee, Yong;Kwon, Jay Hyoun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.37 no.3
    • /
    • pp.101-108
    • /
    • 2019
  • Due to recent improvements in computer processing speed and image processing technology, research is being actively carried out to combine information from cameras with existing GNSS (Global Navigation Satellite System) and dead reckoning. In this study, a vision-based positioning assistance algorithm was developed to estimate the distance to an object from stereo images. In addition, a GNSS/on-board vehicle sensor/vision-based positioning algorithm was developed by combining the vision-based positioning algorithm with an existing positioning algorithm. For the performance analysis, the velocity calculated from an actual driving test was used to correct the navigation solution, and simulation tests were performed to analyze the effect of velocity precision. The analysis confirmed that position accuracy improves by about 4% when vision information is added, compared to the existing GNSS/on-board-sensor-based positioning algorithm.
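
The two pieces described above, distance from a stereo pair and a velocity estimate usable as a navigation correction, can be sketched as follows; the focal length, baseline, and matched pixel columns are made-up values, not the study's data:

```python
import numpy as np

def stereo_distance(f_px, baseline_m, xl_px, xr_px):
    # Depth along the optical axis from a matched pixel pair:
    # Z = f * b / disparity
    return f_px * baseline_m / (xl_px - xr_px)

def velocity_from_frames(distances_m, dt_s):
    # Finite-difference range-rate between frames, usable as a
    # velocity aid for the navigation solution
    return np.diff(distances_m) / dt_s

# Two frames 0.1 s apart (all numbers illustrative)
d = [stereo_distance(700.0, 0.5, xl, xr)
     for xl, xr in [(120.0, 85.0), (121.0, 87.5)]]
v = velocity_from_frames(d, 0.1)
```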

A Study on the Sensor Calibration of Motion Capture System using PSD Sensor to Improve the Accuracy (PSD 센서를 이용한 모션캡쳐센서의 정밀도 향상을 위한 보정에 관한 연구)

  • Choi, Hun-Il;Jo, Yong-Jun;Ryu, Young-Kee
    • Proceedings of the KIEE Conference
    • /
    • 2004.11c
    • /
    • pp.583-585
    • /
    • 2004
  • In this paper, we deal with a calibration method for a low-cost motion capture system using a PSD (position sensitive detector) optical sensor. To measure the incident direction of the light from an LED marker, a PSD is used: the ratio of the output currents at the electrodes of the PSD is proportional to the incident position of the light focused by the lens. To detect the direction of the light, the current output is converted into a digital voltage value by op-amp circuits, a peak detector, and an A/D converter, and the incident position is measured from this digital value. Unfortunately, the nonlinearity of the circuit leads to poor position accuracy. To overcome this problem, we compensated for the nonlinearity using a least-squares fitting method. After compensating for the circuit nonlinearity, the system showed improved position accuracy.
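
The least-squares compensation step can be sketched with a polynomial fit: a correction polynomial is fitted from the nonlinear circuit readings back to known positions, then applied to new readings. The cubic distortion below is a hypothetical stand-in for the real circuit nonlinearity:

```python
import numpy as np

# Hypothetical calibration data: known marker positions vs. the values
# read back through the PSD circuit, with a made-up cubic nonlinearity
# standing in for the real circuit behaviour.
true_pos = np.linspace(-1.0, 1.0, 21)
measured = true_pos + 0.05 * true_pos**3

# Least-squares fit of a correction polynomial mapping measured -> true.
coeffs = np.polyfit(measured, true_pos, deg=5)

def compensate(raw):
    # Apply the fitted correction to a raw circuit reading
    return np.polyval(coeffs, raw)

residual = np.max(np.abs(compensate(measured) - true_pos))
```

After the fit, the worst-case residual is far smaller than the uncorrected nonlinearity, mirroring the accuracy improvement the paper reports.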


Overview of sensor fusion techniques for vehicle positioning (차량정밀측위를 위한 복합측위 기술 동향)

  • Park, Jin-Won;Choi, Kae-Won
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.11 no.2
    • /
    • pp.139-144
    • /
    • 2016
  • This paper provides an overview of recent trends in sensor fusion technologies for vehicle positioning. GNSS by itself cannot satisfy the precision and reliability required for autonomous driving. We survey sensor fusion techniques that combine the outputs of GNSS with inertial navigation sensors such as an odometer and a gyroscope. Moreover, we overview landmark-based positioning that matches landmarks detected by a lidar or stereo vision against high-precision digital maps.

Hand/Eye calibration of Robot arms with a 3D visual sensing system (3차원 시각 센서를 탑재한로봇의 Hand/Eye 캘리브레이션)

  • 김민영;노영준;조형석;김재훈
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2000.10a
    • /
    • pp.76-76
    • /
    • 2000
  • The calibration of a robot system with a visual sensor consists of robot, hand-to-eye, and sensor calibration. This paper describes a new technique for computing the 3D position and orientation of a 3D sensor system relative to the end-effector of a robot manipulator in an eye-on-hand configuration. When the 3D coordinates of the feature points at each robot movement and the relative robot motion between two movements are known, a homogeneous equation of the form AX = XB is derived. To solve for X uniquely, it is necessary to make two robot arm movements and form a system of two equations of the form A$_1$X = XB$_1$ and A$_2$X = XB$_2$. A closed-form solution to this system of equations is developed, and the constraints for solution existence are described in detail. Test results from a series of simulations show that this technique is simple, efficient, and accurate for hand/eye calibration.
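
A closed-form solve of the two-motion system can be sketched with the standard axis-based construction (not necessarily the authors' exact derivation): the rotation axes satisfy axis(A) = R·axis(B), which fixes the rotation of X, and the translation then follows from stacking (R$_A$ − I)t = Rt$_B$ − t$_A$ for both motions:

```python
import numpy as np

def rot_axis_angle(axis, angle):
    # Rodrigues' formula: rotation matrix from a unit axis and an angle
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * K @ K

def rot_axis(R):
    # Rotation axis from the skew-symmetric part of R (angle not 0 or pi)
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return v / np.linalg.norm(v)

def solve_ax_xb(As, Bs):
    """Recover X = (R, t) from two motion pairs with A_i X = X B_i.

    As, Bs: lists of two (rotation matrix, translation vector) tuples.
    The two rotation axes must not be parallel.
    """
    a = [rot_axis(R) for R, _ in As]
    b = [rot_axis(R) for R, _ in Bs]
    # Rotation: the axes satisfy a_i = R b_i; adding the cross product as
    # a third pair makes the orthogonal Procrustes (SVD) solve well posed.
    A3 = np.column_stack([a[0], a[1], np.cross(a[0], a[1])])
    B3 = np.column_stack([b[0], b[1], np.cross(b[0], b[1])])
    U, _, Vt = np.linalg.svd(A3 @ B3.T)
    R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    # Translation: stack (R_Ai - I) t = R t_Bi - t_Ai, solve least squares.
    M = np.vstack([As[0][0] - np.eye(3), As[1][0] - np.eye(3)])
    v = np.concatenate([R @ Bs[0][1] - As[0][1], R @ Bs[1][1] - As[1][1]])
    t, *_ = np.linalg.lstsq(M, v, rcond=None)
    return R, t
```

The two-motion requirement stated in the abstract appears here directly: with parallel axes the cross product vanishes and the translation system loses rank.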


Performance Comparison of Depth Map Based Landing Methods for a Quadrotor in Unknown Environment (미지 환경에서의 깊이지도를 이용한 쿼드로터 착륙방식 성능 비교)

  • Choi, Jong-Hyuck;Park, Jongho;Lim, Jaesung
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.50 no.9
    • /
    • pp.639-646
    • /
    • 2022
  • Landing site searching algorithms are developed for a quadrotor using a depth map in an unknown environment. The guidance and control system of the unmanned aerial vehicle (UAV) consists of a trajectory planner, a position controller, and an attitude controller. The landing site is selected based on the information in the depth map, which is acquired by a stereo vision sensor attached to a gimbal system pointing downwards. Flatness information is obtained from the maximum depth difference within a predefined depth map region, and the distance from the UAV is also considered. This study proposes three landing methods and compares their performance using various indices such as UAV travel distance, map accuracy, and obstacle response time.
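
The flatness measure described above, the maximum depth difference within a predefined region, can be sketched as a window scan over the depth map; the window size and synthetic terrain below are illustrative:

```python
import numpy as np

def flattest_region(depth_map, win):
    """Scan win x win windows and return the top-left index of the
    window with the smallest max-min depth difference, i.e. the
    flattest candidate landing region, along with that difference."""
    h, w = depth_map.shape
    best, best_idx = np.inf, (0, 0)
    for r in range(h - win + 1):
        for c in range(w - win + 1):
            patch = depth_map[r:r + win, c:c + win]
            spread = patch.max() - patch.min()
            if spread < best:
                best, best_idx = spread, (r, c)
    return best_idx, best

# Synthetic 6x6 depth map: rough terrain with a perfectly flat 3x3 patch.
d = np.random.default_rng(0).uniform(4.0, 6.0, (6, 6))
d[1:4, 2:5] = 5.0
idx, spread = flattest_region(d, 3)
```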

Unmanned Vehicle System Configuration using All Terrain Vehicle

  • Moon, Hee-Chang;Park, Eun-Young;Kim, Jung-Ha
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2004.08a
    • /
    • pp.1550-1554
    • /
    • 2004
  • This paper deals with an unmanned vehicle system configuration using an all-terrain vehicle. Many research institutes and universities study and develop unmanned vehicle systems and control algorithms, and nowadays they try to apply unmanned vehicles to military use and to the exploration of space and the deep sea. Such unmanned vehicles can help us with tasks that are difficult to perform or approach. In our lab's previous research on unmanned vehicles, we used a 1/10-scale radio-controlled vehicle and composed the unmanned vehicle system using ultrasonic sensors, a CCD camera, and various sensors for the vehicle's motion control. We designed a lane detection algorithm using the vision system and an obstacle detection and avoidance algorithm using ultrasonic and infrared sensors. As the system grew, it became hard to fit everything on the 1/10-scale RC car, so we had to choose a new vehicle bigger than the 1/10-scale RC car but smaller than a real vehicle. An ATV (all-terrain vehicle) has a structure similar to a real vehicle but a smaller size. In this research, we build an unmanned vehicle using an ATV and explain the control theory of each component.


A High Speed Vision Algorithms for Axial Motion Sensor

  • Mousset, Stephane;Miche, Pierre;Bensrhair, Abdelaziz;Lee, Sang-Goog
    • Journal of Sensor Science and Technology
    • /
    • v.7 no.6
    • /
    • pp.394-400
    • /
    • 1998
  • In this paper, we present a robust and fast method that enables real-time computation of the axial motion component of different points of a scene from a stereo image sequence. The aim of our method is to establish axial motion maps by computing a sequence of disparity maps. We propose a solution in two steps. In the first step, we estimate the motion of an image point with low-level computation using a detection-estimation structure. In the second step, we use the neighbourhood information of the image point with morphological operations. The motion maps are established in constant computation time without spatio-temporal matching.
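
The axial motion component follows from the frame-to-frame change in triangulated depth, Z = f·b/d; a minimal sketch with assumed camera parameters (not the paper's setup):

```python
import numpy as np

def axial_velocity(f_px, baseline_m, d_prev_px, d_curr_px, dt_s):
    # Depth from disparity (Z = f*b/d) in two successive frames; the
    # axial motion component is the change in depth per unit time.
    z_prev = f_px * baseline_m / np.asarray(d_prev_px, float)
    z_curr = f_px * baseline_m / np.asarray(d_curr_px, float)
    return (z_curr - z_prev) / dt_s

# Disparity growing from 10 px to 12.5 px over 0.5 s: the point moved
# from 10 m to 8 m, i.e. it approaches at 4 m/s (negative axial velocity).
v = axial_velocity(500.0, 0.2, 10.0, 12.5, 0.5)
```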


A Study on Vehicle Ego-motion Estimation by Optimizing a Vehicle Platform (차량 플랫폼에 최적화한 자차량 에고 모션 추정에 관한 연구)

  • Song, Moon-Hyung;Shin, Dong-Ho
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.9
    • /
    • pp.818-826
    • /
    • 2015
  • This paper presents a novel methodology for estimating vehicle ego-motion, i.e. the tri-axis linear and angular velocities, using a stereo vision sensor and a 2G1Y sensor (longitudinal acceleration, lateral acceleration, and yaw rate). The estimated ego-motion information can be utilized to predict the future ego-path and to improve the accuracy of the 3D coordinates of obstacles by compensating for disturbances from vehicle movement, notably for collision avoidance systems. To incorporate vehicle dynamic characteristics into the ego-motion estimation, the state evolution model of the Kalman filter has been augmented with lateral vehicle dynamics. Vanishing point estimation has also been taken into account, because the optical flow radiates from a vanishing point that may vary due to vehicle pitch motion. Experimental results based on real-world data have demonstrated the accuracy of the proposed methodology.
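
The predict/update cycle that the paper augments with lateral vehicle dynamics can be sketched with a generic linear Kalman filter; the constant-velocity matrices below are a toy 1D model, not the authors' state evolution model:

```python
import numpy as np

# Toy 1D constant-velocity model: state [position, velocity].
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])              # position-only measurement
Q = 1e-4 * np.eye(2)                    # process noise covariance
R = np.array([[0.05]])                  # measurement noise covariance

def kf_step(x, P, z):
    # Predict the state and covariance forward one time step
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z via the Kalman gain
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Fed position-only measurements, the filter recovers the unmeasured velocity, which is the role the stereo/2G1Y fusion plays for the unobserved ego-motion components.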

Localization Algorithm for Lunar Rover using IMU Sensor and Vision System (IMU 센서와 비전 시스템을 활용한 달 탐사 로버의 위치추정 알고리즘)

  • Kang, Hosun;An, Jongwoo;Lim, Hyunsoo;Hwang, Seulwoo;Cheon, Yuyeong;Kim, Eunhan;Lee, Jangmyung
    • The Journal of Korea Robotics Society
    • /
    • v.14 no.1
    • /
    • pp.65-73
    • /
    • 2019
  • In this paper, we propose an algorithm that estimates the location of a lunar rover using an IMU and a vision system, instead of the dead-reckoning method using an IMU and encoders, which has difficulty estimating the exact distance travelled because of accumulated error and slip. First, since the magnetic field in the lunar environment is not uniform, unlike on Earth, only accelerometer and gyro data were used for localization. These data were applied to an extended Kalman filter to estimate the roll, pitch, and yaw Euler angles of the exploration rover. Also, the lunar module has a distinctive color that does not otherwise appear in the lunar environment, so the lunar module was reliably recognized by applying an HSV color filter to the stereo images taken by the rover. Then, the distance between the rover and the lunar module was estimated through a SIFT feature-point matching algorithm and geometry. Finally, the estimated Euler angles and distances were used to estimate the current position of the rover relative to the lunar module. The performance of the proposed algorithm was compared with that of the conventional algorithm to show its superiority.
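
The final step, locating the rover from the estimated attitude and the distance to the lunar module, reduces in the planar case to adding a polar offset to the module's position; the names and frame below are illustrative, not the paper's notation:

```python
import numpy as np

def rover_position(module_xy, distance_m, bearing_rad):
    """Planar rover position from the estimated distance to the lunar
    module and a bearing derived from the EKF attitude estimate.
    module_xy is the known (x, y) of the module in the map frame."""
    offset = distance_m * np.array([np.cos(bearing_rad),
                                    np.sin(bearing_rad)])
    return np.asarray(module_xy, float) + offset

# A rover 2 m from a module at the origin, along the x-axis, is at (2, 0).
p = rover_position([0.0, 0.0], 2.0, 0.0)
```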