• Title/Summary/Keyword: Stereo vision sensor


A Distance Measurement System Using a Laser Pointer and a Monocular Vision Sensor (레이저포인터와 단일카메라를 이용한 거리측정 시스템)

  • Jeon, Yeongsan;Park, Jungkeun;Kang, Taesam;Lee, Jeong-Oog
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.41 no.5
    • /
    • pp.422-428
    • /
    • 2013
  • Recently, many unmanned aerial vehicle (UAV) studies have focused on small UAVs, because they are cost effective and suitable for dangerous indoor environments where human entry is limited. Map building through distance measurement is a key technology for the autonomous flight of small UAVs. In many studies of unmanned systems, distance is measured using laser range finders or stereo vision sensors. Although a laser range finder provides accurate distance measurements, it has the disadvantage of high cost. Calculating distance with a stereo vision sensor is straightforward, but the sensor is large and heavy, which is unsuitable for small UAVs with limited payload. This paper suggests a low-cost distance measurement system using a laser pointer and a monocular vision sensor. A method for measuring distance with the suggested system is explained, and map-building experiments are conducted with these distance measurements. The experimental results are compared to the actual data, verifying the reliability of the suggested system.
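The laser-pointer scheme described above typically reduces to similar triangles: a laser mounted parallel to the camera axis at a known baseline appears in the image at a pixel offset inversely proportional to range. A minimal sketch of that relation (the function name and the example numbers are illustrative, not taken from the paper):

```python
def laser_distance(px_offset, focal_px, baseline_m):
    """Range to the surface hit by a laser pointer mounted parallel to
    the camera axis at a known baseline offset from the lens.

    px_offset : pixel distance of the laser dot from the image center
    focal_px  : camera focal length expressed in pixels
    baseline_m: camera-to-laser offset in meters
    """
    if px_offset <= 0:
        raise ValueError("laser dot must be offset from the optical center")
    # Similar triangles: px_offset / focal_px = baseline_m / distance
    return baseline_m * focal_px / px_offset

# e.g. a 5 cm baseline, 800 px focal length, dot 20 px from center
d = laser_distance(20, 800.0, 0.05)  # -> 2.0 m
```

The inverse relationship means resolution degrades quadratically with range, which is why such systems suit short-range indoor mapping.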

A Study on Lane Sensing System Using Stereo Vision Sensors (스테레오 비전센서를 이용한 차선감지 시스템 연구)

  • Huh, Kun-Soo;Park, Jae-Sik;Rhee, Kwang-Woon;Park, Jae-Hak
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.28 no.3
    • /
    • pp.230-237
    • /
    • 2004
  • Lane sensing techniques based on vision sensors are regarded as promising because they require little infrastructure on the highway beyond clear lane markers. However, they require more intelligent processing algorithms in vehicles to generate the previewed roadway from the vision images. In this paper, a lane sensing algorithm using vision sensors is developed to improve sensing robustness. A parallel stereo camera is utilized to regenerate the 3-dimensional road geometry. The lane geometry models are derived such that their parameters represent the road curvature, lateral offset, and heading angle, respectively. The parameters of the lane geometry models are estimated by a Kalman filter and utilized to reconstruct the lane geometry in global coordinates. The inverse perspective mapping from the image plane to global coordinates considers the roll and pitch motions of the vehicle so that the mapping error is minimized during acceleration, braking, or steering. The proposed sensing system has been built and implemented on a 1/10-scale model car.
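For a rectified parallel stereo pair like the one used above, recovering 3D road geometry rests on the standard disparity-to-depth relation Z = fB/d. A minimal sketch under the pinhole model (function name and values are illustrative):

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth of a matched lane point from a rectified parallel stereo
    pair: Z = f * B / d (pinhole model).

    disparity_px : horizontal pixel shift between left and right views
    focal_px     : focal length in pixels
    baseline_m   : distance between the two camera centers in meters
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# e.g. 12 cm baseline, 800 px focal length, 32 px disparity
z = stereo_depth(32.0, 800.0, 0.12)  # -> 3.0 m
```

Depth points recovered this way would feed the Kalman filter as measurements of the curvature, offset, and heading parameters.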

Development of Vision based Passenger Monitoring System for Passenger's Safety in Railway Station (철도 승강장 승객 안전을 위한 영상처리식 모니터링시스템 개발)

  • Oh, Seh-Chan;Park, Sung-Hyuk;Lee, Han-Min;Kim, Gil-Dong;Lee, Chang-Mu
    • Proceedings of the KSR Conference
    • /
    • 2008.11b
    • /
    • pp.1354-1359
    • /
    • 2008
  • In this paper, we propose a vision-based passenger monitoring system for passenger safety in railway stations. Since 2005, the Korea Railroad Research Institute (KRRI) has developed a vision-based monitoring system, funded by the Korean government, for passenger safety in railway stations. The proposed system uses various types of sensors, such as stereo cameras, thermal cameras, and infrared sensors, to detect dangerous situations in the platform area. In particular, the detection process exploits a stereo vision algorithm to improve detection accuracy. The paper describes the overall system configuration and the proposed detection algorithm, and then verifies the system performance with extensive experimental results in a real station environment.


A Study on the Sensor Calibration for Low Cost Motion Capture Sensor using PSD Sensor (PSD센서를 이용한 모션캡쳐 시스템의 센서보정에 관한 연구)

  • Kim, Yu-Geon;Choi, Hun-Il;Ryu, Young-Kee;Oh, Choon-Suk
    • Proceedings of the KIEE Conference
    • /
    • 2005.10b
    • /
    • pp.603-605
    • /
    • 2005
  • In this paper, we deal with a calibration method for a low-cost motion capture sensor using a PSD (Position Sensitive Detector). The PSD sensor is employed to measure the direction of incident light from moving markers attached to the body. To calibrate the PSD optical module, the conventional camera calibration algorithm introduced by Tsai is applied. The 3-dimensional positions of the markers are measured using stereo camera geometry. The experimental results show that the low-cost motion capture sensor can be used in a real-time system.


INS/Multi-Vision Integrated Navigation System Based on Landmark (다수의 비전 센서와 INS를 활용한 랜드마크 기반의 통합 항법시스템)

  • Kim, Jong-Myeong;Leeghim, Henzeh
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.45 no.8
    • /
    • pp.671-677
    • /
    • 2017
  • A new INS/vision integrated navigation system using multiple vision sensors is addressed in this paper. When the total number of landmarks measured by the vision sensor is smaller than the allowable number, the navigation filter can diverge. To prevent this problem, a multi-vision concept is applied to expand the field of view so that a reliable number of landmarks is always guaranteed. In this work, the cameras are installed at orientations of 0, 120, and -120 degrees with respect to the body frame to improve observability. Finally, the proposed technique is verified by numerical simulation.

3D Orientation and Position Tracking System of Surgical Instrument with Optical Tracker and Internal Vision Sensor (광추적기와 내부 비전센서를 이용한 수술도구의 3차원 자세 및 위치 추적 시스템)

  • Joe, Young Jin;Oh, Hyun Min;Kim, Min Young
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.22 no.8
    • /
    • pp.579-584
    • /
    • 2016
  • When surgical instruments are tracked in an image-guided surgical navigation system, a high-accuracy stereo vision system called an optical tracker is generally used. However, this optical tracker has the disadvantage that a line of sight between the tracker and the surgical instrument must be maintained. Therefore, to complement this disadvantage of optical tracking systems, an internal vision sensor is attached to the surgical instrument in this paper. By monitoring the target marker pattern attached to the patient with this vision sensor, the surgical instrument can be tracked even when the line of sight of the optical tracker is occluded. To verify the system's effectiveness, a series of basic experiments is carried out, followed by an integration experiment. The experimental results show that the rotational error is bounded by a maximum of $1.32^{\circ}$ with a mean of $0.35^{\circ}$, and the translational error by a maximum of 1.72 mm with a mean of 0.58 mm. Finally, it is confirmed that the proposed tool tracking method using an internal vision sensor is useful and effective for overcoming the occlusion problem of the optical tracker.

Vision-sensor-based Drivable Area Detection Technique for Environments with Changes in Road Elevation and Vegetation (도로의 높낮이 변화와 초목이 존재하는 환경에서의 비전 센서 기반)

  • Lee, Sangjae;Hyun, Jongkil;Kwon, Yeon Soo;Shim, Jae Hoon;Moon, Byungin
    • Journal of Sensor Science and Technology
    • /
    • v.28 no.2
    • /
    • pp.94-100
    • /
    • 2019
  • Drivable area detection is a major task in advanced driver assistance systems. For drivable area detection, several studies have proposed vision-sensor-based approaches. However, conventional drivable area detection methods that use vision sensors are not suitable for environments with changes in road elevation. In addition, if the boundary between the road and vegetation is not clear, judging a vegetation area as a drivable area becomes a problem. Therefore, this study proposes an accurate method of detecting drivable areas in environments in which road elevations change and vegetation exists. Experimental results show that when compared to the conventional method, the proposed method improves the average accuracy and recall of drivable area detection on the KITTI vision benchmark suite by 3.42%p and 8.37%p, respectively. In addition, when the proposed vegetation area removal method is applied, the average accuracy and recall are further improved by 6.43%p and 9.68%p, respectively.

Implementation of the SLAM System Using a Single Vision and Distance Sensors (단일 영상과 거리센서를 이용한 SLAM시스템 구현)

  • Yoo, Sung-Goo;Chong, Kil-To
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.45 no.6
    • /
    • pp.149-156
    • /
    • 2008
  • A SLAM (Simultaneous Localization and Mapping) system finds a global position and builds a map from sensing data while an unmanned robot navigates an unknown environment. Two kinds of systems have been developed: one uses distance measurement sensors such as ultrasonic and laser sensors; the other uses a stereo vision system. SLAM with distance measurement sensors has low computing time and low cost, but the precision of the system can suffer from measurement error or non-linearity of the sensor. In contrast, a stereo vision system can accurately measure 3D space, but it requires a high-end system for the complex calculations and is expensive. In this paper, we implement a SLAM system using a single camera image and PSD sensors. It detects obstacles with the front PSD sensor and then perceives the size and features of the obstacles by image processing. A probabilistic SLAM was implemented using the sensor and image data, and we verify the performance of the system by real experiments.
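Probabilistic mapping of the kind mentioned above is commonly implemented as a log-odds occupancy update per grid cell: each range reading adds evidence for or against occupancy. A minimal sketch of that standard update (the sensor confidence of 0.7 is an illustrative value, not from the paper):

```python
import math

# Illustrative sensor model: a reading is correct with probability 0.7
L_OCC = math.log(0.7 / 0.3)    # evidence added when the cell is hit
L_FREE = math.log(0.3 / 0.7)   # evidence added when the beam passes through

def update_cell(log_odds, hit):
    """Bayesian log-odds update of one occupancy-grid cell."""
    return log_odds + (L_OCC if hit else L_FREE)

def probability(log_odds):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))

l = 0.0                       # unknown cell, p = 0.5
l = update_cell(l, hit=True)
l = update_cell(l, hit=True)
p = probability(l)            # ~0.84 after two consistent hits
```

Working in log-odds keeps the update a simple addition and avoids numerical issues near probabilities of 0 and 1.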

Dimensional Quality Assessment for Assembly Part of Prefabricated Steel Structures Using a Stereo Vision Sensor (스테레오 비전 센서 기반 프리팹 강구조물 조립부 형상 품질 평가)

  • Jonghyeok Kim;Haemin Jeon
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.37 no.3
    • /
    • pp.173-178
    • /
    • 2024
  • This study presents a technique for assessing the dimensional quality of assembly parts in Prefabricated Steel Structures (PSS) using a stereo vision sensor. The stereo vision system captures images and point cloud data of the assembly area, followed by applying image processing algorithms such as fuzzy-based edge detection and Hough transform-based circular bolt hole detection to identify bolt hole locations. The 3D center positions of each bolt hole are determined by correlating 3D real-world position information from depth images with the extracted bolt hole positions. Principal Component Analysis (PCA) is then employed to calculate coordinate axes for precise measurement of distances between bolt holes, even when the sensor and structure orientations differ. Bolt holes are sorted based on their 2D positions, and the distances between sorted bolt holes are calculated to assess the assembly part's dimensional quality. Comparison with actual drawing data confirms measurement accuracy with a median absolute error of 1 mm and a median relative error within 4%.
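The PCA step described above can be sketched with an SVD of the centered bolt-hole centers: the right singular vectors give orientation-independent axes, so hole-to-hole distances come out the same regardless of how the sensor was posed. A minimal sketch, not the paper's implementation (function names are illustrative):

```python
import numpy as np

def pca_axes(points):
    """Centroid and principal axes of a set of 3D bolt-hole centers.
    Rows of the returned axes are unit vectors sorted by decreasing
    variance of the points along them."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # SVD of the centered cloud: rows of vt are the principal axes
    _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    return centroid, vt

def hole_distances(points):
    """Distances between consecutive holes after projecting into the
    PCA frame and sorting along the first principal axis."""
    centroid, axes = pca_axes(points)
    local = (np.asarray(points, dtype=float) - centroid) @ axes.T
    order = np.argsort(local[:, 0])
    return np.linalg.norm(np.diff(local[order], axis=0), axis=1)
```

Because distances are invariant to the sign ambiguity of the singular vectors, only the ordering of the holes, not the measured spacings, depends on axis direction.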

Motion and Structure Estimation Using Fusion of Inertial and Vision Data for Helmet Tracker

  • Heo, Se-Jong;Shin, Ok-Shik;Park, Chan-Gook
    • International Journal of Aeronautical and Space Sciences
    • /
    • v.11 no.1
    • /
    • pp.31-40
    • /
    • 2010
  • For weapon cueing and head-mounted displays (HMD), it is essential to continuously estimate the motion of the helmet. The problem of estimating and predicting the position and orientation of the helmet is approached by fusing measurements from inertial sensors and a stereo vision system. The sensor fusion approach in this paper is based on nonlinear filtering, specifically the extended Kalman filter (EKF). To reduce computation time and improve vision-processing performance, structure estimation is separated from motion estimation. The structure estimation tracks the features that are part of the helmet model structure in the scene, and the motion estimation filter estimates the position and orientation of the helmet. The algorithm is tested using synthetic and real data, and the results show that the sensor fusion is successful.
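The EKF fusion structure above follows the generic predict/update cycle: the inertial model propagates the helmet state, and vision features correct it. A minimal generic sketch of those two steps (not the paper's state vector or models, which are not given in the abstract):

```python
import numpy as np

def ekf_predict(x, P, f, F, Q):
    """Propagate the state with the (inertial) process model f,
    its Jacobian F, and process noise covariance Q."""
    return f(x), F @ P @ F.T + Q

def ekf_update(x, P, z, h, H, R):
    """Correct the state with a (vision) measurement z, given the
    measurement model h, its Jacobian H, and noise covariance R."""
    y = z - h(x)                         # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

Separating structure and motion estimation, as the paper does, keeps each filter's state small, so these matrix operations stay cheap per frame.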