• Title/Summary/Keyword: stereo-camera


Global Localization of Mobile Robots Using Omni-directional Images (전방위 영상을 이용한 이동 로봇의 전역 위치 인식)

  • Han, Woo-Sup;Min, Seung-Ki;Roh, Kyung-Shik;Yoon, Suk-June
    • Transactions of the Korean Society of Mechanical Engineers A / v.31 no.4 / pp.517-524 / 2007
  • This paper presents a global localization method using circular correlation of an omni-directional image. The localization of a mobile robot, especially in indoor conditions, is a key component in the development of useful service robots. Though stereo vision is widely used for localization, its performance is limited due to computational complexity and its narrow view angle. To compensate for these shortcomings, we utilize a single omni-directional camera which can capture instantaneous 360° panoramic images around a robot. Nodes around a robot are extracted by the correlation coefficients of CHL (Circular Horizontal Line) between the landmark and the current captured image. After finding possible near nodes, the robot moves to the nearest node based on the correlation values and the positions of these nodes. To accelerate computation, correlation values are calculated based on Fast Fourier Transforms. Experimental results and performance in a real home environment have shown the feasibility of the method.
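The FFT-accelerated circular correlation described in this abstract can be sketched as follows. This is a minimal illustration, assuming the CHL is a 1D intensity signal of fixed length; the function names are ours, not the paper's:

```python
import numpy as np

def circular_correlation(ref, cur):
    """Normalized circular cross-correlation of two equal-length 1D
    signals (e.g. CHLs), computed via FFT in O(N log N)."""
    ref = (ref - ref.mean()) / (ref.std() * len(ref))
    cur = (cur - cur.mean()) / cur.std()
    # c[k] = sum_n cur[n] * ref[(n - k) mod N]; the peak locates the rotation
    return np.real(np.fft.ifft(np.fft.fft(cur) * np.conj(np.fft.fft(ref))))

def best_rotation(ref, cur):
    """Cyclic shift (in samples) that best aligns cur with ref,
    plus the peak correlation coefficient."""
    c = circular_correlation(ref, cur)
    return int(np.argmax(c)), float(c.max())
```

Since rotating the robot in place shifts the CHL cyclically, the argmax of the correlation maps directly to a heading offset, and the peak value serves as the node-matching score.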

A Practical Solution toward SLAM in Indoor environment Based on Visual Objects and Robust Sonar Features (가정환경을 위한 실용적인 SLAM 기법 개발 : 비전 센서와 초음파 센서의 통합)

  • Ahn, Sung-Hwan;Choi, Jin-Woo;Choi, Min-Yong;Chung, Wan-Kyun
    • The Journal of Korea Robotics Society / v.1 no.1 / pp.25-35 / 2006
  • Improving the practicality of SLAM requires various sensors to be fused effectively in order to cope with uncertainty induced by both the environment and the sensors. Combining sonar and vision sensors is economical and complementary: it can remedy the false data association and divergence problems of sonar sensors, and overcome the low SLAM update rate caused by the computational burden and illumination sensitivity of vision sensors. In this paper, we propose a SLAM method that joins sonar sensors and a stereo camera. It consists of two schemes: extracting robust point and line features from sonar data, and recognizing planar visual objects using a multi-scale Harris corner detector and its SIFT descriptor against a pre-constructed object database. Fusing sonar features and visual objects through EKF-SLAM then gives correct data association via object recognition and high-frequency updates via sonar features. As a result, it increases the robustness and accuracy of SLAM in indoor environments. The performance of the proposed algorithm was verified by experiments in a home-like environment.
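The EKF fusion step at the heart of this abstract can be sketched generically. This is only the standard EKF measurement update, not the paper's full state layout; all names are illustrative, and a linearized (Jacobian) H is assumed:

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """One EKF measurement update.
    x: state estimate, P: state covariance, z: measurement,
    h: predicted measurement h(x), H: measurement Jacobian,
    R: measurement noise covariance."""
    y = z - h                              # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

In the scheme described above, the same update is applied whether z comes from a sonar point/line feature (frequent, cheap) or a recognized visual object (infrequent, but unambiguous), which is what yields both high-rate updates and correct data association.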

Overlap Estimation for Panoramic Image Generation (중첩 영역 추정을 통한 파노라마 영상 생성)

  • Yang, Jihee;Jeon, Jihye;Park, Gooman
    • Journal of Satellite, Information and Communications / v.9 no.4 / pp.32-37 / 2014
  • The panorama is a good alternative for overcoming the narrow FOV studied in robot vision, stereo cameras, and panorama image registration and modeling. A panorama can realize a view wider than the human field of view and provide a realistic, immersive sense of being at the scene. If all correspondences are used, the computational load increases, and it becomes difficult to find strong features and correspondences and to estimate an accurate homography matrix under geometric changes between images. Accordingly, we used the SURF algorithm to estimate overlapping areas with high similarity, by comparing and analyzing the input images' histograms, and to detect features. We also solved the input-order problem, so a panorama can be built from unordered input images.
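The histogram comparison used above to pre-select likely overlapping pairs can be sketched as a simple similarity score. This is a hedged illustration, not the authors' exact measure; the bin count and Pearson-correlation choice are assumptions:

```python
import numpy as np

def hist_correlation(img_a, img_b, bins=32):
    """Pearson correlation of two images' intensity histograms,
    a cheap cue for which image pairs are likely to overlap
    before running SURF feature matching on them."""
    ha, _ = np.histogram(img_a, bins=bins, range=(0, 256), density=True)
    hb, _ = np.histogram(img_b, bins=bins, range=(0, 256), density=True)
    ha, hb = ha - ha.mean(), hb - hb.mean()
    return float((ha * hb).sum() / (np.linalg.norm(ha) * np.linalg.norm(hb)))
```

Ranking all pairs by this score gives a candidate ordering for unordered inputs, and SURF matching plus homography estimation is then restricted to the high-scoring pairs.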

Vision-based Ground Test for Active Debris Removal

  • Lim, Seong-Min;Kim, Hae-Dong;Seong, Jae-Dong
    • Journal of Astronomy and Space Sciences / v.30 no.4 / pp.279-290 / 2013
  • Due to continuous space development by mankind, the number of space objects, including space debris, in orbits around the Earth has increased; accordingly, difficulties for space development and activities are expected in the near future. In this study, among the stages of space debris removal, the implementation of a vision-based technique for approaching space debris from a far-range rendezvous state to a proximity state is described, together with ground test results. For vision-based object tracking, the fast and robust CAM-shift algorithm was combined with a Kalman filter. For measuring the distance to the tracked object, a stereo camera was used. For the construction of a low-cost space environment simulation test bed, a sun simulator was used, and a two-dimensional mobile robot served as the approach platform. The tracking status was examined while changing the position of the sun simulator; the results indicated that CAM-shift achieved a tracking rate of about 87% and the relative distance could be measured down to 0.9 m. In addition, considerations for future space environment simulation tests are proposed.
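The CAM-shift + Kalman combination above can be sketched on the filtering side: CAM-shift emits a noisy window center each frame, which a constant-velocity Kalman filter smooths and predicts through brief tracking dropouts. This is a generic sketch, assuming a 2D pixel-coordinate state; the noise parameters are illustrative, not the paper's:

```python
import numpy as np

class ConstVelKalman:
    """Constant-velocity Kalman filter smoothing a 2D tracker output
    (e.g. a CAM-shift window center). State: [x, y, vx, vy]."""
    def __init__(self, dt=1.0, q=1e-3, r=1.0):
        self.F = np.eye(4)                 # constant-velocity transition
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.eye(2, 4)              # we observe position only
        self.Q = q * np.eye(4)
        self.R = r * np.eye(2)
        self.x = np.zeros(4)
        self.P = np.eye(4) * 10.0

    def step(self, z):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # update with the measured window center z = (x, y)
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z, dtype=float) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

The stereo range measurement described in the abstract is a separate channel; only the image-plane track is filtered here.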

Development of A Vision-based Lane Detection System with Considering Sensor Configuration Aspect (센서 구성을 고려한 비전 기반 차선 감지 시스템 개발)

  • Park Jaehak;Hong Daegun;Huh Kunsoo;Park Jahnghyon;Cho Dongil
    • Transactions of the Korean Society of Automotive Engineers / v.13 no.4 / pp.97-104 / 2005
  • Vision-based lane sensing systems require accurate and robust lane detection. In addition, there is a trade-off between computational burden and processor cost, which must be considered when implementing such systems in passenger cars. In this paper, a stereo vision-based lane detection system is developed with sensor configuration aspects in mind. An inverse perspective mapping method is formulated from the relative correspondence between the left and right cameras so that the 3-dimensional road geometry can be reconstructed robustly. A new monitoring model for estimating the road geometry parameters is constructed to reduce the number of measured signals. The selection of the sensor configuration and specifications is investigated using the characteristics of standard highways. Based on the sensor configuration, it is shown that an appropriate sensing region in the camera image coordinates can be determined. The proposed system is implemented on a passenger car and verified experimentally.
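The left-right correspondence underlying the inverse perspective mapping rests on the standard rectified-stereo relation Z = f·B/d. A minimal sketch of that relation (not the paper's full road-geometry reconstruction; parameter names are ours):

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth of a point from a rectified stereo pair:
    Z = f * B / d, with f in pixels, baseline B in metres,
    and disparity d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For example, a 0.12 m baseline, 700 px focal length, and 7 px disparity place a lane point 12 m ahead; the sensing-region analysis in the paper amounts to choosing f, B, and the image region so that disparities over the relevant highway distances remain measurable.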

Frequency-Based Image Analysis of Random Patterns: an Alternative Way to Classical Stereocorrelation

  • Molimard, J.;Boyer, G.;Zahouani, H.
    • Journal of the Korean Society for Nondestructive Testing / v.30 no.3 / pp.181-193 / 2010
  • The paper presents an alternative to classical stereocorrelation. First, 2D image processing of random patterns is described; sub-pixel displacements are determined using phase analysis. Then distortion evaluation is presented: the distortion is identified without any assumption on the lens model, owing to the use of a grid-technique approach. Next, shape measurement and shape variation are captured by fringe projection, with the analysis based on pin-hole assumptions for both the video-projector and the camera. Finally, fringe projection is coupled with in-plane displacement measurement to give a 3D measurement set-up. Metrological characterization shows a resolution comparable to the classical (stereo)correlation technique (1/100th pixel). Spatial resolution appears to be an advantage of the method, thanks to temporal phase stepping (shape measurement, 1 pixel) and the windowed Fourier transform (in-plane displacement measurement, 9 pixels). Two examples are given: a study of skin properties and a study on leather fabric. In both cases, the results are convincing and have been given a mechanical interpretation.
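The principle behind phase-based sub-pixel displacement can be sketched in 1D: a rigid shift of a periodic pattern appears as a phase offset at the pattern's carrier frequency, recoverable well below one pixel. This is a simplified pure-shift illustration, not the paper's windowed Fourier transform pipeline:

```python
import numpy as np

def subpixel_shift(sig_a, sig_b, k=1):
    """Sub-pixel displacement between two 1D signals from the phase of
    the cross-power spectrum at frequency bin k, assuming sig_b is a
    pure cyclic shift of a periodic pattern dominated by that bin
    (valid while the phase does not wrap, |shift| < N / (2k))."""
    A, B = np.fft.fft(sig_a), np.fft.fft(sig_b)
    phase = np.angle(A[k] * np.conj(B[k]))
    n = len(sig_a)
    return phase * n / (2 * np.pi * k)
```

A full implementation estimates this phase locally (e.g. over 9-pixel windows, as the abstract reports) to obtain a displacement field rather than a single global shift.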

Dense Thermal 3D Point Cloud Generation of Building Envelope by Drone-based Photogrammetry

  • Jo, Hyeon Jeong;Jang, Yeong Jae;Lee, Jae Wang;Oh, Jae Hong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.39 no.2 / pp.73-79 / 2021
  • Recently, interest in energy conservation and emission reduction has grown. In the fields of architecture and civil engineering, energy monitoring of structures is required to respond to these energy issues. For thermal monitoring, thermal images are popular for their rich visual information. With the rapid development of drone platforms, aerial thermal images acquired by drone can be used to monitor not just part of a structure but a wider coverage. In addition, the stereo photogrammetric process can generate a 3D point cloud with thermal information. However, thermal images have very poor resolution and a narrow field of view, which limits drone-based thermal photogrammetry. In this study, we aimed to generate a 3D thermal point cloud using both visible and thermal images. The visible images have high spatial resolution and can produce precise, dense point clouds. We then extract thermal information from the thermal images and assign it to the point cloud by precisely establishing the photogrammetric collinearity between the point cloud and the thermal images. In the experiment, we successfully generated a dense 3D thermal point cloud showing the 3D thermal distribution over the building structure.
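The assignment step above, projecting each 3D point into a thermal image through the collinearity condition and sampling the temperature, can be sketched with a pinhole model. A simplified illustration assuming points are already expressed in the thermal camera's frame (the paper additionally solves for the exterior orientation); function names are ours:

```python
import numpy as np

def project_points(points_cam, fx, fy, cx, cy):
    """Project 3D points (Nx3, in the camera frame) to pixel
    coordinates via the pinhole / collinearity model."""
    X, Y, Z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return np.stack([u, v], axis=1)

def sample_thermal(points_cam, thermal_img, fx, fy, cx, cy):
    """Assign each 3D point the thermal value at its projected pixel
    (nearest neighbour); points projecting outside the image get NaN."""
    uv = np.rint(project_points(points_cam, fx, fy, cx, cy)).astype(int)
    h, w = thermal_img.shape
    temps = np.full(len(uv), np.nan)
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    temps[inside] = thermal_img[uv[inside, 1], uv[inside, 0]]
    return temps
```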

Efficient Tire Wear and Defect Detection Algorithm Based on Deep Learning (심층학습 기법을 활용한 효과적인 타이어 마모도 분류 및 손상 부위 검출 알고리즘)

  • Park, Hye-Jin;Lee, Young-Woon;Kim, Byung-Gyu
    • Journal of Korea Multimedia Society / v.24 no.8 / pp.1026-1034 / 2021
  • Tire wear and defects are important factors for safe driving. These defects are generally inspected by specialized experts or very expensive equipment such as stereo depth cameras and depth gauges. In this paper, we propose a tire safety vision inspector based on a deep neural network (DNN). The status of tire wear is categorized into three classes based on the depth of the tire tread: 'safety', 'warning', and 'danger'. We propose an attention mechanism for emphasizing the features of the tread area. The attention-based feature is concatenated to the output feature maps of the last convolution layer of ResNet-101 to extract a more robust feature. In experiments, the proposed tire wear classification model improves accuracy by 1.8% compared to the existing ResNet-101 model. For detecting tire defects, the developed detection model achieves up to 91% accuracy using the Mask R-CNN model. These results show that the suggested models are useful for checking the safety condition of tires in use in real environments.
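The concatenation of an attention-weighted feature to the backbone output can be sketched structurally. This is a framework-agnostic numpy sketch of the tensor operation only, not the paper's trained attention module:

```python
import numpy as np

def attention_concat(backbone_feat, attention_map):
    """Weight a feature map by a spatial attention mask emphasizing
    the tread area, then concatenate along the channel axis.
    backbone_feat: (C, H, W); attention_map: (H, W), values in [0, 1].
    Returns a (2C, H, W) tensor: original channels + attended channels."""
    attended = backbone_feat * attention_map[None, :, :]
    return np.concatenate([backbone_feat, attended], axis=0)
```

The doubled channel count means the classification head after the last ResNet-101 convolution block sees both the raw and the tread-emphasized responses.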

Method of Measuring Color Difference Between Images using Corresponding Points and Histograms (대응점 및 히스토그램을 이용한 영상 간의 컬러 차이 측정 기법)

  • Hwang, Young-Bae;Kim, Je-Woo;Choi, Byeong-Ho
    • Journal of Broadcast Engineering / v.17 no.2 / pp.305-315 / 2012
  • Color correction between two or more images is crucial for subsequent algorithms and for stereoscopic 3D camera systems. Although various color correction methods have been proposed recently, there are few methods for measuring their performance. In addition, when two images differ in viewpoint due to camera positions, previous performance measures may not be appropriate. In this paper, we propose a method of measuring the color difference between corresponding images for color correction. The method finds matching points that should have the same colors in the two scenes, handling view variation through correspondence search. We then calculate statistics over the neighborhoods of these matching points to measure the color difference. This approach can account for misalignment of corresponding points, unlike a conventional geometric transformation by a single homography. To handle the case where matching points do not cover the whole image, we also calculate color-difference statistics over the whole image region. Finally, the color difference is computed as a weighted sum of the correspondence-based and whole-region-based measures, with the weight determined by the fraction of the image covered by the correspondence-based comparison.
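The final weighted combination described above can be written down directly. A minimal sketch, assuming the weight is simply the area fraction covered by correspondences (the abstract does not give the exact weighting formula):

```python
def color_difference(corr_diff, global_diff, matched_area, total_area):
    """Blend a correspondence-based color difference with a whole-image
    one, weighting by the fraction of the image covered by matches.
    corr_diff / global_diff: scalar difference statistics;
    matched_area / total_area: pixel counts (or any area measure)."""
    if not 0 <= matched_area <= total_area:
        raise ValueError("matched_area must lie in [0, total_area]")
    w = matched_area / total_area
    return w * corr_diff + (1.0 - w) * global_diff
```

When matches cover the whole frame the measure reduces to the correspondence-based statistic; with no matches it falls back entirely to the whole-region comparison.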

Real-Time Hand Pose Tracking and Finger Action Recognition Based on 3D Hand Modeling (3차원 손 모델링 기반의 실시간 손 포즈 추적 및 손가락 동작 인식)

  • Suk, Heung-Il;Lee, Ji-Hong;Lee, Seong-Whan
    • Journal of KIISE: Software and Applications / v.35 no.12 / pp.780-788 / 2008
  • Modeling hand poses and tracking their movement is one of the challenging problems in computer vision. There are two typical approaches to reconstructing hand poses in 3D, depending on the number of cameras from which images are captured: capturing from multiple cameras or a stereo camera, and capturing from a single camera. The former is relatively limited because of the environmental constraints of setting up multiple cameras. In this paper we propose a method for reconstructing 3D hand poses from a 2D image sequence captured by a single camera, by means of belief propagation in a graphical model, and for recognizing a finger-clicking motion using a hidden Markov model. We define a graphical model with hidden nodes representing the joints of a hand and observable nodes holding the features extracted from the 2D input image sequence. To track hand poses in 3D, we use a belief propagation algorithm, which provides a robust and unified framework for inference in a graphical model. From the estimated 3D hand pose we extract each finger's motion, which is then fed into a hidden Markov model. To recognize natural finger actions, we consider the movements of all the fingers when recognizing a single finger's action. We applied the proposed method to a virtual keypad system, achieving a high recognition rate of 94.66% on 300 test samples.
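The HMM scoring stage of the pipeline above can be sketched with the standard forward algorithm: each candidate finger action gets its own HMM, the extracted finger-motion sequence is scored under each, and the highest-likelihood model wins. A minimal discrete-observation sketch, with illustrative names (the paper's feature quantization is not specified here):

```python
import numpy as np

def hmm_log_likelihood(pi, A, B, obs):
    """Forward algorithm: log-likelihood of a discrete observation
    sequence under an HMM.
    pi: (S,) initial state distribution
    A:  (S, S) transition matrix, A[i, j] = P(j | i)
    B:  (S, O) emission matrix,   B[i, o] = P(o | state i)
    obs: sequence of observation symbol indices."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return float(np.log(alpha.sum()))
```

Classification is then `argmax` of this score over the per-action models; long sequences would use log-space scaling to avoid underflow, omitted here for brevity.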