• Title/Summary/Keyword: omnidirectional vision

Multi-robot Mapping Using Omnidirectional-Vision SLAM Based on Fisheye Images

  • Choi, Yun-Won; Kwon, Kee-Koo; Lee, Soo-In; Choi, Jeong-Won; Lee, Suk-Gyu
    • ETRI Journal / v.36 no.6 / pp.913-923 / 2014
  • This paper proposes a global mapping algorithm for multiple robots, based on an omnidirectional-vision simultaneous localization and mapping (SLAM) approach that extracts objects with Lucas-Kanade optical-flow motion detection from images obtained through fisheye lenses mounted on the robots. The multi-robot mapping algorithm draws a global map using the map data obtained by each individual robot. Global mapping is time-consuming because map data must be exchanged among the individual robots while they search the entire area. An omnidirectional image sensor has many advantages for object detection and mapping because it can measure all of the information around a robot simultaneously. The computational load of the correction algorithm is reduced compared with existing methods by correcting only the objects' feature points. The proposed algorithm has two steps: first, a local map is created for each robot based on an omnidirectional-vision SLAM approach; second, a global map is generated by merging the individual maps from the multiple robots. The reliability of the proposed mapping algorithm is verified by comparing maps built with the proposed algorithm against the real maps.
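
A minimal sketch of the kind of Lucas-Kanade optical-flow object extraction described above, using OpenCV; the frame file names, feature-detector settings, and the median-based moving-point test are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: track features between two consecutive fisheye frames with
# pyramidal Lucas-Kanade and flag points whose motion deviates from the bulk.
import cv2
import numpy as np

prev = cv2.imread("prev.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file names
curr = cv2.imread("curr.png", cv2.IMREAD_GRAYSCALE)

# Detect corner features on the previous frame.
pts_prev = cv2.goodFeaturesToTrack(prev, maxCorners=300, qualityLevel=0.01, minDistance=7)

# Track the features into the current frame with pyramidal Lucas-Kanade.
pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts_prev, None,
                                               winSize=(21, 21), maxLevel=3)

# Keep successfully tracked points and compute their motion vectors.
good_prev = pts_prev[status.flatten() == 1].reshape(-1, 2)
good_curr = pts_curr[status.flatten() == 1].reshape(-1, 2)
flow = good_curr - good_prev

# Points whose motion deviates strongly from the median flow (dominated by
# ego-motion) are flagged as candidate moving-object points.
residual = np.linalg.norm(flow - np.median(flow, axis=0), axis=1)
moving = good_curr[residual > 3.0 * residual.std()]
print(f"{len(moving)} candidate moving-object points")
```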

Design and Analysis of Illumination Optics for Image Uniformity in Omnidirectional Vision Inspection System for Screw Threads (나사산 전면검사 비전시스템의 영상 균일도 향상을 위한 조명 광학계 설계 및 해석)

  • Lee, Chang Hun; Lim, Yeong Eun; Park, Keun; Ra, Seung Woo
    • Journal of the Korean Society for Precision Engineering / v.31 no.3 / pp.261-268 / 2014
  • Precision screws have a wide range of industrial applications, such as electrical and automotive products. To produce screw threads with high precision, not only high-precision manufacturing technology but also reliable measurement technology is required. Machine vision systems have been used for the automatic inspection of screw threads based on backlight illumination, which cannot detect defects on the thread surface. Recently, an omnidirectional inspection system for screw threads was developed to obtain 360° images of screws based on front-light illumination. In this study, the illumination design of the omnidirectional inspection system was modified by adding a light shield to improve image uniformity. Optical simulations for various shield designs were performed to analyze the uniformity of the obtained images. The simulation results were analyzed statistically using the response surface method, from which the optical performance of the omnidirectional inspection system could be optimized in terms of image quality and uniformity.
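
For readers unfamiliar with the response surface method mentioned above, the following hedged sketch fits a second-order response surface to hypothetical (shield height, shield angle, uniformity) simulation data and picks the best design point on a grid; the variable names and values are assumptions for illustration, not the paper's data.

```python
# Hedged sketch: second-order response-surface fit of image uniformity u
# against two assumed shield parameters, height h (mm) and angle a (deg).
import numpy as np

# Hypothetical face-centered design points and simulated uniformity responses.
h = np.array([5.0, 5.0, 10.0, 10.0, 7.5, 7.5, 7.5, 5.0, 10.0])
a = np.array([30.0, 60.0, 30.0, 60.0, 45.0, 30.0, 60.0, 45.0, 45.0])
u = np.array([0.71, 0.78, 0.74, 0.83, 0.86, 0.80, 0.84, 0.77, 0.81])

# Model: u ~ b0 + b1*h + b2*a + b3*h^2 + b4*a^2 + b5*h*a, fitted by least squares.
X = np.column_stack([np.ones_like(h), h, a, h**2, a**2, h * a])
beta, *_ = np.linalg.lstsq(X, u, rcond=None)

# Evaluate the fitted surface on a grid and report the best design point.
hh, aa = np.meshgrid(np.linspace(5, 10, 21), np.linspace(30, 60, 31))
uu = beta[0] + beta[1]*hh + beta[2]*aa + beta[3]*hh**2 + beta[4]*aa**2 + beta[5]*hh*aa
i = np.unravel_index(np.argmax(uu), uu.shape)
print(f"best predicted uniformity {uu[i]:.3f} near h={hh[i]:.2f} mm, a={aa[i]:.1f} deg")
```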

An Omnidirectional Vision-Based Moving Obstacle Detection in Mobile Robot

  • Kim, Jong-Cheol; Suga, Yasuo
    • International Journal of Control, Automation, and Systems / v.5 no.6 / pp.663-673 / 2007
  • This paper presents a new moving-obstacle detection method using optical flow in a mobile robot with an omnidirectional camera. Because an omnidirectional camera consists of a nonlinear mirror and a CCD camera, the optical flow pattern in an omnidirectional image differs from the pattern in a perspective camera; the geometry of the omnidirectional camera influences the optical flow in the omnidirectional image. When a mobile robot with an omnidirectional camera moves, the optical flow is not only calculated theoretically in the omnidirectional image but also investigated in the omnidirectional and panoramic images. In this paper, the panoramic image is generated from the omnidirectional image using the geometry of the omnidirectional camera. In particular, focus of expansion (FOE) and focus of contraction (FOC) vectors are defined from the estimated optical flow in the omnidirectional and panoramic images and are used as reference vectors for the relative evaluation of the optical flow. Moving obstacles are detected through this relative evaluation of optical flows. The proposed algorithm is tested for four motions of a mobile robot: moving straight forward, turning left, turning right, and rotating. The effectiveness of the proposed method is shown by the experimental results.
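
A minimal sketch of generating a panoramic image from an omnidirectional (mirror) image by a polar-to-Cartesian remap, which is the kind of geometric unwarping the abstract refers to; the image centre, radii, and panorama size are illustrative assumptions rather than values derived from an actual mirror geometry.

```python
# Hedged sketch: unwarp a catadioptric image into a panorama with cv2.remap.
import cv2
import numpy as np

omni = cv2.imread("omni.png")                      # hypothetical input image
cx, cy = omni.shape[1] / 2.0, omni.shape[0] / 2.0  # assumed mirror centre
r_in, r_out = 40.0, 300.0                          # assumed inner/outer radii (px)

width, height = 1024, 256                          # assumed panorama size
theta = np.linspace(0, 2 * np.pi, width, endpoint=False)
radius = np.linspace(r_out, r_in, height)

# Build the remap tables: each panorama pixel (row, col) samples a point on
# the corresponding circle of the omnidirectional image.
map_x = (cx + radius[:, None] * np.cos(theta[None, :])).astype(np.float32)
map_y = (cy + radius[:, None] * np.sin(theta[None, :])).astype(np.float32)
panorama = cv2.remap(omni, map_x, map_y, interpolation=cv2.INTER_LINEAR)

cv2.imwrite("panorama.png", panorama)
```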

Development of Annular Optics for the Inspection of Surface Defects on Screw Threads Using Ray Tracing Simulation (광선추적을 사용한 나사산 표면결함 검사용 환형 광학계 개발)

  • Lee, Jiwon; Lim, Yeong Eun; Park, Keun; Ra, Seung Woo
    • Journal of the Korean Society for Precision Engineering / v.33 no.6 / pp.491-497 / 2016
  • This study aims to develop a vision inspection system for screw threads. To inspect external defects on screw threads, the vision inspection system was developed using front-light illumination, from which bright images can be obtained. The front-light system, however, requires multiple side images to inspect the entire thread surface, which can be provided by omnidirectional optics. In this study, an omnidirectional optical system was designed to obtain annular images of screw threads using an image sensor and two reflection mirrors: one large concave mirror and one small convex mirror. Optical simulations using backward and forward ray tracing were performed to determine the dimensional parameters of the proposed optical system so that an annular image of the screw threads could be obtained with high quality and resolution. Microscale surface defects on the screw threads could be successfully detected using the developed annular inspection system.
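
A small sketch of the reflection computation at the core of any mirror-based ray tracer such as the one described above; the mirror orientation used in the example is an illustrative assumption, and the paper's actual concave/convex mirror geometry is not reproduced.

```python
# Hedged sketch: reflect a ray direction about a surface normal, the basic
# operation repeated at each mirror surface during ray tracing.
import numpy as np

def reflect(direction: np.ndarray, normal: np.ndarray) -> np.ndarray:
    """Reflect a ray direction d about a surface normal n: r = d - 2 (d . n) n."""
    d = direction / np.linalg.norm(direction)
    n = normal / np.linalg.norm(normal)
    return d - 2.0 * np.dot(d, n) * n

# Example: a horizontal ray hitting an assumed 45-degree flat mirror is folded
# by 90 degrees.
incoming = np.array([1.0, 0.0])
mirror_normal = np.array([-1.0, 1.0])
print(reflect(incoming, mirror_normal))  # approximately [0, 1]
```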

Single Camera Omnidirectional Stereo Imaging System (단일 카메라 전방향 스테레오 영상 시스템)

  • Yi, Soo-Yeong; Choi, Byung-Wook
    • Journal of Institute of Control, Robotics and Systems / v.15 no.4 / pp.400-405 / 2009
  • A new method for catadioptric omnidirectional stereo vision with a single camera is presented in this paper. The proposed method uses a concave lens together with a convex mirror. Since the optical components of the proposed method are simple and commercially available, the resulting omnidirectional stereo system is versatile and cost-effective. A closed-form solution for 3D distance computation is presented based on the simple optics, including reflection at the convex mirror and refraction at the concave lens. The compactness of the system and the simplicity of the image processing make the omnidirectional stereo system appropriate for real-time applications such as autonomous navigation of a mobile robot or object manipulation. In order to verify the feasibility of the proposed method, an experimental prototype is implemented.
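
A hedged sketch of the general triangulation idea behind catadioptric single-camera stereo: two effective viewpoints observe the same scene point along different rays, and depth follows from their intersection. The viewpoint positions and ray directions below are assumptions; the paper's closed-form model of the convex mirror and concave lens is not reproduced here.

```python
# Hedged sketch: least-squares intersection of two 2-D rays p + t*d.
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Return the midpoint of the closest points of two rays p + t*d."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve [d1 -d2] [t1, t2]^T = p2 - p1 in the least-squares sense.
    A = np.column_stack([d1, -d2])
    t, *_ = np.linalg.lstsq(A, p2 - p1, rcond=None)
    return (p1 + t[0] * d1 + p2 + t[1] * d2) / 2.0

# Two assumed effective viewpoints separated by a 0.05 m baseline, observing
# the same scene point at slightly different elevation angles.
p_mirror = np.array([0.0, 0.05])
p_lens = np.array([0.0, 0.00])
point = triangulate(p_mirror, np.array([1.0, 0.10]), p_lens, np.array([1.0, 0.15]))
print("estimated scene point:", point)
```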

Global Positioning of a Mobile Robot based on Color Omnidirectional Image Understanding (컬러 전방향 영상 이해에 기반한 이동 로봇의 위치 추정)

  • Kim, Tae-Gyun; Lee, Yeong-Jin; Jeong, Myeong-Jin
    • The Transactions of the Korean Institute of Electrical Engineers D / v.49 no.6 / pp.307-315 / 2000
  • For a mobile robot to be autonomous, it must first know its position and orientation. Various methods of estimating the position of a robot have been developed; however, it is still difficult to localize the robot without any initial position or orientation. In this paper, we present a method for building a colored map and for calculating the position and orientation of a robot using the angle data of an omnidirectional image. The walls of the map are rendered with the corresponding color images, and the color histograms of the images and the coordinates of feature points are stored in the map. The mobile robot then captures a color omnidirectional image at an arbitrary position and orientation, segments it, and recognizes objects by multiple color indexing. Using the information from the recognized objects, the robot obtains enough feature points to localize itself.
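
A minimal sketch of the colour-indexing step described above, comparing the hue histogram of a segmented object against stored map histograms with OpenCV; the file names and the correlation metric are illustrative assumptions.

```python
# Hedged sketch: recognise an object by comparing hue histograms.
import cv2

def hue_histogram(path):
    """Normalised HSV hue histogram of an image region."""
    img = cv2.imread(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [32], [0, 180])
    return cv2.normalize(hist, hist)

observed = hue_histogram("segmented_object.png")             # hypothetical crop
candidates = {"door": "map_door.png", "poster": "map_poster.png"}  # hypothetical map entries

# Correlation-based comparison; the best-scoring map object is the match.
scores = {name: cv2.compareHist(observed, hue_histogram(path), cv2.HISTCMP_CORREL)
          for name, path in candidates.items()}
print(max(scores, key=scores.get), scores)
```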

Depth Measurement using an Omnidirectional Stereo Vision System with a Single Camera (단일카메라 전방향 스테레오 비전 시스템을 이용한 거리측정)

  • Yi, Soo-Yeong; Kim, Soon-Chul
    • Journal of Institute of Control, Robotics and Systems / v.19 no.11 / pp.955-959 / 2013
  • It is possible to obtain an omnidirectional stereo image via a single camera by using a catadioptric approach with a convex mirror and concave lens. In order to measure three-dimensional distance using the imaging system, the optical parameters of the system are required. In this paper, a calibration procedure to extract the parameters of the imaging system is described. Based on the parameters, experiments are carried out to verify the performance of the three-dimensional distance measurement of a single camera omnidirectional stereo imaging system.
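
A hedged sketch of the calibration idea: unknown optical parameters are estimated by fitting a projection model to measurements of targets at known distances, after which the model is inverted to recover distance from an observed image radius. The model r(z) = a/(z + b) and the sample data are assumptions for illustration, not the paper's imaging model.

```python
# Hedged sketch: calibrate assumed parameters a, b of r(z) = a / (z + b) from
# hypothetical (distance, image radius) measurements, then invert the model.
import numpy as np

z_known = np.array([0.5, 1.0, 1.5, 2.0, 3.0])      # target distances (m), assumed
r_meas = np.array([210.0, 118.0, 83.0, 64.0, 44.0])  # measured radii (px), assumed

# Linearise the model: 1/r = z/a + b/a, so a and b follow from a line fit.
slope, intercept = np.polyfit(z_known, 1.0 / r_meas, 1)
a_hat, b_hat = 1.0 / slope, intercept / slope
print(f"estimated parameters: a={a_hat:.1f}, b={b_hat:.3f}")

# Once calibrated, the model is inverted to recover distance from a radius.
r_obs = 90.0
z_est = a_hat / r_obs - b_hat
print(f"radius {r_obs} px -> distance {z_est:.2f} m")
```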

Localization using Ego Motion based on Fisheye Warping Image (어안 워핑 이미지 기반의 Ego motion을 이용한 위치 인식 알고리즘)

  • Choi, Yun Won; Choi, Kyung Sik; Choi, Jeong Won; Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.20 no.1 / pp.70-77 / 2014
  • This paper proposes a novel localization algorithm based on ego-motion, which uses Lucas-Kanade optical flow and warped images obtained through fisheye lenses mounted on the robot. An omnidirectional image sensor is a desirable sensor for real-time, view-based recognition by a robot because all of the information around the robot can be obtained simultaneously. Preprocessing (distortion correction, image merging, etc.) of the omnidirectional image, whether it is obtained by a camera viewing a reflecting mirror or by combining multiple camera images, is essential because it is difficult to extract information from the original image. The core of the proposed algorithm may be summarized as follows. First, we capture instantaneous 360° panoramic images around the robot through downward-facing fisheye lenses. Second, we extract motion vectors using Lucas-Kanade optical flow in the preprocessed images. Third, we estimate the robot's position and angle using an ego-motion method based on the directions of the motion vectors and the vanishing point obtained by RANSAC. We confirmed the reliability of the proposed fisheye-warping-image-based ego-motion localization algorithm by comparing the experimental results (position and angle) of the proposed algorithm with those measured by a global vision localization system.
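
A minimal sketch of the RANSAC vanishing-point step mentioned above: each optical-flow vector defines an image line, and the point most consistent with those lines is kept. The synthetic flow field, thresholds, and iteration count are illustrative assumptions, not the authors' settings.

```python
# Hedged sketch: estimate a vanishing point from flow vectors with RANSAC.
import numpy as np

rng = np.random.default_rng(0)

def flow_lines(points, flows):
    """Represent each flow vector as a homogeneous line through its two endpoints."""
    p1 = np.column_stack([points, np.ones(len(points))])
    p2 = np.column_stack([points + flows, np.ones(len(points))])
    return np.cross(p1, p2)

def ransac_vanishing_point(lines, iters=200, thresh=2.0):
    best_vp, best_inliers = None, -1
    for _ in range(iters):
        i, j = rng.choice(len(lines), size=2, replace=False)
        vp = np.cross(lines[i], lines[j])       # intersection of two sampled lines
        if abs(vp[2]) < 1e-9:
            continue
        vp = vp / vp[2]
        # Distance of the candidate point to every flow line.
        d = np.abs(lines @ vp) / np.linalg.norm(lines[:, :2], axis=1)
        inliers = int(np.sum(d < thresh))
        if inliers > best_inliers:
            best_vp, best_inliers = vp[:2], inliers
    return best_vp, best_inliers

# Synthetic flow field radiating from an assumed vanishing point (320, 240).
pts = rng.uniform(0, 640, size=(100, 2))
flows = (pts - np.array([320.0, 240.0])) * 0.05
vp, n = ransac_vanishing_point(flow_lines(pts, flows))
print("estimated vanishing point:", vp, "inliers:", n)
```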

Self-Localization of Autonomous Mobile Robot using Multiple Landmarks (다중 표식을 이용한 자율이동로봇의 자기위치측정)

  • Kang, Hyun-Deok; Jo, Kang-Hyun
    • Journal of Institute of Control, Robotics and Systems / v.10 no.1 / pp.81-86 / 2004
  • This paper describes self-localization of a mobile robot from multiple landmark candidates in an outdoor environment. Our robot uses an omnidirectional vision system for efficient self-localization; this vision system acquires visual information from all viewing directions. The robot uses, as landmarks, features that appear large in the image, such as buildings, sculptures, and placards, represented by vertical edges and merged regions of vertical edges. In our previous work, we found that landmark matching is difficult when the selected landmark candidates belong to image regions with repeating vertical edges. To overcome this problem, the robot uses merged regions of vertical edges: if the interval between vertical edges is short, the robot bundles them into the same region, and these regions are selected as landmark candidates. The extracted merged regions of vertical edges therefore reduce the ambiguity of landmark matching. The robot compares the landmark candidates between the previous and current images and can thus find the same landmark across the image sequence using the proposed features and method. Experiments conducted on our campus show efficient self-localization using the robust landmark matching method.
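
A hedged sketch of the vertical-edge grouping idea: detect columns with strong vertical edges and merge neighbouring columns into candidate landmark regions. The input image name, edge-strength threshold, and gap threshold are assumptions, not the authors' parameters.

```python
# Hedged sketch: group strong vertical-edge columns into candidate regions.
import cv2
import numpy as np

img = cv2.imread("panorama.png", cv2.IMREAD_GRAYSCALE)  # hypothetical unwarped view

# Vertical edges respond strongly to the horizontal Sobel derivative.
sobel_x = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
column_strength = np.abs(sobel_x).sum(axis=0)
edge_cols = np.where(column_strength > 1.5 * column_strength.mean())[0]

# Merge neighbouring edge columns into regions (candidate landmarks).
regions, start = [], None
for prev, col in zip(edge_cols[:-1], edge_cols[1:]):
    start = prev if start is None else start
    if col - prev > 10:            # assumed gap threshold in pixels
        regions.append((int(start), int(prev)))
        start = None
if start is not None:
    regions.append((int(start), int(edge_cols[-1])))

print(f"{len(regions)} candidate landmark regions:", regions)
```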

Robust Landmark Matching for Self-localization of Robots from the Multiple Candidates

  • Kang, Hyun-Deok; Jo, Kang-Hyun
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 2002.10a / pp.41.1-41 / 2002
  • This paper describes a robust landmark matching method that reduces the ambiguity of landmark candidates. A typical robot system acquires landmark candidates through a vision sensor in an outdoor environment. Our robot uses an omnidirectional vision system to obtain an all-around view and therefore obtains more landmark candidates than a conventional vision system. To obtain landmark candidates, the robot uses two types of features: vertical edges and merged regions of vertical edges. The former extracts the vertical lines of buildings, street lamps, etc.; the latter reduces the ambiguity of vertical edges in similar regions. It is difficult to match the candidates of landmark...
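
A small sketch of one way to disambiguate landmark candidates between consecutive frames, matching regions by a simple normalised intensity-profile descriptor; the descriptor and the greedy matching scheme are assumptions for illustration, not the paper's method.

```python
# Hedged sketch: greedy matching of candidate landmark regions across frames.
import numpy as np

def region_descriptor(image: np.ndarray, x0: int, x1: int) -> np.ndarray:
    """Column-averaged intensity profile of a candidate region, normalised."""
    profile = image[:, x0:x1 + 1].mean(axis=1)
    return profile / (np.linalg.norm(profile) + 1e-9)

def match_candidates(prev_img, curr_img, prev_regions, curr_regions, min_score=0.95):
    """Match each previous-frame region to its most similar current-frame region."""
    matches = []
    for i, (a0, a1) in enumerate(prev_regions):
        d_prev = region_descriptor(prev_img, a0, a1)
        scores = [float(d_prev @ region_descriptor(curr_img, b0, b1))
                  for b0, b1 in curr_regions]
        j = int(np.argmax(scores))
        if scores[j] >= min_score:
            matches.append((i, j, scores[j]))
    return matches
```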
