• Title/Summary/Keyword: Omni-directional Camera

Omni Camera Vision-Based Localization for Mobile Robots Navigation Using Omni-Directional Images (옴니 카메라의 전방향 영상을 이용한 이동 로봇의 위치 인식 시스템)

  • Kim, Jong-Rok; Lim, Mee-Seub; Lim, Joon-Hong
    • Journal of Institute of Control, Robotics and Systems / v.17 no.3 / pp.206-210 / 2011
  • Vision-based robot localization is challenging because of the vast amount of visual information involved, which requires extensive storage and processing time. To address this, we propose the use of features extracted from omni-directional panoramic images and present a localization method for a mobile robot equipped with an omni-directional camera. The proposed scheme can be summarized as follows. First, we use an omni-directional camera that captures an instantaneous $360^{\circ}$ panoramic image around the robot. Second, candidate nodes around the robot are identified from the correlation coefficients of the circular horizontal line between each landmark image and the currently captured image. Third, the robot position is determined from the locations of the nodes selected by the proposed correlation-based landmark image matching. To accelerate the computation, node candidates are pre-selected using color information, and the correlation values are computed with Fast Fourier Transforms. Experiments show that the proposed method is effective for global localization of mobile robots and robust to lighting variations.
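
The FFT-based correlation step described in this abstract can be illustrated with a minimal sketch. The normalization, the helper names, and the toy node signatures below are assumptions for illustration only, not the authors' implementation:

```python
import numpy as np

def circular_correlation_fft(landmark_line, current_line):
    """Correlation of two 1-D circular horizontal lines over every cyclic
    shift, computed via the FFT correlation theorem (hypothetical helper)."""
    a = (landmark_line - landmark_line.mean()) / (landmark_line.std() + 1e-9)
    b = (current_line - current_line.mean()) / (current_line.std() + 1e-9)
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real / len(a)
    return corr  # corr[k] ~ similarity at cyclic shift k (heading offset)

def best_matching_node(node_lines, current_line):
    """Pick the stored node whose circular horizontal line correlates best
    with the currently captured line."""
    scores = [circular_correlation_fft(line, current_line).max() for line in node_lines]
    return int(np.argmax(scores)), max(scores)

# Toy usage: three stored node signatures and a rotated copy of node 1.
rng = np.random.default_rng(0)
nodes = [rng.random(360) for _ in range(3)]
query = np.roll(nodes[1], 45)            # same place, robot rotated by 45 degrees
idx, score = best_matching_node(nodes, query)
print(idx, round(score, 3))              # -> 1 and a score close to 1.0
```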

Localization of 3D Spatial Information from Single Omni-Directional Image (단일 전방향 영상을 이용한 공간 정보의 측정)

  • Kang, Hyun-Deok; Jo, Kang-Hyun
    • Journal of Institute of Control, Robotics and Systems / v.12 no.7 / pp.686-692 / 2006
  • This paper presents the calculation of 3D geometric information, such as height, direction, and distance, under the constraints of a catadioptric camera system. The catadioptric system satisfies the single-viewpoint constraint by adopting a hyperboloidal mirror. To calculate 3D information from a single omni-directional image, the measured points are assumed to lie on lines perpendicular to the ground. The plane at infinity is also detected as a circle determined by the structure of the mirror and camera. Experiments with real images taken in indoor environments such as rooms and corridors verify the correctness of the theory and show that 3D geometric information can be calculated from single omni-directional images.
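
Once azimuth and elevation angles have been recovered from the calibrated catadioptric model, the geometric idea reduces to plane trigonometry. A minimal sketch, assuming a known camera height and elevation angles to the bottom and top of a vertical object (all values here are hypothetical, and the calibration itself is omitted):

```python
import math

def object_distance_and_height(cam_height, elev_base_deg, elev_top_deg):
    """Ground distance and height of a vertical object from the elevation
    angles of its base and top, assuming the base lies on the ground plane
    and the camera height is known (toy illustration only)."""
    elev_base = math.radians(elev_base_deg)   # negative: ray points below the horizon
    elev_top = math.radians(elev_top_deg)
    distance = cam_height / math.tan(-elev_base)         # foot of the object on the ground
    height = cam_height + distance * math.tan(elev_top)  # top of the object above the ground
    return distance, height

# Example: camera 1.2 m above the floor, base seen 30 deg below and
# top seen 20 deg above the horizon.
d, h = object_distance_and_height(1.2, -30.0, 20.0)
print(round(d, 2), round(h, 2))   # ~2.08 m away, ~1.96 m tall
```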

An Object Tracking System Using an Omni-Directional Camera (전방위 카메라를 이용한 객체 추적 시스템)

  • Kim, Jin-Hwan; Ahn, Jae-Kyun; Kim, Chang-Su
    • Proceedings of the IEEK Conference / 2008.06a / pp.781-782 / 2008
  • An object tracking system using an omni-directional camera is proposed in this work. First, we construct a mapping table that describes the relationship between image coordinates and omni-directional angles. Then, we develop a surveillance system that automatically detects unexpected objects in omni-directional images. Finally, we generate perspective views of the detected objects using the mapping table. Simulation results demonstrate that the proposed algorithm provides efficient performance.
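
A rough sketch of the table-based warping described here. The radial model, the parameters, and the nearest-neighbour lookup are placeholders standing in for the calibrated mapping table, not the authors' method:

```python
import numpy as np

def make_mapping_table(out_size, fov, az0, cx, cy, r_min, r_max):
    """Build a lookup table that sends each pixel of a small perspective-like
    view to a source pixel of the omni image.  The radial model used here
    (elevation proportional to image radius) is a crude placeholder for the
    calibrated image-coordinate / angle relationship."""
    j, i = np.meshgrid(np.arange(out_size), np.arange(out_size))
    az = az0 + (j / (out_size - 1) - 0.5) * fov           # desired azimuth per column
    el = (0.5 - i / (out_size - 1)) * fov + np.pi / 4     # desired elevation per row
    r = r_min + np.clip(el, 0, np.pi / 2) / (np.pi / 2) * (r_max - r_min)
    src_x = (cx + r * np.cos(az)).astype(int)
    src_y = (cy + r * np.sin(az)).astype(int)
    return src_y, src_x

def warp_with_table(omni_img, table):
    """Nearest-neighbour warp: a pure table lookup, so the table can be
    precomputed once and reused for every frame."""
    src_y, src_x = table
    h, w = omni_img.shape[:2]
    return omni_img[np.clip(src_y, 0, h - 1), np.clip(src_x, 0, w - 1)]

# Toy usage on a synthetic 512x512 omni image.
omni = (np.random.default_rng(1).random((512, 512)) * 255).astype(np.uint8)
table = make_mapping_table(out_size=128, fov=np.pi / 3, az0=0.0,
                           cx=256, cy=256, r_min=60, r_max=250)
view = warp_with_table(omni, table)
print(view.shape)   # (128, 128)
```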

Omni-directional Image Generation Algorithm with Parametric Image Compensation (변수화된 영상 보정을 통한 전방향 영상 생성 방법)

  • Kim, Yu-Na; Sim, Dong-Gyu
    • Journal of Broadcast Engineering / v.11 no.4 s.33 / pp.396-406 / 2006
  • This paper proposes an omni-directional image generation algorithm with parametric image compensation. The algorithm generates an omni-directional image by transforming each planar image onto a sphere in spherical coordinates. A parametric image compensation method is presented to correct the vignetting and illumination distortions caused by the properties of the camera system and the lighting conditions. The proposed algorithm can generate realistic, seamless omni-directional video and can synthesize a view from any point of view out of the stitched omni-directional image on the sphere. Experimental results show that the proposed system with vignetting and illumination compensation performs approximately $1{\sim}4dB$ better than the same system without the compensation.
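
A minimal sketch of the compensation idea, assuming a generic radial polynomial gain for vignetting and a global gain for illumination matching; the model form and coefficients below are assumptions, not the paper's parametrisation:

```python
import numpy as np

def compensate_vignetting(img, k1, k2, cx=None, cy=None):
    """Multiply each pixel by a radial gain (1 + k1*r^2 + k2*r^4) to undo
    brightness fall-off toward the image border (generic model)."""
    h, w = img.shape[:2]
    cx = w / 2 if cx is None else cx
    cy = h / 2 if cy is None else cy
    ys, xs = np.mgrid[0:h, 0:w]
    r2 = ((xs - cx) ** 2 + (ys - cy) ** 2) / (max(h, w) / 2) ** 2  # normalised r^2
    gain = 1.0 + k1 * r2 + k2 * r2 ** 2
    out = img.astype(np.float64) * gain[..., None] if img.ndim == 3 else img * gain
    return np.clip(out, 0, 255).astype(np.uint8)

def match_exposure(src, ref_mean):
    """Global illumination (gain) correction so overlapping images have
    comparable mean brightness before blending into the spherical mosaic."""
    scale = ref_mean / (src.mean() + 1e-9)
    return np.clip(src.astype(np.float64) * scale, 0, 255).astype(np.uint8)

# Toy usage on a synthetic frame.
frame = np.full((240, 320, 3), 128, dtype=np.uint8)
corrected = compensate_vignetting(frame, k1=0.3, k2=0.1)
balanced = match_exposure(corrected, ref_mean=140.0)
print(corrected.shape, round(balanced.mean(), 1))  # shape and matched mean brightness
```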

3D Omni-directional Vision SLAM using a Fisheye Lens and a Laser Scanner (어안 렌즈와 레이저 스캐너를 이용한 3차원 전방향 영상 SLAM)

  • Choi, Yun Won; Choi, Jeong Won; Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.21 no.7 / pp.634-640 / 2015
  • This paper proposes a novel three-dimensional mapping algorithm for omni-directional vision SLAM based on a fisheye image and laser scanner data. The performance of SLAM has been improved by various estimation methods, sensors with multiple functions, and sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not well suited to mobile robot applications because an RGB-D system composed of multiple cameras is bulky and slow when computing depth for omni-directional images. In this work, we use a fisheye camera installed facing downwards and a two-dimensional laser scanner mounted at a fixed distance from the camera. Fusion points are calculated from the planar obstacle coordinates obtained by the two-dimensional laser scanner and the obstacle outlines obtained by the omni-directional image sensor, which acquires a surround view at the same time. The effectiveness of the proposed method is confirmed by comparing maps obtained with the proposed algorithm against real maps.
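
A toy sketch of the fusion idea, assuming the laser plane position of an obstacle and the obstacle-top elevation seen by the downward-facing fisheye camera are already available from calibrated sensors; the mounting heights and angles below are made up:

```python
import math

def laser_point_to_world(rng_m, bearing_rad, scan_height):
    """2-D laser return (range, bearing) lifted into robot coordinates,
    assuming the scanner sweeps a horizontal plane at a known height."""
    return (rng_m * math.cos(bearing_rad), rng_m * math.sin(bearing_rad), scan_height)

def fuse_obstacle_point(laser_xyz, outline_elev_rad, cam_height):
    """Combine the planar position from the laser with the obstacle-top
    elevation seen in the fisheye image to obtain a 3-D point (generic
    stand-in for the calibrated sensor geometry)."""
    x, y, _ = laser_xyz
    dist = math.hypot(x, y)
    top_z = cam_height + dist * math.tan(outline_elev_rad)  # obstacle top height
    return (x, y, top_z)

# Toy usage: a wall return 2 m away at bearing 30 deg whose top edge the
# fisheye sees 25 deg above the horizon, camera mounted 0.5 m high.
p = laser_point_to_world(2.0, math.radians(30.0), scan_height=0.2)
print([round(v, 2) for v in fuse_obstacle_point(p, math.radians(25.0), cam_height=0.5)])
```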

Georeferencing of Indoor Omni-Directional Images Acquired by a Rotating Line Camera (회전식 라인 카메라로 획득한 실내 전방위 영상의 지오레퍼런싱)

  • Oh, So-Jung; Lee, Im-Pyeong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.30 no.2 / pp.211-221 / 2012
  • To utilize omni-directional images acquired by a rotating line camera for indoor spatial information services, the images must be precisely registered with respect to an indoor coordinate system. In this study, we therefore develop a georeferencing method that estimates the exterior orientation parameters of an omni-directional image, that is, the position and attitude of the camera at acquisition time. First, we derive the collinearity equations for the omni-directional image by geometrically modeling the rotating line camera. We then estimate the exterior orientation parameters using these collinearity equations together with indoor control points. Experimental results on real data indicate that the exterior orientation parameters are estimated with precisions of 1.4 mm for the position and $0.05^{\circ}$ for the attitude. The residuals are within 3 pixels horizontally and 10 pixels vertically. The vertical residuals retain systematic errors caused mainly by lens distortion, which should be eliminated through camera calibration. Using omni-directional images precisely georeferenced with the proposed method, high-resolution indoor 3D models and sophisticated augmented reality services based on those models can be generated.
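
A hedged sketch of exterior orientation estimation from control points. A generic equirectangular projection stands in for the rotating-line-camera collinearity model derived in the paper, and the pose, control points, and image size are synthetic:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project_panoramic(params, points_xyz, cols, rows):
    """Generic panoramic projection (stand-in for the paper's collinearity
    model): shift and rotate a ground point into the camera frame, then map
    (azimuth, elevation) -> (column, row)."""
    tx, ty, tz, rx, ry, rz = params
    R = Rotation.from_euler("xyz", [rx, ry, rz]).as_matrix()
    p_cam = (points_xyz - np.array([tx, ty, tz])) @ R.T
    az = np.arctan2(p_cam[:, 1], p_cam[:, 0])
    el = np.arctan2(p_cam[:, 2], np.hypot(p_cam[:, 0], p_cam[:, 1]))
    col = (az + np.pi) / (2 * np.pi) * cols
    row = (np.pi / 2 - el) / np.pi * rows
    return np.column_stack([col, row])

def residuals(params, gcp_xyz, gcp_pix, cols, rows):
    # Observation equations: reprojected control points minus measured pixels.
    return (project_panoramic(params, gcp_xyz, cols, rows) - gcp_pix).ravel()

# Toy usage: synthesise observations from a known pose, then recover it
# starting from an approximate pose, as is usual for exterior orientation.
cols, rows = 8000, 4000
true_pose = np.array([1.0, 2.0, 1.5, 0.01, -0.02, 0.3])
gcp_xyz = np.array([[5, 1, 0], [0, 6, 2], [-4, -3, 1], [2, -5, 3], [6, 4, 2.5]], float)
gcp_pix = project_panoramic(true_pose, gcp_xyz, cols, rows)
x0 = true_pose + np.array([0.5, -0.5, 0.3, 0.05, 0.05, -0.1])
sol = least_squares(residuals, x0, args=(gcp_xyz, gcp_pix, cols, rows))
print(np.allclose(sol.x, true_pose, atol=1e-3))   # True when the adjustment converges
```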

Bundle Block Adjustment of Omni-directional Images by a Mobile Mapping System (모바일매핑시스템으로 취득된 전방위 영상의 광속조정법)

  • Oh, Tae-Wan; Lee, Im-Pyeong
    • Korean Journal of Remote Sensing / v.26 no.5 / pp.593-603 / 2010
  • Most spatial data acquisition systems employing a set of frame cameras suffer from small fields of view and poor base-to-distance ratios. These limitations can be significantly reduced by employing an omni-directional camera, which acquires images in every direction. Bundle Block Adjustment (BBA) is an established georeferencing method for determining the exterior orientation parameters of two or more images. In this study, we extend the traditional BBA method and develop a mathematical model of BBA for omni-directional images. The proposed model includes three main parts: observation equations based on collinearity equations newly derived for omni-directional images, stochastic constraints imposed by GPS/INS data, and stochastic constraints imposed by GCPs. We also report experimental results from applying the proposed BBA to real data acquired mainly in urban areas, using four different combinations of the constraints. When only GCPs are used as constraints, the proposed BBA provides the most accurate results, with an RMSE of ${\pm}5cm$ in the estimated ground point coordinates. In the future, we plan to perform more sophisticated lens calibration of the omni-directional camera to improve georeferencing accuracy. Such georeferenced omni-directional images can be effectively utilized for city modelling, particularly automatic texture mapping for realistic street views.
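
One distinctive element here is the combination of observation equations with stochastic constraints. A minimal sketch of how the three residual groups could be stacked with weights; the sigmas and all numbers are placeholders, not the paper's values:

```python
import numpy as np

def stacked_residuals(image_residuals, pose_params, gps_ins_pose, gcp_est, gcp_xyz,
                      sigma_img=1.0, sigma_pose=0.1, sigma_gcp=0.02):
    """Mix three residual groups with different weights: image observation
    equations, stochastic constraints from GPS/INS on the pose, and
    stochastic constraints from GCP coordinates (generic 1/sigma weighting)."""
    r_img = image_residuals / sigma_img                    # collinearity residuals (pixels)
    r_pose = (pose_params - gps_ins_pose) / sigma_pose     # keep pose near the GPS/INS solution
    r_gcp = (gcp_est - gcp_xyz).ravel() / sigma_gcp        # keep estimated points near the GCPs
    return np.concatenate([r_img, r_pose, r_gcp])

# Toy usage with made-up numbers, just to show the stacking.
r = stacked_residuals(image_residuals=np.array([0.4, -0.7, 0.1]),
                      pose_params=np.array([10.0, 20.0, 5.0, 0.01, 0.0, 0.02]),
                      gps_ins_pose=np.array([10.1, 19.9, 5.0, 0.0, 0.0, 0.02]),
                      gcp_est=np.array([[100.0, 200.0, 30.01]]),
                      gcp_xyz=np.array([[100.0, 200.0, 30.0]]))
print(r.shape)   # (12,) -- 3 image + 6 pose + 3 GCP residuals stacked
```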

Multi-view Face Detection in an Omni-directional Camera for Non-intrusive Iris Recognition (비강압적 홍채 인식을 위한 전 방향 카메라에서의 다각도 얼굴 검출)

  • 이현수; 배광혁; 김재희; 박강령
    • Proceedings of the IEEK Conference / 2003.11b / pp.115-118 / 2003
  • This paper describes a system that detects multi-view faces and estimates their poses in an omni-directional camera environment for non-intrusive iris recognition. The approach has two parts. First, a moving region is identified using difference-image information, and this region is then analyzed with face-color information to find face candidate regions. Second, PCA (Principal Component Analysis) is applied to detect multi-view faces and to estimate face pose.
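
A simplified stand-in for the PCA stage: one eigenface subspace per pose class, with a candidate assigned to the pose whose subspace reconstructs it best and rejected as a non-face if every reconstruction error is large. The class names, component count, and threshold are assumptions:

```python
import numpy as np

class PosePCA:
    """Per-pose-class PCA (eigenface) model for joint face detection and
    pose estimation (illustrative only)."""
    def __init__(self, n_components=8):
        self.n_components = n_components
        self.models = {}                      # pose -> (mean, basis)

    def fit_pose(self, pose, patches):        # patches: (N, D) flattened face images
        mean = patches.mean(axis=0)
        _, _, vt = np.linalg.svd(patches - mean, full_matrices=False)
        self.models[pose] = (mean, vt[: self.n_components])

    def reconstruction_error(self, patch, pose):
        mean, basis = self.models[pose]
        coeffs = basis @ (patch - mean)
        recon = mean + basis.T @ coeffs
        return float(np.linalg.norm(patch - recon))

    def classify(self, patch, reject_threshold=50.0):
        errors = {p: self.reconstruction_error(patch, p) for p in self.models}
        pose = min(errors, key=errors.get)
        return None if errors[pose] > reject_threshold else pose

# Toy usage with random "training faces" for two pose classes.
rng = np.random.default_rng(0)
model = PosePCA()
model.fit_pose("frontal", rng.random((40, 64)) + 1.0)
model.fit_pose("profile", rng.random((40, 64)) - 1.0)
print(model.classify(rng.random(64) + 1.0))   # expected: "frontal"
```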

Calibration of a Rotating Stereo Line Camera System for Indoor Precise Mapping (실내 정밀 매핑을 위한 회전식 스테레오 라인 카메라 시스템의 캘리브레이션)

  • Oh, Sojung; Shin, Jinsoo; Kang, Jeongin; Lee, Impyeong
    • Korean Journal of Remote Sensing / v.31 no.2 / pp.171-182 / 2015
  • We propose a camera system for acquiring indoor stereo omni-directional images together with its calibration method. Such images can be utilized for precise indoor mapping and sophisticated image-based services. The proposed system is configured as a rotating stereo line camera pair, providing stereo omni-directional images suitable for stable stereoscopic viewing and precise derivation of object point coordinates. Based on the projection model, we derive a mathematical model for the system calibration. After performing the calibration, object points in indoor space can be estimated with an accuracy better than ${\pm}16cm$. The proposed system and calibration method will be applied to precise indoor 3D modeling.
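
Once the calibrated system yields a viewing ray per camera for a matched feature, an object point can be derived by ray triangulation. A generic midpoint-triangulation sketch (not the paper's adjustment model), with made-up camera positions:

```python
import numpy as np

def triangulate_rays(o1, d1, o2, d2):
    """Midpoint triangulation of two (generally skew) viewing rays, each given
    by a camera position o and a direction d derived from the calibrated
    omni-directional projection."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w0 = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    s = (b * e - c * d) / denom          # parameter along ray 1
    t = (a * e - b * d) / denom          # parameter along ray 2
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))

# Toy usage: two cameras 0.5 m apart, both seeing a point at (2, 1, 1.3).
target = np.array([2.0, 1.0, 1.3])
o1, o2 = np.array([0.0, 0.0, 1.0]), np.array([0.5, 0.0, 1.0])
p = triangulate_rays(o1, target - o1, o2, target - o2)
print(np.round(p, 3))   # -> [2. 1. 1.3]
```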

A Study of Selecting Sequential Viewpoint and Examining the Effectiveness of Omni-directional Angle Image Information in Grasping the Characteristics of Landscape (경관 특성 파악에 있어서의 시퀀스적 시점장 선정과 전방위 화상정보의 유효성 검증에 관한 연구)

  • Kim, Heung Man; Lee, In Hee
    • KIEAE Journal / v.9 no.2 / pp.81-90 / 2009
  • This study concerns grasping sequential landscape characteristics while taking into account the perceptual behavior of the observer, and focuses on the main walking routes taken by visitors to the Three Jewel Buddhist temples. In particular, as a method of obtaining data on sequentially perceived landscape, the authors selected momentary sequential viewpoints at arbitrarily set intervals along the route and photographed them with a fisheye-lens camera to obtain omni-directional visual information. The results show that viewpoint selection is affected by factors such as the form of the approach road, changes in the circulation axis, changes in ground surface level, and the appearance of objects, with the approach road form and circulation-axis changes being the strongest influences. In addition, when the effectiveness was reviewed with test subjects, the qualitative evaluation of landscape components using VR images generated from the omni-directional visual information yielded positive ratings above the reference values for panoramic vision, scene reproduction, and three-dimensional perspective. These results indicate that qualitative evaluation of omni-directional image information, and landscape studies based on it, can be pursued further.