• Title/Abstract/Keyword: omni-directional image

54 search results

Omni-directional Image Generation Algorithm with Parametric Image Compensation

  • 김유나;심동규
    • Journal of Broadcast Engineering / Vol. 11, No. 4 / pp. 396-406 / 2006
  • In this paper, we propose a method for generating an omni-directional image through parametric image compensation. To produce an omni-directional image that covers every viewing direction while minimizing the distortion and errors that arise during image compositing, the proposed method transforms planar images into a circular image based on a circular coordinate system. Because each image acquired by a camera contains vignetting effects and illumination changes caused by the camera system and external conditions, optimal vignetting and illumination parameters are extracted and an image compensation step is performed to generate a natural omni-directional image. Experimental results show the omni-directional images generated by the proposed method and compare its performance against a method that ignores vignetting and illumination changes; the proposed method is about 1~4 dB more effective.
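The compensation step described above can be sketched as follows. This is an illustrative model only: the polynomial falloff coefficients `a`, `b`, the exposure gain, and the function name are assumptions, not the paper's actual parameterization.

```python
import numpy as np

def compensate_vignetting(img, a, b, exposure=1.0):
    """Divide out a radial polynomial vignetting model and a global
    exposure (illumination) gain.  `a` and `b` stand in for fitted
    vignetting parameters; all names here are illustrative."""
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    # squared distance from the image center, normalized to [0, 1]
    r2 = ((xx - w / 2) ** 2 + (yy - h / 2) ** 2) / ((w / 2) ** 2 + (h / 2) ** 2)
    falloff = 1.0 + a * r2 + b * r2 ** 2   # polynomial falloff model
    return img / (falloff * exposure)
```

Inverting the same model that darkened the corners restores a flat image exactly; with real photographs the parameters would have to be estimated, as the paper does.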

Localization of Mobile Robot Using Active Omni-directional Ranging System

  • 류지형;김진원;이수영
    • Journal of Institute of Control, Robotics and Systems / Vol. 14, No. 5 / pp. 483-488 / 2008
  • An active omni-directional ranging system combining omni-directional vision with structured light has many advantages over conventional ranging systems: robustness against external illumination noise, thanks to the laser structured light, and computational efficiency, since a single shot from the omni-directional vision captures 360° of environment information. The omni-directional range data represent a local distance map at a given position in the workspace. In this paper, we propose a matching algorithm between the local distance map and a given global map database, and thereby localize a mobile robot in the global workspace. Since the global map database generally consists of line segments representing the edges of environment objects, the matching algorithm is based on the relative positions and orientations of line segments in the local and global maps. The effectiveness of the proposed omni-directional ranging system and matching algorithm is verified through experiments.
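One ingredient of such line-segment matching can be sketched compactly: recovering the rotation between the local and global maps by voting over pairwise orientation differences. The function name and the voting scheme are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def estimate_rotation(local_angles, global_angles, bins=360):
    """Vote over all pairwise segment-orientation differences; the
    modal difference estimates the rotation between the local map
    and the global map (a sketch, not the paper's full matcher)."""
    votes = np.zeros(bins)
    for la in local_angles:
        for ga in global_angles:
            # quantize the cyclic difference into a histogram bin
            k = int(((ga - la) % (2 * np.pi)) / (2 * np.pi) * bins) % bins
            votes[k] += 1
    # return the center of the winning bin
    return (np.argmax(votes) + 0.5) * 2 * np.pi / bins
```

Correctly corresponding segment pairs all vote for the true rotation, while mismatched pairs scatter their votes; relative positions would then disambiguate translation in the same spirit.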

Coordinate Calibration and Object Tracking of the ODVS

  • 박용민;남현정;차의영
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2005 Fall Conference of the Korea Institute of Maritime Information and Communication Sciences / pp. 408-413 / 2005
  • This paper proposes a 3D paraboloid coordinate transformation for tracking a moving object in an omni-directional image and estimating its position in real-world distance coordinates. A hue-histogram matching technique is used to extract moving objects in real time. The image coordinates of an object extracted by hue-histogram matching are substituted, together with environment parameters such as the camera height and focal length, into the proposed 3D paraboloid coordinate transformation function to estimate the object's real-world position. Experimental results show that the proposed method extracts moving objects relatively robustly against lighting changes and estimates their real-distance coordinates from the omni-directional image.
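The hue-histogram matching used for extraction can be sketched as below. The bin count and the intersection measure are illustrative assumptions; the paper does not specify its exact matching score.

```python
import numpy as np

def hue_histogram(hues, bins=36):
    """Normalized histogram of hue values in degrees (0-360)."""
    h, _ = np.histogram(hues, bins=bins, range=(0.0, 360.0))
    return h / max(h.sum(), 1)

def histogram_intersection(p, q):
    """Similarity in [0, 1]; 1.0 means identical hue distributions.
    Hue is relatively insensitive to brightness, which is why such
    matching tolerates lighting changes."""
    return float(np.minimum(p, q).sum())
```

A candidate region whose hue histogram has a high intersection with the tracked object's histogram is accepted as the object, largely independent of how bright the scene is.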


Localization of 3D Spatial Information from Single Omni-Directional Image

  • 강현덕;조강현
    • Journal of Institute of Control, Robotics and Systems / Vol. 12, No. 7 / pp. 686-692 / 2006
  • This paper shows the calculation of 3D geometric information such as height, direction, and distance under the constraints of a catadioptric camera system. The catadioptric camera system satisfies the single-viewpoint constraint by adopting a hyperboloidal mirror. To calculate 3D information from a single omni-directional image, the measured points are assumed to lie on lines perpendicular to the ground. The plane at infinity is also detected as a circle from the structure of the mirror and camera. Experiments verify the correctness of the theory using real images taken in indoor environments such as rooms and corridors, and the results show that 3D geometric information can be calculated from single omni-directional images.

Omni Camera Vision-Based Localization for Mobile Robots Navigation Using Omni-Directional Images

  • 김종록;임미섭;임준홍
    • Journal of Institute of Control, Robotics and Systems / Vol. 17, No. 3 / pp. 206-210 / 2011
  • Vision-based robot localization is challenging due to the vast amount of visual information involved, which requires extensive storage and processing time. To deal with these challenges, we propose the use of features extracted from omni-directional panoramic images and present a localization method for a mobile robot equipped with an omni-directional camera. The core of the proposed scheme can be summarized as follows. First, we use an omni-directional camera that captures instantaneous 360° panoramic images around the robot. Second, nodes around the robot are extracted using the correlation coefficients of the CHL (Circular Horizontal Line) between landmark images and the currently captured image. Third, the robot position is determined from these locations by the proposed correlation-based landmark image matching. To accelerate computation, node candidates are pre-selected using color information, and the correlation values are calculated with Fast Fourier Transforms. Experiments show that the proposed method is effective for global localization of mobile robots and robust to lighting variations.
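The FFT-accelerated CHL matching lends itself to a compact sketch: circular cross-correlation over all cyclic shifts in O(n log n). Function names are illustrative, and the papers' node bookkeeping and color pre-selection are omitted.

```python
import numpy as np

def circular_correlation(a, b):
    """Normalized correlation of two 1-D circular horizontal lines
    (CHLs) over every cyclic shift, computed via the FFT
    convolution theorem instead of O(n^2) direct shifting."""
    a = a - a.mean()
    b = b - b.mean()
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
    norm = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return corr / norm if norm > 0 else corr

def best_shift(a, b):
    """Cyclic shift of `b` (in samples along the 360-degree line)
    that best aligns it with `a`, i.e. the relative heading."""
    return int(np.argmax(circular_correlation(a, b)))
```

The peak correlation value scores how well a stored landmark matches the current view, and the peak's position gives the robot's heading relative to that landmark.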

A Study of Selecting Sequential Viewpoint and Examining the Effectiveness of Omni-directional Angle Image Information in Grasping the Characteristics of Landscape

  • 김홍만;이인희
    • KIEAE Journal / Vol. 9, No. 2 / pp. 81-90 / 2009
  • To capture the sequential characteristics of a landscape while accounting for the perceptual behavior of a moving observer, this study examined the main walking routes of visitors to Korea's Three Jewel Buddhist temples. As a method of obtaining data on sequentially perceived landscapes, sequential viewpoints were set at arbitrary intervals along the route and omni-directional visual information was captured with a fisheye-lens camera. Regarding viewpoint selection, factors such as the form of the approach road, changes in the circulation axis, changes in ground-surface level, and the appearance of objects were found to have an effect, with approach road form and circulation-axis changes being the greatest influences. In addition, when subjects used the VR images obtained from the omni-directional visual information for qualitative evaluation of landscape components, positive results above threshold values were obtained for panoramic view, scene reproduction, and three-dimensional perspective. This suggests that qualitative landscape evaluation based on omni-directional image information can be actively applied in future landscape studies.

Moving Target Tracking using Vision System for an Omni-directional Wheel Robot

  • 김산;김동환
    • Journal of Institute of Control, Robotics and Systems / Vol. 14, No. 10 / pp. 1053-1061 / 2008
  • This paper addresses moving-target tracking using binocular vision for an omni-directional mobile robot. With the binocular vision system, three-dimensional information about the target is extracted through vision processes including calibration, image correspondence, and 3D reconstruction. The robot controller uses SPI (Serial Peripheral Interface) for efficient communication between the robot master controller and the wheel controllers.
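At the heart of binocular 3D reconstruction is the classic rectified-stereo depth relation, sketched below. The assumption that the paper's reconstruction reduces to this form, and all parameter values, are illustrative.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Rectified binocular stereo depth: Z = f * B / d, where f is
    the focal length in pixels, B the camera baseline in meters,
    and d the disparity of a corresponding point pair in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

Once correspondence gives the disparity of the target between the two calibrated images, its depth follows directly, which is what makes the calibration and correspondence stages the hard part.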

Global Localization of Mobile Robots Using Omni-directional Images

  • 한우섭;민승기;노경식;윤석준
    • Transactions of the Korean Society of Mechanical Engineers A / Vol. 31, No. 4 / pp. 517-524 / 2007
  • This paper presents a global localization method using circular correlation of omni-directional images. Localization of a mobile robot, especially in indoor conditions, is a key component in the development of useful service robots. Although stereo vision is widely used for localization, its performance is limited by computational complexity and its narrow view angle. To compensate for these shortcomings, we use a single omni-directional camera that captures instantaneous 360° panoramic images around the robot. Nodes around the robot are extracted using the correlation coefficients of the CHL (Circular Horizontal Line) between landmark images and the currently captured image. After finding possible nearby nodes, the robot moves to the nearest node based on the correlation values and the positions of these nodes. To accelerate computation, the correlation values are calculated with Fast Fourier Transforms. Experimental results in a real home environment demonstrate the feasibility of the method.

Depth estimation by using a double conic projection

  • 김완수;조형석;김성권
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 1997 Korea Automatic Control Conference; KEPCO Seoul Training Institute; 17-18 Oct. 1997 / pp. 1411-1414 / 1997
  • Distance information is essential for completing assembly tasks such as grasping and insertion. In this paper, we propose a method for estimating the distance from the sensor to an object using the omni-directional image sensing system for assembly (OISSA), and show its features and feasibility through computer simulation. The method, based on a forward motion stereo technique, makes the search for corresponding points simple and immediately yields three-dimensional 2π shape information.
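Forward motion stereo in general can be sketched with a pinhole model: as the sensor advances, a fixed point slides outward along a radial line, which is why correspondence search is simple, and its depth follows from how far it slid. This is a sketch of the generic technique under a pinhole assumption, not of OISSA's double conic geometry.

```python
def depth_from_forward_motion(r1, r2, baseline):
    """Depth of a scene point from its radial image coordinates
    before (r1) and after (r2) the sensor moves forward by
    `baseline`.  From the pinhole relation r = f * X / Z:
        r1 * Z = f * X  and  r2 * (Z - baseline) = f * X,
    hence Z = baseline * r2 / (r2 - r1)."""
    if r2 <= r1:
        raise ValueError("the point must move outward under forward motion")
    return baseline * r2 / (r2 - r1)
```

Because the epipole of a forward motion is the image center, each correspondence is constrained to a single radial line, collapsing the 2D search to 1D.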


Two-Dimensional Depth Data Measurement using an Active Omni-Directional Range Sensor

  • 정인수;조형석
    • Journal of Institute of Control, Robotics and Systems / Vol. 5, No. 4 / pp. 437-445 / 1999
  • Most autonomous mobile robots view only what is in front of them, and as a result may collide with objects moving in from the side or behind. To overcome this problem, an active omni-directional range sensor system has been built that can obtain an omni-directional depth map through the use of a laser conic plane and a conic mirror. During navigation of the mobile robot, the proposed sensor system produces a laser conic plane by rotating the laser point source at high speed; this creates a two-dimensional depth map, in real time, each time an image is captured. Experimental results show that the proposed sensor system is very efficient and can be used for navigation of a mobile robot in an unknown environment.
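The range recovery behind such laser-plane sensors is plain triangulation: the camera ray to a lit point and the known laser plane intersect at one depth. The flat-geometry sketch below is illustrative; the paper's actual calibration involves the conic mirror and differs in detail.

```python
import math

def range_by_triangulation(phi, laser_height, laser_tilt):
    """Horizontal range to a point lit by a laser plane emitted
    `laser_height` meters above the camera center and tilted
    `laser_tilt` radians below horizontal; `phi` is the elevation
    angle of the camera ray to the lit point (negative when the
    point is below the camera).  Intersecting ray and plane gives
    D = h / (tan(phi) + tan(tilt))."""
    return laser_height / (math.tan(phi) + math.tan(laser_tilt))
```

Repeating this for every azimuth of the rotating laser plane fills in the full 360° depth map from one captured image.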
