• Title/Summary/Keyword: Fisheye lens calibration

Calibration of Fisheye Lens Images Using a Spiral Pattern and Compensation for Geometric Distortion (나선형 패턴을 사용한 어안렌즈 영상 교정 및 기하학적 왜곡 보정)

  • Kim, Seon-Yung; Yoon, In-Hye; Kim, Dong-Gyun; Paik, Joon-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP, v.49 no.4, pp.16-22, 2012
  • In this paper, we present a spiral pattern suited to an optical simulator for calibrating a fisheye lens and compensating for its geometric distortion. Using the spiral pattern, calibration is performed without any prior mathematical lens model. The proposed spiral pattern serves as the input image of the optical simulator, and the fisheye lens is calibrated by matching the geometrically displaced dots in the fisheye image to the corresponding dots of the original pattern, so no mathematical model is required in advance. Because the proposed algorithm relies only on this dot matching between the spiral-pattern image and the distorted image, it is both simple and effective. The algorithm can be applied to pattern-recognition tasks that require accurate measurements from a fisheye lens, such as digital zooming, and it enables compensation of geometric distortion and calibration of fisheye images in a variety of image-processing applications.
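
The dot-matching idea above lends itself to a model-free remap table. The sketch below is a rough illustration only: it interpolates a dense undistortion map from sparse matched dot pairs. The names `distorted_pts`, `reference_pts`, and the helper functions are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch: model-free undistortion from matched dot pairs.
# distorted_pts / reference_pts are assumed Nx2 arrays of matched dot centers.
import numpy as np
import cv2
from scipy.interpolate import griddata

def build_undistortion_maps(distorted_pts, reference_pts, out_size):
    """Interpolate a dense remap table from sparse dot correspondences."""
    h, w = out_size
    grid_y, grid_x = np.mgrid[0:h, 0:w]
    # For each pixel of the corrected view, estimate where it came from
    # in the distorted image (scattered-data interpolation, no lens model).
    map_x = griddata(reference_pts, distorted_pts[:, 0],
                     (grid_x, grid_y), method='linear', fill_value=-1)
    map_y = griddata(reference_pts, distorted_pts[:, 1],
                     (grid_x, grid_y), method='linear', fill_value=-1)
    return map_x.astype(np.float32), map_y.astype(np.float32)

def undistort(img, map_x, map_y):
    """Apply the interpolated maps to a distorted fisheye image."""
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
```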

Camera Calibration and Barrel Undistortion for Fisheye Lens (차량용 어안렌즈 카메라 캘리브레이션 및 왜곡 보정)

  • Heo, Joon-Young; Lee, Dong-Wook
    • The Transactions of The Korean Institute of Electrical Engineers, v.62 no.9, pp.1270-1275, 2013
  • A great deal of research has addressed camera calibration and lens-distortion correction for wide-angle lenses. Calibration of fisheye lenses with a field of view (FOV) of 180 degrees or more is especially tricky, so existing work has relied on very large calibration patterns or even 3D patterns. It is also important that the calibration parameters (such as the distortion coefficients) be suitably initialized to obtain accurate results; for lenses with a relatively narrow FOV (135 or 150 degrees) this can be done using manufacturer information or a least-squares method. In this paper, camera calibration and barrel undistortion for a fisheye lens with an FOV over 180 degrees are achieved from a single calibration-pattern image, without any prior manufacturer information. We apply QR decomposition for initialization and regularization for optimization. Experimental results verify that the algorithm performs camera calibration and image undistortion successfully.
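
As a simple point of reference (not the paper's QR-initialized, regularized method), barrel undistortion for a wide-FOV lens can be sketched with an ideal equidistant model, r = f·θ; the parameters `f_fish`, `cx`, `cy`, and `f_out` below are assumed values.

```python
# Sketch: map a perspective output view back into an equidistant fisheye image.
import numpy as np
import cv2

def equidistant_to_perspective(fisheye_img, f_fish, cx, cy, f_out, out_size):
    h, w = out_size
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Ray direction for each pixel of the desired perspective view.
    x = (u - w / 2.0) / f_out
    y = (v - h / 2.0) / f_out
    theta = np.arctan(np.sqrt(x**2 + y**2))   # angle from the optical axis
    phi = np.arctan2(y, x)                    # azimuth
    r = f_fish * theta                        # equidistant projection r = f*theta
    map_x = (cx + r * np.cos(phi)).astype(np.float32)
    map_y = (cy + r * np.sin(phi)).astype(np.float32)
    return cv2.remap(fisheye_img, map_x, map_y, cv2.INTER_LINEAR)
```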

Fisheye Image Correction with Ellipsoid Model (타원체 모형을 통한 어안 영상 보정)

  • Kim, Hyun-Tae
    • The Journal of the Korea Institute of Electronic Communication Sciences, v.10 no.2, pp.177-182, 2015
  • General methods for correcting the distortion caused by the characteristics of a fisheye lens fall into two categories: calibration methods that use a mathematical model accounting for the lens characteristics, and methods that correct the distorted image alone, regardless of the lens. When the lens characteristics are considered, the calibration equations can be derived geometrically from the relationship between three-dimensional real-world coordinates, two-dimensional image coordinates, and the lens parameters. However, existing work has corrected spherical-type fisheye lenses, so these models are not suitable for ellipsoid-type lenses. In this paper, we propose a geometric correction method that treats the fisheye lens as an ellipsoid model. Calibration images show that the proposed method is valid.
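
One very simple reading of the ellipsoid idea, given purely as an assumed illustration, is to rescale the image axes so the elliptical fisheye footprint becomes circular before applying a spherical correction; `a` and `b` are hypothetical semi-axes of the footprint, not the paper's parameters.

```python
# Illustrative preprocessing: rescale the horizontal axis so an a x b
# elliptical fisheye footprint becomes a circle of radius b.
import cv2

def circularize(fisheye_img, a, b):
    h, w = fisheye_img.shape[:2]
    new_w = int(round(w * b / a))
    return cv2.resize(fisheye_img, (new_w, h), interpolation=cv2.INTER_LINEAR)
```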

Verification Method of Omnidirectional Camera Model by Projected Contours (사영된 컨투어를 이용한 전방향 카메라 모델의 검증 방법)

  • Hwang, Yong-Ho; Lee, Jae-Man; Hong, Hyun-Ki
    • Proceedings of the HCI Society of Korea Conference, 2007.02a, pp.994-999, 2007
  • Because an omnidirectional camera system can acquire a large amount of information about the surrounding scene from relatively few images, research on self-calibration and 3D reconstruction using omnidirectional images is being actively pursued. This paper proposes a new method for verifying the accuracy of a projection model estimated with previously proposed calibration methods. Straight-line segments, which are abundant in the real world, are projected onto an omnidirectional image as contours, and their trajectories can be estimated from the projection model and the coordinates of the two endpoints of each contour. The accuracy of the omnidirectional camera's projection model can then be verified from the distance error between the estimated contour trajectory and the contour actually observed in the image. To evaluate the performance of the proposed method, the algorithm was applied to spherically mapped synthetic images and to real images acquired with a fisheye lens, and the accuracy of the projection model was assessed.
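
The distance-error measure described above can be sketched directly: sample the contour predicted by the projection model, then take the mean distance to the nearest observed contour point. The point arrays and function name below are illustrative, not the authors' implementation.

```python
# Mean nearest-neighbour distance between a predicted and an observed contour.
import numpy as np
from scipy.spatial import cKDTree

def contour_distance_error(predicted_pts, observed_pts):
    """predicted_pts, observed_pts: Nx2 arrays of pixel coordinates."""
    tree = cKDTree(observed_pts)
    dists, _ = tree.query(predicted_pts)   # distance to closest observed point
    return float(np.mean(dists))
```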

A Study on Fisheye Lens based Features on the Ceiling for Self-Localization (실내 환경에서 자기위치 인식을 위한 어안렌즈 기반의 천장의 특징점 모델 연구)

  • Choi, Chul-Hee; Choi, Byung-Jae
    • Journal of the Korean Institute of Intelligent Systems, v.21 no.4, pp.442-448, 2011
  • Many research results exist on self-localization techniques for mobile robots. In this paper, we present a self-localization technique based on ceiling-vision features captured through a fisheye lens. Features obtained by SIFT (Scale Invariant Feature Transform) are matched between the previous image and the current image, from which an optimal fitting function is derived. Since a fisheye lens naturally distorts its images, the images must first be calibrated. We propose methods for calibrating the distorted images and for designing a geometric fitness model. The proposed method is applied to laboratory and aisle environments, and we demonstrate its feasibility in indoor settings.
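
The SIFT matching step between the previous and current ceiling images can be sketched with standard OpenCV calls; the ratio-test threshold and surrounding localization logic are assumptions, not the paper's exact procedure.

```python
# Sketch: SIFT keypoint matching between two (already undistorted) ceiling images.
import cv2

def match_ceiling_features(prev_img, curr_img, ratio=0.75):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_img, None)
    kp2, des2 = sift.detectAndCompute(curr_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    # Lowe's ratio test keeps only distinctive matches.
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]
    return kp1, kp2, good
```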

Vision-based Mobile Robot Localization and Mapping using fisheye Lens (어안렌즈를 이용한 비전 기반의 이동 로봇 위치 추정 및 매핑)

  • Lee, Jong-Shill; Min, Hong-Ki; Hong, Seung-Hong
    • Journal of the Institute of Convergence Signal Processing, v.5 no.4, pp.256-262, 2004
  • A key capability of an autonomous mobile robot is to localize itself and build a map of the environment simultaneously. In this paper, we propose a vision-based localization and mapping algorithm for a mobile robot using a fisheye lens. To acquire high-level features with scale invariance, a camera with a fisheye lens facing the ceiling is mounted on the robot, and these features are used for map building and localization. As preprocessing, the input image from the fisheye lens is calibrated to remove radial distortion, and labeling and convex-hull techniques are then used to segment the ceiling and wall regions in the calibrated image. During initial map building, features are calculated for each segmented region and stored in the map database. Features are then computed continuously for subsequent input images and matched against the map; when some features cannot be matched, they are added to the map. This matching-and-updating process continues until map building is finished. Localization is performed both during map building and when searching for the robot's location on the map: the features calculated at the robot's position are matched to the existing map to estimate its true position, and the map database is updated at the same time. With the proposed method, the elapsed time for map building is within 2 minutes for a 50 ㎡ region, the positioning accuracy is ±13 cm, and the error in the robot's heading angle is ±3 degrees.
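
The labeling and convex-hull preprocessing mentioned above can be sketched as follows; the binary ceiling mask and the choice of the largest component are assumptions made for illustration.

```python
# Sketch: pick the largest connected component of a binary mask and return
# its convex hull, as one way to isolate the ceiling region.
import cv2
import numpy as np

def largest_region_hull(binary_mask):
    num, labels, stats, _ = cv2.connectedComponentsWithStats(binary_mask)
    if num < 2:               # only background present
        return None
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])   # skip label 0
    pts = np.column_stack(np.where(labels == largest))[:, ::-1]  # (x, y) order
    return cv2.convexHull(pts.astype(np.int32))
```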

Geometric Correction of Vehicle Fish-eye Lens Images (차량용 어안렌즈영상의 기하학적 왜곡 보정)

  • Kim, Sung-Hee; Cho, Young-Ju; Son, Jin-Woo; Lee, Joong-Ryoul; Kim, Myoung-Hee
    • Proceedings of the HCI Society of Korea Conference, 2009.02a, pp.601-605, 2009
  • Because a fisheye lens can provide an extremely wide angle, with a field of view over 180 degrees, using a minimum number of cameras, many vehicles now mount such camera systems. Camera calibration must be performed first, and geometric correction of the radial distortion is needed before the images can be used for driver assistance. However, vehicle fisheye cameras produce diagonal rather than circular images and exhibit asymmetric distortion beyond the horizontal angle. In this paper, we introduce a camera model and a metric calibration method for vehicle cameras that uses feature points of the image, and we undistort the input image through a perspective projection in which straight lines appear straight. The method fits vehicle fisheye lenses with different fields of view.
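
The perspective-projection undistortion mentioned above (straight lines appearing straight) can be sketched with OpenCV's generic fisheye model. This is only an assumed stand-in: `K` and `D` would come from a prior calibration, and the paper's asymmetric vehicle-lens model is not reproduced.

```python
# Illustrative only: generic OpenCV fisheye undistortion to a perspective view.
# K (3x3 intrinsics) and D (4 distortion coefficients) are assumed inputs.
import cv2
import numpy as np

def undistort_to_perspective(img, K, D, balance=0.0):
    h, w = img.shape[:2]
    # Choose an output camera matrix; balance trades cropping vs. black borders.
    new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
        K, D, (w, h), np.eye(3), balance=balance)
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
    # Backward mapping: straight world lines become straight in the output.
    return cv2.remap(img, map1, map2, cv2.INTER_LINEAR)
```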

3D Analysis of Scene and Light Environment Reconstruction for Image Synthesis (영상합성을 위한 3D 공간 해석 및 조명환경의 재구성)

  • Hwang, Yong-Ho; Hong, Hyun-Ki
    • Journal of Korea Game Society, v.6 no.2, pp.45-50, 2006
  • In order to generate a photo-realistic synthesized image, the light environment must be reconstructed through 3D analysis of the scene. This paper presents a novel method for identifying the positions and characteristics of the lights (both global and local) in a real image, which are then used to illuminate synthetic objects. First, we generate a High Dynamic Range (HDR) radiance map from omnidirectional images taken by a digital camera with a fisheye lens. The positions of the camera and the light sources in the scene are then identified automatically from the correspondences between images, without prior camera calibration. Light sources are classified according to whether they illuminate the whole scene, and the 3D illumination environment is reconstructed. Experimental results showed that the proposed method, combined with distributed ray tracing, achieves photo-realistic image synthesis. Animators and lighting experts in the film and animation industries are expected to benefit highly from it.
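
The HDR radiance-map step can be sketched with OpenCV's Debevec routines. A minimal sketch under stated assumptions: `images` is a list of bracketed fisheye exposures and `exposure_times` their shutter times in seconds; the light classification and position estimation from the paper are not shown.

```python
# Sketch: merge bracketed exposures into a floating-point HDR radiance map.
import cv2
import numpy as np

def build_hdr_radiance_map(images, exposure_times):
    times = np.array(exposure_times, dtype=np.float32)
    # Recover the camera response curve, then merge into a radiance map.
    response = cv2.createCalibrateDebevec().process(images, times)
    hdr = cv2.createMergeDebevec().process(images, times, response)
    return hdr  # float32 image, values proportional to scene radiance
```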

Image Data Loss Minimized Geometric Correction for Asymmetric Distortion Fish-eye Lens (비대칭 왜곡 어안렌즈를 위한 영상 손실 최소화 왜곡 보정 기법)

  • Cho, Young-Ju; Kim, Sung-Hee; Park, Ji-Young; Son, Jin-Woo; Lee, Joong-Ryoul; Kim, Myoung-Hee
    • Journal of the Korea Society for Simulation, v.19 no.1, pp.23-31, 2010
  • Because a fisheye lens can provide an extremely wide angle, with a field of view over 180 degrees, using a minimum number of cameras, many vehicles now mount such camera systems. To use the camera not only as a viewing system but also as a sensor, camera calibration must be performed first, and geometric correction of the radial distortion is needed before the images can be used for driver assistance. In this paper, we introduce a geometric correction technique that minimizes image-data loss for a vehicle fisheye lens with a field of view over 180 degrees and asymmetric distortion. Geometric correction is a process in which a camera model with a distortion model is established and a corrected view is generated after the camera parameters have been estimated through calibration. First, the FOV model is used as the distortion model to imitate the asymmetric distortion configuration. Because the horizontal view of the vehicle fisheye lens is asymmetrically wide for the driver, we then unify the axis ratio and estimate the parameters with a non-linear optimization algorithm. Finally, we create a corrected view by backward mapping and provide a function to optimize the ratio between the horizontal and vertical axes. This minimizes image-data loss and improves visual perception when the input image is undistorted through a perspective projection.
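
The FOV distortion model named above has a convenient closed form (Devernay and Faugeras); the sketch below gives the forward map and the inverse used in backward mapping on normalized radial distances. The single parameter `omega` is illustrative, and the paper's asymmetric axis-ratio handling is not included.

```python
# FOV (field-of-view) distortion model on normalized radii; omega is the
# single model parameter (the lens's apparent field of view in radians).
import numpy as np

def fov_distort(r_u, omega):
    """Undistorted radius -> distorted radius."""
    return np.arctan(2.0 * r_u * np.tan(omega / 2.0)) / omega

def fov_undistort(r_d, omega):
    """Distorted radius -> undistorted radius (used for backward mapping)."""
    return np.tan(r_d * omega) / (2.0 * np.tan(omega / 2.0))
```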

Calibration of Omnidirectional Camera by Considering Inlier Distribution (인라이어 분포를 이용한 전방향 카메라의 보정)

  • Hong, Hyun-Ki; Hwang, Yong-Ho
    • Journal of Korea Game Society, v.7 no.4, pp.63-70, 2007
  • Since a fisheye lens has a wide field of view, it can capture the scene and its illumination in all directions from far fewer omnidirectional images. Because of these advantages, the omnidirectional camera is widely used in surveillance and in reconstruction of the 3D structure of a scene. In this paper, we present a new self-calibration algorithm for an omnidirectional camera from uncalibrated images that takes the inlier distribution into account. First, a one-parameter non-linear projection model of the omnidirectional camera is estimated using known rotation and translation parameters. Once the projection model is derived, we can compute the essential matrix of the camera under unknown motion and then determine the camera's rotation and translation. Standard deviations are used as a quantitative measure to select a proper inlier set. The experimental results show that the omnidirectional camera model and the extrinsic parameters, including rotation and translation, can be estimated precisely.
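
The essential-matrix step can be sketched with standard OpenCV calls; as a stand-in for the inlier-distribution criterion, the sketch simply reports the standard deviation of the inlier coordinates. This is an assumed illustration, not the authors' algorithm: `pts1` and `pts2` are matched Nx2 point arrays and `K` is the camera matrix.

```python
# Rough sketch: essential matrix + relative pose from point matches,
# with the inlier spread (std. deviation) as a simple distribution measure.
import cv2
import numpy as np

def estimate_motion(pts1, pts2, K):
    E, mask = cv2.findEssentialMat(pts1, pts2, K,
                                   method=cv2.RANSAC, threshold=1.0)
    inliers = pts1[mask.ravel() > 0]
    spread = inliers.std(axis=0)   # wider spread -> better-conditioned inlier set
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t, spread
```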
