• Title/Summary/Keyword: extrinsic calibration

Using Contour Matching for Omnidirectional Camera Calibration (투영곡선의 자동정합을 이용한 전방향 카메라 보정)

  • Hwang, Yong-Ho;Hong, Hyun-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.6 / pp.125-132 / 2008
  • An omnidirectional camera system with a wide view angle is widely used in surveillance and robotics. Most previous studies on estimating a projection model and the extrinsic parameters from omnidirectional images assume that corresponding points have been established among the views in advance. This paper presents a novel omnidirectional camera calibration based on automatic contour matching. First, we estimate the initial parameters, including translation and rotation, by applying the epipolar constraint to the matched feature points. After choosing the interest points adjacent to more than two contours, we establish a precise correspondence among the connected contours by using the initial parameters and active matching windows. The extrinsic parameters of the omnidirectional camera are then estimated by minimizing the angular errors between the epipolar planes of the contour endpoints and the inversely projected 3D vectors. Experimental results on synthetic and real images demonstrate that the proposed algorithm obtains more precise camera parameters than the previous method.
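
A minimal sketch of the first step described above, assuming the omnidirectional projection model has already been used to map the matched feature points to normalized coordinates (hypothetical arrays pts1 and pts2): the initial rotation and translation follow from the epipolar constraint via an essential matrix.

```python
import cv2
import numpy as np

def initial_extrinsics(pts1: np.ndarray, pts2: np.ndarray):
    """Estimate initial (R, t) between two views from matched points.

    pts1, pts2: (N, 2) float arrays of normalized image coordinates.
    """
    # Essential matrix from the epipolar constraint x2^T E x1 = 0,
    # with RANSAC rejecting outlier matches.
    E, inliers = cv2.findEssentialMat(
        pts1, pts2, focal=1.0, pp=(0.0, 0.0),
        method=cv2.RANSAC, prob=0.999, threshold=1e-3)
    # Decompose E, keeping the (R, t) pair that places points in front
    # of both cameras; t is recovered only up to scale.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2)
    return R, t, inliers
```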

Calibration of Thermal Camera with Enhanced Image (개선된 화질의 영상을 이용한 열화상 카메라 캘리브레이션)

  • Kim, Ju O;Lee, Deokwoo
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.4 / pp.621-628 / 2021
  • This paper proposes a method to calibrate a thermal camera with three different perspectives. In particular, the intrinsic parameters of the camera and the re-projection errors are provided to quantify the accuracy of the calibration result. The three lenses of the camera capture the same scene, but their views do not overlap, and the image resolution is worse than that of an RGB camera. In computer vision, camera calibration is one of the most important and fundamental tasks for calculating the distance between camera(s) and a target object, or the three-dimensional (3D) coordinates of a point on a 3D object. Once calibration is complete, the intrinsic and extrinsic parameters of the camera(s) are available. The intrinsic parameters comprise the focal length, skew factor, and principal point, while the extrinsic parameters comprise the relative rotation and translation of the camera(s). This study estimated the intrinsic parameters of thermal cameras that have three lenses with different perspectives. In particular, image enhancement based on a deep learning algorithm was carried out to improve the quality of the calibration results. Experimental results are provided to substantiate the proposed method.
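
A hedged sketch of the quantitative part described above: estimating intrinsic parameters from several views of a calibration pattern and reporting the mean reprojection error. The inputs obj_points, img_points, and image_size are assumptions; the deep-learning enhancement step is outside this snippet.

```python
import cv2
import numpy as np

def calibrate_and_score(obj_points, img_points, image_size):
    """Calibrate from per-view 3D/2D point lists; return K, distortion
    coefficients, and the mean reprojection error in pixels."""
    _, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, image_size, None, None)
    total_err, total_pts = 0.0, 0
    for objp, imgp, rvec, tvec in zip(obj_points, img_points, rvecs, tvecs):
        # Re-project the 3D pattern points with the estimated parameters.
        proj, _ = cv2.projectPoints(objp, rvec, tvec, K, dist)
        diff = proj.reshape(-1, 2) - imgp.reshape(-1, 2)
        total_err += np.linalg.norm(diff, axis=1).sum()
        total_pts += len(objp)
    return K, dist, total_err / total_pts
```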

Automatic Registration of Two Parts using Robot with Multiple 3D Sensor Systems

  • Ha, Jong-Eun
    • Journal of Electrical Engineering and Technology / v.10 no.4 / pp.1830-1835 / 2015
  • In this paper, we propose an algorithm for the automatic registration of two rigid parts using multiple 3D sensor systems on a robot. Four structured laser stripe systems, each consisting of a camera and a visible laser stripe, are used to acquire 3D information. Detailed procedures are presented, including the extrinsic calibration among the four 3D sensor systems and the hand/eye calibration of the 3D sensing system on the robot arm. We find the best pose using a search-based pose estimation algorithm whose cost function reflects geometric constraints between the sensor systems and the target objects. A pose with minimum gap and height difference is found by greedy search. Experimental results on a demo system show the robustness and feasibility of the proposed algorithm.
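
For the hand/eye step mentioned above, OpenCV (4.1+) ships a solver; a minimal sketch, assuming lists of robot flange poses and sensor-to-target poses collected at matching stations (the paper's own formulation may differ):

```python
import cv2

def hand_eye(R_gripper2base, t_gripper2base, R_target2cam, t_target2cam):
    """Solve the AX = XB hand/eye problem for the sensor pose on the arm."""
    R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
        R_gripper2base, t_gripper2base,
        R_target2cam, t_target2cam,
        method=cv2.CALIB_HAND_EYE_TSAI)
    return R_cam2gripper, t_cam2gripper
```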

Zoom Lens Calibration for a Video Measuring System (컴퓨터 비젼을 이용한 정밀 측정 장비의 줌 렌즈 캘리브레이션)

  • Hahn, Kwang-Soo;Choi, Joon-Soo;Choi, Ki-Won
    • Journal of Internet Computing and Services / v.6 no.5 / pp.57-71 / 2005
  • Precise visual measurement applications, such as video measuring systems (VMS), use camera systems with a motorized zoom lens for fast and efficient measurement. In this paper, we introduce an efficient calibration method for the servo-controlled zoom lens of a VMS. For the automated zoom lens calibration of the VMS, only the zoom lens settings need to be calibrated; the parameters that vary with the zoom level are the image center and the pixel size. Other lens settings, such as focus and iris, do not need to be calibrated since they are usually fixed. Calibrating the camera at every zoom level takes a great deal of time and effort. We therefore also propose an efficient and fast zoom lens calibration method, which computes the calibration parameters for a minimum number of zoom levels and estimates the parameters of the remaining zoom levels by interpolating the computed values.
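
A sketch of the interpolation idea, with hypothetical zoom levels and parameter values standing in for real calibration data; a spline could replace the linear interpolation used here.

```python
import numpy as np

# Hypothetical data: zoom levels that were actually calibrated, with the
# image center x-coordinate (px) and effective pixel size (um) at each.
calibrated_zoom = np.array([1.0, 2.0, 4.0, 8.0])
center_x = np.array([321.4, 319.8, 318.1, 316.5])
pixel_size = np.array([4.65, 2.33, 1.16, 0.58])

def params_at(zoom: float):
    """Estimate parameters at an uncalibrated zoom level by interpolation."""
    return (np.interp(zoom, calibrated_zoom, center_x),
            np.interp(zoom, calibrated_zoom, pixel_size))

print(params_at(3.0))  # interpolated (center_x, pixel_size) at zoom 3.0
```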

Improved LiDAR-Camera Calibration Using Marker Detection Based on 3D Plane Extraction

  • Yoo, Joong-Sun;Kim, Do-Hyeong;Kim, Gon-Woo
    • Journal of Electrical Engineering and Technology / v.13 no.6 / pp.2530-2544 / 2018
  • In this paper, we propose an enhanced LiDAR-camera calibration method that extracts the marker plane from 3D point cloud information. In previous work, we estimated a straight line along each board edge to obtain the vertices; however, the errors in the point information along the z axis were not considered. These errors are caused by the user's selection of the board border. Because of the nature of LiDAR, the point information is separated in the horizontal direction, so the approximated straight-line model is erroneous. In the proposed work, we obtain each vertex by estimating a rectangle from a plane, rather than intersecting straight lines, in order to locate the vertices more precisely than in the previous study. The advantage of using planes is that the area is easier to select and most of the point information on the board can be used. We demonstrate through experiments that the proposed method obtains more accurate results than the previous method.
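
A minimal sketch of the plane-extraction idea, assuming points is an (N, 3) array of LiDAR returns lying roughly on the marker board; the paper's rectangle estimation and vertex recovery build on top of such a fit.

```python
import numpy as np

def fit_plane(points: np.ndarray):
    """Least-squares plane fit: returns (normal, d) with n . x + d = 0."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value of the
    # centered points is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return normal, -normal.dot(centroid)
```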

A Calibration Method for Multimodal dual Camera Environment (멀티모달 다중 카메라의 영상 보정방법)

  • Lim, Su-Chang;Kim, Do-Yeon
    • Journal of the Korea Institute of Information and Communication Engineering / v.19 no.9 / pp.2138-2144 / 2015
  • A multimodal dual camera system has a stereo-like configuration equipped with an infrared thermal camera and an optical camera. This paper presents a stereo calibration method for a multimodal dual camera system using a target board that can be recognized by both the thermal and the optical camera. While a typical stereo calibration is usually performed with extracted intrinsic and extrinsic camera parameters, this paper applies consecutive image processing steps as follows. First, corner points are detected in the two images, and the pixel error rate, the size difference, and the rotation angle between the two images are calculated from the pixel coordinates of the detected corners. Second, calibration is performed with the calculated values via an affine transform. Lastly, the result image is reconstructed by mapping regions onto the calibrated image.
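
A hedged sketch of the last two steps, assuming corners_thermal and corners_optical are matched (N, 2) corner arrays from the detection stage; the robust affine fit and warp below stand in for the paper's exact transform computation.

```python
import cv2
import numpy as np

def align_thermal_to_optical(thermal_img, optical_shape,
                             corners_thermal, corners_optical):
    """Warp the thermal image into the optical image's pixel frame."""
    # 2x3 affine matrix covering translation, rotation, and scale,
    # fit robustly with RANSAC over the corner correspondences.
    M, inliers = cv2.estimateAffine2D(corners_thermal, corners_optical)
    h, w = optical_shape[:2]
    return cv2.warpAffine(thermal_img, M, (w, h))
```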

Head tracking system using image processing (영상처리를 이용한 머리의 움직임 추적 시스템)

  • 박경수;임창주;반영환;장필식
    • Journal of the Ergonomics Society of Korea / v.16 no.3 / pp.1-10 / 1997
  • This paper is concerned with the development and evaluation of a camera calibration method for a real-time head tracking system. Tracking head movements is important in the design of eye-controlled human/computer interfaces and in virtual environments. We propose a video-based head tracking system. A camera mounted on the subject's head captures the front view containing eight 3-dimensional reference points (passive retro-reflecting markers) fixed at known positions on a computer monitor. The reference points were captured by an image processing board and used to calculate the 3-dimensional position and orientation of the camera. A suitable camera calibration method for providing accurate extrinsic camera parameters is proposed. The method has three steps. In the first step, the image center is calibrated using the method of varying focal length. In the second step, the focal length and the scale factor are calibrated from the Direct Linear Transformation (DLT) matrix obtained from the known position and orientation of the camera. In the third step, the position and orientation of the camera are calculated from the DLT matrix, using the calibrated intrinsic camera parameters. Experimental results showed that the average error of the camera position (3-dimensional) is about 0.53 cm, the angular errors of the camera orientation are less than $0.55^{\circ}$, and the data acquisition rate is about 10 Hz. These results can be applied to tracking head movements for eye-controlled human/computer interfaces and virtual environments.
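
A minimal sketch of the DLT computation at the core of the second and third steps, assuming X is an (N, 3) array of known reference points and x the matching (N, 2) image points, with N >= 6:

```python
import numpy as np

def dlt(X: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Solve for the 3x4 projection matrix P with x ~ P X (homogeneous)."""
    rows = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        Xh = [Xw, Yw, Zw, 1.0]
        # Each correspondence yields two linear constraints on P.
        rows.append(Xh + [0.0] * 4 + [-u * c for c in Xh])
        rows.append([0.0] * 4 + Xh + [-v * c for c in Xh])
    # The singular vector of the smallest singular value solves A p = 0.
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1].reshape(3, 4)

def camera_center(P: np.ndarray) -> np.ndarray:
    """Camera position: the null space of P, since P C = 0."""
    _, _, vt = np.linalg.svd(P)
    C = vt[-1]
    return C[:3] / C[3]
```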

Refinements of Multi-sensor based 3D Reconstruction using a Multi-sensor Fusion Disparity Map (다중센서 융합 상이 지도를 통한 다중센서 기반 3차원 복원 결과 개선)

  • Kim, Si-Jong;An, Kwang-Ho;Sung, Chang-Hun;Chung, Myung-Jin
    • The Journal of Korea Robotics Society / v.4 no.4 / pp.298-304 / 2009
  • This paper describes an algorithm that improves 3D reconstruction results using a multi-sensor fusion disparity map. LRF (Laser Range Finder) 3D points can be projected onto image pixel coordinates using the camera-LRF extrinsic calibration matrices (${\Phi}$, ${\Delta}$) and the camera calibration matrix (K). The LRF disparity map is generated by interpolating the projected LRF points. In the stereo reconstruction, invalid points caused by repeated patterns and textureless regions are compensated using the LRF disparity map; the disparity map resulting from this compensation is the multi-sensor fusion disparity map. Using it, we can refine the multi-sensor 3D reconstruction based on stereo vision and the LRF. The refinement algorithm is specified in four subsections dealing with virtual LRF stereo image generation, LRF disparity map generation, multi-sensor fusion disparity map generation, and the 3D reconstruction process. It has been tested with synchronized stereo image pairs and LRF 3D scan data.
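
A hedged sketch of the projection stated above, with a rotation R and translation t standing in for the paper's (${\Phi}$, ${\Delta}$) and K the camera calibration matrix; lrf_points is an assumed (N, 3) array in the LRF frame, and lens distortion is ignored.

```python
import numpy as np

def project_lrf_to_image(lrf_points, R, t, K):
    """Map LRF 3D points to (N, 2) pixel coordinates."""
    cam = lrf_points @ R.T + t        # LRF frame -> camera frame
    uvw = cam @ K.T                   # camera frame -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide
```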

Accurate Camera Calibration Method for Multiview Stereoscopic Image Acquisition (다중 입체 영상 획득을 위한 정밀 카메라 캘리브레이션 기법)

  • Kim, Jung Hee;Yun, Yeohun;Kim, Junsu;Yun, Kugjin;Cheong, Won-Sik;Kang, Suk-Ju
    • Journal of Broadcast Engineering / v.24 no.6 / pp.919-927 / 2019
  • In this paper, we propose an accurate camera calibration method for acquiring multiview stereoscopic images. Generally, camera calibration is performed using checkerboard patterns. The checkerboard pattern simplifies the feature point extraction process, and its previously known lattice structure enables accurate estimation of the relation between points on the 2-dimensional image and points in 3-dimensional space. Since the estimation accuracy of the camera parameters depends on feature matching, accurate detection of the checkerboard corners is crucial. Therefore, we propose a method that achieves accurate camera calibration through accurate corner detection. The proposed method detects checkerboard corner candidates using 1-dimensional Gaussian filters, with a succeeding corner refinement process that removes outliers from the candidates and detects checkerboard corners accurately at sub-pixel precision. To verify the proposed method, we examine the reprojection errors and the camera location estimation results to confirm the estimation accuracy of the camera intrinsic and extrinsic parameters.
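
For comparison, a minimal sketch of the conventional corner pipeline the paper improves on, using OpenCV's detector plus sub-pixel refinement; pattern_size is an assumption, and the paper replaces the detection stage with 1-dimensional Gaussian filtering and outlier rejection.

```python
import cv2

def find_corners(gray, pattern_size=(9, 6)):
    """Detect checkerboard corners and refine them to sub-pixel accuracy."""
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if not found:
        return None
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    # Iteratively refine each corner inside a local 11x11 search window.
    return cv2.cornerSubPix(gray, corners, (5, 5), (-1, -1), criteria)
```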

Omnidirectional Camera Motion Estimation Using Projected Contours (사영 컨투어를 이용한 전방향 카메라의 움직임 추정 방법)

  • Hwang, Yong-Ho;Lee, Jae-Man;Hong, Hyun-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.5 / pp.35-44 / 2007
  • Since an omnidirectional camera system with a very large field of view can capture much information about the environment from few images, various studies on calibration and 3D reconstruction using omnidirectional images have been actively presented. Most line segments of man-made objects are projected to contours under the omnidirectional camera model, so corresponding contours across image sequences are useful for computing the camera transformations, including rotation and translation. This paper presents a novel two-step minimization method to estimate the extrinsic parameters of the camera from corresponding contours. In the first step, coarse camera parameters are estimated by minimizing an angular error function between the epipolar planes and the back-projected vectors of each corresponding point. We then compute the final parameters by minimizing a distance error between the projected contours and the actual contours. Simulation results on synthetic and real images demonstrate that our algorithm achieves precise contour matching and camera motion estimation.
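
A hedged sketch of what the first minimization step could look like, assuming rays1 and rays2 are (N, 3) unit vectors obtained by back-projecting corresponding contour points through the omnidirectional model; the pose is parameterized as a Rodrigues vector plus translation, and the translation is meaningful only up to scale.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def angular_residuals(params, rays1, rays2):
    """Angle between each ray in view 2 and its epipolar plane."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    # Epipolar plane normal per match: n = t x (R r1).
    n = np.cross(t, rays1 @ R.T)
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    # Zero when the epipolar constraint r2 . n = 0 holds exactly.
    return np.arcsin(np.clip(np.abs(np.sum(rays2 * n, axis=1)), 0.0, 1.0))

def refine_pose(params0, rays1, rays2):
    """Refine a 6-vector [rotvec, t] from an initial guess params0."""
    return least_squares(angular_residuals, params0,
                         args=(rays1, rays2)).x
```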