• Title, Summary, Keyword: camera calibration

Search results: 658

Detection of Calibration Patterns for Camera Calibration with Irregular Lighting and Complicated Backgrounds

  • Kang, Dong-Joong; Ha, Jong-Eun; Jeong, Mun-Ho
    • International Journal of Control, Automation, and Systems, v.6 no.5, pp.746-754, 2008
  • This paper proposes a method to detect calibration patterns for accurate camera calibration under the complicated backgrounds and uneven lighting conditions of industrial sites. To measure object dimensions, the preprocessing step of camera calibration must be able to extract calibration points from a calibration pattern. However, industrial sites for visual inspection rarely provide lighting conditions suitable for calibrating the camera of a measurement system. This paper proposes a probabilistic criterion for detecting a local set of calibration points, which then guides the extraction of the remaining calibration points in a cluttered background under irregular lighting. Even if only a local part of the calibration pattern is visible, input data can be extracted for camera calibration. In experiments on real images, we verified that the method can calibrate cameras from poor-quality images obtained under uneven illumination and cluttered backgrounds.

Camera Calibration when the Accuracies of Camera Model and Data Are Uncertain (카메라 모델과 데이터의 정확도가 불확실한 상황에서의 카메라 보정)

  • Do, Yong-Tae
    • Journal of Sensor Science and Technology, v.13 no.1, pp.27-34, 2004
  • Camera calibration is an important and fundamental procedure for applying a vision sensor to 3D problems. Many camera calibration methods have recently been proposed, particularly in the area of robot vision. However, the reliability of the data used in calibration has seldom been considered despite its importance. In addition, no single camera model can guarantee consistently good results under varying conditions. This paper proposes methods to overcome such uncertainty in data and camera models, which is often encountered in practical calibration. Using the RANSAC (Random Sample Consensus) algorithm, the few data points with excessive errors are excluded. Artificial neural networks, combined in a two-step structure, are trained to compensate for the result of a calibration method with a particular model under a given condition. The proposed methods are useful because they can be added on top of most existing camera calibration techniques as needed. We applied them to a linear camera calibration method and obtained improved results.
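The RANSAC screening step described above, which drops the few calibration data points with excessive error before estimation, can be sketched on a toy line-fitting problem. This is a minimal illustration, not the paper's implementation; the function name and thresholds are invented for the example.

```python
import numpy as np

def ransac_line(points, n_iters=200, threshold=0.1, seed=0):
    """Minimal RANSAC: fit y = a*x + b while rejecting gross outliers,
    then refit on the surviving inliers with least squares."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if abs(x2 - x1) < 1e-12:
            continue  # degenerate vertical sample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = residuals < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # final least-squares refit on the inlier set only
    x, y = points[best_inliers, 0], points[best_inliers, 1]
    A = np.stack([x, np.ones_like(x)], axis=1)
    a, b = np.linalg.lstsq(A, y, rcond=None)[0]
    return a, b, best_inliers
```

In the paper this role is played on calibration correspondences rather than a 2D line, but the screening logic is the same: estimate from a minimal sample, keep the consensus set, refit.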

A 2-D Image Camera Calibration using a Mapping Approximation of Multi-Layer Perceptrons (다층퍼셉트론의 정합 근사화에 의한 2차원 영상의 카메라 오차보정)

  • 이문규; 이정화
    • Journal of Institute of Control, Robotics and Systems, v.4 no.4, pp.487-493, 1998
  • Camera calibration is the process of determining the coordinate relationship between a camera image and its real-world space. Accurate calibration of a camera is necessary for applications that involve quantitative measurement from camera images. However, if the camera plane is parallel or nearly parallel to the calibration board on which the 2-dimensional objects are defined (the "ill-conditioned" case), existing solution procedures do not apply well. In this paper, we propose a neural-network-based approach to camera calibration for 2D images formed by a mono camera or a pair of cameras. Multi-layer perceptrons are developed to transform the coordinates of each image point to world coordinates. The validity of the approach is tested with data points that cover the whole 2D space concerned. Experimental results for both the mono-camera and stereo-camera cases indicate that the proposed approach is comparable to Tsai's method [8]. For the stereo-camera case in particular, the approach outperforms Tsai's method as the angle between the camera optical axis and the Z-axis increases. Therefore, we believe the approach can serve as an alternative solution procedure for ill-conditioned camera calibration.
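The core idea of letting a multi-layer perceptron learn the image-to-world coordinate mapping directly can be sketched with a tiny NumPy network. This is a hypothetical minimal illustration (one tanh hidden layer, plain full-batch gradient descent), not the authors' architecture or training scheme.

```python
import numpy as np

def train_mlp(X, Y, hidden=16, lr=0.05, epochs=3000, seed=0):
    """One-hidden-layer MLP fit by gradient descent: learns the
    mapping from image coordinates X to world coordinates Y
    without an explicit camera model."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, Y.shape[1]))
    b2 = np.zeros(Y.shape[1])
    losses = []
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)   # hidden activations
        P = H @ W2 + b2            # predicted world coordinates
        E = P - Y
        losses.append(float((E ** 2).mean()))
        # backpropagation of the mean-squared error
        dW2 = H.T @ E / len(X); db2 = E.mean(0)
        dH = E @ W2.T * (1 - H ** 2)
        dW1 = X.T @ dH / len(X); db1 = dH.mean(0)
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1
    return (W1, b1, W2, b2), losses
```

Trained on image/world correspondences covering the working area, such a network absorbs lens distortion and the ill-conditioned geometry implicitly, which is the appeal the abstract describes.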


Stereo Calibration Using Support Vector Machine

  • Kim, Se-Hoon; Kim, Sung-Jin; Won, Sang-Chul
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings, pp.250-255, 2003
  • The position of a 3-dimensional (3D) point can be measured using a calibrated stereo camera, and more accurate measurement requires more accurate camera calibration. Many calibration methods exist. Simple linear methods are usually inaccurate due to nonlinear lens distortion; nonlinear methods are more accurate than linear ones but increase the computational cost and require a good initial guess; multi-step methods need some parameters of the camera to be known in advance. In recent years, explicit model-based calibration has advanced with the development of more precise camera models involving correction of lens distortion, but it still has disadvantages, so implicit calibration methods have been derived. One popular implicit calibration approach uses neural networks. In this paper, we propose an implicit stereo camera calibration method for 3D reconstruction using a support vector machine (SVM). The SVM learns the relationship between 3D coordinates and image coordinates and is robust in the presence of noise and lens distortion; simulation results are shown in Section 4.
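The implicit-calibration idea above, learning stereo image coordinates to 3D coordinates directly, can be sketched with kernel ridge regression, which stands in here for the paper's SVM regressor (both learn the same kind of implicit kernel mapping; the names and parameters below are invented for the illustration).

```python
import numpy as np

def rbf_kernel(A, B, gamma=10.0):
    """Gaussian (RBF) kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_implicit_calibration(X, Y, gamma=10.0, lam=1e-6):
    """Learn stereo image coords -> 3D coords with no camera model.

    Kernel ridge regression is used as a stand-in for SVM regression:
    fit dual weights alpha on the training correspondences, then
    predict new points as kernel combinations of the training set.
    """
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), Y)
    return lambda Xq: rbf_kernel(Xq, X, gamma) @ alpha
```

Training data would come from imaging a target with known 3D coordinates; afterwards no intrinsic or extrinsic parameters are ever made explicit, which is the defining property of implicit calibration.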


An Improved Fast Camera Calibration Method for Mobile Terminals

  • Guan, Fang-li; Xu, Ai-jun; Jiang, Guang-yu
    • Journal of Information Processing Systems, v.15 no.5, pp.1082-1095, 2019
  • Camera calibration is an important part of machine vision and close-range photogrammetry. Since current calibration methods fail to efficiently obtain ideal internal and external camera parameters with the limited computing resources of mobile terminals, this paper proposes an improved fast camera calibration method for mobile terminals. Building on a traditional calibration method, the new method introduces two-order radial distortion and tangential distortion models to establish a camera model with nonlinear distortion terms. The nonlinear least-squares Levenberg-Marquardt (L-M) algorithm is then used to optimize the parameter iteration, so the new method can quickly obtain highly precise internal and external camera parameters. Experimental results show that the new method improves both the efficiency and the precision of camera calibration. A terminal simulation experiment on a PC indicates that the time consumed by parameter iteration was reduced from 0.220 seconds to 0.063 seconds (0.234 seconds on mobile terminals) and the average reprojection error was reduced from 0.25 pixels to 0.15 pixels. The new method is therefore well suited to camera calibration on mobile terminals and can expand the application range of 3D reconstruction and close-range photogrammetry on such devices.
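The two-order radial plus tangential distortion model named above can be written out directly. In the sketch below the distortion coefficients are recovered by a plain least-squares solve, which works because they enter the model linearly; the paper optimizes the full nonlinear camera model with Levenberg-Marquardt instead. Function names are invented for the illustration.

```python
import numpy as np

def distort(xy, k1, k2, p1, p2):
    """Apply two-order radial (k1, k2) + tangential (p1, p2)
    distortion to normalized image points."""
    x, y = xy[:, 0], xy[:, 1]
    r2 = x ** 2 + y ** 2
    radial = 1 + k1 * r2 + k2 * r2 ** 2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x ** 2)
    yd = y * radial + p1 * (r2 + 2 * y ** 2) + 2 * p2 * x * y
    return np.stack([xd, yd], axis=1)

def fit_distortion(ideal, observed):
    """Recover (k1, k2, p1, p2) from ideal vs. observed points.
    The coefficients are linear in the model, so one lstsq suffices."""
    x, y = ideal[:, 0], ideal[:, 1]
    r2 = x ** 2 + y ** 2
    rows, rhs = [], []
    for i in range(len(ideal)):
        rows.append([x[i] * r2[i], x[i] * r2[i] ** 2,
                     2 * x[i] * y[i], r2[i] + 2 * x[i] ** 2])
        rhs.append(observed[i, 0] - x[i])
        rows.append([y[i] * r2[i], y[i] * r2[i] ** 2,
                     r2[i] + 2 * y[i] ** 2, 2 * x[i] * y[i]])
        rhs.append(observed[i, 1] - y[i])
    coef, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return coef  # k1, k2, p1, p2
```

In the full pipeline the ideal points are themselves unknown (they depend on the intrinsics and pose), which is why an iterative nonlinear optimizer such as L-M is needed rather than this single linear solve.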

Calibration of Structured Light Vision System using Multiple Vertical Planes

  • Ha, Jong Eun
    • Journal of Electrical Engineering and Technology, v.13 no.1, pp.438-444, 2018
  • Structured light vision systems are widely used in 3D surface profiling. Such a system is usually composed of a camera and a laser that projects a line onto the target, and calibration is necessary before 3D information can be acquired with a structured light stripe vision system. Conventional calibration algorithms find the pose of the camera and the equation of the laser's stripe plane in the camera's coordinate system, so 3D reconstruction is only possible in the camera frame. In most cases this is sufficient for the given task, but these algorithms require multiple images acquired under different poses. In this paper, we propose a calibration algorithm that works from just one shot and provides 3D reconstruction in both the camera and laser frames. This is achieved with a newly designed calibration structure that has multiple vertical planes on the ground plane. Reconstruction in both the camera and laser frames gives more flexibility in applications, and the proposed algorithm also improves the accuracy of 3D reconstruction.
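Once the laser's stripe plane n·X = d has been calibrated in the camera frame, recovering 3D stripe points reduces to a ray-plane intersection per pixel. A minimal sketch, with invented names, assuming each pixel's back-projected ray direction is already known:

```python
import numpy as np

def triangulate_stripe(pixel_dirs, plane_n, plane_d):
    """Intersect back-projected camera rays X = t * dir with the
    calibrated laser plane n . X = d; returns 3D points in the
    camera frame (one row per stripe pixel)."""
    t = plane_d / (pixel_dirs @ plane_n)  # ray scale for each pixel
    return pixel_dirs * t[:, None]
```

This is exactly why stripe-plane calibration is the prerequisite the abstract describes: without n and d, the per-pixel depth t is undetermined.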

Camera Calibration and Barrel Undistortion for Fisheye Lens (차량용 어안렌즈 카메라 캘리브레이션 및 왜곡 보정)

  • Heo, Joon-Young; Lee, Dong-Wook
    • The Transactions of The Korean Institute of Electrical Engineers, v.62 no.9, pp.1270-1275, 2013
  • Much research has been conducted on camera calibration and lens distortion for wide-angle lenses. Calibration is especially tricky for fish-eye lenses with a FOV (field of view) of 180 degrees or more, so existing work has employed huge calibration patterns or even 3D patterns. It is also important that the calibration parameters (such as the distortion coefficients) are suitably initialized to obtain accurate results; this can be achieved using manufacturer information, or a least-squares method for lenses with a relatively narrow FOV (135 or 150 degrees). In this paper, without any prior manufacturer information, camera calibration and barrel undistortion for a fish-eye lens with over 180 degrees of FOV are achieved using only one calibration pattern image. We apply QR decomposition for initialization and regularization for optimization. The experimental results verify that our algorithm achieves camera calibration and image undistortion successfully.
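At its core, barrel undistortion amounts to inverting the radial distortion curve per pixel radius. The sketch below inverts a simple one-coefficient model by Newton's method; it is a stand-in illustration only and does not reproduce the paper's over-180-degree fisheye model or its QR-based initialization.

```python
import numpy as np

def undistort_radius(r_d, k1, iters=20):
    """Invert the radial model r_d = r_u * (1 + k1 * r_u^2) by
    Newton's method, starting from the distorted radius itself.
    Maps each distorted pixel radius back to its undistorted one."""
    r_u = r_d.copy()
    for _ in range(iters):
        f = r_u * (1 + k1 * r_u ** 2) - r_d   # residual of the model
        fp = 1 + 3 * k1 * r_u ** 2            # its derivative in r_u
        r_u = r_u - f / fp
    return r_u
```

An undistorted image is then built by sampling the source image at the distorted radius corresponding to each target pixel; real fisheye models use angle-based projections rather than this polynomial, which is why they need the more careful initialization the abstract describes.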

Multi-camera System Calibration with Built-in Relative Orientation Constraints (Part 2) Automation, Implementation, and Experimental Results

  • Lari, Zahra; Habib, Ayman; Mazaheri, Mehdi; Al-Durgham, Kaleel
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.32 no.3, pp.205-216, 2014
  • Multi-camera systems have been widely used as cost-effective tools for collecting geospatial data for various applications. To fully achieve the potential accuracy of these systems for object-space reconstruction, careful system calibration should be carried out prior to data collection. Since the structural integrity of the involved cameras' components and the system mounting parameters cannot be guaranteed over time, a multi-camera system should be calibrated frequently to confirm the stability of the estimated parameters; automated techniques are therefore needed to facilitate and speed up the calibration procedure. The automation of the multi-camera system calibration approach proposed in the first part of this paper is contingent on the automated detection, localization, and identification of the object-space signalized targets in the images. This paper presents the automation of the proposed calibration procedure through automatic target extraction and labelling. The automated procedure is then implemented for a newly developed multi-camera system while considering the optimum configuration for data collection. Experimental results are presented to verify the feasibility of the proposed automated procedure, and qualitative and quantitative evaluation of the system calibration parameters estimated in two calibration sessions confirms the stability of the cameras' interior orientation and the system mounting parameters.

A Camera Calibration Method using Several Images for Three Dimensional Measurement (여러 장의 영상을 사용하는 3차원 계측용 카메라 교정방법)

  • Kang, Dong-Joong
    • Journal of Institute of Control, Robotics and Systems, v.13 no.3, pp.224-229, 2007
  • This paper presents a camera calibration method that uses several images, for three-dimensional measurement applications such as stereo systems, mobile robots, and visual inspection systems in factories. Conventional calibration methods that use a single image suffer from errors related to reference-point extraction in the image, lens distortion, and the numerical analysis of the nonlinear optimization. The parameter values obtained from different images of the same camera are not identical even when the same calibration method is used: parameters estimated from several views of a calibration target typically differ with large errors, and no particular probabilistic distribution can be assumed when estimating them. In this paper, the median of the camera parameters obtained from several images is used to improve the estimates within an iterative nonlinear optimization step. The proposed method is validated by experiments on real images.
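The median step described above can be illustrated in a few lines of NumPy; the parameter layout (focal length, principal point) is hypothetical and stands in for whatever parameter vector each single-image calibration produces.

```python
import numpy as np

def robust_parameters(param_sets):
    """Per-parameter median across several single-image calibrations.
    Unlike the mean, the median resists the few badly estimated views
    and assumes no particular error distribution, which is the point
    the abstract makes."""
    return np.median(np.asarray(param_sets, dtype=float), axis=0)
```

The resulting vector would then seed the nonlinear refinement rather than serve as the final answer.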

Multiple Camera Calibration for Panoramic 3D Virtual Environment (파노라믹 3D가상 환경 생성을 위한 다수의 카메라 캘리브레이션)

  • 김세환; 김기영; 우운택
    • Journal of the Institute of Electronics Engineers of Korea CI, v.41 no.2, pp.137-148, 2004
  • In this paper, we propose a new camera calibration method for rotating multi-view cameras to generate an image-based panoramic 3D virtual environment (VE). Since calibration accuracy worsens as the distance between the camera and the calibration pattern increases, conventional calibration algorithms are not suitable for panoramic 3D VE generation. To remedy the problem, the geometric relationship among all lenses of a multi-view camera is used for intra-camera calibration, and the geometric relationship among the multiple cameras is used for inter-camera calibration. First, camera parameters for every lens of each multi-view camera are obtained by applying Tsai's algorithm. In intra-camera calibration, the extrinsic parameters are compensated by iteratively reducing the discrepancy between the estimated and actual distances, where the estimated distances are calculated from the extrinsic parameters of every lens. Inter-camera calibration then arranges the multiple cameras in a geometric relationship; it exploits the Iterative Closest Point (ICP) algorithm on back-projected 3D point clouds. Finally, by repeatedly applying intra- and inter-camera calibration to all lenses of the rotating multi-view cameras, we obtain improved extrinsic parameters at every rotated position for middle-range distances. Consequently, the proposed method can be applied to the stitching of 3D point clouds for panoramic 3D VE generation, and it may also be adopted in various 3D AR applications.
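Each iteration of the ICP algorithm used above for inter-camera calibration contains a closed-form rigid-alignment step (full ICP additionally re-matches closest points between the two clouds every round). A minimal sketch of that inner step, assuming correspondences are already known:

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t mapping point set P
    onto Q (rows are corresponding 3D points): the closed-form
    SVD step inside each ICP iteration."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # guard against reflection
    t = cq - R @ cp
    return R, t
```

Applied to back-projected point clouds from two cameras, R and t give the relative pose between them, which is what inter-camera calibration estimates and refines.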