• Title/Summary/Keyword: camera calibration

693 results found.

New Initialization method for the robust self-calibration of the camera

  • Ha, Jong-Eun; Kang, Dong-Joong
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 2003.10a / pp.752-757 / 2003
  • Recently, 3D structure recovery through camera self-calibration has been actively researched. Traditional calibration algorithms require known 3D coordinates of control points, whereas self-calibration requires only corresponding points between images and therefore offers more flexibility in real applications. In general, self-calibration leads to a nonlinear optimization problem built from constraints on the camera's intrinsic parameters, and so it requires an initial value for the nonlinear minimization. Traditional approaches obtain the initial values by assuming that all views share the same intrinsic parameters, even though they address situations in which the intrinsic parameters of the camera may change. In this paper, we propose a new initialization method that uses a minimum of two images. The proposed method is based on the assumption that the smallest violation of the camera's intrinsic-parameter constraints gives a more stable initial value. Synthetic and real experiments confirm this result. (A small sketch of such a nonlinear minimization follows below.)

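As a rough illustration of the kind of nonlinear minimization this abstract refers to (not the authors' algorithm), the sketch below fits a single unknown focal length and a relative pose to synthetic two-view correspondences through the epipolar constraint, starting from a user-chosen initial value; the quality of that initial value is exactly what an initialization method must supply. The synthetic scene, the one-parameter intrinsic model, and all names are assumptions.

```python
# Minimal illustrative sketch: self-calibration posed as nonlinear least squares over a
# focal length and a relative pose, using only point correspondences via x2^T F x1 = 0.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)
f_true = 800.0                                   # assumed ground-truth focal length (pixels)
K_true = np.diag([f_true, f_true, 1.0])
R_true = Rotation.from_rotvec([0.02, 0.30, 0.01]).as_matrix()
t_true = np.array([1.0, 0.05, 0.10])

X = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(200, 3))   # synthetic scene points
x1 = (K_true @ X.T).T
x1 = x1[:, :2] / x1[:, 2:]
x2 = (K_true @ (X @ R_true.T + t_true).T).T
x2 = x2[:, :2] / x2[:, 2:]
h1 = np.column_stack([x1, np.ones(len(x1))])
h2 = np.column_stack([x2, np.ones(len(x2))])

def residuals(p):
    """Algebraic epipolar error for focal length f, rotation vector r, translation t."""
    f, r, t = p[0], p[1:4], p[4:7]
    K_inv = np.diag([1.0 / f, 1.0 / f, 1.0])
    R = Rotation.from_rotvec(r).as_matrix()
    tx = np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0]])
    F = K_inv.T @ tx @ R @ K_inv
    F = F / np.linalg.norm(F)                    # remove the overall scale ambiguity of F
    return np.einsum('ij,jk,ik->i', h2, F, h1)

# The initial guess is exactly what an initialization method has to provide.
p0 = np.r_[600.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0]
sol = least_squares(residuals, p0)
print("estimated focal length:", sol.x[0])
```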

Neural Network Based Camera Calibration and 2-D Range Finding (신경회로망을 이용한 카메라 교정과 2차원 거리 측정에 관한 연구)

  • 정우태; 고국원; 조형석
    • Proceedings of the Korean Society of Precision Engineering Conference / 1994.10a / pp.510-514 / 1994
  • This paper deals with an application of neural networks to camera calibration with a wide-angle lens and to 2-D range finding. A wide-angle lens has the advantage of a wide viewing angle for mobile environment recognition and robot eye-in-hand systems, but it suffers from severe radial distortion. A multilayer neural network is used to calibrate the camera while accounting for lens distortion, and it is trained by the error back-propagation method. The MLP maps between the camera image plane and the plane formed by the structured light. In the experiments, the camera was calibrated with a calibration chart printed on a laser printer at 300 dpi resolution. A high-distortion lens, a COSMICAR 4.2 mm, was used to test whether the neural network could effectively compensate for the camera distortion. The 2-D range of several objects was measured with a laser range-finding system composed of a camera, a frame grabber, and laser structured light. The performance of the range-finding system was evaluated through experiments and analysis of the results. (A small sketch of the learned distortion mapping follows below.)

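The following sketch illustrates the general idea with off-the-shelf tools rather than the authors' hand-built network: an MLP trained by back-propagation learns the mapping from radially distorted image coordinates to coordinates on a reference plane. The distortion model, grid geometry, and network size are assumptions for illustration.

```python
# Minimal sketch: learn the distorted-image -> reference-plane mapping with an MLP.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
f, k1 = 400.0, -0.35                      # assumed focal length and radial distortion

# Synthetic calibration chart: a grid of points on the Z = 0 plane (world units).
gx, gy = np.meshgrid(np.linspace(-0.5, 0.5, 20), np.linspace(-0.5, 0.5, 20))
plane_pts = np.column_stack([gx.ravel(), gy.ravel()])

# Pinhole camera looking straight at the plane from Z = 1 ...
xn = plane_pts / 1.0                      # normalized image coordinates
r2 = (xn ** 2).sum(axis=1, keepdims=True)
xd = xn * (1.0 + k1 * r2)                 # ... followed by radial distortion
img_pts = f * xd                          # distorted pixel coordinates (principal point at 0)

# Train an MLP to invert the distorted projection: image (u, v) -> plane (X, Y).
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
mlp.fit(img_pts, plane_pts)

test = mlp.predict(img_pts[:5])
print(np.abs(test - plane_pts[:5]).max())   # residual mapping error on training points
```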

Robust Camera Calibration using TSK Fuzzy Modeling

  • Lee, Hee-Sung; Hong, Sung-Jun; Kim, Eun-Tai
    • International Journal of Fuzzy Logic and Intelligent Systems / v.7 no.3 / pp.216-220 / 2007
  • Camera calibration in machine vision is the process of determining the intrinsic camera parameters and the three-dimensional (3D) position and orientation of the camera frame relative to a certain world coordinate system. The Takagi-Sugeno-Kang (TSK) fuzzy system, on the other hand, is a popular fuzzy system that can approximate any nonlinear function to arbitrary accuracy with only a small number of fuzzy rules; it exhibits not only nonlinear behavior but also a transparent structure. In this paper, we present a novel and simple camera calibration technique for machine vision using the TSK fuzzy model. The proposed method divides the world into regions according to the camera view and uses the clustered 3D geometric knowledge. A TSK fuzzy system is employed to estimate the camera parameters by combining the partial information into complete 3D information. Experiments are performed to verify the proposed camera calibration method. (A small illustrative sketch of a TSK model follows below.)
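
A compact TSK-style approximation can be sketched as follows; this is an illustrative stand-in, not the paper's formulation. Inputs are clustered into fuzzy regions, a local linear consequent is fitted per rule, and the outputs are blended with normalized Gaussian firing strengths. The toy target function, rule count, and spread are assumptions.

```python
# Minimal TSK sketch: cluster the inputs, fit one linear model per rule, blend by membership.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
u = rng.uniform(-1, 1, size=(500, 2))                 # inputs, e.g. image coordinates
y = np.sin(2 * u[:, :1]) + 0.5 * u[:, 1:] ** 2        # toy nonlinear target to approximate

n_rules, sigma = 5, 0.4                               # assumed number of rules and spread
centers = KMeans(n_clusters=n_rules, n_init=10, random_state=0).fit(u).cluster_centers_

def memberships(x):
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return w / w.sum(axis=1, keepdims=True)           # normalized firing strengths

# Fit one linear consequent y = a^T x + b per rule by weighted least squares.
W = memberships(u)
X1 = np.column_stack([u, np.ones(len(u))])
params = []
for r in range(n_rules):
    sw = np.sqrt(W[:, r:r + 1])
    theta, *_ = np.linalg.lstsq(sw * X1, sw * y, rcond=None)
    params.append(theta)
params = np.stack(params)                             # shape (n_rules, 3, 1)

def tsk_predict(x):
    X1 = np.column_stack([x, np.ones(len(x))])
    local = np.einsum('nd,rdo->nro', X1, params)      # each rule's local linear output
    return np.einsum('nr,nro->no', memberships(x), local)

print(np.abs(tsk_predict(u) - y).mean())              # training fit of the TSK model
```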

The estimation of camera calibration parameters using the properties of vanishing point at the paved and unpaved road (무한원점의 성질을 이용한 포장 및 비포장 도로에서의 카메라 교정 파라메터 추정)

  • Jeong, Jun-Ik; Jeong, Myeong-Hee; Rho, Do-Whan
    • Proceedings of the KIEE Conference / 2006.10c / pp.178-180 / 2006
  • In general, camera calibration must be carried out before the position and orientation of an object can be estimated accurately with a camera. An autonomous land system that drives a vehicle autonomously needs a calibration method that works with its camera in various road environments. Camera calibration defines the correspondence between 3D space and the image plane; in other words, it finds the camera calibration parameters. In this work, the camera calibration parameters are estimated on both paved and unpaved roads. The proposed algorithm detects straight lines through image processing of images acquired on each road type. On a paved road, edges are easy to detect because lane markings are present, so the image is segmented using opening, dilation, and erosion; on an unpaved road, edges are detected using blurring and sharpening. The Hough transform is then used to detect the dominant straight lines, because it gives smaller errors than the least-squares method, and the properties of the vanishing point are applied. The algorithm thus estimates the camera calibration parameters from the Hough transform and the vanishing point. When the algorithm was applied, the estimated focal length was about 10.7 mm and the RMS errors of the rotation were 0.10913 and 0.11476, which are comparatively stable ranges. This shows that the algorithm can be applied to camera calibration on both paved and unpaved roads. (A small sketch of the vanishing-point relation follows below.)

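The vanishing-point constraint that underlies this kind of calibration can be sketched as follows (illustrative only; in the paper the vanishing points come from Hough-detected lane edges rather than a synthetic camera). Two vanishing points of orthogonal world directions, v1 and v2, satisfy (v1 - c)·(v2 - c) + f² = 0 for a zero-skew camera with principal point c, which yields the focal length. The rotation angles and intrinsics below are assumptions.

```python
# Minimal sketch: recover the focal length from two orthogonal vanishing points.
import numpy as np

f_true = 500.0
c = np.array([320.0, 240.0])                                  # principal point (image centre)
K = np.array([[f_true, 0, c[0]], [0, f_true, c[1]], [0, 0, 1.0]])

# Camera pitched down by 5 deg and yawed by 3 deg relative to the road axes (assumed).
def rot_x(a): return np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])
def rot_y(a): return np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])
R = rot_x(np.deg2rad(5)) @ rot_y(np.deg2rad(3))

def vanishing_point(direction):
    v = K @ R @ direction
    return v[:2] / v[2]

v_forward = vanishing_point(np.array([0.0, 0.0, 1.0]))        # road (lane) direction
v_lateral = vanishing_point(np.array([1.0, 0.0, 0.0]))        # direction across the road

# Orthogonal vanishing points constrain the focal length: (v1 - c).(v2 - c) + f^2 = 0.
f_est = np.sqrt(-np.dot(v_forward - c, v_lateral - c))
print("recovered focal length [px]:", f_est)
```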

A Study on the Camera Calibration Using Lens Distortion Model (렌즈의 왜곡 모델을 이용한 카메라 보정에 관한 연구)

  • Dong Min Woo
    • Journal of the Korean Institute of Telematics and Electronics B / v.31B no.2 / pp.56-68 / 1994
  • The objective of camera calibration is to determine the internal optical characteristics of a camera and its three-dimensional position and orientation with respect to the real world. A calibration procedure for computer vision should be automatic, accurate, and applicable to general-purpose cameras and lenses. In this paper, we present a camera calibration method that meets these requirements. The algorithm is based on the two-stage method, which takes lens distortion into account in the second stage. An overdetermined nonlinear system is established from the constraints in all directions, and the proposed calibration algorithm solves the nonlinear equations using Marquardt iterations. Experimental results indicate that lens distortion should be taken into consideration when calibrating a general-purpose lens. With 24 calibration points taken from a 512×512 image, the proposed algorithm achieved an average error of less than 1 pixel and showed higher accuracy than the conventional two-stage method. (A small sketch of such a Marquardt refinement follows below.)

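A minimal sketch of the second-stage refinement idea, under assumed ground truth and a one-term radial distortion model (the paper's full parameterization is richer): an overdetermined nonlinear system in the focal length and distortion coefficient is built from 24 control-point reprojections and solved with Levenberg-Marquardt iterations.

```python
# Minimal sketch: Levenberg-Marquardt refinement of focal length and radial distortion.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
f_true, k1_true = 900.0, -0.25                    # assumed ground truth
c = np.array([256.0, 256.0])                      # principal point of a 512 x 512 image

# 24 control points in front of the camera (camera frame, metres).
P = rng.uniform([-0.4, -0.4, 1.0], [0.4, 0.4, 2.0], size=(24, 3))

def project(p, f, k1):
    """Pinhole projection with one radial distortion term."""
    xn = p[:, :2] / p[:, 2:]                      # normalized image coordinates
    r2 = (xn ** 2).sum(axis=1, keepdims=True)
    return f * xn * (1.0 + k1 * r2) + c           # distorted pixel coordinates

obs = project(P, f_true, k1_true) + rng.normal(0, 0.3, size=(24, 2))   # noisy observations

def residuals(params):
    f, k1 = params
    return (project(P, f, k1) - obs).ravel()      # 48 equations in 2 unknowns

sol = least_squares(residuals, x0=[800.0, 0.0], method='lm')
print("refined f, k1:", sol.x)
```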

A Study on Camera Calibration Using Artificial Neural Network (신경망을 이용한 카메라 보정에 관한 연구)

  • Jeon, Kyong-Pil; Woo, Dong-Min; Park, Dong-Chul
    • Proceedings of the KIEE Conference / 1996.07b / pp.1248-1250 / 1996
  • The objective of camera calibration is to obtain the correspondence between camera image coordinates and 3-D real-world coordinates. Most calibration methods are based on a camera model consisting of physical parameters of the camera, such as position, orientation, and focal length, and in that case camera calibration means computing those parameters. In this research, we suggest a new and efficient approach in which an artificial neural network (ANN) model implicitly contains all the physical parameters, some of which are very difficult to estimate with existing calibration methods. Implicit camera calibration, that is, calibrating a camera without explicitly computing its physical parameters, can be used both for 3-D measurement and for generating image coordinates. By training on calibration points at different heights, the network can recover the perspective projection mapping, which can then be used to reconstruct 3-D real-world coordinates at arbitrary heights and the image coordinates of arbitrary 3-D world points. An experimental comparison of our method with the well-known two-stage method of Tsai is made to verify the effectiveness of the proposed method. (A small sketch of the implicit mapping follows below.)

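The implicit mapping this abstract describes can be sketched roughly as below (an assumption-laden illustration, not the authors' network): an MLP learns (u, v, Z) → (X, Y) from calibration points at several known heights, so no explicit camera parameters are ever computed. The synthetic downward-looking geometry and network size are assumptions.

```python
# Minimal sketch of implicit calibration: learn (u, v, Z) -> (X, Y) from multi-height data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
f, cam_h = 700.0, 2.0                             # assumed focal length and camera height

# Calibration points on planes of different heights Z, projected by a pinhole camera
# looking straight down from height cam_h (purely synthetic geometry).
XY = rng.uniform(-1.0, 1.0, size=(600, 2))
Z = rng.choice([0.0, 0.2, 0.4], size=(600, 1))
uv = f * XY / (cam_h - Z)                         # perspective projection to pixels

mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
mlp.fit(np.hstack([uv, Z]), XY)                   # inputs (u, v, Z), targets (X, Y)

# Once trained, a pixel plus a hypothesised height yields a world point; two heights
# for the same pixel give two points and hence the 3-D viewing ray through that pixel.
query = np.array([[100.0, -50.0, 0.0], [100.0, -50.0, 0.4]])
print(mlp.predict(query))
```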

A New Linear Explicit Camera Calibration Method (새로운 선형의 외형적 카메라 보정 기법)

  • Do, Yongtae
    • Journal of Sensor Science and Technology / v.23 no.1 / pp.66-71 / 2014
  • Vision is the most important sensing capability both for humans and for sensory smart machines such as intelligent robots. The sensed 3D world and its 2D camera image are related mathematically through a process called camera calibration. In this paper, we present a novel linear solution for camera calibration. Unlike most existing linear calibration methods, the proposed technique identifies the camera parameters explicitly: through its step-by-step procedure, the physical elements of the perspective projection matrix between 3D points and the corresponding 2D image points are recovered. This explicit solution is useful for many practical 3D sensing applications, including robotics. We verified the proposed method using various cameras under different conditions. (A small sketch of an explicit linear decomposition follows below.)
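
For comparison, one standard linear, explicit route can be sketched as follows; the paper's own step-by-step procedure may differ, so treat this as an assumed illustration. The 3×4 projection matrix is estimated by the Direct Linear Transform and then factored explicitly into intrinsics, rotation, and translation via an RQ decomposition.

```python
# Minimal sketch: DLT estimation of the projection matrix, then explicit K, R, t recovery.
import numpy as np
from scipy.linalg import rq

rng = np.random.default_rng(5)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))       # a random rotation (assumed pose)
if np.linalg.det(R_true) < 0:
    R_true = -R_true
t_true = np.array([0.1, -0.2, 4.0])
P_true = K @ np.hstack([R_true, t_true[:, None]])

Xw = rng.uniform(-1, 1, size=(20, 3))                   # 3-D control points
xh = (P_true @ np.c_[Xw, np.ones(20)].T).T
uv = xh[:, :2] / xh[:, 2:]

# Direct Linear Transform: each correspondence gives two rows of the homogeneous system A p = 0.
A = []
for (X, Y, Z), (u, v) in zip(Xw, uv):
    A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
    A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
P = np.linalg.svd(np.array(A))[2][-1].reshape(3, 4)     # right singular vector of the smallest value
P *= np.sign(P[2, 3])                                   # fix the overall sign (positive depth)
P /= np.linalg.norm(P[2, :3])                           # fix the overall scale (unit third rotation row)

# Explicit decomposition: RQ-factor the left 3x3 block into intrinsics and rotation.
K_est, R_est = rq(P[:, :3])
S = np.diag(np.sign(np.diag(K_est)))                    # make the diagonal of K positive
K_est, R_est = K_est @ S, S @ R_est
t_est = np.linalg.solve(K_est, P[:, 3])
print("K:\n", K_est, "\nt:", t_est)
```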

A Composite Camera Calibration Technique for Thermal Deterioration Diagnosis of Power Distribution Line (배전 선로의 열화 진단을 위한 복합 카메라 보정기법)

  • Jung, Ha-Hyoung; Park, Jin-ha; Lyou, Joon
    • Journal of Institute of Control, Robotics and Systems / v.22 no.6 / pp.463-469 / 2016
  • This paper presents a composite camera calibration method for determining thermal degradation of power distribution equipment by combining an infrared (IR) camera and a color camera. A calibration jig was first constructed to match the properties of the two cameras. The proposed calibration and visualization techniques allow two images, one from the color camera and the other from the IR camera with a different field of view (FOV), to be displayed on the screen at the same time. To confirm the method's validity, several case studies were carried out to analyze the thermal deterioration limits of indoor and outdoor power distribution facilities. (A small sketch of overlaying the two views follows below.)
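
A much-simplified sketch of combining the two views (not the paper's jig-based procedure): corresponding marker points seen by both cameras give a homography that warps the narrow-FOV IR image into the color camera's frame for a blended display. The marker coordinates and image sizes below are placeholders.

```python
# Minimal sketch: warp an IR frame into the colour camera's geometry and blend them.
import cv2
import numpy as np

# Pixel coordinates of the same four jig markers in the IR and colour images (assumed values).
ir_pts    = np.float32([[40, 30], [280, 28], [285, 210], [38, 215]])
color_pts = np.float32([[400, 260], [880, 255], [890, 620], [395, 630]])
H, _ = cv2.findHomography(ir_pts, color_pts)     # IR -> colour mapping from the jig

# Stand-in frames; in practice these would be the captured IR and colour images.
ir_img    = np.full((240, 320, 3), 180, np.uint8)
color_img = np.zeros((720, 1280, 3), np.uint8)

# Warp the IR frame into the colour image geometry and blend for a combined display.
warped  = cv2.warpPerspective(ir_img, H, (color_img.shape[1], color_img.shape[0]))
overlay = cv2.addWeighted(color_img, 0.6, warped, 0.4, 0)
cv2.imwrite('overlay.png', overlay)
```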

Combined Static and Dynamic Platform Calibration for an Aerial Multi-Camera System

  • Cui, Hong-Xia; Liu, Jia-Qi; Su, Guo-Zhong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.6 / pp.2689-2708 / 2016
  • Multi-camera systems that integrate two or more low-cost digital cameras are adopted to achieve wider ground coverage and a better base-height ratio in low-altitude remote sensing. To guarantee accurate multi-camera integration, the geometric relationship among the cameras must be determined through platform calibration techniques. This paper proposes a combined two-step platform calibration method. In the first step, static platform calibration is conducted based on the stable relative-orientation constraint and convergent conditions among cameras in a static environment. In the second step, a dynamic platform self-calibration approach based on both tie points and straight lines corrects the small changes in the relative relationship among the cameras during flight. Experiments with the proposed two-step method were carried out on terrestrial and aerial images from a multi-camera system combining four consumer-grade digital cameras onboard an unmanned aerial vehicle. The experimental results show that the proposed platform calibration approach can compensate for the varying relative relationship during flight, achieving a mosaicking accuracy of the virtual images better than 0.5 pixel. The approach can be extended to calibrate other low-cost multi-camera systems without a rigorous mechanical structure. (A small sketch of the relative-orientation constraint follows below.)
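
The relative-orientation constraint used in the static step can be sketched as follows (an illustration with synthetic poses, not the paper's bundle formulation): for rigidly mounted cameras, the relative rotation computed from per-frame extrinsics should stay constant, so per-frame estimates can be averaged into one stable value and later monitored for drift during flight.

```python
# Minimal sketch: estimate a stable relative orientation between two rigidly mounted cameras.
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(6)
R_rel_true = Rotation.from_rotvec([0.01, 0.25, 0.02])   # assumed mounting rotation, camera 1 -> camera 2
R1_all = Rotation.random(10, random_state=0)            # camera-1 extrinsic rotations over ten frames

per_frame = []
for i in range(len(R1_all)):
    R1 = R1_all[i]
    noise = Rotation.from_rotvec(rng.normal(0.0, 1e-3, 3))   # small per-frame estimation noise
    R2 = noise * R_rel_true * R1                             # camera-2 extrinsics (rigid mounting)
    per_frame.append(R2 * R1.inv())                          # per-frame relative rotation estimate

# The static step averages the per-frame estimates into one stable relative orientation;
# the dynamic step would monitor and re-adjust this quantity during flight.
R_rel_est = Rotation.concatenate(per_frame).mean()
print("relative rotation error [deg]:",
      np.degrees((R_rel_est * R_rel_true.inv()).magnitude()))
```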

Calibration of Structured Light Vision System using Multiple Vertical Planes

  • Ha, Jong Eun
    • Journal of Electrical Engineering and Technology / v.13 no.1 / pp.438-444 / 2018
  • Structured light vision systems are widely used in 3D surface profiling. Such a system is usually composed of a camera and a laser that projects a line onto the target, and calibration is necessary before 3D information can be acquired with the stripe. Conventional calibration algorithms find the pose of the camera and the equation of the laser stripe plane in the camera coordinate system, so 3D reconstruction is only possible in the camera frame. In most cases this is sufficient for the given task, but such algorithms require multiple images acquired under different poses. In this paper, we propose a calibration algorithm that works from just one shot and that provides 3D reconstruction in both the camera and the laser frame. This is achieved with a newly designed calibration structure that has multiple vertical planes on a ground plane. The ability to reconstruct in both the camera and laser frames gives more flexibility in applications, and the proposed algorithm also improves the accuracy of the 3D reconstruction. (A small sketch of stripe triangulation with a calibrated plane follows below.)
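
To show what the calibration buys, here is a minimal sketch (with assumed intrinsics and an assumed laser-plane equation, not values from the paper): once K and the stripe plane are known in the camera frame, each stripe pixel back-projects to a ray whose intersection with the plane is a 3D surface point.

```python
# Minimal sketch: triangulate a laser-stripe pixel against the calibrated laser plane.
import numpy as np

K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])   # assumed camera intrinsics
n = np.array([0.0, 0.8, 0.6])                                  # assumed laser plane normal
d = 0.5                                                        # plane offset: n . X = d (metres)

def stripe_pixel_to_3d(u, v):
    """Intersect the viewing ray of pixel (u, v) with the calibrated laser plane."""
    ray = np.linalg.solve(K, np.array([u, v, 1.0]))            # ray direction in the camera frame
    s = d / np.dot(n, ray)                                     # scale so the point lies on the plane
    return s * ray

print(stripe_pixel_to_3d(330.0, 400.0))                        # one stripe pixel -> 3-D point
```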