• Title/Summary/Keyword: Focal length estimation

A Study on the Effect of Weighting Matrix of Robot Vision Control Algorithm in Robot Point Placement Task (점 배치 작업 시 제시된 로봇 비젼 제어알고리즘의 가중행렬의 영향에 관한 연구)

  • Son, Jae-Kyung; Jang, Wan-Shik; Sung, Yoon-Gyung
    • Journal of the Korean Society for Precision Engineering, v.29 no.9, pp.986-994, 2012
  • This paper concerns the application of a vision control algorithm with a weighting matrix to a robot point-placement task. The proposed algorithm involves four models: the robot kinematic model, the vision system model, a parameter estimation scheme, and a robot joint angle estimation scheme. It allows the robot to move actively even when the relative position between the camera and the robot and the camera's focal length are unknown. Both the parameter estimation scheme and the joint angle estimation scheme take the form of nonlinear equations; in particular, the joint angle estimation model includes several restrictive conditions. In this study, a weighting matrix that assigns varying weights near the target was applied to the parameter estimation scheme, and the effect of changing this weighting matrix on the presented vision control algorithm was investigated. Finally, the effect of the weighting matrix is demonstrated experimentally by performing the robot point-placement task.
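
A minimal sketch of how such a weighting matrix enters a Gauss-Newton style parameter update; the toy model, data, and the distance-based weighting rule below are illustrative assumptions, not the paper's estimation scheme.

```python
import numpy as np

def gauss_newton_weighted(f, jac, theta0, y, W, iters=20):
    """Minimize (y - f(theta))^T W (y - f(theta)) by Gauss-Newton."""
    theta = np.asarray(theta0, float)
    for _ in range(iters):
        r = y - f(theta)                          # residual vector
        J = jac(theta)                            # Jacobian of f at theta
        # Weighted normal equations: (J^T W J) d = J^T W r
        theta = theta + np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
    return theta

# Toy model y = a * exp(b * x): estimate (a, b) from samples.
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(-1.5 * x)

f = lambda th: th[0] * np.exp(th[1] * x)
jac = lambda th: np.column_stack([np.exp(th[1] * x),
                                  th[0] * x * np.exp(th[1] * x)])

# Hypothetical weighting rule: samples near the "target" x = 1 get
# up to 10x the weight of samples far from it.
W = np.diag(1.0 + 9.0 * x)
print(gauss_newton_weighted(f, jac, [1.0, -1.0], y, W))  # ~ [2.0, -1.5]
```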

Signal Level Analysis of a Camera System for Satellite Application

  • Kong, Jong-Pil; Kim, Bo-Gwan
    • Proceedings of the KSRS Conference, 2008.10a, pp.220-223, 2008
  • A camera system for satellite applications performs its observation mission by measuring the light energy radiated from targets on the Earth. During system development, signal level analysis, which estimates the number of electrons collected in a pixel of the applied CCD, is a basic tool for performance analyses such as SNR as well as for the data path design of the focal plane electronics. This paper presents two methods for calculating the number of electrons for signal level analysis. The first is a quantitative assessment based on the CCD characteristics and the design parameters of the system's optical module, which concentrates light energy onto the focal plane where the CCD converts it into an electrical signal. The second compares design parameters of the system, such as quantum efficiency, focal length, and optics aperture size, with those of an existing camera system in orbit, yielding a relative electron count. The number of electrons calculated by these methods is used to design the input circuits of the A/D converter that interfaces the image signal from the CCD module in the focal plane electronics, and to analyze the CCD output signal level, a critical parameter for the data path design between the CCD and the A/D converter. From this analysis, the FPE (Focal Plane Electronics) designer decides whether a dividing circuit is necessary between them and, if so, the optimal dividing factor. Chapter 2 describes the radiometry of the camera system and gives the equations for electron counting, Chapter 3 briefly describes the camera system to explain the flow of imagery data from the CCD, Chapter 4 explains the two analysis methods, and Chapter 5 concludes.
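
A back-of-envelope sketch of the first method's electron-count estimate from radiometric design parameters; every value below (radiance, f-number, pixel pitch, quantum efficiency, and so on) is an illustrative assumption, not a figure from the paper.

```python
import math

h, c = 6.626e-34, 3.0e8          # Planck constant [J s], speed of light [m/s]

L      = 100.0    # at-aperture spectral radiance [W / (m^2 sr um)]
f_num  = 4.0      # optics f-number (focal length / aperture diameter)
pixel  = 6.5e-6   # pixel pitch [m]
t_int  = 1.0e-3   # integration time [s]
qe     = 0.5      # CCD quantum efficiency
tau    = 0.8      # optics transmittance
dlam   = 0.4      # spectral bandwidth [um]
lam    = 0.55e-6  # band center wavelength [m]

# Focal-plane irradiance for a distant, extended source:
# E = pi * L * tau / (4 * F#^2), integrated over the band.
E = math.pi * L * dlam * tau / (4.0 * f_num**2)     # [W / m^2]

power   = E * pixel**2                    # optical power on one pixel [W]
photons = power * t_int * lam / (h * c)   # photons in one integration
print(f"electrons per pixel ~ {qe * photons:.3e}")
```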

Camera calibration parameters estimation using perspective variation ratio of grid type line widths (격자형 선폭들의 투영변화비를 이용한 카메라 교정 파라메터 추정)

  • Jeong, Jun-Ik; Choi, Seong-Gu; Rho, Do-Hwan
    • Proceedings of the KIEE Conference, 2004.11c, pp.30-32, 2004
  • In 3-D vision measurement, camera calibration is necessary to calculate parameters accurately. Camera calibration methods have developed broadly in two categories: the first establishes reference points in space, and the second uses a grid-type frame and statistical methods. However, the former makes it difficult to set up reference points, and the latter has low accuracy. In this paper we present a camera calibration algorithm that uses the perspective ratio of a grid-type frame with different line widths. It can easily estimate camera calibration parameters such as lens distortion, focal length, scale factor, pose, orientation, and distance. An advantage of this algorithm is that it can estimate the distance to the object, which also makes it usable in dynamic environments such as autonomous navigation. To validate the proposed method, we set up experiments with the frame on a rotator at distances of 1, 2, 3, and 4 m from the camera and rotated the frame from -60 to 60 degrees. Both computer simulation and real data were used to test the proposed method, and very good results were obtained. We investigated the distance error caused by the scale factor or by different line widths and experimentally found an average scale factor that yields the least distance error for each image; this average scale factor fluctuates only slightly and reduces the distance error. Compared with classical methods that use a stereo camera or two or three orthogonal planes, the proposed method is easy to use and flexible, advancing camera calibration one step from static environments toward real-world use such as autonomous land vehicles.
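
A minimal pinhole-model sketch of why a grid line of known physical width lets the method estimate distance: the imaged width shrinks in proportion to the distance. The focal length and widths below are illustrative assumptions.

```python
def distance_from_width(f_px, real_width_m, imaged_width_px):
    """Pinhole projection: w_img = f * W / Z  =>  Z = f * W / w_img."""
    return f_px * real_width_m / imaged_width_px

# A 5 cm grid line imaged 20 px wide by a camera with f = 800 px:
print(distance_from_width(800.0, 0.05, 20.0))   # -> 2.0 m
```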

A Study on the Estimation of Camera Calibration Parameters using Corresponding Points Method (점 대응 기법을 이용한 카메라의 교정 파라미터 추정에 관한 연구)

  • Choi, Seong-Gu; Go, Hyun-Min; Rho, Do-Hwan
    • The Transactions of the Korean Institute of Electrical Engineers D, v.50 no.4, pp.161-167, 2001
  • Camera calibration is a very important problem in 3D measurement using a vision system. This paper proposes a simple method for camera calibration, designed around the principle of vanishing points and the concept of corresponding points extracted from pairs of parallel lines. Conventional methods require four reference points in one frame, but the proposed method needs only two reference points to estimate the vanishing points, from which it calculates the camera parameters: focal length, camera attitude, and position. Our experiments show the validity and usability of the method, with the absolute error of attitude and position on the order of $10^{-2}$.
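
A sketch of the two geometric steps the abstract leans on: a vanishing point computed as the intersection of two image lines (via homogeneous cross products), and a focal length recovered from a pair of orthogonal vanishing points with the principal point assumed at the image center. The synthetic image points come from a hypothetical camera with f = 800 px and principal point (320, 240).

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(l1, l2):
    """Intersection of two image lines (images of parallel scene lines)."""
    v = np.cross(l1, l2)
    return v[:2] / v[2]

def focal_from_orthogonal_vps(v1, v2, pp):
    """Orthogonal scene directions give (v1 - pp).(v2 - pp) = -f^2."""
    return np.sqrt(-np.dot(v1 - pp, v2 - pp))

pp = np.array([320.0, 240.0])    # assumed principal point (image center)

# Two line pairs, each converging to one vanishing point:
v1 = vanishing_point(line_through((0, 0), (560, 120)),
                     line_through((0, 480), (560, 360)))   # -> (1120, 240)
v2 = vanishing_point(line_through((480, 0), (0, 120)),
                     line_through((480, 480), (0, 360)))   # -> (-480, 240)
print(focal_from_orthogonal_vps(v1, v2, pp))               # -> 800.0
```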

Intrinsic Camera Calibration Based on Radical Center Estimation (근심 추정 기반의 내부 카메라 파라미터 보정)

  • 이동훈; 김복동; 정순기
    • Proceedings of the Korean Information Science Society Conference, 2004.04b, pp.742-744, 2004
  • This paper proposes a method for estimating the intrinsic parameters of a camera using two orthogonal vanishing points. Camera calibration is an important step in obtaining 3D information from 2D images. Most existing vanishing-point methods estimate the parameters from three orthogonal vanishing points, but acquiring a real image that contains three orthogonal vanishing points is difficult. This paper therefore proposes a new, geometric, and intuitive method for intrinsic camera calibration that uses only two orthogonal vanishing points. The principal point and the focal length are derived from the relationships among multiple hemispheres, based on geometric constraints grounded in Thales' theorem.
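
A sketch of the hemisphere constraint the (translated) abstract describes: for each orthogonal vanishing-point pair v1, v2, the point m = (px, py, f) lies on the hemisphere whose equatorial diameter is the segment from v1 to v2. Three synthetic pairs are used here purely to make the toy system solvable; the vanishing points come from a hypothetical camera with principal point (320, 240) and f = 800 px.

```python
import numpy as np

def hemisphere(v1, v2):
    """Sphere on which m = (px, py, f) must lie for an orthogonal
    vanishing-point pair: |m - c|^2 = r^2, with center c at the midpoint
    of v1, v2 (in the f = 0 image plane) and radius half their distance."""
    v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
    return (v1 + v2) / 2.0, np.linalg.norm(v1 - v2) / 2.0

# Synthetic orthogonal vanishing-point pairs for pp=(320, 240), f=800:
pairs = [((1120.0, 240.0), (-480.0, 1040.0)),
         ((320.0, 1040.0), (1120.0, -560.0)),
         ((1920.0, 1040.0), (-480.0, 1040.0))]
centers, radii = zip(*(hemisphere(v1, v2) for v1, v2 in pairs))

# Subtracting sphere equations cancels the quadratic terms; since all
# centers lie in the f = 0 plane, f drops out and the principal point
# solves a small linear system.
A = [2.0 * (centers[i] - centers[0]) for i in (1, 2)]
b = [radii[0]**2 - radii[i]**2
     + centers[i] @ centers[i] - centers[0] @ centers[0] for i in (1, 2)]
pp = np.linalg.solve(np.array(A), np.array(b))
f = np.sqrt(radii[0]**2 - np.sum((pp - centers[0])**2))
print(pp, f)   # -> [320. 240.] 800.0
```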

Theoretical Limits Analysis of Indoor Positioning System Using Visible Light and Image Sensor

  • Zhao, Xiang; Lin, Jiming
    • ETRI Journal, v.38 no.3, pp.560-567, 2016
  • To solve the problem of parameter optimization in image sensor-based visible light positioning systems, theoretical limits for both the location and the azimuth angle of the image sensor receiver (ISR) are calculated. In the case of a typical indoor scenario, maximum likelihood estimations for both the location and the azimuth angle of the ISR are first deduced. The Cramer-Rao Lower Bound (CRLB) is then derived, under the condition that the observation values of the image points are affected by white Gaussian noise. For typical parameters of LEDs and image sensors, simulation results show that accurate estimates for both the location and azimuth angle can be achieved, with positioning errors usually on the order of centimeters and azimuth angle errors being less than $1^{\circ}$. The estimation accuracy depends on the focal length of the lens and on the pixel size and frame rate of the ISR, as well as on the number of transmitters used.
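
A minimal sketch of the CRLB pattern the paper follows: with observations corrupted by white Gaussian noise, the Fisher information is J^T J / sigma^2 for the Jacobian J of the observation model, and the CRLB is its inverse. The simplified 2D bearing-style observation model and all numbers are illustrative assumptions, not the paper's full image-sensor model.

```python
import numpy as np

def numeric_jacobian(h, theta, eps=1e-6):
    """Central-difference Jacobian of a vector-valued model h."""
    theta = np.asarray(theta, float)
    cols = []
    for k in range(theta.size):
        d = np.zeros_like(theta); d[k] = eps
        cols.append((h(theta + d) - h(theta - d)) / (2 * eps))
    return np.column_stack(cols)

# Known LED transmitter positions on the ceiling (2D simplification):
leds = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])

def h(theta):
    """Bearings of the LEDs seen by a receiver at (x, y) with azimuth a."""
    x, y, a = theta
    return np.arctan2(leds[:, 1] - y, leds[:, 0] - x) - a

sigma = 1e-3                       # observation noise std [rad]
J = numeric_jacobian(h, [1.0, 2.0, 0.3])
crlb = np.linalg.inv(J.T @ J) * sigma**2
print(np.sqrt(np.diag(crlb)))      # lower bounds on std of (x, y, azimuth)
```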

A Study on the Robot Vision Control Schemes of N-R and EKF Methods for Tracking the Moving Targets (이동 타겟 추적을 위한 N-R과 EKF방법의 로봇비젼제어기법에 관한 연구)

  • Hong, Sung-Mun; Jang, Wan-Shik; Kim, Jae-Meung
    • Journal of the Korean Society of Manufacturing Technology Engineers, v.23 no.5, pp.485-497, 2014
  • This paper presents robot vision control schemes based on the Newton-Raphson (N-R) and Extended Kalman Filter (EKF) methods for tracking moving targets. The vision system model used in this study involves six camera parameters, which capture the uncertainty of the camera's orientation and focal length as well as the unknown relative position between the camera and the robot. Both the N-R and EKF methods are employed to estimate these six camera parameters. Based on the parameters estimated using three cameras, the robot's joint angles are computed with respect to the moving targets using each method. The two robot vision control schemes are tested experimentally by tracking a moving target, and the results are compared to evaluate their strengths and weaknesses.
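
A side-by-side sketch of the two estimators the paper compares, applied to a generic nonlinear measurement model y = h(theta) + noise; the toy two-parameter model is an illustrative assumption, not the paper's six-parameter vision model.

```python
import numpy as np

def h(theta):                      # toy nonlinear measurement model
    return np.array([np.sin(theta[0]) + theta[1],
                     theta[0] * theta[1]])

def jac(theta):                    # its Jacobian
    return np.array([[np.cos(theta[0]), 1.0],
                     [theta[1], theta[0]]])

y = h(np.array([0.8, 1.5]))        # synthetic measurement, true = (0.8, 1.5)

# Newton-Raphson (batch): theta += (J^T J)^-1 J^T (y - h(theta))
theta = np.array([0.5, 1.0])
for _ in range(15):
    J = jac(theta)
    theta = theta + np.linalg.solve(J.T @ J, J.T @ (y - h(theta)))

# EKF (recursive, static parameters, no process noise): folding the
# measurement in repeatedly acts like an iterated update.
x, P = np.array([0.5, 1.0]), np.eye(2)
R = 1e-4 * np.eye(2)               # measurement noise covariance
for _ in range(15):
    H = jac(x)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    x = x + K @ (y - h(x))
    P = (np.eye(2) - K @ H) @ P

print("N-R:", theta, " EKF:", x)   # both approach (0.8, 1.5)
```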

Calibration Method of Plenoptic Camera using CCD Camera Model (CCD 카메라 모델을 이용한 플렌옵틱 카메라의 캘리브레이션 방법)

  • Kim, Song-Ran; Jeong, Min-Chang; Kang, Hyun-Soo
    • Journal of the Korea Institute of Information and Communication Engineering, v.22 no.2, pp.261-269, 2018
  • This paper presents a convenient method for estimating the internal parameters of a plenoptic camera using a CCD (charge-coupled device) camera model. The images used for plenoptic camera calibration are generally of the checkerboard pattern used in CCD camera calibration. Based on the CCD camera model, the plenoptic camera model can be derived through the relationship between the two models. We formulate four equations expressing the focal length, the principal point, the baseline, and the distance between the virtual camera and the object, and solve them with a nonlinear optimization technique to estimate the parameters. We compare the estimation results with the actual parameters and evaluate the reprojection error. Experimental results show that the MSE (mean squared error) is 0.309 and the estimated values are very close to the actual values.
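
A sketch of the estimation pattern the abstract describes: stack the model equations as residuals and hand them to a nonlinear least-squares solver. The four residual equations and the measurements below are placeholders with a known solution, not the paper's actual equations.

```python
import numpy as np
from scipy.optimize import least_squares

# Fabricated measurements consistent with f=1.2, cx=0.5, b=0.6, z=1.6:
m = np.array([1.92, 0.5, 1.1, 2.8])

def residuals(p):
    f, cx, b, z = p      # focal length, principal point coordinate,
                         # baseline, virtual-camera-to-object distance
    return np.array([f * z - m[0],      # placeholder equation 1
                     b / f - m[1],      # placeholder equation 2
                     cx + b - m[2],     # placeholder equation 3
                     f + z - m[3]])     # placeholder equation 4

# Bounds keep f away from zero during iteration.
sol = least_squares(residuals, x0=np.ones(4), bounds=(1e-3, 10.0))
print(sol.x)    # one root consistent with the fabricated data,
                # e.g. f=1.2, cx=0.5, b=0.6, z=1.6
```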

A Distortion Correction Method of Wide-Angle Camera Images through the Estimation and Validation of a Camera Model (카메라 모델의 추정과 검증을 통한 광각 카메라 영상의 왜곡 보정 방법)

  • Kim, Kyeong-Im; Han, Soon-Hee; Park, Jeong-Seon
    • The Journal of the Korea Institute of Electronic Communication Sciences, v.8 no.12, pp.1923-1932, 2013
  • To address the severe distortion in images from a wide-angle camera, we propose a calibration method that corrects radial distortion in wide-angle images by estimating and validating a camera model. First, we estimate a camera model consisting of intrinsic and extrinsic parameters from calibration patterns, where the intrinsic parameters include the focal length and the principal point, and the extrinsic parameters are the relative position and orientation of the calibration pattern with respect to the camera. Next, we validate the estimated camera model by re-extracting the corner points, applying the inverse of the model to the images. Finally, we correct the image distortion using the validated camera model. Calibration experiments using lattice-pattern images captured with a general web camera and a wide-angle camera confirm that the proposed method can correct more than 80% of the distortion.
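
A minimal sketch of the radial-distortion correction step, using the common polynomial model x_d = x_u (1 + k1 r^2 + k2 r^4) in normalized coordinates and a fixed-point inversion; the coefficients are illustrative assumptions, and the paper's own camera model may differ.

```python
import numpy as np

k1, k2 = -0.3, 0.08           # assumed radial distortion coefficients

def distort(xu):
    """Apply the radial model to an undistorted normalized point."""
    r2 = np.sum(xu**2)
    return xu * (1.0 + k1 * r2 + k2 * r2**2)

def undistort(xd, iters=20):
    """Invert the model by fixed-point iteration: xu = xd / factor(xu)."""
    xu = xd.copy()
    for _ in range(iters):
        r2 = np.sum(xu**2)
        xu = xd / (1.0 + k1 * r2 + k2 * r2**2)
    return xu

xu_true = np.array([0.40, -0.25])    # an ideal (undistorted) point
xd = distort(xu_true)                # what the wide-angle lens images
print(undistort(xd), xu_true)        # the two should agree closely
```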

Moving Object Extraction and Relative Depth Estimation of Background Regions in Video Sequences (동영상에서 물체의 추출과 배경영역의 상대적인 깊이 추정)

  • Park, Young-Min; Chang, Chu-Seok
    • The KIPS Transactions: Part B, v.12B no.3 s.99, pp.247-256, 2005
  • One of the classic research problems in computer vision is stereo, i.e., the reconstruction of three-dimensional shape from two or more images. This paper deals with extracting depth information of non-rigid dynamic 3D scenes from general 2D video sequences taken by a monocular camera, such as movies, documentaries, and dramas. Block depths are extracted from the resulting block motions in two steps: (i) calculation of global parameters concerned with camera translation and focal length, using the locations of the blocks and their motions; (ii) calculation of each block's depth relative to the average image depth, using the global parameters together with the block's location and motion. Both singular and non-singular cases are tested with various video sequences. The resulting relative depths and ego-motion object shapes are virtually identical to those perceived by human vision.
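
A minimal sketch of the motion-parallax relation behind step (ii): under lateral camera translation, image motion is inversely proportional to depth, so depths relative to the average follow from block motions once the global motion is known. All numbers are illustrative assumptions.

```python
import numpy as np

f_px = 800.0                      # focal length [pixels]
t_x = 0.05                        # camera translation between frames [m]

# Image motion of a block at depth Z under lateral translation:
#   dx = f * t_x / Z   =>   Z = f * t_x / dx
block_motions = np.array([10.0, 5.0, 2.0])    # measured motions [px/frame]
depths = f_px * t_x / block_motions           # absolute depths [m]
relative = depths / depths.mean()             # depth relative to the average
print(depths, relative)                       # -> [4. 8. 20.] and ratios
```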