• Title/Summary/Keyword: extrinsic calibration

Search results: 40

A New Hand-eye Calibration Technique to Compensate for the Lens Distortion Effect (렌즈왜곡효과를 보상하는 새로운 Hand-eye 보정기법)

  • Chung, Hoi-Bum
    • Proceedings of the KSME Conference / 2000.11a / pp.596-601 / 2000
  • In a robot/vision system, the vision sensor, typically a CCD array sensor, is mounted on the robot hand. The problem of determining the relationship between the camera frame and the robot hand frame is referred to as hand-eye calibration. In the literature, various methods have been suggested for camera calibration and sensor registration. Recently, a one-step approach which combines camera calibration and sensor registration was suggested by Horaud & Dornaika. In this approach, the camera extrinsic parameters need not be determined at every robot configuration. In this paper, by modifying the camera model and including the lens distortion effect in the perspective transformation matrix, a new one-step approach to hand-eye calibration is proposed.
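
A rough point of reference for the classical two-step pipeline that the one-step method above improves on: the sketch below feeds robot kinematics and per-view camera extrinsics into OpenCV's generic hand-eye solver. The function, pose lists, and solver choice are illustrative assumptions and do not reproduce the paper's distortion-aware, one-step formulation.

```python
# Classical two-step hand-eye calibration sketch (AX = XB style) using
# OpenCV's generic solver; NOT the paper's one-step, distortion-aware method.
import numpy as np
import cv2


def hand_eye_from_poses(T_gripper2base, T_target2cam):
    """T_gripper2base: list of 4x4 gripper->base transforms (robot kinematics).
    T_target2cam:   list of 4x4 target->camera transforms (per-view extrinsics).
    At least three distinct robot motions are required."""
    R_g = [T[:3, :3] for T in T_gripper2base]
    t_g = [T[:3, 3] for T in T_gripper2base]
    R_t = [T[:3, :3] for T in T_target2cam]
    t_t = [T[:3, 3] for T in T_target2cam]
    # Returns the camera pose expressed in the gripper (hand) frame.
    R_cg, t_cg = cv2.calibrateHandEye(R_g, t_g, R_t, t_t,
                                      method=cv2.CALIB_HAND_EYE_TSAI)
    T = np.eye(4)
    T[:3, :3] = R_cg
    T[:3, 3] = t_cg.ravel()
    return T
```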


A study on approach of localization problem using landmarks (Landmark를 이용한 localization 문제 접근에 관한 연구)

  • 김태우;이쾌희
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 1997.10a / pp.44-47 / 1997
  • Building a reliable mobile robot - one that can navigate without failures for long periods of time - requires that the uncertainty resulting from control and sensing be bounded. This paper proposes a new mobile robot localization method using artificial landmarks. For mobile robot localization, the proposed method uses camera calibration (extrinsic parameters only). We use the FANUC Arc Mate to estimate the posture error, and the results show that the position error is less than 1 cm and the orientation error is less than 1 degree.
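
For context, a minimal sketch of the extrinsic-only calibration step this kind of landmark localization relies on: recovering the camera pose from known artificial landmarks with a PnP solver. The intrinsics, landmark layout, and pixel measurements below are placeholder values, not data from the paper.

```python
# Minimal sketch of extrinsic-only localization from known artificial
# landmarks via a PnP solver. Intrinsics, landmark layout, and pixel
# measurements are placeholder values, not data from the paper.
import numpy as np
import cv2

K = np.array([[800.0, 0.0, 320.0],      # assumed, pre-calibrated intrinsics
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
dist = np.zeros(5)                       # lens distortion neglected here

# Known landmark positions in the world frame (metres) and their
# detected pixel coordinates in the current camera image.
world_pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                      [1.0, 1.0, 0.0], [0.0, 1.0, 0.0]])
image_pts = np.array([[310.0, 250.0], [420.0, 248.0],
                      [415.0, 150.0], [305.0, 152.0]])

ok, rvec, tvec = cv2.solvePnP(world_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)               # world -> camera rotation
cam_pos_world = (-R.T @ tvec).ravel()    # camera (robot) position in the world
print("estimated camera position:", cam_pos_world)
```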


Calibration of Structured Light Vision System using Multiple Vertical Planes

  • Ha, Jong Eun
    • Journal of Electrical Engineering and Technology / v.13 no.1 / pp.438-444 / 2018
  • Structured light vision systems have been widely used in 3D surface profiling. Usually, such a system is composed of a camera and a laser which projects a line onto the target. Calibration is necessary to acquire 3D information with a structured light stripe vision system. Conventional calibration algorithms find the pose of the camera and the equation of the laser stripe plane in the same camera coordinate system; therefore, 3D reconstruction is only possible in the camera frame. In most cases this is sufficient to fulfill the given tasks, but these algorithms require multiple images acquired under different poses for calibration. In this paper, we propose a calibration algorithm that works from just one shot. The proposed algorithm also provides 3D reconstruction in both the camera and laser frames. This is achieved by using a newly designed calibration structure which has multiple vertical planes on the ground plane. The ability to reconstruct in both the camera and laser frames gives more flexibility for applications, and the proposed algorithm also improves the accuracy of the 3D reconstruction.
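
As background for the stripe-plane part of such a calibration, here is a generic least-squares fit of the laser plane to 3D points assumed to lie on the stripe; it is not the paper's single-shot, multi-vertical-plane procedure.

```python
# Generic least-squares fit of the laser plane n·X + d = 0 to 3D points
# assumed to lie on the stripe; one ingredient of stripe-plane calibration,
# not the paper's single-shot, multi-vertical-plane procedure.
import numpy as np


def fit_plane(points_3d):
    """points_3d: (N, 3) array of stripe points expressed in the camera frame."""
    centroid = points_3d.mean(axis=0)
    # The plane normal is the right singular vector of the centred points
    # associated with the smallest singular value.
    _, _, vt = np.linalg.svd(points_3d - centroid)
    normal = vt[-1]
    d = -normal @ centroid
    return normal, d                     # plane: normal·X + d = 0


# Synthetic check with points on the plane z = 0.5 + 0.1*x.
x, y = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
pts = np.column_stack([x.ravel(), y.ravel(), 0.5 + 0.1 * x.ravel()])
print(fit_plane(pts))
```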

A New Hand-eye Calibration Technique to Compensate for the Lens Distortion Effect (렌즈왜곡효과를 보상하는 새로운 hand-eye 보정기법)

  • Chung, Hoi-Bum
    • Journal of the Korean Society for Precision Engineering / v.19 no.1 / pp.172-179 / 2002
  • In a robot/vision system, the vision sensor, typically a CCD array sensor, is mounted on the robot hand. The problem of determining the relationship between the camera frame and the robot hand frame is referred to as hand-eye calibration. In the literature, various methods have been suggested for camera calibration and sensor registration. Recently, a one-step approach which combines camera calibration and sensor registration was suggested by Horaud & Dornaika. In this approach, the camera extrinsic parameters need not be determined at every robot configuration. In this paper, by modifying the camera model and including the lens distortion effect in the perspective transformation matrix, a new one-step approach to hand-eye calibration is proposed.
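
Since this journal version again centers on folding lens distortion into the projection model, the short sketch below shows a generic pinhole projection with radial distortion terms. The Brown-style parameterisation is an assumption, not necessarily the author's exact modified perspective transformation matrix.

```python
# Pinhole projection with radial lens distortion applied before the affine
# step; a generic Brown-style model used for illustration only.
import numpy as np


def project_with_distortion(X_cam, fx, fy, cx, cy, k1, k2):
    """X_cam: (N, 3) points in the camera frame with Z > 0."""
    x = X_cam[:, 0] / X_cam[:, 2]            # normalised image coordinates
    y = X_cam[:, 1] / X_cam[:, 2]
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2     # radial distortion factor
    u = fx * x * scale + cx
    v = fy * y * scale + cy
    return np.column_stack([u, v])


pts = np.array([[0.1, -0.05, 1.0], [0.3, 0.2, 2.0]])
print(project_with_distortion(pts, 800.0, 800.0, 320.0, 240.0, k1=-0.25, k2=0.08))
```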

An Exact 3D Data Extraction Algorithm For Active Range Sensor using Laser Slit (레이저 슬릿을 사용하는 능동거리 센서의 정확한 3D 데이터 추출 알고리즘)

  • Cha, Y.Y.;Gweon, D.G.
    • Journal of the Korean Society for Precision Engineering / v.12 no.8 / pp.73-85 / 1995
  • A sensor system that precisely measures the distance from the center of the sensor system to an obstacle is needed to recognize the surrounding environment, and the sensor system must be calibrated thoroughly to obtain exact range information. This study covers the calibration of an active range sensor which consists of a camera and a laser slit emitting device, and provides the equations needed to obtain 3D range data. This is made possible by obtaining the extrinsic parameters of the laser slit emitting device, through image processing of slits measured at constant distance intervals, and the intrinsic parameters from the camera calibration. The 3D range data equation, derived from simple geometric assumptions, is shown to be applicable to general cases using the calibration parameters. Exact 3D range data for the object were also obtained in a real experiment.
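
The triangulation step common to laser-slit range sensors can be summarised as intersecting the camera ray through a stripe pixel with the calibrated laser plane; the sketch below illustrates that step with placeholder intrinsics and plane parameters.

```python
# Triangulation step common to laser-slit range sensors: intersect the camera
# ray through a stripe pixel with the calibrated laser plane n·X + d = 0
# (everything in the camera frame). Parameter values are placeholders.
import numpy as np


def pixel_to_3d(u, v, K, plane_n, plane_d):
    """Back-project pixel (u, v) and intersect the ray with the laser plane."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction
    # Point on the ray: X = t * ray; on the plane: n·X + d = 0 => t = -d / (n·ray)
    t = -plane_d / (plane_n @ ray)
    return t * ray


K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
plane_n = np.array([0.0, -0.7071, 0.7071])   # assumed calibrated laser plane
plane_d = -0.35
print(pixel_to_3d(330.0, 260.0, K, plane_n, plane_d))
```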


A Distortion Correction Method of Wide-Angle Camera Images through the Estimation and Validation of a Camera Model (카메라 모델의 추정과 검증을 통한 광각 카메라 영상의 왜곡 보정 방법)

  • Kim, Kyeong-Im;Han, Soon-Hee;Park, Jeong-Seon
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.8 no.12 / pp.1923-1932 / 2013
  • In order to solve the problem of severely distorted images from a wide-angle camera, we propose a calibration method which corrects radial distortion in wide-angle images through the estimation and validation of a camera model. First, we estimate a camera model consisting of intrinsic and extrinsic parameters from calibration patterns, where the intrinsic parameters are the focal length, the principal point, and so on, and the extrinsic parameters are the relative position and orientation of the calibration pattern with respect to the camera. Next, we validate the estimated camera model by applying the inverse of the model to the images and re-extracting the corner points. Finally, we correct the distortion of the image using the validated camera model. We confirm that the proposed method corrects more than 80% of the distortion in calibration experiments using lattice-pattern images captured with a general web camera and a wide-angle camera.
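
A hedged sketch of the estimate-then-validate-then-correct pipeline described above, using OpenCV's standard chessboard calibration and undistortion routines; the image folder, the 9x6 pattern size, and the error check are assumptions for illustration.

```python
# Estimate-then-validate-then-correct sketch using OpenCV's standard routines
# on a lattice (chessboard) pattern. Folder name, pattern size, and the error
# check are illustrative assumptions.
import glob
import numpy as np
import cv2

pattern = (9, 6)                          # inner corners of the lattice pattern
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts, size = [], [], None
for path in sorted(glob.glob("calib_images/*.jpg")):     # hypothetical folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]

# Camera model: intrinsics (focal length, principal point, distortion) plus
# per-view extrinsics (pattern pose relative to the camera).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)

# Validation: re-project the pattern with the estimated model and compare
# against the extracted corners.
proj, _ = cv2.projectPoints(objp, rvecs[0], tvecs[0], K, dist)
mean_err = np.linalg.norm(img_pts[0].reshape(-1, 2) - proj.reshape(-1, 2)) / len(objp)

# Correction: undistort an image with the validated model.
img = cv2.imread(sorted(glob.glob("calib_images/*.jpg"))[0])
undistorted = cv2.undistort(img, K, dist)
```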

Robust Elevator Door Recognition using LRF and Camera (LRF와 카메라를 이용한 강인한 엘리베이터 문 인식)

  • Ma, Seung-Wan;Cui, Xuenan;Lee, Hyung-Ho;Kim, Hyung-Rae;Lee, Jae-Hong;Kim, Hak-Il
    • Journal of Institute of Control, Robotics and Systems / v.18 no.6 / pp.601-607 / 2012
  • Recognition of elevator doors is needed for mobile service robots to move between floors in a building. This paper proposes a sensor fusion approach using an LRF (Laser Range Finder) and a camera to solve the problem. Using the laser scans from the LRF, we extract line segments and detect elevator door candidates. Using the camera image, the door candidates are verified and the real elevator door is selected; outliers are filtered out through this verification process. Then, door state detection is performed by depth analysis within the door. The proposed method uses extrinsic calibration to fuse the LRF and the camera, and it gives better elevator door recognition results than a method using the LRF only.
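
To illustrate how an LRF-camera extrinsic calibration is typically used for this kind of fusion, the sketch below transforms laser scan points into the camera frame and projects them into the image so image evidence can verify door candidates; the extrinsics, intrinsics, and scan values are placeholders, not the paper's calibration result.

```python
# Fusing an LRF and a camera via their extrinsic calibration: transform laser
# scan points into the camera frame and project them into the image.
import numpy as np
import cv2

# Laser scan in the LRF frame: (range, bearing) -> (x forward, y left, z up).
ranges = np.array([2.10, 2.08, 2.07, 2.08, 2.11])
angles = np.deg2rad(np.linspace(-10.0, 10.0, 5))
pts_lrf = np.column_stack([ranges * np.cos(angles),
                           ranges * np.sin(angles),
                           np.zeros_like(ranges)])

# Extrinsic calibration result (placeholder): LRF frame -> camera frame,
# mapping x-forward/y-left/z-up onto x-right/y-down/z-forward.
R = np.array([[0.0, -1.0, 0.0],
              [0.0, 0.0, -1.0],
              [1.0, 0.0, 0.0]])
t = np.array([0.05, 0.10, 0.0])
pts_cam = pts_lrf @ R.T + t

K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
uv, _ = cv2.projectPoints(pts_cam, np.zeros(3), np.zeros(3), K, np.zeros(5))
print(uv.reshape(-1, 2))                 # pixel locations of the scan points
```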

On-line Camera Calibration Using the Time-Varying Image Sequence (시변 순차영상을 이용한 On-line 카메라 교정)

  • 김범진;이호순;최성구;노도환
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 2000.10a / pp.440-440 / 2000
  • In general, camera calibration can be divided into indoor and outdoor systems. An indoor system works under optimized experimental conditions, whereas an outdoor system yields different camera parameters for each image even at the same position; that is, the camera parameters vary with the environment, due to lighting, impulse noise, and so on. We therefore make use of an image sequence, because it provides more information for each image, and we use corresponding lines rather than corresponding points, since lines carry less error and more information. In this paper, we propose an on-line camera calibration method using a time-varying image sequence and corresponding lines, and we compute both the intrinsic and extrinsic camera parameters in an on-line system.
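
The paper's corresponding-line formulation is not reproduced here; as a simplified stand-in for the on-line aspect, the sketch below re-estimates the extrinsic parameters frame by frame from tracked 3D-2D point correspondences, reusing the previous pose as the initial guess. All reference points and poses are illustrative assumptions.

```python
# Simplified stand-in for on-line recalibration: re-estimate the extrinsics
# for each incoming frame from tracked 3D-2D correspondences.
import numpy as np
import cv2

K = np.array([[750.0, 0.0, 320.0], [0.0, 750.0, 240.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)
ref_3d = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [2.0, 1.0, 0.0],
                   [0.0, 1.0, 0.0], [1.0, 0.5, 0.4], [0.5, 0.3, 0.2]])


def update_extrinsics(tracked_2d, rvec=None, tvec=None):
    """Refine the camera pose for the current frame."""
    if rvec is None:
        ok, rvec, tvec = cv2.solvePnP(ref_3d, tracked_2d, K, dist)
    else:
        ok, rvec, tvec = cv2.solvePnP(ref_3d, tracked_2d, K, dist, rvec, tvec,
                                      useExtrinsicGuess=True)
    return rvec, tvec


# Simulate one frame's measurements by projecting the references from a known
# pose (in practice these come from features tracked over the image sequence).
true_rvec = np.array([0.1, -0.2, 0.05])
true_tvec = np.array([-1.0, -0.5, 4.0])
frame_2d, _ = cv2.projectPoints(ref_3d, true_rvec, true_tvec, K, dist)
rvec, tvec = update_extrinsics(frame_2d.reshape(-1, 2))
```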


Virtual Space Calibration for Laser Vision Sensor Using Circular Jig (원형 지그를 이용한 레이저-비젼 센서의 가상 공간 교정에 관한 연구)

  • 김진대;조영식;이재원
    • Journal of the Korean Society for Precision Engineering / v.20 no.12 / pp.73-79 / 2003
  • Recently, tele-robot operation in unstructured environments has been widely researched. Human interaction with the tele-robot system can be used to improve robot operation and performance in an unknown environment. Exact modeling based on the real environment is a fundamental and important process for this interaction. In this paper, we propose an extrinsic parameter calibration and data augmentation method that uses only a circular jig in the hand-eye laser virtual environment. Compared to other methods, this algorithm allows easier estimation and overlay. Experimental results using synthetic graphics demonstrate the usefulness of the proposed algorithm.
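
As a small, generic illustration of working with a circular jig in images, the sketch below segments the jig and fits an ellipse to its projected contour; the pose recovery and data augmentation steps of the paper are not shown, and the input image and thresholding are placeholders.

```python
# Generic first step with a circular jig: segment it and fit an ellipse to its
# projected contour. Input image and thresholding choices are placeholders.
import cv2

img = cv2.imread("jig_view.png", cv2.IMREAD_GRAYSCALE)   # hypothetical image
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
jig = max(contours, key=cv2.contourArea)                  # assume largest blob
(cx, cy), (major, minor), angle = cv2.fitEllipse(jig)
print("projected circle (ellipse):", cx, cy, major, minor, angle)
```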

Stereo Cameras Calibration Based on Epipolar Rectification and its Application

  • Chaewieang, Pipat;Thepmanee, Teerawat;Kummool, Sart;Jaruvanawat, Anuchit;Sirisantisamrid, Kaset
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 2003.10a / pp.246-249 / 2003
  • The necessary constraints are guaranteed by comparing the extrinsic parameters of the two cameras: each rotation matrix and translation vector must be equal, except for the X-axis component of the translation vector. Thus, the 3D range measurement cannot be computed directly at the end of camera calibration. To overcome this disadvantage, epipolar rectification has been proposed in the literature. This paper presents the development of epipolar rectification for calibrating stereo cameras. The required transformation mapping of points in 3D space is based on calculating the image point that appears on a new image plane using the calibrated parameters. The computation assumes that the old image planes are rotated around their optical centers until the focal planes become coplanar, thereby containing the baseline, and the Z-axes of both camera coordinate frames become parallel. The optical center positions of the new extrinsic parameters are the same as those of the old cameras, whereas the new orientation differs from the old one by suitable rotations. The intrinsic parameters are the same for both cameras, so that once the calibration process is completed, the 3D range measurement can be calculated immediately. The rectification determines a transformation of each image plane such that pairs of conjugate epipolar lines become collinear and parallel to one of the image axes. The experimental results verify that the proposed technique meets the expected specifications.
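
A hedged sketch of the generic stereo pipeline this abstract builds on: per-camera calibration, stereo extrinsic calibration, then epipolar rectification so that conjugate epipolar lines become collinear and parallel to the image x-axis. The pattern size, image folders, and flags are assumptions for illustration.

```python
# Generic stereo calibration and epipolar rectification pipeline with OpenCV;
# pattern size, image folders, and flags are assumptions for illustration.
import glob
import numpy as np
import cv2

pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, left_pts, right_pts, size = [], [], [], None
for lp, rp in zip(sorted(glob.glob("left/*.png")), sorted(glob.glob("right/*.png"))):
    gl = cv2.imread(lp, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(rp, cv2.IMREAD_GRAYSCALE)
    okl, cl = cv2.findChessboardCorners(gl, pattern)
    okr, cr = cv2.findChessboardCorners(gr, pattern)
    if okl and okr:
        obj_pts.append(objp)
        left_pts.append(cl)
        right_pts.append(cr)
        size = gl.shape[::-1]

# Per-camera intrinsics, then the stereo extrinsics (R, T between the cameras).
_, K1, D1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
_, K2, D2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
_, K1, D1, K2, D2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, D1, K2, D2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)

# Rectification rotates both image planes so that conjugate epipolar lines
# become collinear and parallel to the image x-axis.
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)
map1x, map1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)
```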
