• Title/Summary/Keyword: 3-D coordinate calibration

A Study on Camera Calibration Using Artificial Neural Network (신경망을 이용한 카메라 보정에 관한 연구)

  • Jeon, Kyong-Pil;Woo, Dong-Min;Park, Dong-Chul
    • Proceedings of the KIEE Conference
    • /
    • 1996.07b
    • /
    • pp.1248-1250
    • /
    • 1996
  • The objective of camera calibration is to obtain the correlation between camera image coordinates and 3-D real-world coordinates. Most calibration methods are based on a camera model consisting of physical parameters of the camera such as position, orientation, and focal length, and in that case camera calibration means the process of computing those parameters. In this research, we suggest a new approach that is very efficient because the artificial neural network (ANN) model implicitly contains all the physical parameters, some of which are very difficult to estimate with existing calibration methods. Implicit camera calibration, the process of calibrating a camera without explicitly computing its physical parameters, can be used both for 3-D measurement and for the generation of image coordinates. By training on calibration points of different heights, we can find the perspective projection point, which can then be used to reconstruct the 3-D real-world coordinate of an image point at an arbitrary height and the image coordinate of an arbitrary 3-D real-world coordinate. An experimental comparison of our method with Tsai's well-known two-stage method verifies the effectiveness of the proposed method.

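The implicit calibration described above, mapping image coordinates plus a known calibration-plane height to real-world coordinates through a trained network, can be illustrated with a small feed-forward regressor. This is only a minimal sketch on synthetic placeholder data; the network size, variable names, and data are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Assumed calibration data: pixel coordinates (u, v), the known height z of each
# calibration plane, and the corresponding world coordinates (X, Y).
rng = np.random.default_rng(0)
uv_z = rng.uniform(0.0, 1.0, size=(200, 3))          # placeholder inputs (u, v, z)
world_xy = uv_z[:, :2] * 100.0 + 5.0 * uv_z[:, 2:]   # placeholder targets (X, Y)

# The network plays the role of the implicit camera model: pose, focal length and
# distortion are all absorbed into its weights instead of being estimated explicitly.
model = MLPRegressor(hidden_layer_sizes=(32, 32), activation="tanh",
                     max_iter=5000, random_state=0)
model.fit(uv_z, world_xy)

# Reconstruct the world coordinate of an image point at an arbitrary height.
query = np.array([[0.4, 0.7, 0.5]])
print("estimated (X, Y):", model.predict(query))
```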

Camera Calibration Using the Fuzzy Model (퍼지 모델을 이용한 카메라 보정에 관한 연구)

  • 박민기
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.11 no.5
    • /
    • pp.413-418
    • /
    • 2001
  • In this paper, we propose a new camera calibration method based on a fuzzy model instead of the physical camera model of conventional methods. Camera calibration is to determine the correlation between camera image coordinates and real-world coordinates. The calibration method using a fuzzy model cannot estimate the camera's physical parameters, which can be obtained with conventional methods. However, the proposed method is very simple and efficient because it can determine the correlation between camera image coordinates and real-world coordinates without any restriction, which is the objective of camera calibration. With calibration points acquired from experiments, the 3-D real-world coordinates and the 2-D image coordinates are estimated using the fuzzy modeling method, and the experimental results demonstrate the validity of the proposed method.

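As a rough illustration of calibrating with a fuzzy model rather than a physical camera model, the sketch below uses a generic zero-order Takagi-Sugeno structure: Gaussian memberships over image coordinates and constant consequents in world coordinates. The rule centres, width, and consequents are invented placeholders; the paper's particular fuzzy modeling method is not reproduced here.

```python
import numpy as np

# Zero-order Takagi-Sugeno style sketch: each rule has a Gaussian membership over
# image coordinates (u, v) and a constant consequent (X, Y) in world coordinates.
centres = np.array([[100.0, 100.0], [300.0, 100.0], [100.0, 300.0], [300.0, 300.0]])
sigma = 120.0
consequents = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # metres

def fuzzy_map(uv):
    """Map an image coordinate to a world coordinate by normalised rule firing."""
    d2 = np.sum((centres - uv) ** 2, axis=1)
    firing = np.exp(-d2 / (2.0 * sigma ** 2))   # Gaussian membership of each rule
    weights = firing / firing.sum()             # normalised firing strengths
    return weights @ consequents                # weighted average of consequents

print(fuzzy_map(np.array([200.0, 200.0])))      # a point between the four rule centres
```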

A Study on Vision-based Calibration Method for Bin Picking Robots for Semiconductor Automation (반도체 자동화를 위한 빈피킹 로봇의 비전 기반 캘리브레이션 방법에 관한 연구)

  • Kyo Mun Ku;Ki Hyun Kim;Hyo Yung Kim;Jae Hong Shim
    • Journal of the Semiconductor & Display Technology
    • /
    • v.22 no.1
    • /
    • pp.72-77
    • /
    • 2023
  • In many manufacturing settings, including the semiconductor industry, products are completed by producing and assembling various components. Sorting and classifying randomly mixed parts takes a lot of time and labor. Recently, many efforts have been made to select and assemble the correct parts from mixed parts using robots. Automating the sorting and classification of randomly mixed components is difficult because various objects and the positions and attitudes of robots and cameras in 3D space need to be known. Previously, only objects in specific positions were grasped by robots, or people sorted items directly. To enable robots to pick up random objects in 3D space, bin picking technology is required. Realizing bin picking requires knowing the coordinate system relationships between the robot, the grasping target object, and the camera; calibration work to determine these relationships is necessary before the object recognized by the camera can be grasped. It is also difficult to restore the depth value of 2D images in the 3D reconstruction required for bin picking. In this paper, we propose to use the depth information of an RGB-D camera as the Z value in the rotation and translation conversion used in calibration. We perform camera calibration for accurate coordinate system conversion of objects in 2D images, and then calibrate between the robot and the camera. We prove the effectiveness of the proposed method through accuracy evaluations of the camera calibration and of the calibration between the robot and the camera.

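A minimal sketch of the core coordinate conversion described above: back-projecting a pixel with the depth value from the RGB-D camera (the proposed source of Z) into the camera frame and mapping it into the robot base frame with a previously calibrated transform. The intrinsics and the camera-to-robot transform below are placeholder values, not results from the paper.

```python
import numpy as np

# Placeholder RGB-D intrinsics and a placeholder camera-to-robot-base transform;
# in practice both come from the calibration steps described in the paper.
fx, fy, cx, cy = 615.0, 615.0, 320.0, 240.0
R_base_cam = np.eye(3)                      # rotation: camera frame -> robot base
t_base_cam = np.array([0.5, 0.0, 0.8])      # translation: camera -> robot base (m)

def pixel_to_base(u, v, depth_m):
    """Back-project a pixel using the depth measured by the RGB-D sensor (the Z
    value, as proposed above) and express the point in the robot base frame."""
    p_cam = np.array([(u - cx) * depth_m / fx,
                      (v - cy) * depth_m / fy,
                      depth_m])
    return R_base_cam @ p_cam + t_base_cam

print(pixel_to_base(350, 260, 0.72))        # grasp target position in the base frame
```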

Statistical analysis for RMSE of 3D space calibration using the DLT (DLT를 이용한 3차원 공간검증시 RMSE에 대한 통계학적 분석)

  • Lee, Hyun-Seob;Kim, Ky-Hyeung
    • Korean Journal of Applied Biomechanics
    • /
    • v.13 no.1
    • /
    • pp.1-12
    • /
    • 2003
  • The purpose of this study was to design a 3D space calibration procedure that reduces RMSE, based on statistical analysis, when the DLT algorithm and a control frame are used. The control frame for 3D space calibration measured $1{\times}3{\times}2$ m and had 162 control points attached to it. Two methods were used to obtain the 2D coordinates for calculating the 3D coordinates: the 2D coordinates from each individual image frame, and the mean 2D coordinates over all image frames. The statistical analysis used one-way ANOVA and t-tests with a significance level of ${\alpha}=.05$. The recommended procedures for reducing RMSE were as follows. 1. Use a control frame composed of 24-44 equally spaced control points. 2. When photographing, locate the control frame at the center of the image plane (image frame) or use a lens with little distortion. 3. When calculating the 3D coordinates, use the mean of the 2D coordinates obtained from all image frames.
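
For reference, a compact sketch of the 11-parameter DLT used in such 3D space calibration: the parameters are estimated from control points by linear least squares and a reprojection RMSE is computed. The control-point layout and synthetic image coordinates below are placeholders, not the paper's frame data, and the paper's RMSE is evaluated on reconstructed 3-D coordinates, which additionally requires two or more camera views.

```python
import numpy as np

def dlt_11(world, image):
    """Estimate the 11 DLT parameters from matched 3-D control points and 2-D
    image coordinates (at least 6 points) by linear least squares."""
    rows, rhs = [], []
    for (X, Y, Z), (u, v) in zip(world, image):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
        rhs.extend([u, v])
    L, *_ = np.linalg.lstsq(np.asarray(rows, float), np.asarray(rhs, float), rcond=None)
    return L

def reproject(L, world):
    """Project 3-D points back into the image with the estimated DLT parameters."""
    X, Y, Z = np.asarray(world, float).T
    den = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    u = (L[0] * X + L[1] * Y + L[2] * Z + L[3]) / den
    v = (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / den
    return np.stack([u, v], axis=1)

# Placeholder control points on a box-shaped frame and synthetic image coordinates.
world = np.array([[x, y, z] for x in (0, 1) for y in (0, 1.5, 3) for z in (0, 2)])
image = world @ np.array([[400.0, 20.0], [30.0, -380.0], [10.0, 15.0]]) + np.array([320.0, 240.0])
L = dlt_11(world, image)
rmse = np.sqrt(np.mean(np.sum((reproject(L, world) - image) ** 2, axis=1)))
print("reprojection RMSE (pixels):", rmse)
```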

Development of the Computer Vision based Continuous 3-D Feature Extraction System via Laser Structured Lighting (레이저 구조광을 이용한 3차원 컴퓨터 시각 형상정보 연속 측정 시스템 개발)

  • Im, D. H.;Hwang, H.
    • Journal of Biosystems Engineering
    • /
    • v.24 no.2
    • /
    • pp.159-166
    • /
    • 1999
  • A system to continuously extract real 3-D geometric feature information from the 2-D image of an object fed randomly on a conveyor has been developed. Two sets of structured laser lighting were utilized, and the laser structured-light projection image was acquired by the camera on the signal of a photo-sensor mounted on the conveyor. A camera coordinate calibration matrix, which transforms 2-D image coordinates into 3-D world space coordinates, was obtained using 6 known points. The maximum error after calibration was 1.5 mm within the height range of 103 mm. The correlation equation between the shift of the laser light and the height was generated; height estimated from this correlation showed a maximum error of 0.4 mm within the height range of 103 mm. Interactive 3-D geometric feature extraction software was developed using Microsoft Visual C++ 4.0 under the Windows environment, and the extracted 3-D geometric feature information was reconstructed into a 3-D surface using MATLAB.

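The height-from-shift correlation mentioned above can be sketched as a simple least-squares line fit. The shift and height values below are invented placeholders standing in for measurements over the 103 mm height range, not the paper's data.

```python
import numpy as np

# Placeholder measurements: laser-line shift (pixels) versus known object height (mm).
shift_px  = np.array([0.0, 14.8, 30.1, 45.0, 60.3, 75.2])
height_mm = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])

# Least-squares line height = a * shift + b, i.e. the shift-to-height correlation.
a, b = np.polyfit(shift_px, height_mm, 1)

def height_from_shift(shift):
    """Estimate object height (mm) from the observed shift of the laser line (pixels)."""
    return a * shift + b

print(height_from_shift(37.5))
```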

Viewing Angle-Improved 3D Integral Imaging Display with Eye Tracking Sensor

  • Hong, Seokmin;Shin, Donghak;Lee, Joon-Jae;Lee, Byung-Gook
    • Journal of information and communication convergence engineering
    • /
    • v.12 no.4
    • /
    • pp.208-214
    • /
    • 2014
  • In this paper, in order to solve the problems of a narrow viewing angle and the flip effect in a three-dimensional (3D) integral imaging display, we propose an improved system that uses an eye tracking method based on the Kinect sensor. In the proposed method, we introduce two calibration processes. The first is a calibration between the two cameras within the Kinect sensor to collect specific 3D information. The second is a space calibration for the coordinate conversion between the Kinect sensor and the coordinate system of the display panel. These calibration processes improve the estimation of the 3D position of the observer's eyes and allow elemental images to be generated at real-time speed based on the estimated position. To show the usefulness of the proposed method, we implement an integral imaging display system using the eye tracking process based on our calibration processes and carry out preliminary experiments measuring the viewing angle and the flipping effect for the reconstructed 3D images. The experimental results reveal that the proposed method extends the viewing angle and removes the flipped images compared with the conventional system.
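
One common way to realize the second step above, the coordinate conversion between the Kinect sensor and the display panel, is a least-squares rigid transform (Kabsch/Procrustes) estimated from matched 3D points. The paper does not spell out its exact procedure, so this is only a generic sketch with placeholder points.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t such that dst ~ R @ src + t
    (Kabsch/Procrustes), e.g. mapping Kinect coordinates to panel coordinates."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

# Placeholder matched calibration points in the Kinect frame and the panel frame.
kinect_pts = np.array([[0.0, 0.0, 1.0], [0.3, 0.0, 1.0], [0.0, 0.2, 1.2], [0.3, 0.2, 1.4]])
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
panel_pts = kinect_pts @ R_true.T + np.array([0.1, -0.05, 0.02])
R, t = rigid_transform(kinect_pts, panel_pts)
print(np.allclose(R, R_true), t)
```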

Stereo Calibration Using Support Vector Machine

  • Kim, Se-Hoon;Kim, Sung-Jin;Won, Sang-Chul
Proceedings of the ICROS (Institute of Control, Robotics and Systems) Conference
    • /
    • 2003.10a
    • /
    • pp.250-255
    • /
    • 2003
  • The position of a 3-dimensional (3D) point can be measured using a calibrated stereo camera. To obtain a more accurate measurement, more accurate camera calibration is required, and many calibration methods exist. Simple linear methods are usually not accurate due to nonlinear lens distortion. Nonlinear methods are more accurate than linear ones, but they increase the computational cost and require a good initial guess. Multi-step methods need some parameters of the camera in use to be known. In recent years, explicit model-based camera calibration has advanced with the development of more precise camera models involving correction of lens distortion, but such explicit model-based calibration has disadvantages, so implicit camera calibration methods have been derived. One popular implicit calibration method is to use a neural network. In this paper, we propose an implicit stereo camera calibration method for 3D reconstruction using a support vector machine. The SVM can learn the relationship between 3D coordinates and image coordinates, and it is robust in the presence of noise and lens distortion; simulation results are shown in Section 4.

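A rough sketch of the implicit stereo calibration idea above: one support vector regressor per output dimension learns the mapping from stereo image coordinates to 3D coordinates. The synthetic data and SVR hyperparameters are placeholder assumptions; the paper's exact SVM formulation may differ.

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

# Placeholder training pairs: stereo image coordinates (u_l, v_l, u_r, v_r) of
# calibration points and their known 3-D coordinates (X, Y, Z).
rng = np.random.default_rng(1)
stereo_uv = rng.uniform(0.0, 640.0, size=(300, 4))
disparity = stereo_uv[:, 0] - stereo_uv[:, 2]
world_xyz = np.column_stack([disparity,                              # crude placeholder X
                             stereo_uv[:, 1],                        # placeholder Y
                             1000.0 / (1.0 + np.abs(disparity))])    # placeholder Z

# One epsilon-SVR per output dimension learns the implicit stereo camera model.
model = MultiOutputRegressor(SVR(kernel="rbf", C=100.0, epsilon=0.5))
model.fit(stereo_uv, world_xyz)

print("reconstructed (X, Y, Z):", model.predict(stereo_uv[:1]))
```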

Extrinsic calibration using a multi-view camera (멀티뷰 카메라를 사용한 외부 카메라 보정)

  • 김기영;김세환;박종일;우운택
    • Proceedings of the IEEK Conference
    • /
    • 2003.11a
    • /
    • pp.187-190
    • /
    • 2003
  • In this paper, we propose an extrinsic calibration method for a multi-view camera to obtain an optimal pose in 3D space. Conventional calibration algorithms do not guarantee calibration accuracy at mid/long range because pixel errors increase as the distance between the camera and the pattern grows. To compensate for the calibration errors, we first apply Tsai's algorithm to each lens to obtain initial extrinsic parameters. Then, we estimate the extrinsic parameters using distance vectors obtained from structural cues of the multi-view camera. After obtaining the estimated extrinsic parameters of each lens, we carry out an iterative non-linear optimization using the relationship between the camera coordinates and the world coordinates. The optimal camera parameters can be used to generate 3D panoramic virtual environments and to support AR applications.

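The two-step structure described above, an initial per-lens estimate followed by iterative non-linear refinement, can be sketched as follows. Here cv2.solvePnP stands in for the Tsai initialization and SciPy's least_squares refines the reprojection error; the pattern, intrinsics, and pose are synthetic placeholders, and the paper's multi-view structural-cue constraint is not modeled.

```python
import cv2
import numpy as np
from scipy.optimize import least_squares

# Placeholder planar pattern (world Z = 0), intrinsics, and true pose used to
# synthesise noisy image points for one lens.
obj = np.array([[x, y, 0.0] for x in range(3) for y in range(3)], dtype=np.float64) * 0.1
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
rvec_true, tvec_true = np.array([0.1, -0.2, 0.05]), np.array([0.05, -0.02, 1.0])
img, _ = cv2.projectPoints(obj, rvec_true, tvec_true, K, None)
img = img.reshape(-1, 2) + np.random.default_rng(0).normal(0.0, 0.3, (len(obj), 2))

# Step 1: initial extrinsic parameters (rotation vector, translation) for the lens.
_, rvec0, tvec0 = cv2.solvePnP(obj, img, K, None)

# Step 2: iterative non-linear refinement of the reprojection error.
def residual(p):
    proj, _ = cv2.projectPoints(obj, p[:3], p[3:], K, None)
    return (proj.reshape(-1, 2) - img).ravel()

p_opt = least_squares(residual, np.r_[rvec0.ravel(), tvec0.ravel()]).x
print("refined rvec:", p_opt[:3], "tvec:", p_opt[3:])
```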

The Position Estimation of a Body Using 2-D Slit Light Vision Sensors (2-D 슬리트광 비젼 센서를 이용한 물체의 자세측정)

  • Kim, Jung-Kwan;Han, Myung-Chul
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.16 no.12
    • /
    • pp.133-142
    • /
    • 1999
  • We introduce algorithms for 2-D and 3-D position estimation using 2-D vision sensors. The sensors used in this research project a red laser slit light onto the body, so it is very convenient to obtain the coordinates of a corner point or edge in the sensor coordinate frame. Since the measured points are normally not fixed in the body coordinate frame, additional conditions, namely that corner lines or edges are straight and fixed in the body frame, are used to find the position and orientation of the body. In the case of a 2-D motional body, the solution can be found analytically; in the case of a 3-D motional body, a linearization technique and the least-mean-squares method are used because of the hard nonlinearity.

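For the 2-D case, the key constraint above, that edges are straight and fixed in the body frame, lets the in-plane orientation be read from a least-squares line fit to the measured edge points. The measurement values below are synthetic placeholders; the fitted direction is only defined up to 180°.

```python
import numpy as np

# Placeholder measurements: points sampled along one straight edge of the body in
# the 2-D sensor frame (the slit light intersects the edge as a line of points).
t = np.linspace(0.0, 50.0, 11)
edge_pts = np.column_stack([10.0 + t * np.cos(0.3), 5.0 + t * np.sin(0.3)])
edge_pts += np.random.default_rng(2).normal(0.0, 0.05, edge_pts.shape)

# Least-squares line fit via SVD: the dominant direction of the centred points is
# the edge direction, which gives the body's in-plane orientation (mod 180 deg).
centroid = edge_pts.mean(axis=0)
_, _, Vt = np.linalg.svd(edge_pts - centroid)
direction = Vt[0]
theta = np.arctan2(direction[1], direction[0])
print("edge orientation (rad):", theta, "point on edge:", centroid)
```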

Measurement of Strain of Sheet Metal (화상처리기법을 이용한 판재의 변형률 측정(I))

  • 황창원;김낙수
    • Proceedings of the Korean Society for Technology of Plasticity Conference
    • /
    • 1997.03a
    • /
    • pp.207-212
    • /
    • 1997
  • In estimating the formability of sheet metal, a stereo vision system contributes to the accuracy of the measured strain, to the convenience of measuring the strain, and to the ease of preparing the forming limit diagram, by calculating the 3D coordinate values and the strain of the sheet metal. An algorithm has been developed so that the 3D coordinate values of the sheet metal can be calculated by image processing, consisting of camera calibration and the stereo matching of images from two viewpoints. By comparison with experiments, the feasibility and convenience of the algorithm have been verified: it can calculate the 3D coordinate values of the sheet metal automatically by preprocessing the original image of the sheet metal, which contained noise, before applying the camera calibration and the stereo matching algorithm.

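A minimal sketch of the measurement chain described above: matched image points from two calibrated viewpoints are triangulated to 3-D, and the engineering strain follows from the change in spacing between neighbouring grid points. The projection matrices, image coordinates, and grid pitch are placeholder assumptions, not the paper's setup.

```python
import cv2
import numpy as np

# Placeholder stereo projection matrices (normally obtained from camera calibration)
# and matched image coordinates of two neighbouring grid points after forming.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                    # left camera
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])    # right camera, 0.1 m baseline

def triangulate(pt_left, pt_right):
    """Recover a 3-D point from matched left/right image coordinates."""
    X = cv2.triangulatePoints(P1, P2,
                              np.asarray(pt_left, np.float64).reshape(2, 1),
                              np.asarray(pt_right, np.float64).reshape(2, 1))
    return (X[:3] / X[3]).ravel()

a0 = 0.0025                                                       # grid pitch before forming (m)
p_a = triangulate([0.010, 0.020], [-0.090, 0.020])                # first grid point
p_b = triangulate([0.013, 0.020], [-0.087, 0.020])                # neighbouring grid point
a1 = np.linalg.norm(p_a - p_b)                                    # deformed pitch
print("engineering strain:", (a1 - a0) / a0)
```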