• Title/Summary/Keyword: 3D space calibration

DLT를 이용한 3차원 공간검증시 RMSE에 대한 통계학적 분석 (Statistical analysis for RMSE of 3D space calibration using the DLT)

  • 이현섭;김기형
    • 한국운동역학회지 (Korean Journal of Sport Biomechanics) / Vol. 13, No. 1 / pp. 1-12 / 2003
  • The purpose of this study was to design a 3D space calibration method that reduces RMSE, based on statistical analysis, when using the DLT algorithm with a control frame. The control frame for 3D space calibration measured $1 \times 3 \times 2$ m and carried 162 control points. Two ways of obtaining 2D coordinates from the image frames were compared for computing 3D coordinates: the 2D coordinates of each image frame used separately, and the mean 2D coordinates across frames. One-way ANOVA and t-tests were used for the statistical analysis, with significance level $\alpha = .05$. The recommended practices for reducing RMSE were as follows. 1. Use a control frame composed of 24-44 evenly arranged control points. 2. When photographing, locate the control frame at the center of the image plane (image frame) and use a lens with little distortion. 3. When computing 3D coordinates, use the mean of the 2D coordinates obtainable from all image frames.
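
For reference, the 11-parameter DLT that this paper builds on can be sketched as below. This is a minimal numpy illustration of the standard algorithm, not the authors' code; function names are ours, and the paper's recommendation 3 would be applied by averaging repeated 2D measurements in `image_pts` before reconstruction.

```python
import numpy as np

def dlt_calibrate(world_pts, image_pts):
    """Estimate the 11 DLT parameters of one camera from control points.

    world_pts: (N, 3) known 3D control-point coordinates, N >= 6.
    image_pts: (N, 2) corresponding 2D image coordinates.
    """
    A, b = [], []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
        b += [u, v]
    L, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return L  # parameters L1..L11

def dlt_reconstruct(Ls, image_pts):
    """Recover one 3D point from its 2D coordinates in two or more cameras."""
    A, b = [], []
    for L, (u, v) in zip(Ls, image_pts):
        # From u = (L1 X + L2 Y + L3 Z + L4) / (L9 X + L10 Y + L11 Z + 1), etc.
        A.append([L[0] - u * L[8], L[1] - u * L[9], L[2] - u * L[10]])
        A.append([L[4] - v * L[8], L[5] - v * L[9], L[6] - v * L[10]])
        b += [u - L[3], v - L[7]]
    xyz, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return xyz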

Viewing Angle-Improved 3D Integral Imaging Display with Eye Tracking Sensor

  • Hong, Seokmin;Shin, Donghak;Lee, Joon-Jae;Lee, Byung-Gook
    • Journal of Information and Communication Convergence Engineering / Vol. 12, No. 4 / pp. 208-214 / 2014
  • In this paper, in order to solve the problems of a narrow viewing angle and the flip effect in a three-dimensional (3D) integral imaging display, we propose an improved system that uses an eye tracking method based on the Kinect sensor. The proposed method introduces two calibration processes. The first calibrates between the two cameras within the Kinect sensor so that specific 3D information can be collected. The second uses a space calibration for the coordinate conversion between the Kinect sensor and the coordinate system of the display panel. These calibration processes improve the estimation of the 3D position of the observer's eyes and allow elemental images to be generated in real time from the estimated position. To show the usefulness of the proposed method, we implement an integral imaging display system with eye tracking based on our calibration processes and carry out preliminary experiments measuring the viewing angle and the flipping effect of the reconstructed 3D images. The experimental results reveal that, compared with the conventional system, the proposed method extends the viewing angle and removes the flipped images.
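
The second calibration process amounts to estimating a coordinate conversion between the Kinect sensor and the display panel. One standard way to recover such a conversion from corresponding 3D points is the SVD-based rigid alignment sketched below; this is our illustration under that assumption, not necessarily the authors' implementation.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t.

    src, dst: (N, 3) corresponding 3D points, N >= 3, e.g. positions
    measured in the Kinect frame and the same positions expressed in
    the display panel's coordinate system.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                   # 3x3 covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = cd - R @ cs
    return R, t
```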

반도체 자동화를 위한 빈피킹 로봇의 비전 기반 캘리브레이션 방법에 관한 연구 (A Study on Vision-based Calibration Method for Bin Picking Robots for Semiconductor Automation)

  • 구교문;김기현;김효영;심재홍
    • 반도체디스플레이기술학회지 (Journal of the Semiconductor & Display Technology) / Vol. 22, No. 1 / pp. 72-77 / 2023
  • In many manufacturing settings, including the semiconductor industry, products are completed by producing and assembling various components. Sorting and classifying randomly mixed parts takes considerable time and labor, and there have recently been many efforts to have robots select and assemble the correct parts from such mixtures. Automating this task is difficult because the various objects, and the positions and attitudes of the robots and cameras in 3D space, must all be known. Previously, robots grasped only objects at specific positions, or people sorted the items directly. For a robot to pick up arbitrary objects in 3D space, bin picking technology is required, and realizing it demands knowledge of the coordinate-system relations between the robot, the grasping target, and the camera; calibration of these relations is necessary before an object recognized by the camera can be grasped. Moreover, the depth value lost in a 2D image is difficult to restore in the 3D reconstruction that bin picking requires. In this paper, we propose using the depth information of an RGB-D camera as the Z value in the rotation and translation conversions used in calibration. We perform camera calibration for accurate coordinate conversion of objects in 2D images, followed by calibration between the robot and the camera. We demonstrate the effectiveness of the proposed method through accuracy evaluations of the camera calibration and of the robot-camera calibration.
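
The paper's key proposal, taking Z from the RGB-D depth map rather than attempting to restore it from the 2D image, corresponds to the standard pinhole back-projection shown below. This is a hedged sketch: the intrinsic matrix `K` and the hand-eye transform `(R, t)` are assumed to come from the two calibration steps described in the abstract.

```python
import numpy as np

def backproject(u, v, depth, K):
    """Lift pixel (u, v) with measured depth Z into camera coordinates.

    K is the 3x3 intrinsic matrix from camera calibration; depth is the
    Z value read directly from the RGB-D camera.
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return np.array([X, Y, depth])

# With (R, t) from the robot-camera calibration, the grasp target in
# robot-base coordinates would then be:
#   p_robot = R @ backproject(u, v, depth, K) + t
```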

Calibration of Inertial Measurement Units Using Pendulum Motion

  • Choi, Kee-Young;Jang, Se-Ah;Kim, Yong-Ho
    • International Journal of Aeronautical and Space Sciences / Vol. 11, No. 3 / pp. 234-239 / 2010
  • Micro-electro-mechanical system (MEMS) gyros and accelerometers make low-cost inertial measurement units (IMUs) cost-effective, provided that the device error characteristics are fully calibrated. The conventional calibration process uses a rate table; this paper instead proposes a method that obtains reference calibration data from the natural motion of a pendulum to which the IMU under calibration is attached. The concept was validated with experimental data: the pendulum angle measurements correlate extremely well with the solutions of the pendulum equation of motion. The calibration data were computed by regression, and the whole process was validated by comparing the measurements from the six sensor components with measurements reconstructed using the identified calibration data.
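
A minimal sketch of the idea, assuming a simple scale-and-bias sensor model and mock gyro data: the pendulum equation of motion supplies the reference angular rate, and regression identifies the calibration coefficients, as the paper does for all six sensor components.

```python
import numpy as np
from scipy.integrate import solve_ivp

g, L = 9.81, 1.0                         # assumed pendulum length (m)

def pendulum(t, y):
    """Pendulum equation of motion: theta'' = -(g/L) sin(theta)."""
    theta, omega = y
    return [omega, -(g / L) * np.sin(theta)]

t = np.linspace(0, 10, 2000)
sol = solve_ivp(pendulum, (0, 10), [0.5, 0.0], t_eval=t)
omega_ref = sol.y[1]                     # reference angular rate (rad/s)

# gyro_meas would hold the gyro output sampled at the same instants;
# mock data stands in for it here. The linear sensor model
#   gyro_meas = scale * omega_ref + bias
# is then identified by least-squares regression.
gyro_meas = 1.02 * omega_ref + 0.01 + 0.005 * np.random.randn(t.size)
A = np.column_stack([omega_ref, np.ones_like(omega_ref)])
scale, bias = np.linalg.lstsq(A, gyro_meas, rcond=None)[0]
```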

구형 물체를 이용한 다중 RGB-D 카메라의 간편한 시점보정 (Convenient View Calibration of Multiple RGB-D Cameras Using a Spherical Object)

  • 박순용;최성인
    • 정보처리학회논문지:소프트웨어 및 데이터공학 (KIPS Transactions on Software and Data Engineering) / Vol. 3, No. 8 / pp. 309-314 / 2014
  • To acquire depth images from multiple RGB-D (RGB-Depth) cameras placed around an object over 360 degrees and to generate a 3D model, the 3D transformation relations between the RGB-D cameras must be determined. In this paper, we propose a view calibration method that conveniently obtains the transformation relations between four RGB-D cameras using a spherical object. Conventional view calibration methods mostly use a planar checkerboard or a 3D object carrying coded patterns, so extracting and matching the pattern features or codes takes considerable time. This paper proposes a method that calibrates the views conveniently by using the depth and color images of a spherical object simultaneously. First, while a single sphere is moved continuously through the modeling space, its depth and color images are captured simultaneously by all RGB-D cameras. Next, the extrinsic parameters of each camera are calibrated so that the 3D center coordinates of the sphere obtained in each RGB-D camera's coordinate system coincide in the world coordinate system.
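
The per-camera sphere centers can be recovered from the depth images by a linear least-squares sphere fit, sketched below as an illustration (not the authors' code). The centers collected while the sphere moves then serve as corresponding 3D point sets for aligning each camera's extrinsic parameters to the world coordinate system, e.g. with the rigid alignment shown earlier in this listing.

```python
import numpy as np

def fit_sphere_center(points):
    """Linear least-squares sphere fit to 3D points from one depth image.

    Uses ||p||^2 = 2 c.p + (r^2 - ||c||^2), which is linear in the
    center c and the auxiliary unknown k = r^2 - ||c||^2.
    """
    P = np.asarray(points, float)                   # (N, 3) sphere surface points
    A = np.column_stack([2 * P, np.ones(len(P))])
    b = (P ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, k = sol[:3], sol[3]
    radius = np.sqrt(k + center @ center)
    return center, radius
```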

멀티뷰 카메라를 사용한 외부 카메라 보정 (Extrinsic calibration using a multi-view camera)

  • 김기영;김세환;박종일;우운택
    • 대한전자공학회 학술대회논문집 (Proceedings of the IEEK 2003 Signal Processing Society Fall Conference) / pp. 187-190 / 2003
  • In this paper, we propose an extrinsic calibration method that obtains an optimal pose of a multi-view camera in 3D space. Conventional calibration algorithms do not guarantee accuracy at mid-to-long range because pixel errors grow as the distance between the camera and the calibration pattern increases. To compensate for these errors, we first apply Tsai's algorithm to each lens to obtain initial extrinsic parameters. We then estimate the extrinsic parameters using distance vectors obtained from structural cues of the multi-view camera. After obtaining the estimated extrinsic parameters of each lens, we iteratively carry out a non-linear optimization using the relationship between the camera and world coordinate systems. The optimal camera parameters can be used to generate 3D panoramic virtual environments and to support AR applications.
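
The final refinement stage can be sketched as a standard iterative reprojection-error minimization over the extrinsic parameters. The sketch below is our illustration of that generic step, assuming known intrinsics `K`; the distance vectors from the multi-view camera's structure that the paper also exploits are not modeled here.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_extrinsics(rvec0, t0, K, world_pts, image_pts):
    """Iteratively refine (R, t) by minimizing reprojection error.

    rvec0, t0: initial extrinsics (length-3 arrays), e.g. from Tsai's
    algorithm per lens. K: 3x3 intrinsics; world_pts (N, 3); image_pts (N, 2).
    """
    def residual(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        cam = world_pts @ R.T + x[3:6]       # world -> camera coordinates
        proj = cam @ K.T
        uv = proj[:, :2] / proj[:, 2:3]      # perspective division
        return (uv - image_pts).ravel()

    res = least_squares(residual, np.concatenate([rvec0, t0]))
    return Rotation.from_rotvec(res.x[:3]).as_matrix(), res.x[3:6]
```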

레이저 구조광을 이용한 3차원 컴퓨터 시각 형상정보 연속 측정 시스템 개발 (Development of the Computer Vision based Continuous 3-D Feature Extraction System via Laser Structured Lighting)

  • 임동혁;황헌
    • Journal of Biosystems Engineering / Vol. 24, No. 2 / pp. 159-166 / 1999
  • A system has been developed to continuously extract real 3D geometric feature information from 2D images of objects fed randomly on a conveyor. Two sets of structured laser lighting were used, and the laser-structured-light projection image was acquired by the camera on the signal of a photo-sensor mounted on the conveyor. A camera calibration matrix transforming 2D image coordinates into 3D world coordinates was obtained from six known points; the maximum error after calibration was 1.5 mm within a height range of 103 mm. A correlation equation between the lateral shift of the laser light and the height was then derived; heights estimated from this correlation showed a maximum error of 0.4 mm within the same 103 mm range. Interactive 3D geometric feature extraction software was developed in Microsoft Visual C++ 4.0 under Windows, and the extracted 3D geometric feature information was reconstructed into a 3D surface using MATLAB.
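
The correlation equation between laser-stripe shift and height is a simple linear fit. A sketch with placeholder calibration values follows; the actual step heights and measured shifts are not given in the abstract.

```python
import numpy as np

# Calibration data: heights of known reference steps (mm) versus the
# measured lateral shift of the laser stripe in the image (pixels).
# The values below are placeholders, not the paper's measurements.
shift_px  = np.array([0.0, 12.1, 24.3, 36.2, 48.5])
height_mm = np.array([0.0, 25.0, 50.0, 75.0, 100.0])

# Fit the linear correlation equation height = a * shift + b.
a, b = np.polyfit(shift_px, height_mm, 1)

def height_from_shift(shift):
    """Estimate surface height (mm) from the measured stripe shift (px)."""
    return a * shift + b
```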

컴퓨터 비젼 방법을 이용한 3차원 물체 위치 결정에 관한 연구 (A Study on the Determination of 3-D Object's Position Based on Computer Vision Method)

  • 김경석
    • 한국생산제조학회지 (Journal of the Korean Society of Manufacturing Technology Engineers) / Vol. 8, No. 6 / pp. 26-34 / 1999
  • This study presents an alternative method for determining an object's position based on computer vision. The approach develops a vision system model that defines the reciprocal relationship between 3D real space and the 2D image plane. The model involves six bilinear view parameters, estimated from the relationship between camera-space locations and the real coordinates of known positions. Based on the parameters estimated for each independent camera, the position of an unknown object is obtained by a sequential estimation scheme that uses the data of the unknown points in each camera's 2D image plane. This vision control method is robust and reliable, overcoming difficulties of conventional approaches such as precise calibration of the vision sensor, exact kinematic modeling of the robot, and correct knowledge of the relative positions and orientations of the robot and the CCD camera. Finally, the method is tested experimentally by determining object positions in space with a computer vision system; the results show that the presented method is precise and practical.
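
The paper's sequential estimation scheme is not spelled out in the abstract. As a generic baseline for the same task, locating an unknown 3D point from its images in two calibrated cameras, midpoint triangulation can be sketched as follows; this is our illustration, not the six-view-parameter method itself.

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint of the common perpendicular between two viewing rays.

    c1, c2: camera centers; d1, d2: ray directions in world coordinates,
    obtained from each camera's estimated model and the observed pixels.
    """
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Solve for ray parameters (s, t) minimizing |(c1 + s d1) - (c2 + t d2)|.
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    s, t = np.linalg.solve(A, b)
    return ((c1 + s * d1) + (c2 + t * d2)) / 2
```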

비교정 영상으로부터 왜곡을 제거한 3 차원 재구성방법 (3D reconstruction method without projective distortion from un-calibrated images)

  • 김형률;김호철;오장석;구자민;김민기
    • 대한전자공학회 학술대회논문집 (Proceedings of the IEEK 2005 Fall Conference) / pp. 391-394 / 2005
  • In this paper, we present an approach that reconstructs 3D metric models from un-calibrated images acquired by a freely moving camera system. If nothing is known of the calibration of either camera, or of the arrangement of one camera with respect to the other, the projective reconstruction exhibits a projective distortion expressed by an arbitrary projective transformation. This distortion is removed, upgrading the reconstruction from projective to metric, through self-calibration: the process of determining internal camera parameters directly from multiple un-calibrated images. Self-calibration requires no information about the camera matrices or the scene geometry, and it avoids the onerous task of calibrating cameras with special calibration objects. The method rests on a conic that is uniquely fixed in 3D space, the absolute quadric, which can be identified from the images; once the absolute quadric is identified, the metric geometry can be computed. We compare the reconstruction obtained from calibrated images with the result of the self-calibration method.
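
In the textbook formulation of this approach (not necessarily the exact notation of the paper), the fixed quadric in question is the absolute dual quadric $\Omega^{*}$, and self-calibration exploits the constraint that its projection by each camera $P_i$ equals the dual image of the absolute conic:

```latex
% Dual image of the absolute conic as the projection of the
% absolute dual quadric by each camera P_i:
\omega_i^{*} \;\sim\; P_i \,\Omega^{*}\, P_i^{\top},
\qquad \omega_i^{*} = K_i K_i^{\top}
```

Once $\Omega^{*}$ has been identified from these constraints across the views, the homography that upgrades the projective reconstruction to metric follows from its decomposition.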

소수 데이터의 신경망 학습에 의한 카메라 보정 (Camera Calibration Using Neural Network with a Small Amount of Data)

  • 도용태
    • 센서학회지 (Journal of Sensor Science and Technology) / Vol. 28, No. 3 / pp. 182-186 / 2019
  • When a camera is employed for 3D sensing, accurate camera calibration is vital, as it is a prerequisite for the subsequent steps of the sensing process. Camera calibration is usually performed by complex mathematical modeling and geometric analysis. In contrast, data learning with an artificial neural network can establish a transformation relation between 3D space and the 2D camera image without explicit camera modeling. However, a neural network requires a large amount of accurate data for learning, and collecting extensive accurate data in practice demands significant time and work with a precise system setup. In this study, we propose a two-step neural calibration method that is effective when only a small amount of learning data is available. In the first step, the camera projection transformation matrix is determined using the limited available data. In the second step, the transformation matrix is used to generate a large amount of synthetic data, and the neural network is trained on the generated data. Results of a simulation study show that the proposed method is valid and effective.
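
The two-step scheme can be sketched as follows, assuming step 1 estimates the projection matrix by a homogeneous DLT from the few available pairs; the synthetic pairs generated in step 2 would then be fed to the neural network, whose training is omitted here.

```python
import numpy as np

def fit_projection_matrix(world_pts, image_pts):
    """Step 1: estimate the 3x4 projection matrix from the few real pairs (N >= 6)."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)          # null vector = flattened P

def synthesize(P, n=10000, lo=-1.0, hi=1.0):
    """Step 2: generate abundant synthetic 3D-2D pairs for network training."""
    Xw = np.random.uniform(lo, hi, (n, 3))           # random 3D points
    Xh = np.column_stack([Xw, np.ones(n)])           # homogeneous coordinates
    uvw = Xh @ P.T
    uv = uvw[:, :2] / uvw[:, 2:3]                    # projected 2D points
    return Xw, uv       # training pairs for the 2D <-> 3D network
```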