• Title/Summary/Keyword: 3-D coordinate calibration


Positioning Accuracy Improvement of Robots by Link Parameter Calibration (링크인자 보정에 의한 로보트 위치 정밀도 개선)

  • Cho, Eui-Chung;Ha, Young-Kyun;Lee, Sang-Jo;Park, Young-Pil
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.6 no.3
    • /
    • pp.32-45
    • /
    • 1989
  • The positioning accuracy of robots depends upon the forward kinematics, which relates the joint variables to the orientation and position of the robot extremity in the absolute coordinate system. The relationship between two connected joint coordinates of a robot, which is the basis of the kinematics, is defined by the 4 Denavit-Hartenberg parameters. However, manufacturing errors in the machining and assembly of robots lead to discrepancies between the design parameters and the physical structure. Thus, improving the positioning accuracy of robots requires identification of the actual link parameters of each robot. In this study, the least-squares method is used to calibrate the link parameters, and off-line parameter calibration software is developed. Computer simulation is performed to study how the calibration performance depends on the DOF of the robot and on the number of acquired data sets used in the least-squares method. A 3-DOF robot/controller and a specially designed 3D coordinate measurer were built, and experiments were carried out to verify the theoretical and computational analysis.
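The link-parameter calibration above rests on the Denavit-Hartenberg forward kinematics. A minimal sketch of that chain in Python; the function names and the planar two-link test values are illustrative, not from the paper:

```python
import math

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between consecutive joint frames from the
    four Denavit-Hartenberg parameters (standard DH convention)."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(joint_angles, link_params):
    """Chain the per-link DH transforms.  link_params holds one
    (d, a, alpha) triple per link; calibration adjusts these values so
    the predicted extremity pose matches the measured one."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for theta, (d, a, alpha) in zip(joint_angles, link_params):
        T = mat_mul(T, dh_transform(theta, d, a, alpha))
    return T
```

A least-squares calibration would compare `T[0][3], T[1][3], T[2][3]` (the predicted extremity position) against measured positions over many joint configurations and adjust the link parameters to minimize the residual.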


3D Rigid Body Tracking Algorithm Using 2D Passive Marker Image (2D 패시브마커 영상을 이용한 3차원 리지드 바디 추적 알고리즘)

  • Park, Byung-Seo;Kim, Dong-Wook;Seo, Young-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.587-588
    • /
    • 2022
  • In this paper, we propose a rigid body tracking method in 3D space using 2D passive marker images from multiple motion capture cameras. First, a calibration process using a chessboard is performed to obtain the intrinsic parameters of the individual cameras. In the second calibration process, a triangular structure with three markers is moved so that all cameras can observe it, and the data accumulated for each frame are used to correct and update the relative position information between the cameras. After that, the three-dimensional coordinates of the three markers are restored by converting the coordinate system of each camera into the 3D world coordinate system, the distance between each pair of markers is calculated, and the difference from the actual distance is compared. As a result, an average error within 2 mm was measured.
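The accuracy check described at the end — pairwise distances between the restored markers versus the known rig geometry — can be sketched as follows (the coordinates and side lengths are illustrative, not the paper's data):

```python
import math

def dist(p, q):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def rig_distance_errors(restored, actual_lengths):
    """Pairwise distances between the three restored marker positions,
    compared against the known side lengths of the triangular rig.
    Returns one absolute error per marker pair."""
    pairs = [(0, 1), (1, 2), (0, 2)]
    measured = [dist(restored[i], restored[j]) for i, j in pairs]
    return [abs(m, ) if False else abs(m - a) for m, a in zip(measured, actual_lengths)]
```

In the paper's setting, `restored` would come from triangulating the 2D marker observations of all calibrated cameras, and the reported figure is the average of such errors over many frames.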


Convenient View Calibration of Multiple RGB-D Cameras Using a Spherical Object (구형 물체를 이용한 다중 RGB-D 카메라의 간편한 시점보정)

  • Park, Soon-Yong;Choi, Sung-In
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.3 no.8
    • /
    • pp.309-314
    • /
    • 2014
  • To generate a complete 3D model from the depth images of multiple RGB-D cameras, it is necessary to find the 3D transformations between the cameras. This paper proposes a convenient view calibration technique using a spherical object. Conventional view calibration methods use either planar checkerboards or 3D objects with coded patterns; in these methods, detecting and matching the pattern features and codes takes significant time. In this paper, we propose a convenient view calibration method that uses the 3D depth and 2D texture images of a spherical object simultaneously. First, while the spherical object is moved freely in the modeling space, depth and texture images of the object are acquired from all RGB-D cameras simultaneously. Then, the external parameters of each RGB-D camera are calibrated so that the coordinates of the sphere center coincide in the world coordinate system.
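The key per-camera step is locating the sphere center from the depth points on its surface. A minimal sketch of an algebraic least-squares sphere fit (a standard technique; the paper does not specify this exact formulation, and the sample points below are illustrative):

```python
def fit_sphere_center(points):
    """Algebraic sphere fit: every surface point p satisfies
    |p|^2 = 2 c.p + k with k = r^2 - |c|^2, which is linear in the
    center c and k.  Solved via the normal equations M v = r."""
    A = [[2 * x, 2 * y, 2 * z, 1.0] for x, y, z in points]
    b = [x * x + y * y + z * z for x, y, z in points]
    n = 4
    M = [[sum(row[i] * row[j] for row in A) for j in range(n)] for i in range(n)]
    r = [sum(A[k][i] * b[k] for k in range(len(A))) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda i: abs(M[i][col]))
        M[col], M[piv] = M[piv], M[col]
        r[col], r[piv] = r[piv], r[col]
        for i in range(col + 1, n):
            f = M[i][col] / M[col][col]
            for j in range(col, n):
                M[i][j] -= f * M[col][j]
            r[i] -= f * r[col]
    v = [0.0] * n
    for i in range(n - 1, -1, -1):
        v[i] = (r[i] - sum(M[i][j] * v[j] for j in range(i + 1, n))) / M[i][i]
    return v[0], v[1], v[2]  # center (cx, cy, cz)
```

With one such center per camera per sphere position, the external parameters are then solved so that all centers coincide in the world frame.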

Markerless camera pose estimation framework utilizing construction material with standardized specification

  • Harim Kim;Heejae Ahn;Sebeen Yoon;Taehoon Kim;Thomas H.-K. Kang;Young K. Ju;Minju Kim;Hunhee Cho
    • Computers and Concrete
    • /
    • v.33 no.5
    • /
    • pp.535-544
    • /
    • 2024
  • In the rapidly advancing landscape of computer vision (CV) technology, there is burgeoning interest in its integration with the construction industry. Camera calibration is the process of deriving the intrinsic and extrinsic parameters that govern how coordinates in the 3D real world are projected onto the 2D image plane; the intrinsic parameters are internal factors of the camera, and the extrinsic parameters are external factors such as the position and rotation of the camera. Camera pose estimation, or extrinsic calibration, which estimates the extrinsic parameters, provides essential information for CV applications in construction, since it can be used for the indoor navigation of construction robots and for field monitoring by restoring depth information. Traditionally, camera pose estimation relied on target objects such as markers or patterns. However, these marker- or pattern-based methods are often time-consuming because a target object must be installed for each estimation. As a solution to this challenge, this study introduces a novel framework that facilitates camera pose estimation using standardized materials found commonly on construction sites, such as concrete forms. The proposed framework obtains 3D real-world coordinates by referring to construction materials with known specifications, extracts the corresponding 2D coordinates on the image plane through keypoint detection, and derives the camera's pose through the perspective-n-point (PnP) method, which computes the extrinsic parameters by matching 3D-2D coordinate pairs. This framework represents a substantial advancement, as it streamlines the extrinsic calibration process and thereby can enhance the efficiency of CV technology application and data collection at construction sites. The approach holds promise for expediting and optimizing various construction-related tasks by automating and simplifying the calibration procedure.
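The intrinsic/extrinsic split the abstract defines is exactly the pinhole projection x = K(RX + t); PnP inverts this mapping given 3D-2D pairs. A minimal sketch of the forward direction, with illustrative parameter values (not from the paper):

```python
def project(point_w, K, R, t):
    """Project a 3D world point to pixel coordinates via x = K (R X + t).
    K holds the intrinsic parameters (focal lengths, principal point);
    (R, t) are the extrinsic parameters that PnP recovers from matched
    3D-2D coordinate pairs."""
    # World point expressed in the camera frame.
    Xc = [sum(R[i][j] * point_w[j] for j in range(3)) + t[i] for i in range(3)]
    # Perspective division, then intrinsic scaling and offset.
    u = K[0][0] * Xc[0] / Xc[2] + K[0][2]
    v = K[1][1] * Xc[1] / Xc[2] + K[1][2]
    return u, v
```

PnP solvers search for the (R, t) that makes these projections match the keypoints detected on the standardized construction material.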

Automation of Bio-Industrial Process Via Tele-Task Command(I) -identification and 3D coordinate extraction of object- (원격작업 지시를 이용한 생물산업공정의 생력화 (I) -대상체 인식 및 3차원 좌표 추출-)

  • Kim, S. C.;Choi, D. Y.;Hwang, H.
    • Journal of Biosystems Engineering
    • /
    • v.26 no.1
    • /
    • pp.21-28
    • /
    • 2001
  • Major deficiencies of current automation schemes, including various robots for bioproduction, are the lack of task adaptability and real-time processing, low job performance for diverse tasks, lack of robustness of task results, high system cost, and failure to earn the operator's trust. This paper proposes a scheme that overcomes these limitations of conventional computer-controlled automatic systems. The proposed scheme is man-machine hybrid automation via tele-operation, which can handle various bioproduction processes, and it comprises two components: efficient task sharing between the operator and the CCM (computer-controlled machine), and an efficient interface between the operator and the CCM. To realize the proposed concept, the task of identifying an object and extracting its 3D coordinates was selected. The 3D coordinate information was obtained through camera calibration, using the camera as a measurement device. Two stereo images were obtained by moving a camera a certain distance in the horizontal direction, normal to the focal axis, and acquiring an image at each location. The transformation matrix for camera calibration was obtained via a least-squares-error approach using 6 known pairs of corresponding points in the 2D image and 3D world space. The 3D world coordinates were then computed from the two sets of image pixel coordinates using the calibrated transformation matrix. As the interface between the operator and the CCM, a touch-pad screen mounted on the monitor and a remotely captured imaging system were used. The operator indicated an object by touching its position in the captured image on the touch-pad screen. A local image processing area of a certain size was specified around the touch, and image processing was performed on that local area to extract the desired features of the object. An MS Windows-based interface software was developed using Visual C++ 6.0. The software consists of four modules: a remote image acquisition module, a task command module, a local image processing module, and a 3D coordinate extraction module. The proposed scheme showed the feasibility of real-time processing, robust and precise object identification, and adaptability to various jobs and environments through the selected sample tasks.
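For a matched point, the two-position stereo setup described above reduces to depth from disparity under the usual pinhole assumptions. A minimal sketch; the focal length and baseline values are illustrative, not the paper's:

```python
def depth_from_disparity(f_px, baseline, u_left, u_right):
    """Depth of a matched point from two images taken after shifting the
    camera horizontally by `baseline` (normal to the focal axis):
    Z = f * B / d, with disparity d = u_left - u_right in pixels."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("matched point must have positive disparity")
    return f_px * baseline / disparity
```

The paper instead folds the whole mapping into calibrated transformation matrices, which additionally absorb the camera's orientation and lens parameters; the formula above shows only the underlying geometry.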


Three Dimensional Geometric Feature Detection Using Computer Vision System and Laser Structured Light (컴퓨터 시각과 레이저 구조광을 이용한 물체의 3차원 정보 추출)

  • Hwang, H.;Chang, Y.C.;Im, D.H.
    • Journal of Biosystems Engineering
    • /
    • v.23 no.4
    • /
    • pp.381-390
    • /
    • 1998
  • An algorithm to extract the 3-D geometric information of a static object was developed using a 2-D computer vision system and a laser structured-lighting device. Multiple parallel lines were used as the structured-light pattern in this study. The proposed algorithm is composed of three stages. In the first stage, the camera calibration, which determines the coordinate transformation between the image plane and the real 3-D world, was performed using 6 known pairs of points. In the second stage, the height of the object was computed from the shift of the projected laser beam on the object. Finally, using the height information of each 2-D image point, the corresponding 3-D information was computed from the results of the camera calibration. For arbitrary geometric objects, the maximum error of the 3-D features extracted with the proposed algorithm was less than 1-2 mm. The results show that the proposed algorithm is accurate for detecting the 3-D geometric features of an object.
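The second stage — height from the shift of the laser line — is plain triangulation. A minimal sketch assuming a camera looking straight down and a laser inclined at a known angle from the vertical (the geometry and values are illustrative; the paper's exact setup may differ):

```python
import math

def height_from_shift(shift_px, mm_per_px, laser_angle_deg):
    """Object height from the lateral shift of a projected laser line.
    With the laser at `laser_angle_deg` from vertical, a surface of
    height h displaces the line by s = h * tan(angle), so
    h = s / tan(angle).  `mm_per_px` converts the pixel shift to mm."""
    shift_mm = shift_px * mm_per_px
    return shift_mm / math.tan(math.radians(laser_angle_deg))
```

Repeating this per line and per image column yields a height map, which the calibrated camera transform then lifts to full 3-D coordinates.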


Volumetric Error Calibration of NC Machine Tools using a Hole-Plate Artifact (Hole-Plate를 이용한 NC공작기계의 공간 오차 측정 및 분석)

  • Park, Dal-Geun;Lee, Enug-Suk
    • Transactions of the Korean Society of Machine Tool Engineers
    • /
    • v.15 no.1
    • /
    • pp.1-7
    • /
    • 2006
  • A method for the volumetric error measurement and calibration of NC machine tools is studied using an artifact. In this study, a hole-plate is designed and machined from stainless steel. After calibrating the hole-plate on a precise CMM (Coordinate Measuring Machine), we tested and applied the hole-plate artifact on a commercial CMM. It is shown that the hole-plate artifact method allows not only the measurement of the geometric error components but also the calculation of the 2D length error in the working volume. The results of the study can also be applied to NC machines equipped with a touch probe, using the same method as on the CMM.
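The core comparison in the artifact method — machine-measured hole centers against their calibrated reference positions — can be sketched as follows (the coordinate values are illustrative, not the paper's data):

```python
import math

def positional_errors(measured, calibrated):
    """2D positional error at each hole: the planar distance between the
    hole center as measured on the machine under test and its calibrated
    (reference) position from the precise CMM."""
    return [math.hypot(mx - cx, my - cy)
            for (mx, my), (cx, cy) in zip(measured, calibrated)]
```

Length errors between hole pairs, and from them the individual geometric error components, are derived from the same two point sets.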

High precision 3-dimensional object measurement using slit type of laser projector (슬리트형 레이저 투광기를 이용한 고정밀 3차원 물체계측)

  • Kim, Tae-Hyo;Park, Young-Seok;Lee, Chuy-Joong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.3 no.6
    • /
    • pp.613-618
    • /
    • 1997
  • In this paper, we designed a line CCD camera for flying images, composed of a line CCD sensor (2048 cells) and a rotating mirror, and investigated its optical properties. We constructed a 3-D image from the flying image, i.e., a 2-D image formed by juxtaposing the 1-D images obtained by the camera, and performed calibration to acquire high-precision 3-D data. As a result, we confirmed that the 3-D measurement system using the slit-type laser projector can measure the shape of objects with high precision.
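The "flying image" construction — juxtaposing successive 1-D line-CCD scans into a 2-D image as the mirror rotates — can be sketched as follows (names are illustrative):

```python
def assemble_flying_image(line_scans):
    """Stack successive 1-D line-CCD scans into a 2-D image: the row
    index corresponds to the mirror position (scan time), the column
    index to the CCD cell (2048 cells in the paper's sensor)."""
    width = len(line_scans[0])
    if any(len(s) != width for s in line_scans):
        raise ValueError("all scans must have the same number of cells")
    return [list(s) for s in line_scans]
```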


Camera Calibration using the TSK fuzzy system (TSK 퍼지 시스템을 이용한 카메라 켈리브레이션)

  • Lee Hee-Sung;Hong Sung-Jun;Oh Kyung-Sae;Kim Eun-Tai
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2006.05a
    • /
    • pp.56-58
    • /
    • 2006
  • Camera calibration in machine vision is the process of determining the intrinsic camera parameters and the three-dimensional (3D) position and orientation of the camera frame relative to a certain world coordinate system. The Takagi-Sugeno-Kang (TSK) fuzzy system, on the other hand, is a very popular fuzzy system that approximates any nonlinear function to arbitrary accuracy with only a small number of fuzzy rules; it demonstrates not only nonlinear behavior but also a transparent structure. In this paper, we present a novel and simple technique for camera calibration in machine vision using a TSK fuzzy model. The proposed method divides the world into regions according to the camera view and uses the clustered 3D geometric knowledge. The TSK fuzzy system is employed to estimate the camera parameters by combining partial information into complete 3D information. Experiments are performed to verify the proposed camera calibration method.
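A TSK fuzzy system of the kind the abstract relies on blends per-rule linear models with membership weights. A minimal one-input, first-order sketch (rule centers, widths, and consequents are illustrative; the paper's rules act on camera-view regions):

```python
import math

def tsk_output(x, rules):
    """First-order TSK inference.  Each rule is (center, sigma, a, b):
    a Gaussian membership exp(-(x-center)^2 / (2 sigma^2)) weighting a
    linear consequent y = a*x + b.  The output is the normalized
    weighted sum of the rule consequents."""
    weights = [math.exp(-((x - c) ** 2) / (2.0 * s * s)) for c, s, _, _ in rules]
    outputs = [a * x + b for _, _, a, b in rules]
    return sum(w * y for w, y in zip(weights, outputs)) / sum(weights)
```

In the calibration setting, each region of the world gets such rules, and the blended output interpolates the locally estimated camera parameters into a globally smooth mapping.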


Coordinate Determination for Texture Mapping using Camera Calibration Method (카메라 보정을 이용한 텍스쳐 좌표 결정에 관한 연구)

  • Jeong K. W.;Lee Y.Y.;Ha S.;Park S.H.;Kim J. J.
    • Korean Journal of Computational Design and Engineering
    • /
    • v.9 no.4
    • /
    • pp.397-405
    • /
    • 2004
  • Texture mapping is the process of covering 3D models with texture images in order to increase the visual realism of the models. For proper mapping, the coordinates of the texture images need to coincide with those of the 3D models. When projective images from a camera are used as texture images, the texture image coordinates are defined by a camera calibration method. The texture image coordinates are determined by the relation between the coordinate systems of the camera image and the 3D object. With projective camera images, the distortion caused by the camera lens must be compensated in order to obtain accurate texture coordinates. The distortion problem has been addressed with iterative methods, in which the camera calibration coefficients are first computed without considering the distortion and then modified accordingly. These methods not only cause the position of the camera's perspective line in the image plane to shift but also require more control points. In this paper, a new iterative method is suggested that reduces the error by fixing the principal point in the image plane. The method considers the image distortion independently and fixes the values of the correction coefficients, so that the distortion coefficients can be computed with fewer control points. It is shown that the camera distortion is compensated with fewer control points than in previous methods, and that the projective texture mapping results in a more realistic image.
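The iterative distortion compensation the abstract discusses can be illustrated with the common single-coefficient radial model (a generic sketch in normalized image coordinates; the paper's exact distortion model and coefficients are not specified here):

```python
def distort(x, y, k1):
    """Radial distortion: x_d = x (1 + k1 r^2), r^2 = x^2 + y^2,
    applied in normalized image coordinates."""
    r2 = x * x + y * y
    s = 1.0 + k1 * r2
    return x * s, y * s

def undistort(xd, yd, k1, iters=20):
    """Iteratively invert the radial model: start from the distorted
    point and repeatedly divide by the distortion factor evaluated at
    the current estimate (a fixed-point iteration that converges for
    moderate k1 and r)."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        s = 1.0 + k1 * r2
        x, y = xd / s, yd / s
    return x, y
```

Once the distortion coefficients are fixed, the undistorted pixel positions feed the calibration that defines the texture coordinates, which is why decoupling the distortion step reduces the number of control points needed.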