• Title/Summary/Keyword: 3-D coordinate calibration

Search Results: 69

Touch-Trigger Probe Error Compensation in a Machining Center (공작기계용 접촉식 측정 프로브의 프로빙 오차 보상에 관한 연구)

  • Lee, Chan-Ho; Lee, Eung-Suk
    • Transactions of the Korean Society of Mechanical Engineers A / v.35 no.6 / pp.661-667 / 2011
  • Kinematic contact trigger probes are widely used for feature inspection and measurement on coordinate measuring machines (CMMs) and computer numerically controlled (CNC) machine tools. Probing accuracy has recently become one of the most important factors in improving product quality, as the accuracy of such machining centers and measuring machines increases. Although high-accuracy probes using strain gauges can meet this requirement, in this paper we study the inexpensive, widely used kinematic contact probe, analyzing its probing mechanism and errors so as to make the best use of its performance. The stylus-ball-radius and center-alignment errors are derived, and the probing error mechanism in the 3D measuring coordinate system is analyzed using numerical expressions. Macro algorithms are developed to compensate for these errors, and tests and verifications are performed with a kinematic contact trigger probe and a reference sphere on a CNC machine tool.
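
The ball-radius offset at the heart of such compensation can be sketched as follows; the function name and the simple offset-along-the-normal model are illustrative, not the paper's macro algorithm:

```python
import numpy as np

def compensate_probe_point(measured_center, surface_normal, ball_radius,
                           center_offset=None):
    """Offset the recorded stylus-ball centre to the true contact point.

    measured_center : recorded (x, y, z) of the stylus-ball centre
    surface_normal  : outward normal of the probed surface
    ball_radius     : stylus ball radius
    center_offset   : optional (dx, dy, dz) misalignment of the ball
                      centre relative to the spindle axis
    """
    p = np.asarray(measured_center, dtype=float)
    n = np.asarray(surface_normal, dtype=float)
    n = n / np.linalg.norm(n)                        # ensure unit normal
    if center_offset is not None:
        p = p - np.asarray(center_offset, dtype=float)  # undo centre misalignment
    return p - ball_radius * n                       # ball centre -> contact point
```

Probing a plane from above with a 2 mm ball, for example, moves the recorded centre down by exactly one ball radius.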

Determination of 3D Object Coordinates from Overlapping Omni-directional Images Acquired by a Mobile Mapping System (모바일매핑시스템으로 취득한 중첩 전방위 영상으로부터 3차원 객체좌표의 결정)

  • Oh, Tae-Wan; Lee, Im-Pyeong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.28 no.3 / pp.305-315 / 2010
  • This research aims to develop a method to determine the 3D coordinates of an object point from overlapping omni-directional images acquired by a ground mobile mapping system, and to assess their accuracy. In the proposed method, we first define an individual coordinate system for each sensor and for the object space, and determine the geometric relationships between these systems. Based on these systems and their relationships, we derive, for a point in an omni-directional image, a straight line of candidate object points, and determine the 3D coordinates of the object point by intersecting the pair of straight lines derived from a pair of matched points. For the accuracy assessment, we compared the object coordinates determined by the proposed method with those measured by GPS and a total station. According to the experimental results, with an appropriate baseline length and mutual positions between cameras and objects, we can determine the relative coordinates of an object point with an accuracy of several centimeters. The accuracy of the absolute coordinates ranges from several centimeters to 1 m due to systematic errors. In the future, we plan to improve the absolute accuracy by determining the relationship between the camera and GPS/INS coordinate systems more precisely and by calibrating the omni-directional camera.
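
The intersection step can be sketched with the standard midpoint method for two nearly intersecting 3-D rays (an illustrative stand-in for the paper's derivation; all names are hypothetical):

```python
import numpy as np

def triangulate_midpoint(p1, d1, p2, d2):
    """Closest-point 'intersection' of two 3-D rays p_i + t_i * d_i.

    Returns the midpoint of the shortest segment between the rays,
    the usual estimate when a pair of matched image points yields two
    nearly intersecting lines in object space.
    """
    p1, d1 = np.asarray(p1, float), np.asarray(d1, float)
    p2, d2 = np.asarray(p2, float), np.asarray(d2, float)
    r = p2 - p1
    # Stationarity of |p1 + t1*d1 - p2 - t2*d2|^2 w.r.t. t1 and t2:
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([r @ d1, r @ d2])
    t1, t2 = np.linalg.solve(A, b)
    q1 = p1 + t1 * d1                   # closest point on ray 1
    q2 = p2 + t2 * d2                   # closest point on ray 2
    return 0.5 * (q1 + q2)
```

For rays that truly intersect, the midpoint coincides with the intersection point.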

Estimation of Human Height and Position using a Single Camera (단일 카메라를 이용한 보행자의 높이 및 위치 추정 기법)

  • Lee, Seok-Han; Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SC / v.45 no.3 / pp.20-31 / 2008
  • In this paper, we propose a single-view technique for estimating human height and position. Conventional techniques for estimating 3D geometric information rely on geometric cues such as the vanishing point and vanishing line. The proposed technique, in contrast, back-projects the image of a moving object directly and estimates the position and height of the object in a 3D space whose coordinate system is designated by a marker. Geometric errors are then corrected using the geometric constraints provided by the marker. Unlike most conventional techniques, the proposed method offers a framework for the simultaneous acquisition of the height and position of an individual present in the image. The accuracy and robustness of our technique are verified by experimental results on several real video sequences from outdoor environments.
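
The back-projection idea can be illustrated with a plane-to-image homography, assuming the marker defines the reference plane; this is a generic sketch, not the paper's exact formulation:

```python
import numpy as np

def backproject_to_plane(H_plane_to_image, pixel):
    """Back-project a pixel onto the marker's reference plane.

    H_plane_to_image : 3x3 homography mapping homogeneous plane
                       coordinates (X, Y, 1) to homogeneous pixels
    pixel            : (u, v) image position of the object's foot point
    Returns the (X, Y) position on the plane.
    """
    H_inv = np.linalg.inv(np.asarray(H_plane_to_image, float))
    x = np.array([pixel[0], pixel[1], 1.0])
    X = H_inv @ x
    return X[:2] / X[2]                 # dehomogenise
```

With the foot point located on the plane, the person's height can then be recovered from further single-view constraints, which is where the marker's geometry enters.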

Dimension Measurement for Large-scale Moving Objects Using Stereo Camera with 2-DOF Mechanism (스테레오 카메라와 2축 회전기구를 이용한 대형 이동물체의 치수측정)

  • Cuong, Nguyen Huu; Lee, Byung Ryong
    • Journal of the Korean Society for Precision Engineering / v.32 no.6 / pp.543-551 / 2015
  • In this study, a novel method for the dimension measurement of large-scale moving objects using a stereo camera with a 2-degree-of-freedom (2-DOF) mechanism is presented. The proposed method combines the advantages of stereo vision with the enlarged visibility range that the 2-DOF rotary mechanism gives the camera when measuring large-scale moving objects. The measurement system employs a stereo camera mounted on a 2-DOF rotary mechanism, which allows separate corners of the measured object to be captured. The measuring algorithm consists of two main stages. First, the three-dimensional (3-D) positions of the corners of the measured object are determined using stereo vision algorithms. Then, using the rotary angles of the 2-DOF mechanism, the dimensions of the object are calculated via coordinate transformation. The proposed system can measure the dimensions of objects moving at a relatively slow and steady speed. Experiments showed that the proposed system guarantees high measuring accuracy.
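
The coordinate-transformation stage can be sketched by rotating each stereo-measured corner into a common base frame using the pan and tilt angles, then taking the distance between corners; the axis conventions here are assumptions, not the paper's:

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def to_base_frame(p_cam, pan, tilt):
    """Express a stereo-measured point, given in the camera frame at
    pan (about z) and tilt (about y) angles, in the fixed base frame."""
    return rot_z(pan) @ rot_y(tilt) @ np.asarray(p_cam, float)

def dimension(corner_a_cam, angles_a, corner_b_cam, angles_b):
    """Distance between two corners captured at different mechanism poses."""
    a = to_base_frame(corner_a_cam, *angles_a)
    b = to_base_frame(corner_b_cam, *angles_b)
    return np.linalg.norm(a - b)
```

Two corners seen at opposite pan angles, for instance, end up on opposite sides of the base frame before their separation is measured.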

Development of an Image Processing Algorithm for Paprika Recognition and Coordinate Information Acquisition using Stereo Vision (스테레오 영상을 이용한 파프리카 인식 및 좌표 정보 획득 영상처리 알고리즘 개발)

  • Hwa, Ji-Ho; Song, Eui-Han; Lee, Min-Young; Lee, Bong-Ki; Lee, Dae-Weon
    • Journal of Bio-Environment Control / v.24 no.3 / pp.210-216 / 2015
  • The purpose of this study was to develop an image processing algorithm that recognizes paprika and acquires its 3D coordinates from stereo images, in order to precisely control the end-effector of an automatic paprika harvester. First, H and S thresholds were set using HSI histogram analysis to extract the ROI (region of interest) from raw paprika cultivation images. Next, the fundamental matrix of the stereo camera system was calculated to match the extracted ROIs of corresponding images. Epipolar lines were acquired using the F matrix, and an 11×11 mask was used to compare pixels along each line. The distances between the extracted corresponding points were calibrated using the 3D coordinates of a calibration board. Nonlinear regression analysis was used to model the relation between the pixel disparity of corresponding points and depth (Z). Finally, the program calculates the horizontal (X) and vertical (Y) coordinates using the stereo camera's geometry. The average error was 5.3 mm in the horizontal coordinate, 18.8 mm in the vertical coordinate, and 5.4 mm in depth. Most of the error occurred at depths of 400-450 mm and in distorted regions of the image.
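
For reference, the ideal pinhole-stereo relation (Z = f·B/d) that the paper's nonlinear regression replaces looks like this; all parameter names are illustrative:

```python
import numpy as np

def stereo_to_xyz(u, v, disparity, f, baseline, cu, cv):
    """Pinhole stereo: recover (X, Y, Z) in the left-camera frame.

    f         : focal length in pixels
    baseline  : distance between the two camera centres
    disparity : u_left - u_right in pixels
    (cu, cv)  : principal point of the left camera
    """
    Z = f * baseline / disparity        # depth from disparity
    X = (u - cu) * Z / f                # horizontal coordinate
    Y = (v - cv) * Z / f                # vertical coordinate
    return np.array([X, Y, Z])
```

Fitting a regression against a calibration board, as the paper does, absorbs lens distortion and calibration errors that this ideal model ignores.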

A Study on the Development of a Specialized Prototype End-Effector for RDSs(Robotic Drilling Systems) (RDS(Robotic Drilling System) 구축을 위한 전용 End-Effector Prototype 개발에 관한 연구)

  • Kim, Tae-Hwa; Kwon, Soon-Jae
    • Journal of the Korean Society of Manufacturing Process Engineers / v.12 no.6 / pp.132-141 / 2013
  • Robotic Drilling Systems (RDSs) set the standard for factory automation systems in aerospace manufacturing. With the benefits of cost-effective drilling and predictive maintenance, RDSs can provide greater flexibility in the manufacturing process. The system can easily be adopted to manage very complex and time-consuming processes, such as the automated drilling of fastening holes in large aircraft sections, which would be difficult for workers to accomplish using teaching or conventional guided methods. However, in order to build an RDS based on a CAD model, a precise calibration of the Tool Center Point (TCP) must be performed to define the relationship between the fastening-hole target and the End-Effector (EEF). Based on kinematic principles, the robot manipulator requires a new method to correct the 3D errors between the CAD model of the reference coordinate system and the actual measurements. The system can be considered successful if the following conditions are met: (a) seamless integration of the industrial robot controller and I/O-level communication, and (b) automatic execution of pre-defined drilling procedures. This study focuses on applying a new technology called iGPS to the fastening-hole drilling process, a critical process in aircraft manufacturing. The proposed system exhibits better than 100-micron 3D accuracy within the predefined working space. Based on the proposed EEF fastening-hole machining process, the corresponding processes and programs are developed, and their feasibility is studied.

Projective Reconstruction Method for 3D modeling from Un-calibrated Image Sequence (비교정 영상 시퀀스로부터 3차원 모델링을 위한 프로젝티브 재구성 방법)

  • Hong Hyun-Ki; Jung Yoon-Yong; Hwang Yong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.2 s.302 / pp.113-120 / 2005
  • The 3D reconstruction of a scene structure from un-calibrated image sequences has long been one of the central problems in computer vision. For 3D reconstruction in Euclidean space, projective reconstruction, which is classified into the merging method and the factorization method, is needed as a preceding step. By calculating all camera projection matrices and structures at the same time, the factorization method suffers less from error accumulation than the merging method. However, the factorization method is hard to apply precisely to long sequences, because it assumes that all correspondences remain visible in every view from the first frame to the last. This paper presents a new projective reconstruction method for recovering 3D structure over long sequences. We break a full sequence into sub-sequences based on a quantitative measure that considers the number of matching points between frames, the homography error, and the distribution of matching points over the frame. The projective reconstructions of all sub-sequences are then registered into the same coordinate frame for a complete description of the scene. The experimental results show that the proposed method can recover a more precise 3D structure than the merging method.
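
The sub-sequence splitting can be sketched greedily: score each inter-frame link (the paper combines match count, homography error, and point distribution into one measure), cut where the score drops, and keep one shared frame so neighbouring reconstructions can later be registered. A minimal sketch with a precomputed score list (an assumption; the paper's measure is richer):

```python
def split_sequence(scores, threshold):
    """Split frames 0..len(scores) into sub-sequences.

    scores[i] grades the link between frame i and frame i+1; a new
    sub-sequence starts when the score falls below `threshold`.
    Consecutive sub-sequences share one frame, so their projective
    reconstructions can be registered into a common coordinate frame.
    """
    subs, start = [], 0
    for i, s in enumerate(scores):
        if s < threshold:
            subs.append(list(range(start, i + 1)))  # frames start..i
            start = i                               # overlap on frame i
    subs.append(list(range(start, len(scores) + 1)))
    return subs
```

With weak links after frames 1 and 3, for example, a six-frame sequence splits into three overlapping sub-sequences.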

A Study on the Estimation of Multi-Object Social Distancing Using Stereo Vision and AlphaPose (Stereo Vision과 AlphaPose를 이용한 다중 객체 거리 추정 방법에 관한 연구)

  • Lee, Ju-Min; Bae, Hyeon-Jae; Jang, Gyu-Jin; Kim, Jin-Pyeong
    • KIPS Transactions on Software and Data Engineering / v.10 no.7 / pp.279-286 / 2021
  • Recently, a policy of physical distancing of at least 1 m has been enforced in public places to prevent the spread of COVID-19. In this paper, we propose a method for measuring the distances between people in real time, and an automation system that, from stereo images acquired by drones or CCTVs, recognizes objects that are within 1 m of each other according to the estimated distance. A problem with existing methods for estimating distances between multiple objects is that they cannot obtain three-dimensional information about the objects from a single CCTV; three-dimensional information is necessary to measure the distances between people when they stand right next to each other or overlap in a two-dimensional image. Furthermore, existing methods use only bounding-box information to locate a person. Therefore, in this paper, to obtain the exact two-dimensional coordinates at which a person stands, we extract the person's key points, convert them to three-dimensional coordinates using stereo vision and camera calibration, and estimate the Euclidean distances between people. In an experiment assessing the accuracy of the 3D coordinates and the distances between objects (persons), an average error within 0.098 m was achieved when estimating the distances between multiple people within 1 m of each other.
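
The final distance check over the stereo-derived 3-D positions can be sketched as a pairwise Euclidean test; the names and the 1 m default are illustrative:

```python
from itertools import combinations
import numpy as np

def close_pairs(positions, limit=1.0):
    """Flag pairs of people closer than `limit` metres.

    positions : dict mapping a person id to an (x, y, z) position
                obtained from keypoints via stereo triangulation
    Returns a list of (id_a, id_b, distance) tuples under the limit.
    """
    out = []
    for (i, p), (j, q) in combinations(positions.items(), 2):
        d = np.linalg.norm(np.asarray(p, float) - np.asarray(q, float))
        if d < limit:
            out.append((i, j, d))
    return out
```

Pairwise checking is quadratic in the number of people, which is acceptable for the handful of persons visible in a single CCTV or drone view.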

Georeferencing of Indoor Omni-Directional Images Acquired by a Rotating Line Camera (회전식 라인 카메라로 획득한 실내 전방위 영상의 지오레퍼런싱)

  • Oh, So-Jung; Lee, Im-Pyeong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.30 no.2 / pp.211-221 / 2012
  • To utilize omni-directional images acquired by a rotating line camera for indoor spatial information services, the images must be precisely registered with respect to an indoor coordinate system. In this study, we therefore develop a georeferencing method to estimate the exterior orientation parameters of an omni-directional image, that is, the position and attitude of the camera at the acquisition time. First, we derive the collinearity equations for the omni-directional image by geometrically modeling the rotating line camera. We then estimate the exterior orientation parameters from the collinearity equations using indoor control points. The experimental results on real data indicate that the exterior orientation parameters are estimated with a precision of 1.4 mm in position and 0.05° in attitude. The residuals are within 3 pixels horizontally and 10 pixels vertically. The residuals in the vertical direction in particular retain systematic errors, mainly due to lens distortion, which should be eliminated through a camera calibration process. Using omni-directional images precisely georeferenced with the proposed method, we can generate high-resolution indoor 3D models and sophisticated augmented reality services based on them.
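
A textbook cylindrical-panorama model conveys the flavor of such collinearity equations, although the paper derives its own for its specific camera; this sketch and its parameter names are assumptions:

```python
import numpy as np

def cylindrical_project(point_cam, f, cols_per_rad, cu=0.0, cv=0.0):
    """Idealized rotating-line-camera projection (camera frame).

    The image column encodes the rotation (azimuth) angle; the row
    encodes the vertical angle scaled by the line sensor's focal
    length f. Returns (u, v) image coordinates.
    """
    x, y, z = point_cam
    azimuth = np.arctan2(y, x)          # rotation angle of the line sensor
    rho = np.hypot(x, y)                # horizontal range to the point
    u = cu + cols_per_rad * azimuth     # column from the rotation angle
    v = cv + f * z / rho                # row from the vertical angle
    return u, v
```

Exterior orientation estimation then amounts to finding the camera position and attitude that minimize the residuals of such equations at the indoor control points.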