• Title/Summary/Keyword: 3-D coordinate calibration

The performance improvement of the volumetric interferometer with multi-CCDs (다중 CCD를 이용한 부피 간섭계의 성능 개선)

  • 주지영;이혁교;김승우
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2003.06a
    • /
    • pp.268-271
    • /
    • 2003
  • The volumetric interferometer, which uses two spherical wavefronts emitted from the ends of two single-mode fibers, has been proposed for measuring three-dimensional absolute coordinates. In this paper, we improve the performance of the volumetric interferometer by using multiple CCDs. Coordinate-matching matrices between the CCDs are obtained, so that more spatial information is available than with a single CCD. The best arrangement of the CCDs is also determined by computer simulation, which shows that increasing the distance between CCDs improves performance. For the performance test, we carry out a repeatability test, a comparison with a 2-D stage, and a self-calibration using an artifact.

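The abstract mentions coordinate-matching matrices between CCDs but does not give their formulation. The sketch below only illustrates one way such a matrix could be estimated, as a planar 3x3 matching matrix fitted to corresponding image points with a direct linear transform; the homography model, the function names, and the NumPy implementation are assumptions, not the authors' method.

```python
import numpy as np

def fit_matching_matrix(pts_ccd1, pts_ccd2):
    """Estimate a 3x3 matrix H with pts_ccd2 ~ H @ pts_ccd1 (homogeneous
    coordinates) from >= 4 point correspondences via the direct linear
    transform: stack the cross-product constraints and take the null vector."""
    rows = []
    for (x, y), (u, v) in zip(pts_ccd1, pts_ccd2):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def map_to_other_ccd(H, pts):
    """Map Nx2 pixel coordinates on CCD 1 into CCD 2 coordinates using H."""
    pts = np.asarray(pts, dtype=float)
    ph = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]
```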

Development of a Remote Object's 3D Position Measuring System (원격지 물체의 삼차원 위치 측정시스템의 개발)

  • Park, Kang
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.17 no.8
    • /
    • pp.60-70
    • /
    • 2000
  • In this paper a 3D position measuring device that finds the 3D position of an arbitarily placed object using a camersa system is introduced. The camera system consists of three stepping motors and a CCD camera and a laser. The viewing direction of the camera is controlled by two stepping motors (pan and tilt motors) and the direction of a laser is also controlled by a stepping motors(laser motor). If an object in a remote place is selected from a live video image the x,y,z coordinates of the object with respect to the reference coordinate system can be obtained by calculating the distance from the camera to the object using a structured light scheme and by obtaining the orientation of the camera that is controlled by two stepping motors. The angles o f stepping motors are controlled by a SGI O2 workstation through a parallel port. The mathematical model of the camera and the distance measuring system are calibrated to calculate an accurate position of the object. This 3D position measuring device can be used to acquire information that is necessary to monitor a remote place.

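The abstract combines a structured-light range measurement with the camera's pan/tilt orientation to obtain x, y, z coordinates. As a minimal sketch only, the conversion below assumes pan measured about the vertical axis, tilt measured up from the horizontal plane, and the range taken along the viewing ray; the paper's calibrated kinematic model is more detailed, and the names here are illustrative.

```python
import math

def ray_to_xyz(pan_deg, tilt_deg, range_m):
    """Convert a pan/tilt viewing direction and a measured range into x, y, z
    in the reference frame (assumed convention: pan about the vertical z-axis,
    tilt up from the horizontal plane, range along the viewing ray)."""
    pan = math.radians(pan_deg)
    tilt = math.radians(tilt_deg)
    x = range_m * math.cos(tilt) * math.cos(pan)
    y = range_m * math.cos(tilt) * math.sin(pan)
    z = range_m * math.sin(tilt)
    return x, y, z
```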

Mixed Reality Extension System Using Beam Projectors : Beyond the Sight (빔 프로젝터를 이용한 혼합현실 확장 시스템 : Beyond the Sight)

  • Kim, Jongyong;Song, J.H;Park, J.H.;Nam, J.;Yoon, Seung-Hyun;Park, Sanghun
    • Journal of the Korea Computer Graphics Society
    • /
    • v.25 no.3
    • /
    • pp.65-73
    • /
    • 2019
  • Commercial mixed-reality devices have recently been launched and a variety of mixed-reality content has been produced, but the narrow field of view, a hardware limitation of current devices, is cited as a major issue that hinders immersion and restricts the scope of use. We propose a new system in which multiple beam projectors cooperate with a number of mixed-reality devices. With this technology, users can maximize immersion and reduce the frustration of narrow viewing angles by rendering 3D objects against large 2D background screens. The system, named BtS (Beyond the Sight), is implemented on a client-server basis; its core modules calibrate between devices, share a spatial coordinate system, and synchronize real-time rendering. In this paper, each module is described in detail, and the system's performance and applicability are demonstrated through a mixed-reality content case produced with BtS.
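
The core modules include sharing a spatial coordinate system between devices; the paper does not publish code, so the sketch below only illustrates the underlying idea of expressing device-local points in a common frame through a calibrated rigid transform. The 4x4 homogeneous-matrix convention and all names are assumptions for illustration.

```python
import numpy as np

def rigid_transform(R, t):
    """Pack a rotation matrix R (3x3) and translation t (3,) into a 4x4
    homogeneous transform from a device frame to the shared frame."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def to_shared_frame(T_device_to_shared, points_device):
    """Express Nx3 device-local points in the shared coordinate system."""
    pts = np.column_stack([points_device, np.ones(len(points_device))])
    return (pts @ T_device_to_shared.T)[:, :3]
```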

Correction of Image Distortion and Coordinate Calibration of the x-ray three dimensional imaging system (X선 3차원 영상 시스템에서의 영상 왜곡 및 영상 좌표계 보정)

  • 노영준;김재완;조형석;전형조;김형철;주효남
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 2000.10a
    • /
    • pp.413-413
    • /
    • 2000
  • In this paper, we propose a series of calibrations for an x-ray three-dimensional imaging system. In the developed system, the three-dimensional inner and outer shape of an object is reconstructed from a set of two-dimensional transmitted x-ray images acquired by projecting x-rays onto the object from different views. To achieve this, a reconstruction algorithm that estimates and updates the three-dimensional volume from the x-ray images is developed. The algorithm, named the uniform and simultaneous algebraic reconstruction technique (USART), is an iterative method that estimates a 3D volume from its projected images. The method assumes that the imaging conditions, namely the relative positions of the x-ray sources, the object, and the image planes, are known. In practice it is not easy to know the three-dimensional coordinates of the system components, since the x-rays are not visible and image distortions are introduced by the optical components in the system. In this paper, methods for correcting the image distortions are presented first; the coordinates of the x-ray system are then calibrated from x-ray images of a grid pattern. Experimental results on these calibrations are presented and discussed.

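The abstract describes USART only as an iterative algebraic method and does not give its update rule, so the sketch below shows a generic simultaneous algebraic reconstruction (SIRT-style) iteration instead, not the authors' USART. The dense projection matrix A, the relaxation factor, and the variable names are assumptions for illustration.

```python
import numpy as np

def sirt_reconstruct(A, b, n_iter=50, relax=0.5):
    """Generic SIRT-style update: distribute each ray's residual back to the
    voxels, normalised by the row and column sums of the projection matrix A
    (rays x voxels), and keep the volume estimate non-negative."""
    x = np.zeros(A.shape[1])
    row_sums = A.sum(axis=1)
    col_sums = A.sum(axis=0)
    row_sums[row_sums == 0] = 1.0
    col_sums[col_sums == 0] = 1.0
    for _ in range(n_iter):
        residual = (b - A @ x) / row_sums         # per-ray error
        x += relax * (A.T @ residual) / col_sums  # back-distribute to voxels
        np.clip(x, 0.0, None, out=x)              # enforce non-negativity
    return x
```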

Image Calibration Techniques for Removing Cupping and Ring Artifacts in X-ray Micro-CT Images (X-ray micro-CT 이미지 내 패임 및 동심원상 화상결함 제거를 위한 이미지 보정 기법)

  • Jung, Yeon-Jong;Yun, Tae-Sup;Kim, Kwang-Yeom;Choo, Jin-Hyun
    • Journal of the Korean Geotechnical Society
    • /
    • v.27 no.11
    • /
    • pp.93-101
    • /
    • 2011
  • High-quality X-ray computed microtomography (micro-CT) imaging of internal microstructures and pore space in geomaterials is often hampered by inherent noise embedded in the images. In this paper, we introduce image calibration techniques for removing the most common artifacts in X-ray micro-CT: cupping (a brightness difference between the peripheral and central regions) and ring artifacts (consecutive concentric circles emanating from the origin). The artifact removal sequentially applies coordinate transformation, normalization, and low-pass filtering in the 2D Fourier spectrum to the raw CT images. The applicability and performance of the techniques are showcased by extracting 3D pore structures from micro-CT images of porous basalt using artifact reduction, binarization, and volume stacking. Comparisons between calibrated and raw images indicate that artifact removal avoids overestimating the porosity of the imaged materials, and that proper calibration of the artifacts plays a crucial role in using X-ray CT for geomaterials.
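
The pipeline above applies coordinate transformation, normalization, and Fourier-domain low-pass filtering. The NumPy-only sketch below illustrates the same idea in simplified form: resample a slice into polar coordinates so rings become radius-aligned bands, then low-pass filter the angular-average profile over radius and subtract the residual. It is not the authors' exact procedure (which filters the full 2D spectrum), and every name is illustrative.

```python
import numpy as np

def to_polar(img, n_theta=720):
    """Resample a CT slice onto a (radius, angle) grid so that concentric
    rings become near-constant rows (nearest-neighbour sampling for brevity)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    n_r = int(min(cx, cy))
    r = np.arange(n_r)[:, None]
    th = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)[None, :]
    ys = np.clip(np.round(cy + r * np.sin(th)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + r * np.cos(th)).astype(int), 0, w - 1)
    return img[ys, xs]

def suppress_rings(polar_img, keep_frac=0.05):
    """Estimate the ring profile as the angular mean at each radius, keep only
    its smooth low-frequency trend, and subtract the high-frequency residual."""
    ring_profile = polar_img.mean(axis=1, keepdims=True)        # (n_r, 1)
    spectrum = np.fft.rfft(ring_profile, axis=0)
    cutoff = max(1, int(keep_frac * spectrum.shape[0]))
    spectrum[cutoff:] = 0.0                                      # low-pass
    smooth = np.fft.irfft(spectrum, n=ring_profile.shape[0], axis=0)
    return polar_img - (ring_profile - smooth)
```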

Precision Evaluation of Three-dimensional Feature Points Measurement by Binocular Vision

  • Xu, Guan;Li, Xiaotao;Su, Jian;Pan, Hongda;Tian, Guangdong
    • Journal of the Optical Society of Korea
    • /
    • v.15 no.1
    • /
    • pp.30-37
    • /
    • 2011
  • A binocular pair of images obtained from two cameras can be used to calculate the three-dimensional (3D) world coordinates of a feature point. However, the measurement accuracy of binocular vision depends on several structural factors. This paper presents an experimental study of measurement distance, baseline distance, and baseline direction, and investigates their effects on reconstruction accuracy. The test set for the binocular model consists of a series of feature points in stereo-pair images and their corresponding 3D world coordinates. The paper discusses increasing the baseline distance between the two cameras to enhance the accuracy of a binocular vision system. However, there is an inflection point in the value and distribution of the measurement errors as the baseline distance is increased: the accuracy benefit of a longer baseline becomes negligible once the baseline distance exceeds 1000 mm in this experiment. Furthermore, it is observed that the direction errors of the set-up are lower when the main measurement direction is close to the baseline direction.
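
For a rectified, parallel-axis stereo pair, the triangulation behind these experiments reduces to a disparity calculation, sketched below under that assumption; pixel noise propagates into depth roughly as Z^2 * d_disp / (f * B), which is why a longer baseline helps until other error sources dominate. The function and variable names are illustrative.

```python
def triangulate_rectified(xl, yl, xr, f_px, baseline_mm, cx, cy):
    """Recover the 3D point seen at (xl, yl) in the left image and xr in the
    right image of a rectified stereo pair (focal length f_px in pixels,
    baseline in mm, principal point (cx, cy))."""
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("non-positive disparity: bad match or point at infinity")
    Z = f_px * baseline_mm / disparity
    X = (xl - cx) * Z / f_px
    Y = (yl - cy) * Z / f_px
    return X, Y, Z
```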

3D Particle Image Detection by Using Color Encoded Illumination System

  • Kawahashi M.;Hirahara H.
    • 한국가시화정보학회:학술대회논문집
    • /
    • 2001.12a
    • /
    • pp.100-107
    • /
    • 2001
  • A simple new technique for measuring particle depth position, applicable to three-dimensional velocity measurement of fluid flows, is proposed. A two-color illumination system in which intensity is encoded as a function of the z-coordinate is introduced. A calibration procedure is described, and as a preliminary test the profile of a small sphere is detected with the present method. The method is then applied to three-dimensional velocity-field measurement of simple flow fields seeded with tracer particles. The motion of the particles is recorded by a color 3CCD camera. The particle position in the image plane is read directly from the recorded image, and the depth of each particle is obtained from the intensity ratio of the two encoded illumination colors; the three-dimensional velocity components are then reconstructed. Although the results include some error, the feasibility of the present technique for three-dimensional velocity measurement was confirmed.

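The depth recovery described above rests on a calibrated mapping from the two-colour intensity ratio to the z-coordinate. The sketch below assumes, purely for illustration, that a linear fit of the calibration data is adequate; the paper's actual calibration curve may differ, and the names are assumptions.

```python
import numpy as np

def calibrate_ratio_to_depth(ratios, depths_mm):
    """Fit depth = a * ratio + b from calibration samples of the two-colour
    intensity ratio at known z-positions (a linear model is an assumption)."""
    a, b = np.polyfit(np.asarray(ratios, float), np.asarray(depths_mm, float), 1)
    return a, b

def particle_depth(intensity_c1, intensity_c2, a, b):
    """Recover a particle's z-position from its two-colour intensity ratio."""
    return a * (intensity_c1 / intensity_c2) + b
```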

A Technique to Efficiently Place Sensors for Three-Dimensional Robotic Manipulation : For the Case of Stereo Cameras (로봇의 3차원 작업을 위한 효율적 센서위치의 결정기법 : 스테레오 카메라를 중심으로)

  • Do, Yong-Tae
    • Journal of Sensor Science and Technology
    • /
    • v.8 no.1
    • /
    • pp.80-88
    • /
    • 1999
  • This paper deals with the problem of determining the position of stereo camera systems used as sensors for 3D robotic manipulation. Stereo cameras with parallel lines of sight, set up on a common baseline, are assumed. The distance between the sensor and the measured workspace is determined so that the calibration parameters are insensitive to the uncertainty of the control points used for calibration and the error condition set by the robot's repeatability is satisfied. The baseline width is determined by minimizing the mutual effect of the 3D positional error and the stereo image-coordinate error. Unlike existing techniques, the proposed technique is developed without complicated constraints or a modelling process for the object to be observed. It is therefore more general, and its effectiveness is demonstrated by simulation.

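The stand-off/baseline trade-off studied here is related to the standard depth-error relation for a parallel-axis rig, dZ ≈ Z^2 * d_disp / (f * B); the short sketch below evaluates that relation and inverts it for the smallest baseline meeting a target error. This is a textbook relation used for illustration, not the paper's full placement criterion, and the names are assumptions.

```python
def depth_uncertainty_mm(Z_mm, f_px, baseline_mm, disparity_err_px=0.5):
    """Approximate stereo depth error: dZ = Z^2 * d_disparity / (f * B)."""
    return (Z_mm ** 2) * disparity_err_px / (f_px * baseline_mm)

def min_baseline_mm(Z_mm, f_px, target_err_mm, disparity_err_px=0.5):
    """Smallest baseline keeping the depth error below target_err_mm at range Z."""
    return (Z_mm ** 2) * disparity_err_px / (f_px * target_err_mm)
```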

Microsoft Kinect-based Indoor Building Information Model Acquisition (Kinect(RGB-Depth Camera)를 활용한 실내 공간 정보 모델(BIM) 획득)

  • Kim, Junhee;Yoo, Sae-Woung;Min, Kyung-Won
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.31 no.4
    • /
    • pp.207-213
    • /
    • 2018
  • This paper investigates the applicability of the Microsoft Kinect, an RGB-depth camera, to building a 3D image and spatial information of a sensed target. The relationship between the Kinect camera image and the pixel coordinate system is formulated. Calibration of the camera provides the depth and RGB information of the target. The intrinsic parameters are calculated through a checkerboard experiment, yielding the focal length, principal point, and distortion coefficients. The extrinsic parameters describing the relationship between the two Kinect cameras consist of a rotation matrix and a translation vector. The 2D projected images are converted into a 3D image, producing spatial information on the basis of the depth and RGB data. The measurement is verified by comparing it with the length and location of the target structure in the 2D images.
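
The conversion from 2D pixel coordinates plus depth to 3D points, and the hand-off between the two Kinects through the extrinsic parameters, can be summarised by the pinhole relations below. This is a generic sketch under assumed intrinsics (fx, fy, cx, cy) and extrinsics (R, t), not the paper's calibrated values.

```python
import numpy as np

def backproject(u, v, depth_mm, fx, fy, cx, cy):
    """Back-project pixel (u, v) with metric depth into 3D camera coordinates."""
    X = (u - cx) * depth_mm / fx
    Y = (v - cy) * depth_mm / fy
    return np.array([X, Y, depth_mm])

def to_second_kinect(p_cam1, R, t):
    """Express a camera-1 point in camera-2 coordinates: p2 = R @ p1 + t."""
    return R @ np.asarray(p_cam1, float) + np.asarray(t, float)
```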

3D Image Construction Using Color and Depth Cameras (색상과 깊이 카메라를 이용한 3차원 영상 구성)

  • Jung, Ha-Hyoung;Kim, Tae-Yeon;Lyou, Joon
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.49 no.1
    • /
    • pp.1-7
    • /
    • 2012
  • This paper presents a method for 3D image construction using a hybrid (color and depth) camera system, in which the drawbacks of each camera are compensated for by the other. Prior to image generation, the intrinsic and extrinsic parameters of each camera are extracted through experiments. The geometry between the two cameras is established with these parameters so as to register the color and depth images. After this preprocessing step, the relation between the depth value and metric distance is derived experimentally as a simple linear function, and the 3D image is constructed by coordinate transformation of the matched images. The scheme has been realized with the Microsoft Kinect hybrid camera system, and experimental results of the 3D image and distance measurements are given to evaluate the method.
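
Under a pinhole model, the registration of the depth image to the colour image implied above is a back-projection with the depth intrinsics, a rigid move into the colour frame, and a re-projection with the colour intrinsics. The sketch below shows that chain with assumed matrices K_depth and K_color and extrinsics (R, t); the paper's own linear depth-to-distance fit would be applied to the raw depth value beforehand.

```python
import numpy as np

def register_depth_pixel(u, v, z_mm, K_depth, K_color, R, t):
    """Map a depth-image pixel (u, v) with metric depth z_mm into the colour
    image: back-project, transform to the colour camera frame, re-project."""
    p_depth = z_mm * (np.linalg.inv(K_depth) @ np.array([u, v, 1.0]))
    p_color = R @ p_depth + t
    uvw = K_color @ p_color
    return uvw[0] / uvw[2], uvw[1] / uvw[2], p_color[2]
```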