• Title/Summary/Keyword: 3D position coordinate


3D Rigid Body Tracking Algorithm Using 2D Passive Marker Image (2D 패시브마커 영상을 이용한 3차원 리지드 바디 추적 알고리즘)

  • Park, Byung-Seo;Kim, Dong-Wook;Seo, Young-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.587-588
    • /
    • 2022
  • In this paper, we propose a rigid body tracking method in 3D space using 2D passive marker images from multiple motion capture cameras. First, a calibration with a chessboard is performed to obtain the intrinsic parameters of each camera. In a second calibration step, a triangular structure carrying three markers is moved so that all cameras can observe it, and the data accumulated over the frames is used to correct and update the relative position information between the cameras. The 3D coordinates of the three markers are then restored by converting each camera's coordinate system into the 3D world coordinate system, the distances between the markers are calculated, and the differences from the actual distances are compared. As a result, an average error within 2 mm was measured.
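The last step of this pipeline, restoring marker coordinates in the world frame from multiple calibrated views, is essentially linear multi-view triangulation. A minimal sketch with two synthetic pinhole cameras; the intrinsics and poses are invented for illustration and are not from the paper:

```python
import numpy as np

def triangulate(proj_mats, pixels):
    """Linear (DLT) triangulation: recover one 3D point from its 2D
    projections in several calibrated cameras."""
    rows = []
    for P, (u, v) in zip(proj_mats, pixels):
        rows.append(u * P[2] - P[0])    # two linear constraints per view
        rows.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    X = Vt[-1]                          # null vector of the stacked system
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two synthetic pinhole cameras observing a known marker position
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 4.0])
X_hat = triangulate([P1, P2], [project(P1, X_true), project(P2, X_true)])
```

With noisy detections from more cameras, the same stacked system is solved in the least-squares sense by the smallest singular vector.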


Virtual Viewpoint Image Synthesis Algorithm using Multi-view Geometry (다시점 카메라 모델의 기하학적 특성을 이용한 가상시점 영상 생성 기법)

  • Kim, Tae-June;Chang, Eun-Young;Hur, Nam-Ho;Kim, Jin-Woong;Yoo, Ji-Sang
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.34 no.12C
    • /
    • pp.1154-1166
    • /
    • 2009
  • In this paper, we propose algorithms for generating high-quality virtual intermediate views on or off the baseline. The proposed algorithm uses depth information together with a 3D warping technique to generate the virtual views: the coordinates of real 3D points are calculated from the depth information and the geometric characteristics of the cameras, and the calculated 3D coordinates are projected onto the 2D plane of an arbitrary camera position, yielding a 2D virtual view image. Through experiments, we show that virtual views generated on the baseline by the proposed algorithm improve PSNR by at least 0.5 dB, and that occluded regions are covered more efficiently in virtual views generated off the baseline.
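The warping chain the abstract describes (depth to 3D point, then 3D point to the virtual image plane) can be sketched per pixel as follows; the camera parameters here are assumed values, not those of the paper's test sequences:

```python
import numpy as np

def warp_to_virtual_view(u, v, depth, K_ref, K_virt, R, t):
    """Depth-image-based 3D warping: back-project a reference pixel to a
    3D point, then project it into a virtual camera at pose (R, t)."""
    X_ref = depth * (np.linalg.inv(K_ref) @ np.array([u, v, 1.0]))
    X_virt = R @ X_ref + t           # express the point in the virtual camera
    x = K_virt @ X_virt
    return x[:2] / x[2]

# Virtual camera shifted 0.1 m along the baseline (pure translation)
K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
uv = warp_to_virtual_view(320.0, 240.0, 2.0, K, K, np.eye(3),
                          np.array([-0.1, 0.0, 0.0]))
# expected disparity = f * |tx| / z = 700 * 0.1 / 2 = 35 px
```

Pixels that no reference view maps into the virtual image are the occluded regions the paper then fills.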

Robust Camera Calibration using TSK Fuzzy Modeling

  • Lee, Hee-Sung;Hong, Sung-Jun;Kim, Eun-Tai
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.7 no.3
    • /
    • pp.216-220
    • /
    • 2007
  • Camera calibration in machine vision is the process of determining the intrinsic camera parameters and the three-dimensional (3D) position and orientation of the camera frame relative to a certain world coordinate system. The Takagi-Sugeno-Kang (TSK) fuzzy system, meanwhile, is a popular fuzzy system that approximates any nonlinear function to arbitrary accuracy with only a small number of fuzzy rules, and it exhibits not only nonlinear behavior but also a transparent structure. In this paper, we present a novel and simple technique for camera calibration in machine vision using a TSK fuzzy model. The proposed method divides the world into several regions according to the camera view and uses the clustered 3D geometric knowledge. The TSK fuzzy system is employed to estimate the camera parameters by combining partial information into complete 3D information. Experiments are performed to verify the proposed camera calibration method.
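As a rough illustration of the TSK machinery the paper builds on (not the authors' rule base or parameters), a first-order TSK model combines local linear consequents with Gaussian rule memberships:

```python
import numpy as np

def tsk_predict(x, centers, widths, coeffs):
    """First-order TSK inference: each rule has a Gaussian membership
    and a local linear consequent a @ x + b; the output is the
    firing-strength-weighted average of the consequents."""
    x = np.atleast_1d(x)
    w = np.array([np.exp(-np.sum((x - c) ** 2) / (2.0 * s ** 2))
                  for c, s in zip(centers, widths)])
    y = np.array([a @ x + b for a, b in coeffs])
    return float(np.sum(w * y) / np.sum(w))

# Two rules whose local lines are y = -x and y = x approximate y = |x|
centers = [np.array([-1.0]), np.array([1.0])]
widths = [1.0, 1.0]
coeffs = [(np.array([-1.0]), 0.0), (np.array([1.0]), 0.0)]
out = tsk_predict(2.0, centers, widths, coeffs)   # close to |2| = 2
```

In the calibration setting, each rule would cover one clustered world region and its consequent would carry that region's partial 3D geometric knowledge.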

The GEO-Localization of a Mobile Mapping System (모바일 매핑 시스템의 GEO 로컬라이제이션)

  • Chon, Jae-Choon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.27 no.5
    • /
    • pp.555-563
    • /
    • 2009
  • When a mobile mapping system or a robot is equipped with only a GPS (Global Positioning System) and a multiple stereo camera system, a transformation from the local camera coordinate system to the GPS coordinate system is required to link the camera poses and 3D data produced by V-SLAM (Vision-based Simultaneous Localization And Mapping) to GIS data, or to remove the accumulated error of those camera poses. To satisfy these requirements, this paper proposes a novel method that calculates the camera rotation in the GPS coordinate system using three pairs of camera positions obtained from GPS and V-SLAM, respectively. The proposed method is composed of four simple steps: 1) calculate a quaternion that makes the normal vectors of the two planes, each defined by one triple of camera positions, parallel; 2) transform the three V-SLAM camera positions with the calculated quaternion; 3) calculate an additional quaternion that maps the second or third transformed position onto the corresponding GPS camera position; and 4) determine the final quaternion by multiplying the two quaternions. The final quaternion directly transforms the local camera coordinate system into the GPS coordinate system. Additionally, an update of the 3D data of captured objects based on the view angles from the objects to the cameras is proposed. The proposed method is demonstrated through a simulation and an experiment.
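The four quaternion steps can be sketched directly; this is an illustrative reconstruction from the abstract with invented test data, not the paper's code:

```python
import numpy as np

def quat_between(a, b):
    """Unit quaternion (w, x, y, z) rotating unit vector a onto unit vector b."""
    q = np.array([1.0 + np.dot(a, b), *np.cross(a, b)])
    return q / np.linalg.norm(q)

def quat_mul(q, r):
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_rot(q, v):
    """Rotate vector v by unit quaternion q (computes q v q*)."""
    qc = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, np.array([0.0, *v])), qc)[1:]

def plane_normal(p):
    n = np.cross(p[1] - p[0], p[2] - p[0])
    return n / np.linalg.norm(n)

def align_rotation(p_slam, p_gps):
    """Steps 1-4: make the two triangles' planes parallel, then spin
    about the shared normal to line up the second point pair."""
    a = p_slam - p_slam.mean(axis=0)
    b = p_gps - p_gps.mean(axis=0)
    q1 = quat_between(plane_normal(a), plane_normal(b))     # step 1
    a1 = np.array([quat_rot(q1, v) for v in a])             # step 2
    q2 = quat_between(a1[1] / np.linalg.norm(a1[1]),        # step 3
                      b[1] / np.linalg.norm(b[1]))
    return quat_mul(q2, q1)                                 # step 4

# Demo: recover a known rotation between congruent position triples
axis = np.array([0.3, 0.5, 0.8]); axis /= np.linalg.norm(axis)
q_true = np.array([np.cos(0.25), *(np.sin(0.25) * axis)])
q_conj = q_true * np.array([1.0, -1.0, -1.0, -1.0])
p_gps = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.3, 0.8, 0.0]])
p_slam = np.array([quat_rot(q_conj, v) for v in p_gps]) + np.array([5.0, 2.0, 1.0])
q = align_rotation(p_slam, p_gps)
a = p_slam - p_slam.mean(axis=0)
b = p_gps - p_gps.mean(axis=0)
```

For congruent triples, the combined quaternion maps every centered V-SLAM position onto its GPS counterpart.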

Accuracy Analysis of 3D Position of Close-range Photogrammetry Using Direct Linear Transformation and Self-calibration Bundle Adjustment with Additional Parameters (DLT와 부가변수에 의한 광속조정법을 활용한 근접사진측량의 3차원 위치정확도 분석)

  • Kim, Hyuk Gil;Hwang, Jin Sang;Yun, Hong Sic
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.23 no.2
    • /
    • pp.27-38
    • /
    • 2015
  • In this study, 3D position coordinates of targets were calculated using DLT and self-calibration bundle adjustment with additional parameters in close-range photogrammetry, and the accuracy of the results was analyzed. For this purpose, camera calibration and orientation parameters were computed for each image by performing reference surveying with a total station over an experimental setup with numerous attached targets. To analyze the accuracy, 3D position coordinates were calculated for an identical selection of targets and compared with the reference coordinates obtained from the total station. For the image coordinate measurement in the stereo images, the center of each circular target was measured by an ellipse-fitting procedure, and the results were used as the image coordinates of the targets. The experiments show that position coordinates calculated by stereo-image-based photogrammetry deviate by less than 4 mm on average, within a maximum error of about 1 cm. From this result, stereo-image-based photogrammetry is expected to be usable in various fields of close-range photogrammetry requiring precise accuracy.
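The DLT half of the comparison has a standard linear form: each 3D-to-2D control point contributes two equations in the 11 DLT parameters. A sketch with a synthetic camera (the intrinsics and pose are invented, not the paper's setup):

```python
import numpy as np

def dlt_calibrate(obj_pts, img_pts):
    """Direct Linear Transformation: solve the 11 DLT parameters from
    at least six non-coplanar 3D control points and their image points."""
    A, y = [], []
    for (X, Y, Z), (u, v) in zip(obj_pts, img_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z]); y.append(u)
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z]); y.append(v)
    L, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(y, float), rcond=None)
    return np.append(L, 1.0).reshape(3, 4)  # 3x4 projection with L12 := 1

def dlt_project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Synthetic check: recover a known pinhole camera from 7 control points
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P_true = K @ np.hstack([np.eye(3), np.array([[0.1], [-0.2], [5.0]])])
obj = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                [1, 1, 0], [1, 0, 1], [0, 1, 1]], float)
img = [dlt_project(P_true, X) for X in obj]
P_hat = dlt_calibrate(obj, img)
```

The self-calibration bundle adjustment the study compares against refines these parameters together with additional distortion terms by nonlinear least squares.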

A Study on Tracking a Moving Object using Photogrammetric Techniques - Focused on a Soccer Field Model - (사진측랑기법을 이용한 이동객체 추적에 관한 연구 - 축구장 모형을 중심으로 -)

  • Bae Sang-Keun;Kim Byung-Guk;Jung Jae-Seung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.24 no.2
    • /
    • pp.217-226
    • /
    • 2006
  • Extracting and tracking objects are fundamental and important steps in digital image processing and computer vision, and many algorithms have been developed for them. In this research, a method is suggested for tracking a moving object using a pair of CCD cameras and calculating its coordinates. A 1/100 miniature of a soccer field was made to apply the developed algorithms. Candidates were first selected from the acquired images using the RGB values of the moving object (a soccer ball), and the object was then extracted among the candidates by its size (MBR size), giving the image coordinates of the moving object. The real-time position of the moving object is tracked within the boundary of its expected motion, which is centered on the object. The 3D position of the moving object is obtained by relative orientation, absolute orientation, and space intersection of the CCD camera image pair.
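The extraction step (RGB thresholding followed by an MBR size check) might look like the simplified sketch below; unlike the paper, it merges all candidate pixels into a single region rather than testing each candidate separately, and the colour and size values are invented:

```python
import numpy as np

def extract_ball(frame, target_rgb, tol=30, min_size=2, max_size=50):
    """Select candidate pixels by RGB closeness to the ball colour, then
    accept the detection only if the bounding box (MBR) has a plausible
    size; returns the image coordinates (column, row) of the centre."""
    diff = np.abs(frame.astype(int) - np.asarray(target_rgb))
    ys, xs = np.nonzero(np.all(diff <= tol, axis=-1))
    if xs.size == 0:
        return None
    w = xs.max() - xs.min() + 1
    h = ys.max() - ys.min() + 1
    if not (min_size <= w <= max_size and min_size <= h <= max_size):
        return None
    return float(xs.mean()), float(ys.mean())

# Synthetic frame: a 5x5 bright patch standing in for the ball
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[40:45, 60:65] = (250, 250, 250)
center = extract_ball(frame, (255, 255, 255))
```

The image coordinates found in each camera then feed the orientation and space-intersection steps that yield the 3D position.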

Estimation of Human Height and Position using a Single Camera (단일 카메라를 이용한 보행자의 높이 및 위치 추정 기법)

  • Lee, Seok-Han;Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.45 no.3
    • /
    • pp.20-31
    • /
    • 2008
  • In this paper, we propose a single-view technique for the estimation of human height and position. Conventional techniques for the estimation of 3D geometric information rely on geometric cues such as the vanishing point and vanishing line. The proposed technique, by contrast, back-projects the image of the moving object directly and estimates the position and height of the object in a 3D space whose coordinate system is designated by a marker. Geometric errors are then corrected using geometric constraints provided by the marker. Unlike most conventional techniques, the proposed method offers a framework for simultaneous acquisition of the height and position of an individual present in the image. The accuracy and robustness of our technique are verified on several real video sequences from outdoor environments.
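One common way to realize marker-based back-projection for the ground (foot) point is a ground-plane homography estimated from the marker corners; this is a generic sketch, not necessarily the authors' exact formulation, and the marker geometry is invented:

```python
import numpy as np

def ground_homography(img_pts, world_pts):
    """Estimate the image -> ground-plane homography from >= 4 marker
    correspondences (u, v) <-> (X, Y) via homogeneous SVD."""
    A = []
    for (u, v), (X, Y) in zip(img_pts, world_pts):
        A.append([u, v, 1, 0, 0, 0, -X * u, -X * v, -X])
        A.append([0, 0, 0, u, v, 1, -Y * u, -Y * v, -Y])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 3)        # null vector, defined up to scale

def locate_on_ground(H, u, v):
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]

# Synthetic marker: a known projective map plays the true homography
H_true = np.array([[2.0, 0.0, 1.0], [0.0, 2.0, -1.0], [0.1, 0.0, 1.0]])
img = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
world = [tuple(locate_on_ground(H_true, u, v)) for (u, v) in img]
H_hat = ground_homography(img, world)
```

Mapping the foot pixel through H gives the person's ground position; height then follows from the head pixel and the vertical direction fixed by the marker.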

Point Cloud Registration Algorithm Based on RGB-D Camera for Shooting Volumetric Objects (체적형 객체 촬영을 위한 RGB-D 카메라 기반의 포인트 클라우드 정합 알고리즘)

  • Kim, Kyung-Jin;Park, Byung-Seo;Kim, Dong-Wook;Seo, Young-Ho
    • Journal of Broadcast Engineering
    • /
    • v.24 no.5
    • /
    • pp.765-774
    • /
    • 2019
  • In this paper, we propose a point cloud registration algorithm for multiple RGB-D cameras. In general, computer vision is concerned with the problem of precisely estimating camera position. Existing 3D model generation methods require a large number of cameras or expensive 3D cameras, and the conventional approach of obtaining the camera extrinsic parameters from two-dimensional images has a large estimation error. In this paper, we propose a method that obtains coordinate transformation parameters with an error within a valid range by using depth images and a function optimization method, in order to generate an omni-directional three-dimensional model from eight low-cost RGB-D cameras.
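The paper estimates the transformation parameters by numerical function optimization; for corresponding 3D points taken from the depth images, the closed-form Kabsch alignment is the textbook alternative and is sketched here with invented data:

```python
import numpy as np

def rigid_transform(src, dst):
    """Closed-form least-squares rigid alignment (Kabsch): find R, t
    such that dst_i ~= R @ src_i + t for corresponding 3D points."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                       # reflection-safe rotation
    t = mu_d - R @ mu_s
    return R, t

# Recover a known rotation about Z plus a translation
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
src = np.random.default_rng(0).normal(size=(10, 3))
dst = src @ R_true.T + t_true
R, t = rigid_transform(src, dst)
```

Chaining such pairwise transforms camera-to-camera registers all eight clouds into one frame; the optimization in the paper additionally suppresses depth noise.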

The Road Traffic Sign Recognition and Automatic Positioning for Road Facility Management (도로시설물 관리를 위한 교통안전표지 인식 및 자동위치 취득 방법 연구)

  • Lee, Jun Seok;Yun, Duk Geun
    • International Journal of Highway Engineering
    • /
    • v.15 no.1
    • /
    • pp.155-161
    • /
    • 2013
  • PURPOSES: This study develops road traffic sign recognition and automatic positioning for road facility management. METHODS: GPS, IMU, DMI, camera, and laser sensors were installed on a van, and the vehicle position, forward images, and point clouds of traffic signs were surveyed. To obtain traffic sign positions automatically, traffic sign recognition software was developed that logs the traffic sign type and its approximate position, and this study suggests a methodology for transforming the laser point cloud into the map coordinate system with a 3D axis rotation algorithm. RESULTS: On a clear day the traffic sign recognition ratio is 92.98%, and on a cloudy day it is 80.58%. To evaluate the inserted traffic sign positions, this study examined the differences against road surveying results: the RMSE is 0.227 m and the average is 1.51 m, which reflects the GPS positioning error. Including these errors, the traffic sign position can be inserted within 1.51 m. CONCLUSIONS: As a result of this study, the traffic sign type and position can be surveyed automatically, and the data can be used to analyze road safety and speed-limit consistency and to build a traffic sign DB.
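The 3D axis rotation from the laser frame to the map frame can be sketched as three elementary rotations plus the GPS translation; the rotation order and angle conventions below are assumptions, as the abstract does not state them:

```python
import numpy as np

def to_map_coords(points, roll, pitch, heading, origin):
    """Rotate sensor-frame points by the vehicle attitude (roll about X,
    pitch about Y, heading about Z) and translate to the GPS origin."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    ch, sh = np.cos(heading), np.sin(heading)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ch, -sh, 0], [sh, ch, 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx            # assumed rotation order; conventions vary
    return np.asarray(points) @ R.T + np.asarray(origin)

# A point 1 m ahead of the sensor, vehicle heading rotated 90 degrees
out = to_map_coords([[1.0, 0.0, 0.0]], 0.0, 0.0, np.pi / 2,
                    [100.0, 200.0, 10.0])
```

The attitude angles come from the IMU and the origin from GPS, so GPS positioning error propagates directly into the sign positions, as the reported 1.51 m average reflects.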

A Measurement Error Correction Algorithm of Road Structure for Traveling Vehicle's Fluctuation Using VF Modeling (VF 모델링을 이용한 주행차량의 진동에 대한 도로 계측오차 보정 알고리듬)

  • Jeong, Yong-Bae;Kim, Jung-Hyun;Seo, Kyung-Ho;Kim, Tae-Hyo
    • Proceedings of the Korea Institute of Convergence Signal Processing
    • /
    • 2005.11a
    • /
    • pp.190-200
    • /
    • 2005
  • In this paper, an image model of road lane markings is established using view frustum (VF) modeling. The algorithm also involves real-time computation, via camera calibration, of 3D position coordinates and of the distance from the camera to points in the 3D world coordinate system. To reduce the measurement error, a useful algorithm is proposed that analyzes the geometric variations due to the traveling vehicle's fluctuation using the VF model. In experiments, without correction, a pitching rotation of 0.4° gives an error of 0.4~0.6 m at a distance of 10 m, and the error grows rapidly at longer distances. We confirmed that the proposed algorithm can reduce the error to less than 0.1 m under the same conditions.
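A simple flat-road model shows why a small pitch fluctuation produces this kind of error growth. Assuming a camera height of 1.2 m (an invented value, not from the paper), an uncorrected pitch offset flattens the viewing ray to each ground point:

```python
import numpy as np

def ground_distance(h, theta):
    """Distance to a ground point seen at depression angle theta (rad)
    from a camera mounted at height h."""
    return h / np.tan(theta)

def pitch_error(h, d, delta):
    """Range error at true ground distance d caused by an uncorrected
    pitch fluctuation delta (rad): the ray appears delta flatter."""
    theta = np.arctan2(h, d)                 # true depression angle
    return ground_distance(h, theta - delta) - d

err10 = pitch_error(1.2, 10.0, np.deg2rad(0.4))   # roughly half a metre
err20 = pitch_error(1.2, 20.0, np.deg2rad(0.4))   # several times larger
```

Because the depression angle shrinks with range, the same 0.4° fluctuation costs far more at 20 m than at 10 m, which is the nonlinearity the VF-based correction compensates.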
