• Title/Summary/Keyword: 3D world coordinate


Conversion Method of 3D Point Cloud to Depth Image and Its Hardware Implementation (3차원 점군데이터의 깊이 영상 변환 방법 및 하드웨어 구현)

  • Jang, Kyounghoon;Jo, Gippeum;Kim, Geun-Jun;Kang, Bongsoon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.10
    • /
    • pp.2443-2450
    • /
    • 2014
  • In motion recognition systems using depth images, the depth image is converted into real-world 3D point cloud data so that algorithms can be applied efficiently, and the output is converted back into the projective (depth-image) domain after the algorithm is applied. However, this coordinate conversion introduces rounding errors and data loss depending on the applied algorithm. In this paper, we propose an efficient method for converting 3D point cloud data to a depth image, together with its hardware implementation, that avoids rounding error and data loss under image size changes. The proposed system was developed as a Windows program using OpenCV and tested in real time with a Kinect. The hardware was designed in Verilog HDL and verified on a Xilinx Zynq-7000 FPGA board.
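As a sketch of the conversion the abstract describes, the following assumes a simple pinhole model with Kinect-like intrinsics (FX, FY, CX, CY are illustrative values, not the paper's); keeping coordinates in floating point throughout is one way to avoid the rounding error mentioned above.

```python
import numpy as np

# Illustrative pinhole intrinsics (Kinect-like, not the paper's values)
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def depth_to_points(depth):
    """Back-project a depth image (meters) into real-world 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(float)
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.stack([x, y, z], axis=-1)

def points_to_depth(points):
    """Convert the gridded point cloud back to a depth image; keeping
    float coordinates avoids introducing rounding error on the way back."""
    return points[..., 2].copy()
```

Because the back-projection preserves the pixel grid in this sketch, the depth channel round-trips exactly.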

Markerless camera pose estimation framework utilizing construction material with standardized specification

  • Harim Kim;Heejae Ahn;Sebeen Yoon;Taehoon Kim;Thomas H.-K. Kang;Young K. Ju;Minju Kim;Hunhee Cho
    • Computers and Concrete
    • /
    • v.33 no.5
    • /
    • pp.535-544
    • /
    • 2024
  • In the rapidly advancing landscape of computer vision (CV) technology, there is burgeoning interest in its integration with the construction industry. Camera calibration is the process of deriving the intrinsic and extrinsic parameters that govern how 3D real-world coordinates are projected onto the 2D image plane; the intrinsic parameters are internal factors of the camera, while the extrinsic parameters are external factors such as the camera's position and rotation. Camera pose estimation, or extrinsic calibration, which estimates the extrinsic parameters, is essential for CV applications in construction, since it can be used for the indoor navigation of construction robots and for field monitoring by restoring depth information. Traditionally, camera pose estimation relied on target objects such as markers or patterns. However, these marker- or pattern-based methods are often time-consuming because a target object must be installed before estimation. As a solution to this challenge, this study introduces a novel framework that facilitates camera pose estimation using standardized materials commonly found on construction sites, such as concrete forms. The proposed framework obtains 3D real-world coordinates by referring to construction materials with known specifications, extracts the corresponding 2D image-plane coordinates through keypoint detection, and derives the camera pose through the perspective-n-point (PnP) method, which recovers the extrinsic parameters by matching 3D-2D coordinate pairs. This framework presents a substantial advancement, as it streamlines the extrinsic calibration process and thereby potentially enhances the efficiency of CV technology application and data collection at construction sites. The approach holds promise for expediting and optimizing various construction-related tasks by automating and simplifying the calibration procedure.
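The forward projection model that PnP inverts can be sketched as follows; the intrinsic matrix K and the pose (R, t) below are hypothetical values for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical intrinsics and extrinsics (illustrative, not the paper's)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R = np.eye(3)                   # camera axes aligned with the world
t = np.array([0.0, 0.0, 2.0])   # camera 2 m in front of the points

def project(points_3d, K, R, t):
    """Project 3D world coordinates onto the 2D image plane (the
    forward model that PnP inverts to recover R and t)."""
    cam = points_3d @ R.T + t        # world -> camera frame
    uv = cam @ K.T                   # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]    # perspective divide
```

Given at least four such 3D-2D correspondences, a PnP solver searches for the (R, t) that reproduces the observed pixels.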

Three Dimensional Geometric Feature Detection Using Computer Vision System and Laser Structured Light (컴퓨터 시각과 레이저 구조광을 이용한 물체의 3차원 정보 추출)

  • Hwang, H.;Chang, Y.C.;Im, D.H.
    • Journal of Biosystems Engineering
    • /
    • v.23 no.4
    • /
    • pp.381-390
    • /
    • 1998
  • An algorithm to extract the 3-D geometric information of a static object was developed using a 2-D computer vision system and a laser structured lighting device. Multiple parallel lines were used as the structured light pattern. The proposed algorithm is composed of three stages. In the first stage, camera calibration, which determines the coordinate transformation between the image plane and the real 3-D world, is performed using 6 known pairs of points. In the second stage, the height of the object is computed from the shift of the projected laser beam on the object. Finally, using the height information of each 2-D image point, the corresponding 3-D information is computed from the camera calibration results. For arbitrary geometric objects, the maximum error of the extracted 3-D features was less than 1-2 mm, showing that the proposed algorithm is accurate for 3-D geometric feature detection.
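The height-from-shift idea in the second stage can be sketched with simple trigonometry; the laser angle and image-to-world scale below are assumed values, not the paper's calibration results.

```python
import math

# Assumed setup: the laser plane is inclined at THETA from the vertical,
# so a point raised by height h shifts the projected line by h*tan(THETA)
# on the reference plane (illustrative values, not the paper's).
THETA = math.radians(30.0)
MM_PER_PIXEL = 0.5   # assumed image-to-world scale after calibration

def height_from_shift(shift_px):
    """Recover object height (mm) from the observed line shift (pixels)."""
    shift_mm = shift_px * MM_PER_PIXEL
    return shift_mm / math.tan(THETA)
```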


A 3D Terrain Reconstruction System using Navigation Information and Realtime-Updated Terrain Data (항법정보와 실시간 업데이트 지형 데이터를 사용한 3D 지형 재구축 시스템)

  • Baek, In-Sun;Um, Ky-Hyun;Cho, Kyung-Eun
    • Journal of Korea Game Society
    • /
    • v.10 no.6
    • /
    • pp.157-168
    • /
    • 2010
  • A terrain is an essential element in constructing a virtual world in which game characters and objects interact with one another. Creating a terrain requires a great deal of time and repetitive editing. This paper presents a 3D terrain reconstruction system that creates 3D terrain in virtual space from real terrain data. The system converts the coordinate system of height maps generated from a stereo camera and a laser scanner from global GPS coordinates into 3D world coordinates using the x- and z-axis vectors of the global GPS coordinate system, and calculates the movement vectors and rotation matrices frame by frame. Terrain meshes are dynamically generated and rendered in virtual areas represented as an undirected graph, and the rendered meshes are kept accurate by correcting terrain data errors. In our experiments, the FPS of the system was checked regularly while the terrain was being reconstructed, and the visualization quality of the terrain was reviewed. The results show that our system achieves 3 times the FPS of other quadtree-based terrain management systems for small areas and a 40% improvement for large areas, while the visualized terrain maintains the same shape as the contour of the real terrain. The system could be used as the terrain component of real-time 3D games to generate terrain on the fly, and for terrain design work in CG movies.
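The dynamic mesh generation step can be illustrated with a generic height-map-to-mesh construction; this is a common approach, not necessarily the paper's exact pipeline.

```python
import numpy as np

def heightmap_to_mesh(heights, cell_size=1.0):
    """Turn a (rows, cols) height map into mesh vertices [x, y, z]
    (y is the height) and triangle indices, two triangles per cell."""
    rows, cols = heights.shape
    xs, zs = np.meshgrid(np.arange(cols), np.arange(rows))
    verts = np.stack([xs * cell_size, heights, zs * cell_size],
                     axis=-1).reshape(-1, 3)
    tris = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c
            tris.append([i, i + cols, i + 1])          # upper-left triangle
            tris.append([i + 1, i + cols, i + cols + 1])  # lower-right
    return verts, np.array(tris)
```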

Golf Green Slope Estimation Using a Cross Laser Structured Light System and an Accelerometer

  • Pham, Duy Duong;Dang, Quoc Khanh;Suh, Young Soo
    • Journal of Electrical Engineering and Technology
    • /
    • v.11 no.2
    • /
    • pp.508-518
    • /
    • 2016
  • In this paper, we propose a method that combines an accelerometer with a cross structured light system to estimate the slope of a golf green. The cross-line laser provides two laser planes whose plane equations are computed with respect to the camera coordinate frame using least-squares optimization. By capturing the projections of the cross-line laser on the slope with a camera in a static pose, two 3D curves are approximated as high-order polynomials in the camera coordinate frame. The curves' functions are then expressed in the world coordinate frame using a rotation matrix estimated from the accelerometer's output. The curves provide important information about the green, such as its height and slope angle. The estimation accuracy of the curves is verified through experiments that use an OptiTrack camera system as a ground-truth reference.
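The rotation estimated from the accelerometer can be sketched as follows, assuming the sensor measures only gravity in a static pose; the axis conventions here are assumptions, not the paper's.

```python
import numpy as np

def rotation_from_gravity(acc):
    """Rotation R mapping the measured gravity direction onto the world
    'down' axis (0, 0, -1), via Rodrigues' formula for the axis-angle
    between the two vectors (sensor axis conventions are assumed)."""
    g = np.asarray(acc, dtype=float)
    g = g / np.linalg.norm(g)
    down = np.array([0.0, 0.0, -1.0])
    v = np.cross(g, down)   # rotation axis (unnormalized)
    c = g @ down            # cosine of the tilt angle
    s = np.linalg.norm(v)   # sine of the tilt angle
    if s < 1e-12:
        # already aligned, or exactly flipped (180 degrees about x)
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0, -v[2], v[1]],
                   [v[2], 0, -v[0]],
                   [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx * ((1 - c) / s**2)
```

Applying this rotation to the polynomial curve points expresses them in the world frame, from which slope angles can be read off.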

New Method of Visual Servoing using an Uncalibrated Camera and a Calibrated Robot

  • Morita, Masahiko;Shigeru, Uchikado;Yasuhiro, Osa
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2002.10a
    • /
    • pp.41.4-41
    • /
    • 2002
  • In this paper we deal with visual servoing that can control a robot arm with a camera using image information only, without estimating the 3D position and rotation of the robot arm. It is assumed that the robot arm is calibrated and the camera is uncalibrated. We consider two coordinate systems, the world coordinate system and the camera coordinate system, and adopt a pinhole camera model for the camera. First, the essential notions are introduced: epipolar geometry, the epipole, the epipolar equation, and the epipolar constraint. These play an important role in designing the visual servoing scheme in the later chapters. The problem statement is then given. Provided two a priori...
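The epipolar constraint mentioned above can be illustrated with a synthetic stereo pair; the intrinsics and baseline below are hypothetical values chosen for the example.

```python
import numpy as np

# Hypothetical stereo rig: identical intrinsics, second camera
# translated 0.5 m along x (illustrative values only).
K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
R = np.eye(3)
t = np.array([0.5, 0.0, 0.0])

def skew(v):
    """Cross-product matrix [v]x."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

# Fundamental matrix F = K'^-T [t]x R K^-1; the epipolar equation
# x2^T F x1 = 0 holds for any matched pair in homogeneous pixels.
F = np.linalg.inv(K).T @ skew(t) @ R @ np.linalg.inv(K)

def project(K, R, t, X):
    """Pinhole projection of a 3D point X (camera-1 frame) into a
    camera with pose (R, t) relative to camera 1."""
    p = K @ (R @ X + t)
    return p / p[2]
```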


Integrating History of Mathematics in Teaching Cartesian Coordinate Plane: A Lesson Study

  • MENDOZA, Jay-R M.;ALEGARIO, Joan Marie T.;BLANCO, Miguel G.;De TORRES, Reynold;IGAY, Roselyn B.;ELIPANE, Levi E.
    • Research in Mathematical Education
    • /
    • v.20 no.1
    • /
    • pp.39-49
    • /
    • 2016
  • The History of Mathematics (HOM) was integrated into teaching the Cartesian Coordinate Plane (CCP) to Grade Seven learners of Moonwalk National High School using Lesson Study. After the lesson was taught, three valuable issues emerged: (1) HOM is a springboard and/or a medium of motivation in teaching the CCP; (2) the history of the CCP opened a wider perspective on its real-life applications in the modern world; and (3) the integration of history developed a sense of purpose and an appreciation of mathematics among learners. Feedback solicited from the learners showed that they understood the importance of studying mathematics after learning about the life and contributions of Rene Descartes. Hence, the integration of history plays a vital role in developing positive attitudes toward mathematics among learners.

Accuracy Improvement of DEM Using Ground Coordinates Package (공공삼각점 위치자료를 이용한 DEM의 위치 정확도 향상)

  • Lee, Hyoseong;Oh, Jaehong
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.3
    • /
    • pp.567-575
    • /
    • 2021
  • In order to correct the provided RPCs and the DEM generated from high-resolution satellite images, ground control points (GCPs) must first be acquired. This task is very complicated, requiring field surveys, GPS surveying, and the reading of image coordinates corresponding to the GCPs. In addition, since it is difficult to set up and measure GCPs in areas where access is difficult or impossible (tidal flats, polar regions, volcanic regions, etc.), an alternative method is needed. In this paper, we propose a 3D surface matching technique that uses only an established ground coordinate package, avoiding the ground-image-location survey of GCPs, to correct the DEM produced from WorldView-2 satellite images and the provided RPCs. The location data of the public control points were obtained from the National Geographic Information Institute website, and the DEM was corrected by performing 3D surface matching against this package. The accuracy of the 3-axis translation and rotation obtained by the matching was evaluated using pre-measured GPS checkpoints. As a result, accuracy within 2 m in planimetric location and 1 m in height was obtained.
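The translation-and-rotation estimation at the core of such matching can be approximated by a standard least-squares rigid alignment (the Kabsch/Procrustes solution); this is a generic stand-in, not the paper's exact surface matching algorithm.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rotation R and translation t such that
    dst ~= src @ R.T + t (Kabsch/Procrustes rigid alignment)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```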

Three Degrees of Freedom Global Calibration Method for Measurement Systems with Binocular Vision

  • Xu, Guan;Zhang, Xinyuan;Li, Xiaotao;Su, Jian;Lu, Xue;Liu, Huanping;Hao, Zhaobing
    • Journal of the Optical Society of Korea
    • /
    • v.20 no.1
    • /
    • pp.107-117
    • /
    • 2016
  • We develop a new method to globally calibrate the feature points derived from binocular systems at different positions. A three-DOF (degree-of-freedom) global calibration system is established to move and rotate the 3D calibration board to an arbitrary position, and a three-DOF global calibration model is constructed for the binocular systems at different positions. The model unifies the 3D coordinates of the feature points from the different binocular systems into a unique world coordinate system, determined by the initial position of the calibration board. Experiments are conducted on binocular systems at coaxial and diagonal positions. The experimental root-mean-square errors between the true and reconstructed 3D coordinates of the feature points are 0.573 mm, 0.520 mm and 0.528 mm at the coaxial positions, and 0.495 mm, 0.556 mm and 0.627 mm at the diagonal positions. The method thus provides a global and accurate calibration that unifies the measurement points of different binocular vision systems into the same world coordinate system.
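Mapping each binocular system's points into the shared world frame, and the RMS metric quoted above, can be sketched as follows (the transforms are whatever the global calibration produces; values here are placeholders).

```python
import numpy as np

def to_world(points_local, R, t):
    """Map feature points from one binocular system's local frame into
    the shared world frame via that system's calibrated (R, t)."""
    return points_local @ R.T + t

def rms_error(true_pts, recon_pts):
    """Root-mean-square 3D distance, as reported in the abstract."""
    return np.sqrt(np.mean(np.sum((true_pts - recon_pts) ** 2, axis=1)))
```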

Stereo Vision System Using Relative Stereo Disparity with Subpixel Resolution

  • Kim, Chi-Yen;Ahn, Cheol-Ki;Lee, Min-Cheol
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2000.10a
    • /
    • pp.407-407
    • /
    • 2000
  • For acquiring 3-dimensional information about real space, a stereo vision system is suitable. In a stereo system, the 3D real-world position is derived from the coordinate transformation between the cameras and the world. Thus, using stereo vision ordinarily requires constructing a kinematically precise transformation between the camera and world coordinates, which is intricate and difficult, so much cost and time must be spent building the system. In this paper, to solve this problem easily, a method is proposed that obtains 3D information using reference objects and the relative stereo disparity (RSD). Instead of computing position directly through coordinate transformation, only the relative stereo disparity within a stereo image pair is used to find the reference depth of objects, and the real 3D position is computed from the initial conditions of the reference objects. The disparity is computed at subpixel resolution for accuracy: corresponding points are located to subpixel precision. The experimental results show that subpixel resolution is more accurate than one-pixel resolution.
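The disparity-depth relation and the subpixel refinement the abstract argues for can be sketched as follows; the focal length and baseline are assumed values, and the paper's RSD method uses relative disparities against reference objects rather than this direct model.

```python
import numpy as np

# Assumed rectified-stereo parameters (not the paper's values)
FOCAL_PX = 700.0   # focal length in pixels
BASELINE = 0.12    # camera baseline in meters

def depth_from_disparity(d):
    """Depth (m) from disparity (px) in a rectified stereo pair."""
    return FOCAL_PX * BASELINE / d

def subpixel_peak(scores):
    """Parabolic interpolation around the best integer disparity,
    refining the match location to subpixel resolution."""
    i = int(np.argmax(scores))
    if 0 < i < len(scores) - 1:
        l, c, r = scores[i - 1], scores[i], scores[i + 1]
        i += 0.5 * (l - r) / (l - 2 * c + r)
    return i
```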
