• Title/Summary/Keyword: Point cloud registration


Multi Point Cloud Integration based on Observation Vectors between Stereo Images (스테레오 영상 간 관측 벡터에 기반한 다중 포인트 클라우드 통합)

  • Yoon, Wansang;Kim, Han-gyeol;Rhee, Sooahm
    • Korean Journal of Remote Sensing
    • /
    • v.35 no.5_1
    • /
    • pp.727-736
    • /
    • 2019
  • In this paper, we present how to create a point cloud for a target area from multiple unmanned aerial vehicle (UAV) images and how to remove the gaps and overlapping points between datasets. For this purpose, the IBA (Incremental Bundle Adjustment) technique is first applied to correct the position and attitude of the UAV platform, and a point cloud is generated using the MDR (Multi-Dimensional Relaxation) matching technique. Next, we register the point clouds based on observation vectors between stereo images; this removes the gaps between point clouds generated from different stereo pairs. Finally, an occupancy-grid-based integration algorithm is applied to remove duplicated points and create an integrated point cloud. Experiments performed on UAV images show that gaps and duplicate points between point clouds generated from different stereo pairs can be removed.
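
A minimal sketch of the final integration step only, under simple assumptions: each stereo pair has already produced an (N, 3) point array, and one representative point is kept per occupied grid cell. The cell size and the keep-first rule are illustrative choices, not the paper's exact IBA/MDR/occupancy-grid pipeline.

```python
# Occupancy-grid style deduplication of merged point clouds (illustrative sketch).
import numpy as np

def integrate_point_clouds(clouds, cell_size=0.10):
    """Merge point clouds and keep one representative point per occupied grid cell."""
    merged = np.vstack(clouds)                               # (N, 3) all points together
    keys = np.floor(merged / cell_size).astype(np.int64)     # grid cell index per point
    # keep the first point that falls into each occupied cell
    _, first_idx = np.unique(keys, axis=0, return_index=True)
    return merged[np.sort(first_idx)]

# Example with two synthetic, partially overlapping clouds
cloud_a = np.random.rand(1000, 3)
cloud_b = np.random.rand(1000, 3) + np.array([0.5, 0.0, 0.0])
integrated = integrate_point_clouds([cloud_a, cloud_b], cell_size=0.05)
print(integrated.shape)
```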

Feature-based Matching Algorithms for Registration between LiDAR Point Cloud Intensity Data Acquired from MMS and Image Data from UAV (MMS로부터 취득된 LiDAR 점군데이터의 반사강도 영상과 UAV 영상의 정합을 위한 특징점 기반 매칭 기법 연구)

  • Choi, Yoonjo;Farkoushi, Mohammad Gholami;Hong, Seunghwan;Sohn, Hong-Gyoo
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.37 no.6
    • /
    • pp.453-464
    • /
    • 2019
  • Recently, as the demand for 3D geospatial information increases, the importance of rapid and accurate data construction has grown. Although many studies have registered UAV (Unmanned Aerial Vehicle) imagery to LiDAR (Light Detection and Ranging) data, which allows precise 3D data construction, studies using LiDAR data embedded in an MMS (Mobile Mapping System) remain insufficient. Therefore, this study compared and analyzed nine feature-point-based matching algorithms for registering the reflectance image converted from LiDAR point cloud intensity data acquired from an MMS with image data from a UAV. Our results indicate that the SIFT (Scale Invariant Feature Transform) algorithm stably secured a high matching accuracy and extracted sufficient conjugate points even in various road environments. In the registration accuracy analysis, the SIFT algorithm achieved an accuracy of about 10 pixels, except where the overlapping area was small and the same pattern was repeated. This is a reasonable result considering that distortion from the UAV altitude is included at the time of image capture. The results of this study are therefore expected to serve as basic research for the 3D registration of LiDAR point cloud intensity data and UAV imagery.
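
A minimal sketch of the SIFT branch of such a comparison, using OpenCV. The file names, the 0.75 ratio-test value, and the 5-pixel RANSAC threshold are illustrative assumptions; the paper compares nine algorithms, only the SIFT pipeline is shown here.

```python
# SIFT matching between a LiDAR intensity (reflectance) image and a UAV image.
import cv2
import numpy as np

img_lidar = cv2.imread("lidar_intensity.png", cv2.IMREAD_GRAYSCALE)   # placeholder file
img_uav = cv2.imread("uav_image.png", cv2.IMREAD_GRAYSCALE)           # placeholder file

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img_lidar, None)
kp2, des2 = sift.detectAndCompute(img_uav, None)

# Lowe's ratio test on 2-nearest-neighbour matches
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]

# Robust geometric fit (homography with RANSAC) to reject outlier correspondences
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print(f"{int(inlier_mask.sum())} inlier conjugate points out of {len(good)}")
```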

A Comparison of 3D Reconstruction through the Passive and Pseudo-Active Acquisition of Images (수동 및 반자동 영상획득을 통한 3차원 공간복원의 비교)

  • Jeona, MiJeong;Kim, DuBeom;Chai, YoungHo
    • Journal of Broadcast Engineering
    • /
    • v.21 no.1
    • /
    • pp.3-10
    • /
    • 2016
  • In this paper, two reconstructed point cloud sets containing 3D feature information are analyzed. For the 3D reconstruction of a building interior, the first image set is taken by sequential passive camera movement along a regular grid path, and the second set comes from a laser scanning process. Key points matched over all images are obtained with the SIFT (Scale Invariant Feature Transform) algorithm and are used for the registration of the point cloud data. The compared results are the number of points, the average point cloud density, and the point cloud generation time. The experimental results show that images from additional sensors, as well as images from the camera, are necessary for a more accurate 3D reconstruction of a building interior.
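
A minimal sketch of the kind of comparison metrics mentioned above (point count, an average-density estimate, generation time), assuming each reconstruction is an (N, 3) numpy array. The paper does not give its exact density definition, so a k-nearest-neighbour spacing proxy is used here.

```python
# Point-cloud statistics: count, density proxy from neighbour spacing, and timing.
import time
import numpy as np
from scipy.spatial import cKDTree

def cloud_statistics(points, k=4):
    """Point count and a simple density estimate from k-nearest-neighbour spacing."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)          # column 0 is the point itself
    mean_spacing = dists[:, 1:].mean()              # average neighbour distance
    return len(points), 1.0 / mean_spacing          # (count, relative density)

start = time.perf_counter()
cloud = np.random.rand(20000, 3)                    # stand-in for a reconstructed cloud
count, density = cloud_statistics(cloud)
print(count, density, time.perf_counter() - start)
```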

Dynamic 3D Worker Pose Registration for Safety Monitoring in Manufacturing Environment based on Multi-domain Vision System (다중 도메인 비전 시스템 기반 제조 환경 안전 모니터링을 위한 동적 3D 작업자 자세 정합 기법)

  • Choi, Ji Dong;Kim, Min Young;Kim, Byeong Hak
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.18 no.6
    • /
    • pp.303-310
    • /
    • 2023
  • A single vision system limits the ability to accurately understand the spatial constraints and interactions between dynamic workers and the gantry and collaborative robots used in production manufacturing. In this paper, we propose a 3D pose registration method for dynamic workers based on a multi-domain vision system for safety monitoring in manufacturing environments. The method uses OpenPose, a deep learning-based pose estimation model, to estimate a worker's dynamic two-dimensional pose in real time and reconstructs it into three-dimensional coordinates. The reconstructed 3D coordinates from the multi-domain vision system are aligned using the ICP algorithm and registered into a single 3D coordinate system. The proposed method showed effective performance in a manufacturing process environment, with an average registration error of 0.0664 m and an average frame rate of 14.597 frames per second.
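
A minimal sketch of one step such a pipeline needs: lifting matched 2D keypoints from two calibrated views to 3D by triangulation. The projection matrices and joint arrays are placeholders, and OpenPose inference and the subsequent ICP alignment are not shown; this is an assumed realization of the 2D-to-3D reconstruction, not the paper's exact implementation.

```python
# Triangulate 2D pose keypoints from two calibrated views into 3D joint positions.
import numpy as np
import cv2

# 3x4 projection matrices P = K [R | t] for two cameras (assumed known from calibration)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))]).astype(np.float64)
P2 = np.hstack([np.eye(3), np.array([[-0.3], [0.0], [0.0]])]).astype(np.float64)

# (2, J) arrays of matching 2D joint locations from a pose estimator in each view
joints_view1 = np.random.rand(2, 18).astype(np.float64)
joints_view2 = joints_view1 + np.array([[0.05], [0.0]])       # fake disparity

points_h = cv2.triangulatePoints(P1, P2, joints_view1, joints_view2)  # (4, J) homogeneous
joints_3d = (points_h[:3] / points_h[3]).T                            # (J, 3) Euclidean
print(joints_3d.shape)
```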

Camera Exterior Orientation for Image Registration onto 3D Data (3차원 데이터상에 영상등록을 위한 카메라 외부표정 계산)

  • Chon, Jae-Choon;Ding, Min;Shankar, Sastry
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.25 no.5
    • /
    • pp.375-381
    • /
    • 2007
  • A novel method is proposed to register images onto 3D data such as 3D point clouds, 3D vectors, and 3D surfaces. The proposed method estimates the exterior orientation of a camera with respect to the 3D data by fitting pairs of normal vectors of two planes that pass through the focal point and through the 2D and 3D lines extracted from an image and from the 3D data, respectively. The fitting condition is that the angle between each pair of normal vectors must be zero; this condition can be expressed as a numerical formula using the inner product of the normal vectors. Simulation tests demonstrate that the proposed method can estimate the exterior orientation for image registration.
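
A minimal sketch of this idea under my own assumptions: for each 2D/3D line pair, the plane through the focal point and the image line and the plane through the focal point and the 3D line should share a normal direction, so a residual of 1 - cos(angle) between the two normals is driven to zero. The rotation-vector-plus-translation parameterization and the placeholder line data are illustrative, not the paper's exact formulation.

```python
# Exterior orientation from 2D/3D line pairs via coplanarity of plane normals.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, lines_2d, lines_3d):
    # params = [rx, ry, rz, tx, ty, tz]; 2D endpoints are in normalized image coordinates
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    res = []
    for (u1, u2), (X1, X2) in zip(lines_2d, lines_3d):
        n_img = np.cross(np.append(u1, 1.0), np.append(u2, 1.0))   # plane normal from image line
        n_3d = np.cross(R @ X1 + t, R @ X2 + t)                    # plane normal from 3D line
        cos = n_img @ n_3d / (np.linalg.norm(n_img) * np.linalg.norm(n_3d))
        res.append(1.0 - cos)                                      # zero when the planes coincide
    return res

# Placeholder line correspondences (not geometrically consistent, just runnable)
lines_2d = [(np.array([0.1 * i, 0.2]), np.array([0.4, 0.1 * i])) for i in range(1, 7)]
lines_3d = [(np.array([1.0 * i, 2.0, 8.0]), np.array([4.0, 1.0 * i, 8.0])) for i in range(1, 7)]
sol = least_squares(residuals, x0=np.zeros(6), args=(lines_2d, lines_3d))
print(sol.x)
```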

Registration Technique of Partial 3D Point Clouds Acquired from a Multi-view Camera for Indoor Scene Reconstruction (실내환경 복원을 위한 다시점 카메라로 획득된 부분적 3차원 점군의 정합 기법)

  • Kim, Sehwan;Woo, Woontack
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.42 no.3 s.303
    • /
    • pp.39-52
    • /
    • 2005
  • In this paper, a registration method is presented to register partial 3D point clouds, acquired from a multi-view camera, for 3D reconstruction of an indoor environment. In general, conventional registration methods require high computational complexity and much time for registration. Moreover, these methods are not robust for 3D point clouds with comparatively low precision. To overcome these drawbacks, a projection-based registration method is proposed. First, depth images are refined based on a temporal property, by excluding 3D points with a large variation, and a spatial property, by filling holes with reference to neighboring 3D points. Second, 3D point clouds acquired from two views are projected onto the same image plane, and a two-step integer mapping is applied so that a modified KLT (Kanade-Lucas-Tomasi) tracker can find correspondences. Fine registration is then carried out by minimizing distance errors based on an adaptive search range. Finally, a final color is calculated with reference to the colors of corresponding points, and an indoor environment is reconstructed by applying the above procedure to consecutive scenes. The proposed method not only reduces computational complexity by searching for correspondences on a 2D image plane, but also enables effective registration even for 3D points of low precision. Furthermore, only a few color and depth images are needed to reconstruct an indoor environment.
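
A minimal sketch of the projection-plus-KLT idea: project 3D points from one view into the image plane, then track them into the second view's image with a KLT tracker to obtain 2D correspondences. The intrinsics, window size, input files, and the use of plain (unmodified) OpenCV KLT are assumptions for illustration; the two-step integer mapping and fine registration are not reproduced.

```python
# Find 2D correspondences for projected 3D points with a KLT tracker.
import numpy as np
import cv2

K = np.array([[525.0, 0.0, 320.0],
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])                      # assumed pinhole intrinsics

def project(points_3d):
    """Project (N, 3) camera-frame points to (N, 1, 2) pixel coordinates."""
    uvw = (K @ points_3d.T).T
    return (uvw[:, :2] / uvw[:, 2:3]).astype(np.float32).reshape(-1, 1, 2)

img1 = cv2.imread("view1_color.png", cv2.IMREAD_GRAYSCALE)    # placeholder images
img2 = cv2.imread("view2_color.png", cv2.IMREAD_GRAYSCALE)
pts_3d_view1 = np.loadtxt("view1_points.txt")                  # (N, 3) placeholder points

pix1 = project(pts_3d_view1)
pix2, status, _ = cv2.calcOpticalFlowPyrLK(img1, img2, pix1, None, winSize=(21, 21))
matches = status.ravel() == 1
print(f"{matches.sum()} 2D correspondences found for fine registration")
```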

An Improved Registration Evaluation Method for Automating Point Cloud Registration System (포인트 클라우드 정합 시스템 자동화를 위한 개선된 정합 평가 방법)

  • Kim, Jongwook;Kim, Hyungmin;Park, Jong-Il
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2020.07a
    • /
    • pp.308-310
    • /
    • 2020
  • In this paper, we propose an improvement of the existing registration evaluation method that judges registration failure in the re-registration process used to automate a point cloud registration system. A re-registration process, which detects a failed registration and registers again, is an essential component of such an automated system. With the existing evaluation method, when the point spacing or the amount of data differs between the two point clouds being registered, the computed registration error can disagree with the qualitative result, which causes fatal errors in the re-registration process. The proposed method computes, for each point in the reference point cloud, the perpendicular distance to the plane formed by the three nearest points of the target point cloud, and verifies the computed error by counting the points that satisfy a distance threshold, thereby effectively reducing the registration misjudgment rate.

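A minimal sketch of the evaluation idea described above: for each reference point, take the three nearest target points, measure the perpendicular distance to the plane they span, and count the points falling under a distance threshold. The threshold value and the synthetic data are illustrative assumptions.

```python
# Point-to-plane registration evaluation with a distance-threshold inlier count.
import numpy as np
from scipy.spatial import cKDTree

def plane_distance_inlier_ratio(reference, target, threshold=0.01):
    tree = cKDTree(target)
    _, idx = tree.query(reference, k=3)              # indices of the 3 nearest target points
    p0, p1, p2 = target[idx[:, 0]], target[idx[:, 1]], target[idx[:, 2]]
    normals = np.cross(p1 - p0, p2 - p0)             # plane normal for each triplet
    normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-12
    dist = np.abs(np.sum((reference - p0) * normals, axis=1))   # perpendicular distance
    return np.mean(dist < threshold), dist.mean()

ref = np.random.rand(5000, 3)
tgt = ref + np.random.normal(scale=0.002, size=ref.shape)       # nearly registered copy
ratio, mean_err = plane_distance_inlier_ratio(ref, tgt)
print(ratio, mean_err)
```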

Real-time Localization of An UGV based on Uniform Arc Length Sampling of A 360 Degree Range Sensor (전방향 거리 센서의 균일 원호길이 샘플링을 이용한 무인 이동차량의 실시간 위치 추정)

  • Park, Soon-Yong;Choi, Sung-In
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.48 no.6
    • /
    • pp.114-122
    • /
    • 2011
  • We propose an automatic localization technique based on Uniform Arc Length Sampling (UALS) of 360-degree range sensor data. The proposed method samples 3D points from a dense point cloud acquired by the sensor, registers the sampled points to a digital surface model (DSM) in real time, and determines the location of an Unmanned Ground Vehicle (UGV). To reduce the sampling and registration time for a sequence of dense range data, 3D range points are sampled uniformly in terms of ground sample distance. Using the proposed method, we can reduce the number of 3D points while maintaining their uniformity over the range data. We compare the registration speed and accuracy of the proposed method with a conventional sampling method, and through several experiments with varying numbers of sampling points, we analyze its speed and accuracy.
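
A minimal sketch of uniform arc-length sampling for one ordered 360-degree scan line: accumulate the travelled distance along consecutive points and keep one point per fixed arc-length step. The step size, the binning rule, and the synthetic circular scan are illustrative assumptions, not the paper's exact UALS procedure.

```python
# Uniform arc-length subsampling of an ordered scan line.
import numpy as np

def uniform_arc_length_sample(scan_points, step=0.5):
    """scan_points: (N, 3) points ordered along the scan; returns a subsampled (M, 3) array."""
    seg = np.linalg.norm(np.diff(scan_points, axis=0), axis=1)   # consecutive distances
    arc = np.concatenate([[0.0], np.cumsum(seg)])                # cumulative arc length
    bins = np.floor(arc / step).astype(int)                      # arc-length bin per point
    _, keep = np.unique(bins, return_index=True)                 # first point of each bin
    return scan_points[keep]

# Example: a circular scan of ~10 m radius sampled much more densely than needed
theta = np.linspace(0.0, 2.0 * np.pi, 36000, endpoint=False)
scan = np.stack([10.0 * np.cos(theta), 10.0 * np.sin(theta), np.zeros_like(theta)], axis=1)
sampled = uniform_arc_length_sample(scan, step=0.5)
print(len(scan), "->", len(sampled))
```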

HK Curvature Descriptor-Based Surface Registration Method Between 3D Measurement Data and CT Data for Patient-to-CT Coordinate Matching of Image-Guided Surgery (영상 유도 수술의 환자 및 CT 데이터 좌표계 정렬을 위한 HK 곡률 기술자 기반 표면 정합 방법)

  • Kwon, Ki-Hoon;Lee, Seung-Hyun;Kim, Min Young
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.22 no.8
    • /
    • pp.597-602
    • /
    • 2016
  • In image-guided surgery, patient registration is a critical process for a successful operation; it is required in order to use pre-operative images such as CT and MRI during the operation. Though several patient registration methods have been studied, we concentrate here on one that uses 3D surface measurement data. First, a hand-held 3D surface measurement device measures the surface of the patient, and then this data is matched with the CT or MRI data using optimization algorithms. However, the commonly used ICP algorithm is very slow without a proper initial location and also suffers from the local minimum problem. Usually this problem is solved by manually providing a proper initial location before running ICP, but this has the disadvantage that an experienced user must perform the procedure, and it takes a long time. In this paper, we propose a method that automatically and accurately finds a proper initial location. The proposed method finds the initial location for ICP by converting the 3D data into 2D curvature images and performing image matching. Curvature features are robust to rotation, translation, and even some deformation. The proposed method is also faster than traditional methods because it performs 2D image matching instead of 3D point cloud matching.
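
A minimal sketch of the kind of 2D curvature representation such an initial alignment relies on: mean (H) and Gaussian (K) curvature images computed from a depth map z = f(x, y) with the standard Monge-patch formulas. The synthetic hemispherical depth map is an illustrative stand-in for measured surface data; the paper's descriptor construction and image matching are not reproduced.

```python
# HK curvature images from a depth map via finite differences.
import numpy as np

def hk_curvature(depth):
    fy, fx = np.gradient(depth)                  # first derivatives (rows = y, columns = x)
    fxy, fxx = np.gradient(fx)                   # d(fx)/dy, d(fx)/dx
    fyy, _ = np.gradient(fy)                     # d(fy)/dy
    denom = 1.0 + fx**2 + fy**2
    K = (fxx * fyy - fxy**2) / denom**2          # Gaussian curvature
    H = ((1.0 + fx**2) * fyy - 2.0 * fx * fy * fxy
         + (1.0 + fy**2) * fxx) / (2.0 * denom**1.5)   # mean curvature
    return H, K

# Synthetic hemispherical surface patch as a stand-in for measured depth
y, x = np.mgrid[-1.0:1.0:256j, -1.0:1.0:256j]
depth = np.sqrt(np.clip(1.5**2 - x**2 - y**2, 0.0, None))
H, K = hk_curvature(depth)
print(H.shape, K.shape)
```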

Real-time 3D Volumetric Model Generation using Multiview RGB-D Camera (다시점 RGB-D 카메라를 이용한 실시간 3차원 체적 모델의 생성)

  • Kim, Kyung-Jin;Park, Byung-Seo;Kim, Dong-Wook;Kwon, Soon-Chul;Seo, Young-Ho
    • Journal of Broadcast Engineering
    • /
    • v.25 no.3
    • /
    • pp.439-448
    • /
    • 2020
  • In this paper, we propose a modified optimization algorithm for point cloud matching of multi-view RGB-D cameras. In the computer vision field, it is very important to accurately estimate the position of the camera. The 3D model generation methods proposed in previous research require a large number of cameras or expensive 3D cameras, and methods that obtain the external parameters of the camera from 2D images have large errors. In this paper, we propose a matching technique for generating a 3D point cloud and mesh model that can provide an omnidirectional free viewpoint using eight low-cost RGB-D cameras. The proposed method uses depth-map-based function optimization together with the RGB images and obtains coordinate transformation parameters that can generate a high-quality 3D model without requiring initial parameters.
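
A minimal sketch of the geometry underlying such a pipeline: back-project each camera's depth map to a 3D point cloud using its intrinsics, then bring all clouds into a common frame with per-camera extrinsics. The intrinsic values, the fake depth maps, and the identity extrinsics are placeholders; the paper's optimization of the transformation parameters is not reproduced here.

```python
# Back-project RGB-D depth maps and merge them into one coordinate system.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth map (metres) to an (H*W, 3) camera-frame point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    x = (u.ravel() - cx) * z / fx
    y = (v.ravel() - cy) * z / fy
    return np.stack([x, y, z], axis=1)

def to_world(points, R, t):
    """Apply a rigid extrinsic transform (R, t) to (N, 3) points."""
    return points @ R.T + t

# Merge clouds from eight cameras (here: fake depth maps and identity extrinsics)
merged = []
for cam in range(8):
    depth = np.full((480, 640), 1.5) + 0.01 * cam
    pts = depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
    merged.append(to_world(pts, np.eye(3), np.zeros(3)))
merged = np.vstack(merged)
print(merged.shape)
```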