• Title/Summary/Keyword: Image-Based Point Cloud

Extraction and Utilization of DEM based on UAV Photogrammetry for Flood Trace Investigation and Flood Prediction (침수흔적조사를 위한 UAV 사진측량 기반 DEM의 추출 및 활용)

  • Jung-Sik PARK;Yong-Jin CHOI;Jin-Duk LEE
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.26 no.4
    • /
    • pp.237-250
    • /
    • 2023
  • Orthophotos and DEMs were generated by UAV-based aerial photogrammetry, and an attempt was made to apply them to the detailed investigation required for producing flood trace maps. The cultivated area in Goa-eup, Gumi, where an embankment collapse and inundation occurred due to Typhoon Sanba in 2012, was selected as the target area. To obtain optimal accuracy from the UAV photogrammetry, the UAV images were taken with an optimal placement of 19 GCPs, and a point cloud, DEM, and orthoimages were then generated through image processing with the Pix4Dmapper software. After applying CloudCompare's CSF filtering to separate the point cloud into ground and non-ground elements, a final corrected DEM was created from the ground elements alone in the GRASS GIS software. The flood level and flood depth data extracted from the final DEM were compared with the corresponding 2012 data provided through the public data portal of the Korea Land and Geospatial InformatiX Corporation (LX).
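The final extraction step described above can be illustrated with a deliberately simplified sketch (not the paper's actual workflow): flood depth at each DEM cell is the surveyed flood level minus the ground elevation, clipped at zero. All elevations and the flood level below are hypothetical:

```python
# Hypothetical ground elevations (m) for a small DEM patch,
# standing in for a DEM extracted by UAV photogrammetry.
dem = [
    [23.1, 23.4, 23.9],
    [22.8, 23.0, 23.5],
    [22.5, 22.7, 23.2],
]

FLOOD_LEVEL = 23.3  # surveyed flood trace level (m), hypothetical

def flood_depth(dem, flood_level):
    """Per-cell flood depth: positive where the cell lies below the flood level."""
    return [[round(max(flood_level - z, 0.0), 2) for z in row] for row in dem]

depths = flood_depth(dem, FLOOD_LEVEL)
```

Cells above the flood level report a depth of zero, so the same grid directly delineates the inundated extent.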

Semi-automatic Extraction of 3D Building Boundary Using DSM from Stereo Images Matching (영상 매칭으로 생성된 DSM을 이용한 반자동 3차원 건물 외곽선 추출 기법 개발)

  • Kim, Soohyeon;Rhee, Sooahm
    • Korean Journal of Remote Sensing
    • /
    • v.34 no.6_1
    • /
    • pp.1067-1087
    • /
    • 2018
  • In studies of LiDAR-based building boundary extraction, a dense point cloud is usually used to cluster the building rooftop area and extract the building outline. However, when a DSM generated from stereo image matching is used to extract building boundaries, clustering the rooftop area automatically is not trivial because of outliers and large holes in the point cloud. We therefore propose a technique for extracting building boundaries semi-automatically from a DSM created from stereo images. The technique consists of watershed segmentation, which uses the user input as markers, and a recursive MBR algorithm. Since the proposed method takes only simple marker information indicating building areas within the DSM as input, it can create building boundaries efficiently while minimizing user input.
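The marker-driven idea can be sketched with a much simpler stand-in for watershed segmentation: grow a region outward from a user-supplied marker cell over DSM cells whose height stays close to the marked roof height. All heights, names, and the tolerance below are illustrative, not the paper's algorithm:

```python
from collections import deque

def grow_roof_region(dsm, marker, height_tol=0.5):
    """Flood-fill from a user marker, keeping 4-connected cells whose
    height is within height_tol of the marker cell's height."""
    rows, cols = len(dsm), len(dsm[0])
    r0, c0 = marker
    roof_h = dsm[r0][c0]
    region, queue = {marker}, deque([marker])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(dsm[nr][nc] - roof_h) <= height_tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

# A flat ~10 m roof surrounded by ~2 m ground (hypothetical heights).
dsm = [
    [2.0, 2.1, 2.0, 2.2],
    [2.0, 10.0, 10.1, 2.1],
    [2.1, 10.0, 10.0, 2.0],
    [2.0, 2.0, 2.1, 2.0],
]
roof = grow_roof_region(dsm, (1, 1))
```

The recovered cell set is what a boundary-tracing step such as the recursive MBR algorithm would then turn into an outline.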

Elevator Recognition and Position Estimation based on RGB-D Sensor for Safe Elevator Boarding (이동로봇의 안전한 엘리베이터 탑승을 위한 RGB-D 센서 기반의 엘리베이터 인식 및 위치추정)

  • Jang, Min-Gyung;Jo, Hyun-Jun;Song, Jae-Bok
    • The Journal of Korea Robotics Society
    • /
    • v.15 no.1
    • /
    • pp.70-76
    • /
    • 2020
  • Multi-floor navigation of a mobile robot requires technology that allows the robot to get on and off an elevator safely. In this study, we therefore propose a method of recognizing the elevator from the robot's current position and estimating its location locally, so that the robot can board safely regardless of the position error accumulated during autonomous navigation. The proposed method uses a deep-learning-based image classifier to identify the elevator in the image obtained from the RGB-D sensor, and extracts the boundary points between the elevator and the surrounding wall from the point cloud. This enables the robot to estimate a reliable position and boarding direction in real time for general elevators. Various experiments demonstrate the effectiveness and accuracy of the proposed method.
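The boundary-point idea can be illustrated in a deliberately simplified 1D form (with made-up numbers): scanning a row of depth values and flagging large jumps, which occur where the open elevator doorway meets the surrounding wall:

```python
def depth_jumps(depths, threshold=0.5):
    """Indices i where |depths[i+1] - depths[i]| exceeds the threshold,
    i.e. candidate boundary points between wall and doorway."""
    return [i for i in range(len(depths) - 1)
            if abs(depths[i + 1] - depths[i]) > threshold]

# Hypothetical depths (m) along one image row: wall at ~1.5 m,
# open elevator car interior at ~3.0 m between the door posts.
row = [1.5, 1.5, 1.6, 3.0, 3.1, 3.0, 1.6, 1.5]
edges = depth_jumps(row)
```

The two flagged indices bracket the doorway; their midpoint gives a boarding direction independent of accumulated odometry error.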

3D Modeling Product Design Process Based on Photo Scanning Technology (포토 스캐닝 기술을 기반으로 한 3D 모델링 제품디자인 프로세스에 관한 연구)

  • Lee, Junsang
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.22 no.11
    • /
    • pp.1505-1510
    • /
    • 2018
  • Product modeling technology for graphics is developing rapidly, and the application and usability of 3D data are increasing. Modeling is a very important factor in product design, but 3D modeling takes a great deal of production time. Recently, the reverse-design method has become very useful because it applies existing 3D data and shortens production time. In this study, first, a 3D point cloud and mesh data are generated from photographs based on image data; second, the design is modified; and third, a prototype is made with a 3D printer. This design and production process demonstrates the utility of image data, the shortening of 3D modeling production time, and an efficient workflow. The process also proposes a model for a new product-development system adapted to the production environment.

Real-time 3D Volumetric Model Generation using Multiview RGB-D Camera (다시점 RGB-D 카메라를 이용한 실시간 3차원 체적 모델의 생성)

  • Kim, Kyung-Jin;Park, Byung-Seo;Kim, Dong-Wook;Kwon, Soon-Chul;Seo, Young-Ho
    • Journal of Broadcast Engineering
    • /
    • v.25 no.3
    • /
    • pp.439-448
    • /
    • 2020
  • In this paper, we propose a modified optimization algorithm for point cloud matching of multi-view RGB-D cameras. In the computer vision field, it is very important to estimate the position of the camera accurately. The 3D model generation methods proposed in previous research require a large number of cameras or expensive 3D cameras, and methods that obtain the external parameters of the camera from 2D images have large errors. In this paper, we propose a matching technique for generating a 3D point cloud and mesh model that can provide an omnidirectional free viewpoint using eight low-cost RGB-D cameras. The method applies depth-map-based function optimization together with the RGB images and obtains coordinate transformation parameters that can generate a high-quality 3D model without requiring initial parameters.

Automatic Classification of Bridge Component based on Deep Learning (딥러닝 기반 교량 구성요소 자동 분류)

  • Lee, Jae Hyuk;Park, Jeong Jun;Yoon, Hyungchul
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.40 no.2
    • /
    • pp.239-245
    • /
    • 2020
  • Recently, BIM (Building Information Modeling) has been widely utilized in the construction industry. However, most structures constructed in the past do not have BIM. For such structures, applying SfM (Structure from Motion) techniques to 2D images obtained from a camera allows 3D point cloud data to be generated and BIM to be established. However, since the generated point cloud data contain no semantic information, it is necessary to classify manually which elements of the structure they represent. In this study, deep learning was therefore applied to automate the classification of structural components. Inception-ResNet-v2, a CNN (Convolutional Neural Network) architecture, was used as the deep learning network, and the components of bridge structures were learned through transfer learning. When components were classified using data collected to verify the developed system, the bridge components were classified with an accuracy of 96.13 %.

Multi-facet 3D Scanner Based on Stripe Laser Light Image (선형 레이저 광 영상기반 다면 3 차원 스캐너)

  • Ko, Young-Jun;Yi, Soo-Yeong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.22 no.10
    • /
    • pp.811-816
    • /
    • 2016
  • In light of recently developed 3D printers for rapid prototyping, the 3D scanner is receiving increasing attention as a 3D data acquisition system for existing objects. This paper presents a prototypical 3D scanner based on stripe laser light imaging. In order to address the problem of shadowed areas, the proposed 3D scanner has two cameras with one laser light source. By using a horizontal rotation table and a rotational arm rotating about the latitudinal axis, the scanner is able to scan in all directions. To avoid an additional optical filter for extracting the laser light pixels from an image, we adopted a differential image method with laser light modulation. Experimental results show that the scanner's 3D data acquisition exhibits less than 0.2 mm of measurement error. The scanner thus demonstrates that an object's 3D surface can be reconstructed from point cloud data, enabling reproduction of the object with a commercially available 3D printer.
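The differential image method mentioned above can be sketched as follows: with the laser modulated on and off between frames, subtracting the laser-off frame from the laser-on frame leaves only the stripe pixels, so no optical filter is needed. The grayscale values and threshold below are made up:

```python
def laser_stripe_mask(frame_on, frame_off, threshold=50):
    """Per-pixel absolute difference between laser-on and laser-off frames;
    pixels whose intensity changes more than the threshold form the stripe."""
    return [[1 if abs(a - b) > threshold else 0
             for a, b in zip(row_on, row_off)]
            for row_on, row_off in zip(frame_on, frame_off)]

# Hypothetical 3x4 grayscale frames: the laser stripe brightens column 2.
frame_off = [[10, 12, 11, 10],
             [11, 10, 12, 11],
             [10, 11, 10, 12]]
frame_on  = [[10, 12, 200, 10],
             [11, 10, 210, 11],
             [10, 11, 205, 12]]
mask = laser_stripe_mask(frame_on, frame_off)
```

The resulting per-row stripe positions are what the triangulation geometry then converts into 3D surface points.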

Accuracy Analysis of Satellite Imagery in Road Construction Site Using UAV (도로 토목 공사 현장에서 UAV를 활용한 위성 영상 지도의 정확도 분석)

  • Shin, Seung-Min;Ban, Chang-Woo
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.24 no.6_2
    • /
    • pp.753-762
    • /
    • 2021
  • Google provides mapping services using satellite imagery, and these are widely used in research. For roughly the last 20 years, research and business using drones have been expanding, and Pix4D is widely used to create 3D information models from drone imagery. This study compared distance errors between results for a road construction site derived from Google Earth and from Pix4D DSM data, in order to gauge the reliability of distance measurements made in Google Earth. A DTM with a resolution of 3.08 cm/pixel was obtained by matching 49,666 key points per image. Lengths and altitudes in Pix4D and Google Earth were measured and compared using the obtained point cloud data (PCD). The average distance error relative to the Pix4D data was 0.68 m, a comparatively small error. In contrast, comparing altitudes between Google Earth and Pix4D showed a maximum error of 83.214 m for the satellite-image-based values, a large and clearly inaccurate result. This confirms that analyzing and acquiring data for road construction sites with Google Earth is difficult, and that point cloud data acquired with drones is necessary.
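The distance comparison above amounts to a mean absolute error over corresponding measured segments; a minimal sketch, with all lengths invented for illustration:

```python
def mean_abs_error(reference, measured):
    """Mean absolute difference between reference and measured lengths (m)."""
    return sum(abs(r - m) for r, m in zip(reference, measured)) / len(reference)

# Hypothetical segment lengths (m): Pix4D taken as reference vs Google Earth.
pix4d = [120.4, 86.2, 45.9]
google = [121.3, 85.1, 46.5]
err = mean_abs_error(pix4d, google)
```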

Vision and Lidar Sensor Fusion for VRU Classification and Tracking in the Urban Environment (카메라-라이다 센서 융합을 통한 VRU 분류 및 추적 알고리즘 개발)

  • Kim, Yujin;Lee, Hojun;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association
    • /
    • v.13 no.4
    • /
    • pp.7-13
    • /
    • 2021
  • This paper presents a vulnerable road user (VRU) classification and tracking algorithm using a vision and LiDAR sensor fusion method for urban autonomous driving. Classification and tracking of vulnerable road users such as pedestrians, bicycles, and motorcycles are essential for autonomous driving in complex urban environments. In this paper, a real-time image object detection algorithm, YOLO, and an object tracking algorithm operating on the LiDAR point cloud are fused at a high level. The proposed algorithm consists of four parts. First, the object bounding boxes in pixel coordinates obtained from YOLO are transformed into the local coordinates of the subject vehicle using a homography matrix. Second, the LiDAR point cloud is clustered based on Euclidean distance and the clusters are associated using GNN. In addition, the states of the clusters, including position, heading angle, velocity, and acceleration, are estimated in real time using a geometric model-free approach (GMFA). Finally, each LiDAR track is matched with a vision track using the angle information of the transformed vision track and assigned a classification ID. The proposed fusion algorithm is evaluated via real-vehicle tests in an urban environment.
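The Euclidean clustering step can be sketched as a minimal single-linkage grouping of 2D LiDAR points; the coordinates and distance threshold below are illustrative only, not the paper's parameters:

```python
import math

def euclidean_cluster(points, max_dist=1.0):
    """Group points so that any point within max_dist of a cluster member
    joins that cluster (simple single-linkage region growing)."""
    clusters, unvisited = [], list(range(len(points)))
    while unvisited:
        seed = unvisited.pop(0)
        cluster, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            near = [j for j in unvisited
                    if math.dist(points[i], points[j]) <= max_dist]
            for j in near:
                unvisited.remove(j)
            cluster.extend(near)
            frontier.extend(near)
        clusters.append(sorted(cluster))
    return clusters

# Two hypothetical objects: a pedestrian near (0,0), a cyclist near (5,5).
pts = [(0.0, 0.0), (0.3, 0.2), (0.1, 0.5), (5.0, 5.0), (5.4, 5.1)]
groups = euclidean_cluster(pts)
```

Each resulting group would then be handed to the GMFA state estimator and matched against a vision track by angle.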

Multiple Camera Calibration for Panoramic 3D Virtual Environment (파노라믹 3D가상 환경 생성을 위한 다수의 카메라 캘리브레이션)

  • Kim, Sehwan;Kim, Kiyoung;Woo, Woontack
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.41 no.2
    • /
    • pp.137-148
    • /
    • 2004
  • In this paper, we propose a new camera calibration method for rotating multi-view cameras to generate an image-based panoramic 3D virtual environment. Since calibration accuracy worsens as the distance between the camera and the calibration pattern increases, conventional camera calibration algorithms are not suitable for panoramic 3D VE generation. To remedy this, a geometric relationship among all lenses of a multi-view camera is used for intra-camera calibration, and another geometric relationship among the multiple cameras is used for inter-camera calibration. First, camera parameters for all lenses of each multi-view camera are obtained by applying Tsai's algorithm. In intra-camera calibration, the extrinsic parameters are then compensated by iteratively reducing the discrepancy between estimated and actual distances, where the estimated distances are calculated from the extrinsic parameters of every lens. Inter-camera calibration arranges the multiple cameras in a geometric relationship by applying the Iterative Closest Point (ICP) algorithm to back-projected 3D point clouds. Finally, by repeatedly applying intra- and inter-camera calibration to all lenses of the rotating multi-view cameras, we obtain improved extrinsic parameters at every rotated position for middle-range distances. Consequently, the proposed method can be applied to the stitching of 3D point clouds for panoramic 3D VE generation, and may also be adopted in various 3D AR applications.
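The alignment primitive inside one ICP iteration can be sketched in 2D: given corresponding point pairs, the best rigid rotation and translation follow in closed form from centroid subtraction and an angle estimate. This toy version assumes correspondences are already known, which real ICP must establish by nearest-neighbor search; all point values are invented:

```python
import math

def rigid_align_2d(src, dst):
    """Closed-form 2D rigid transform (rotation angle + translation)
    minimizing squared error between corresponding point pairs."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    # Accumulate cross- and dot-products of centered point pairs.
    s_cross = s_dot = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy
        bx, by = dx - cdx, dy - cdy
        s_cross += ax * by - ay * bx
        s_dot += ax * bx + ay * by
    theta = math.atan2(s_cross, s_dot)
    tx = cdx - (csx * math.cos(theta) - csy * math.sin(theta))
    ty = cdy - (csx * math.sin(theta) + csy * math.cos(theta))
    return theta, tx, ty

# Hypothetical cloud rotated by 90 degrees and shifted by (1, 2).
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
dst = [(1.0, 2.0), (1.0, 3.0), (0.0, 2.0)]
theta, tx, ty = rigid_align_2d(src, dst)
```

Iterating this estimate with re-computed correspondences is the ICP loop the calibration applies to the back-projected point clouds.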