• Title/Summary/Keyword: point cloud model

Search Results: 249

Point Cloud Registration Algorithm Based on RGB-D Camera for Shooting Volumetric Objects (체적형 객체 촬영을 위한 RGB-D 카메라 기반의 포인트 클라우드 정합 알고리즘)

  • Kim, Kyung-Jin; Park, Byung-Seo; Kim, Dong-Wook; Seo, Young-Ho
    • Journal of Broadcast Engineering, v.24 no.5, pp.765-774, 2019
  • In this paper, we propose a point cloud registration algorithm for multiple RGB-D cameras. In computer vision, precisely estimating camera position is a central problem. Existing 3D model generation methods require a large number of cameras or expensive 3D cameras, and the conventional approach of estimating camera extrinsic parameters from two-dimensional images suffers from large estimation errors. We propose a method that uses depth images and function optimization to obtain coordinate transformation parameters whose errors stay within a valid range, enabling an omnidirectional three-dimensional model to be generated from eight low-cost RGB-D cameras.
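
The abstract describes obtaining coordinate transformation parameters from depth data via function optimization. As a rough, illustrative sketch only (not the authors' implementation), the following estimates a rigid transform between two already-corresponded point sets by least-squares optimization over Euler angles and a translation; the parameterization and the synthetic data are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, src, dst):
    # params = [rx, ry, rz, tx, ty, tz]: Euler angles (rad) and translation
    R = Rotation.from_euler("xyz", params[:3]).as_matrix()
    return ((src @ R.T + params[3:]) - dst).ravel()

def estimate_rigid_transform(src, dst):
    """Estimate R, t minimizing the squared distances between corresponded
    point sets src and dst (both N x 3 arrays)."""
    result = least_squares(residuals, x0=np.zeros(6), args=(src, dst))
    R = Rotation.from_euler("xyz", result.x[:3]).as_matrix()
    return R, result.x[3:]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.uniform(-1.0, 1.0, (500, 3))                 # synthetic depth points
    R_true = Rotation.from_euler("xyz", [0.1, -0.2, 0.3]).as_matrix()
    dst = src @ R_true.T + np.array([0.5, -0.1, 2.0])      # rigidly moved copy
    R_est, t_est = estimate_rigid_transform(src, dst)
    print(np.round(t_est, 3))                              # approx. [0.5 -0.1 2.0]
```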

Long-term shape sensing of bridge girders using automated ROI extraction of LiDAR point clouds

  • Ganesh Kolappan Geetha; Sahyeon Lee; Junhwa Lee; Sung-Han Sim
    • Smart Structures and Systems, v.33 no.6, pp.399-414, 2024
  • This study discusses long-term deformation monitoring and shape sensing of bridge girder surfaces with an automated extraction scheme for point clouds in the Region Of Interest (ROI), invariant to the position of the Light Detection And Ranging (LiDAR) system. Advanced smart construction necessitates continuous monitoring of the deformation and shape of bridge girders during the construction phase. An automated scheme is proposed for reconstructing a geometric model of the ROI in the presence of a noisy, non-stationary background. The proposed scheme involves (i) denoising irrelevant background point clouds using dimensions from the design model, (ii) extracting the outer boundaries of the bridge girder by transforming and processing the point cloud data in a two-dimensional image space, (iii) extracting the topology of pre-defined targets using the modified Otsu method, (iv) registering the point clouds to a common reference frame or design coordinate system using extracted pre-defined targets placed outside the ROI, and (v) defining the bounding box in the point clouds using the corresponding dimensional information of the bridge girder and abutments from the design model. The surface-fitted, reconstructed geometric model in the ROI is superposed consistently over a long period to monitor bridge shape and derive deflection during the construction phase, which shows high correlation. The proposed scheme of combining 2D-3D processing with the design model overcomes the sensitivity of 3D point cloud registration to the initial match, which often leads to a local extremum.
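
Steps (ii) and (iii) above project the point cloud into a 2D image space and extract targets with a modified Otsu method. Purely as an illustration of the standard (unmodified) Otsu step, the sketch below rasterizes point intensities onto an XY grid and thresholds the result; the cell size and the use of intensity are assumptions, not details from the paper.

```python
import numpy as np
import cv2

def rasterize_intensity(points_xyz, intensity, cell=0.05):
    """Project 3D points onto the XY plane and average intensity per grid cell."""
    xy = points_xyz[:, :2]
    ij = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
    h, w = ij.max(axis=0) + 1
    acc, cnt = np.zeros((h, w)), np.zeros((h, w))
    np.add.at(acc, (ij[:, 0], ij[:, 1]), intensity)
    np.add.at(cnt, (ij[:, 0], ij[:, 1]), 1)
    img = np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)
    return cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def otsu_target_mask(image_u8):
    """Standard Otsu threshold separating bright (e.g., retro-reflective) targets
    from the background of the rasterized scan."""
    _, mask = cv2.threshold(image_u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask
```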

Variability-based Service Specification Method for Brokering Cloud Services (클라우드 서비스 중개를 위한 가변성 기반의 서비스 명세 기법)

  • An, Youngmin; Park, Joonseok; Yeom, Keunhyuk
    • KIISE Transactions on Computing Practices, v.20 no.12, pp.664-669, 2014
  • As the prevalence of cloud computing increases, various cloud service types have emerged, such as IaaS, PaaS, and SaaS. The growth and diversification of these cloud services has also resulted in the development of technology for cloud service brokers (CSBs), which serve as intermediaries that assist cloud tenants (users) in deploying services that fit their requirements. In order to broker cloud services, CSBs require the specification of structural models that facilitate the analysis of, and search for, cloud services. In this study, we propose a variability-based service analysis model (SAM) that can be used to describe various cloud services. This model is based on the concept of variability in software product lines and represents the commonality and variability of cloud services by binding variants to each variation point that exists in the specification, quality, and pricing of the services. We also propose a virtual cloud bank architecture as a CSB that serves as an intermediary providing tenants with appropriate cloud services based on the SAM.
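
To make the SAM concept concrete, here is a minimal, hypothetical encoding of commonality, variation points, and variants in code; all class and field names are invented for illustration and are not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Variant:
    name: str                          # e.g. "m5.large" or "pay-as-you-go"
    attributes: dict = field(default_factory=dict)

@dataclass
class VariationPoint:
    name: str                          # e.g. "instance_type", "pricing_model"
    category: str                      # "specification" | "quality" | "pricing"
    variants: list = field(default_factory=list)

@dataclass
class ServiceAnalysisModel:
    service: str
    commonality: dict = field(default_factory=dict)     # attributes shared by all offerings
    variation_points: list = field(default_factory=list)

    def bind(self, point_name, variant_name):
        """Bind one variant at a variation point, as a broker would when
        matching a tenant's requirements to a concrete offering."""
        for vp in self.variation_points:
            if vp.name == point_name:
                return next(v for v in vp.variants if v.name == variant_name)
        raise KeyError(point_name)
```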

CNN Based Human Activity Recognition System Using MIMO FMCW Radar (다중 입출력 FMCW 레이다를 활용한 합성곱 신경망 기반 사람 동작 인식 시스템)

  • Joon-sung Kim; Jae-yong Sim; Su-lim Jang; Seung-chan Lim; Yunho Jung
    • Journal of Advanced Navigation Technology, v.28 no.4, pp.428-435, 2024
  • In this paper, a human activity recognition (HAR) system based on multiple-input multiple-output frequency-modulated continuous-wave (MIMO FMCW) radar was designed and implemented. Using point cloud data from MIMO radar sensors has advantages in terms of privacy, safety, and accuracy. For the implementation of the HAR system, a customized neural network based on PointPillars and a depthwise separable convolutional neural network (DS-CNN) was developed. By processing high-resolution point cloud data through a lightweight network, high accuracy and efficiency were achieved: an accuracy of 98.27% at a computational complexity of 11.27 M multiply-accumulate operations (MACs). In addition, the developed neural network model was implemented on a Raspberry Pi embedded system, and it was confirmed that point cloud data can be processed at up to 8 fps.
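
The DS-CNN mentioned above builds on depthwise separable convolutions, which keep the multiply-accumulate count low. The block below is a generic PyTorch sketch of that building block, not the authors' network; the layer sizes in the example are arbitrary.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 convolution (one filter per channel) followed by a 1x1
    pointwise convolution; this factorization sharply reduces MACs versus a
    standard convolution at a small cost in accuracy."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Example: a 64-channel pseudo-image (e.g., a pillar feature map) of size 32x32
x = torch.randn(1, 64, 32, 32)
print(DepthwiseSeparableConv(64, 128)(x).shape)   # torch.Size([1, 128, 32, 32])
```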

Automatic Object Recognition in 3D Measuring Data (3차원 측정점으로부터의 객체 자동인식)

  • Ahn, Sung-Joon
    • The KIPS Transactions: Part B, v.16B no.1, pp.47-54, 2009
  • Automatic object recognition in 3D measuring data is of great interest in many application fields, e.g. computer vision, reverse engineering, and the digital factory. In this paper we present a software tool for fully automatic object detection and parameter estimation in unordered and noisy point clouds with a large number of data points. The software consists of three interactive modules for model selection, point segmentation, and model fitting, in which orthogonal distance fitting (ODF) plays an important role. The ODF algorithms estimate model parameters by minimizing the sum of squared shortest distances between the model feature and the measurement points. The local quadric surface fitted via ODF to a small, randomly selected initial patch of the point cloud provides the initial information needed for the overall procedure of model selection, point segmentation, and model fitting. The performance of the presented software tool is demonstrated by applying it to real point clouds.
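
As a small illustration of the orthogonal-distance idea (the paper's tool handles general quadrics, not just this case), the sketch below fits a sphere by minimizing the squared shortest distances from the points to the surface; the initial guess and test data are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def sphere_orthogonal_residuals(params, pts):
    # Shortest (orthogonal) distance from each point to the sphere surface:
    # | ||p - center|| - radius |
    center, radius = params[:3], params[3]
    return np.linalg.norm(pts - center, axis=1) - radius

def fit_sphere_odf(pts):
    """Estimate sphere center and radius by minimizing squared orthogonal distances."""
    x0 = np.concatenate([pts.mean(axis=0), [pts.std()]])   # crude initial guess
    res = least_squares(sphere_orthogonal_residuals, x0, args=(pts,))
    return res.x[:3], res.x[3]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    direc = rng.normal(size=(1000, 3))
    direc /= np.linalg.norm(direc, axis=1, keepdims=True)
    pts = np.array([1.0, 2.0, 3.0]) + 0.75 * direc + 0.01 * rng.normal(size=(1000, 3))
    center, radius = fit_sphere_odf(pts)
    print(np.round(center, 3), round(radius, 3))           # approx. [1 2 3], 0.75
```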

Comparison and Evaluation of Classification Accuracy for Pinus koraiensis and Larix kaempferi based on LiDAR Platforms and Deep Learning Models (라이다 플랫폼과 딥러닝 모델에 따른 잣나무와 낙엽송의 분류정확도 비교 및 평가)

  • Yong-Kyu Lee; Sang-Jin Lee; Jung-Soo Lee
    • Journal of Korean Society of Forest Science, v.112 no.2, pp.195-208, 2023
  • This study aimed to use three-dimensional point cloud data (PCD) obtained from Terrestrial Laser Scanning (TLS) and Mobile Laser Scanning (MLS) to evaluate deep learning-based species classification models for two tree species: Pinus koraiensis and Larix kaempferi. Sixteen models were constructed from three conditions: LiDAR platform (TLS and MLS), down-sampling intensity (1024, 2048, 4096, 8192), and deep learning model (PointNet, PointNet++). According to the classification accuracy evaluation, the highest kappa coefficients were 93.7% for TLS and 96.9% for MLS when the PointNet++ model was applied to the PCD, with down-sampling intensities of 8192 and 2048, respectively. Furthermore, PointNet++ was consistently more accurate than PointNet in all scenarios sharing the same platform and down-sampling intensity. Misclassification occurred among individuals of different species with structurally similar characteristics, among individual trees that exhibited eccentric growth due to their location on slopes or around trails, and among some individual trees whose crowns were vertically divided during tree segmentation.
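
Down-sampling each tree's point cloud to a fixed size is a routine preprocessing step for PointNet-style models. The sketch below shows one plain way to do it (random sampling plus unit-sphere normalization); it is an assumption for illustration, not the study's exact pipeline.

```python
import numpy as np

def downsample(points, n_points=2048, seed=0):
    """Randomly down-sample an (N, 3) point cloud to a fixed size; sampling with
    replacement covers clouds smaller than n_points."""
    rng = np.random.default_rng(seed)
    replace = points.shape[0] < n_points
    idx = rng.choice(points.shape[0], size=n_points, replace=replace)
    return points[idx]

def normalize_unit_sphere(points):
    """Center the cloud and scale it into the unit sphere, a common PointNet
    preprocessing convention."""
    centered = points - points.mean(axis=0)
    return centered / np.linalg.norm(centered, axis=1).max()
```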

Automatic 3D Object Digitizing and Its Accuracy Using Point Cloud Data (점군집 데이터에 의한 3차원 객체도화의 자동화와 정확도)

  • Yoo, Eun-Jin; Yun, Seong-Goo; Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.30 no.1, pp.1-10, 2012
  • Recent spatial information technology has brought innovative improvements in both efficiency and accuracy. In particular, the airborne LiDAR system (ALS) is one of the most practical sensors for obtaining 3D spatial information. Constructing a reliable 3D spatial data infrastructure is a worldwide issue, and most of the significant tasks involve modeling man-made objects. This study aims to create a test data set for developing automatic building modeling methods by simulating point cloud data. The data simulate various roof types, including gable, pyramid, dome, and combined polyhedron shapes. In this study, a robust bottom-up method for segmenting surface patches was proposed to generate building models automatically by determining the model key points of the objects. The results show that building roofs composed of the segmented patches could be modeled by appropriate mathematical functions and the model key points. Thus, 3D digitizing of man-made objects could be automated for digital mapping purposes.
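
One ingredient of such a bottom-up segmentation is fitting a plane to a local patch of roof points and checking how planar the patch is. A standard least-squares plane fit via SVD is sketched below as an illustration; the paper itself also covers non-planar roof shapes such as domes.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an (N, 3) patch of points.
    Returns the unit normal n and offset d such that n . x + d ~= 0."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value is the normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return normal, -normal @ centroid

def patch_planarity(points):
    """RMS orthogonal distance of the patch to its best-fit plane
    (small values indicate a planar roof face)."""
    normal, d = fit_plane(points)
    return np.sqrt(np.mean((points @ normal + d) ** 2))
```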

A Study on Point Cloud Generation Method from UAV Image Using Incremental Bundle Adjustment and Stereo Image Matching Technique (Incremental Bundle Adjustment와 스테레오 영상 정합 기법을 적용한 무인항공기 영상에서의 포인트 클라우드 생성방안 연구)

  • Rhee, Sooahm; Hwang, Yunhyuk; Kim, Soohyeon
    • Korean Journal of Remote Sensing, v.34 no.6_1, pp.941-951, 2018
  • Utilization of and demand for UAVs (unmanned aerial vehicles) in the generation of 3D city models are increasing. In this study, we performed an experiment to adjust the position/orientation of a UAV with incomplete attitude information and to extract point cloud data. To correct the attitude of the UAV, the rotation angle was calculated using the continuous position information of the UAV's movements. Based on this, corrected position/orientation information was obtained by applying photogrammetry-based IBA (Incremental Bundle Adjustment). Each image pair was transformed into epipolar images, and the MDR (Multi-Dimensional Relaxation) matching technique was applied to obtain a high-precision DSM. The extracted pairs are aggregated and output in the form of a single point cloud or DSM. Using DJI Inspire 1 and Phantom 4 images, we confirmed that point clouds clearly expressing building railings can be extracted. Future research will address improving the matching performance and establishing sensor models for oblique images, and image processing technology for 3D city model generation will be developed further through studies on 3D point cloud extraction.
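
The attitude-correction step above derives a rotation angle from the UAV's consecutive positions. A minimal sketch of that idea, assuming heading (yaw) is taken from the horizontal movement vector, is shown below; it is not the paper's full IBA pipeline.

```python
import numpy as np

def headings_from_track(positions):
    """Approximate per-frame heading (yaw, radians) from consecutive UAV positions,
    given an (N, 2) or (N, 3) array of easting/northing[/height]; the last heading
    is repeated so the output has one value per frame."""
    d = np.diff(positions[:, :2], axis=0)
    yaw = np.arctan2(d[:, 1], d[:, 0])
    return np.append(yaw, yaw[-1])

# Example: a UAV flying a straight line toward the north-east
track = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
print(np.degrees(headings_from_track(track)))   # [45. 45. 45.]
```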

Automatic Building Modeling Method Using Planar Analysis of Point Clouds from Unmanned Aerial Vehicles (무인항공기에서 생성된 포인트 클라우드의 평면성 분석을 통한 자동 건물 모델 생성 기법)

  • Kim, Han-gyeol; Hwang, YunHyuk; Rhee, Sooahm
    • Korean Journal of Remote Sensing, v.35 no.6_1, pp.973-985, 2019
  • In this paper, we propose a method to separate the ground and building areas and to generate building models automatically through planarity analysis of a UAV (Unmanned Aerial Vehicle) based point cloud. The proposed method consists of five steps. In the first step, planes are extracted by analyzing the planarity of the input point cloud. In the second step, the extracted planes are analyzed to find the plane corresponding to the ground surface, and the points belonging to it are removed from the point cloud. In the third step, an ortho-projected image is generated from the point cloud with the ground surface removed. In the fourth step, the outline of each object is extracted from the ortho-projected image, and non-building areas are removed using area and area-to-length-ratio criteria. Finally, the building outline points are constructed using the ground height and the building height, and 3D building models are created. To verify the proposed method, we used point clouds generated from UAV images. Through experiments, we confirmed that 3D models of the buildings were generated automatically.
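
For the fourth step, outlines can be filtered with simple geometric criteria. The sketch below keeps only contours that pass area and area-to-perimeter-ratio thresholds, analogous to the area and area/length filtering described above; the threshold values and the use of OpenCV contours are assumptions for illustration.

```python
import cv2

def building_outlines(mask_u8, min_area=500.0, min_area_per_length=2.0):
    """Keep only outlines large and compact enough to be buildings.
    mask_u8: binary ortho-projected object mask; thresholds are illustrative."""
    contours, _ = cv2.findContours(mask_u8, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    kept = []
    for c in contours:
        area = cv2.contourArea(c)
        length = cv2.arcLength(c, True)   # closed-contour perimeter
        if area >= min_area and length > 0 and area / length >= min_area_per_length:
            kept.append(c)
    return kept
```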

Real-time 3D Volumetric Model Generation using Multiview RGB-D Camera (다시점 RGB-D 카메라를 이용한 실시간 3차원 체적 모델의 생성)

  • Kim, Kyung-Jin; Park, Byung-Seo; Kim, Dong-Wook; Kwon, Soon-Chul; Seo, Young-Ho
    • Journal of Broadcast Engineering, v.25 no.3, pp.439-448, 2020
  • In this paper, we propose a modified optimization algorithm for point cloud registration of multi-view RGB-D cameras. In computer vision, it is very important to accurately estimate the position of the camera. The 3D model generation methods proposed in previous research require a large number of cameras or expensive 3D cameras, and methods that obtain the camera's extrinsic parameters from 2D images have large errors. In this paper, we propose a registration technique for generating a 3D point cloud and mesh model that can provide an omnidirectional free viewpoint using eight low-cost RGB-D cameras. The method applies depth-map-based function optimization together with RGB images and obtains coordinate transformation parameters that yield a high-quality 3D model without requiring initial parameters.
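
The abstract mentions producing both a point cloud and a mesh model. As one common way to go from a merged multi-camera cloud to a mesh (not necessarily the authors' choice), the sketch below uses Open3D's Poisson surface reconstruction; the normal-estimation radius and octree depth are assumptions.

```python
import open3d as o3d  # assumed available; any surface-reconstruction library would do

def mesh_from_merged_cloud(points_xyz, depth=8):
    """Build a triangle mesh from a merged (N, 3) point cloud via Poisson
    surface reconstruction."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points_xyz)
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
    mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=depth)
    return mesh
```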