• Title/Abstract/Keyword: 3D Point clouds

117 results found (processing time: 0.024 s)

Pointwise CNN for 3D Object Classification on Point Cloud

  • Song, Wei;Liu, Zishu;Tian, Yifei;Fong, Simon
    • Journal of Information Processing Systems / Vol. 17, No. 4 / pp.787-800 / 2021
  • Three-dimensional (3D) object classification tasks using point clouds are widely used in 3D modeling, face recognition, and robotic missions. However, processing raw point clouds directly is problematic for a traditional convolutional network due to the irregular data format of point clouds. This paper proposes a pointwise convolutional neural network (CNN) structure that can process point cloud data directly without preprocessing. First, a 2D convolutional layer is introduced to perceive the coordinate information of each point. Then, multiple 2D convolutional layers and a global max pooling layer are applied to extract global features. Finally, based on the extracted features, fully connected layers predict the class labels of objects. We evaluated the proposed pointwise CNN structure on the ModelNet10 dataset, where it obtained higher accuracy than the existing methods. Experiments on ModelNet10 also show that differences in the number of points per cloud do not significantly influence the proposed pointwise CNN structure.
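
The pipeline described in this abstract (a shared per-point transform, global max pooling over all points, then fully connected classification) can be sketched in a few lines of NumPy. This is a hypothetical toy forward pass with random weights, not the authors' trained architecture; the layer sizes and the `classify_cloud` helper are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pointwise_features(points, W, b):
    """Shared per-point transform (the 1x1 'pointwise' convolution):
    the same weights are applied to every (x, y, z) point independently."""
    return np.maximum(points @ W + b, 0.0)  # ReLU

def classify_cloud(points, W1, b1, W2, b2):
    h = pointwise_features(points, W1, b1)   # (N, C) per-point features
    g = h.max(axis=0)                        # global max pooling -> (C,)
    logits = g @ W2 + b2                     # fully connected classifier
    return logits.argmax()

# Toy example: 1024 points, 3 -> 64 features, 10 ModelNet10 classes.
pts = rng.normal(size=(1024, 3))
W1, b1 = rng.normal(size=(3, 64)) * 0.1, np.zeros(64)
W2, b2 = rng.normal(size=(64, 10)) * 0.1, np.zeros(10)
label = classify_cloud(pts, W1, b1, W2, b2)

# Max pooling makes the prediction invariant to point order:
perm = rng.permutation(len(pts))
assert classify_cloud(pts[perm], W1, b1, W2, b2) == label
```

Because the max pooling reduces over the point dimension, the same weights handle clouds of any size and any point ordering, which is consistent with the abstract's observation that the point count has little influence on the result.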

Accuracy Comparison Between Image-based 3D Reconstruction Technique and Terrestrial LiDAR for As-built BIM of Outdoor Structures

  • Lee, Jisang;Hong, Seunghwan;Cho, Hanjin;Park, Ilsuk;Cho, Hyoungsig;Sohn, Hong-Gyoo
    • 한국측량학회지 / Vol. 33, No. 6 / pp.557-567 / 2015
  • With the increasing demand for 3D spatial information in urban environments, the importance of point cloud generation techniques has grown. In particular, as-built BIM requires point clouds of high accuracy and density to describe the detailed information of building components. Since terrestrial LiDAR offers high accuracy and point density, it has been widely used for as-built 3D modelling. However, the high cost of the devices is an obstacle to general use, and the image-based 3D reconstruction technique is attracting attention as an alternative. This paper compares the image-based 3D reconstruction technique and terrestrial LiDAR for establishing as-built BIM of outdoor structures. The point clouds generated by the image-based 3D reconstruction technique could roughly represent the 3D shape of a building, but could not precisely express details such as windows, doors, and the roof. The RMSE between the terrestrial LiDAR scanning data and the point clouds generated from smartphone and DSLR camera images was 13.2~28.9 cm. In conclusion, the results demonstrate that image-based 3D reconstruction can be used for drawing building footprints and wireframes, while terrestrial LiDAR is suitable for detailed 3D outdoor modeling.
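
A discrepancy figure like the 13.2~28.9 cm RMSE quoted above is typically computed from nearest-neighbour distances between the two clouds. A minimal brute-force sketch, assuming the clouds are already registered in the same frame (real pipelines use a KD-tree and outlier filtering; `cloud_rmse` is an illustrative helper, not the paper's exact procedure):

```python
import numpy as np

def cloud_rmse(reference, test):
    """RMSE of each test point to its nearest reference point
    (brute force; O(N*M) pairwise distances)."""
    d2 = ((test[:, None, :] - reference[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.min(axis=1)           # squared distance to closest ref point
    return float(np.sqrt(nearest.mean()))

ref = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
tst = ref + 0.1                        # uniform 10 cm offset per axis
print(round(cloud_rmse(ref, tst), 3))  # → 0.173 (= sqrt(3) * 0.1)
```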

생성적 적대 신경망 기반 3차원 포인트 클라우드 향상 기법 (3D Point Cloud Enhancement based on Generative Adversarial Network)

  • Moon, HyungDo;Kang, Hoonjong;Jo, Dongsik
    • 한국정보통신학회논문지 / Vol. 25, No. 10 / pp.1452-1455 / 2021
  • Recently, point clouds have been generated by capturing real spaces in 3D, and they are actively applied in services for performances, exhibitions, education, and training. These point cloud data require post-correction before use in virtual environments, due to errors caused by the capture environment of sensors and cameras. In this paper, we propose an enhancement technique for 3D point cloud data that applies a generative adversarial network (GAN): we regenerate the point clouds by feeding them as input to the GAN. Through the presented method, a point cloud with heavy noise is reshaped to match the real object and environment, enabling precise interaction with the reconstructed content.

ASPPMVSNet: A high-receptive-field multiview stereo network for dense three-dimensional reconstruction

  • Saleh Saeed;Sungjun Lee;Yongju Cho;Unsang Park
    • ETRI Journal / Vol. 44, No. 6 / pp.1034-1046 / 2022
  • The learning-based multiview stereo (MVS) methods for three-dimensional (3D) reconstruction generally use 3D volumes for depth inference. The quality of the reconstructed depth maps and the corresponding point clouds is directly influenced by the spatial resolution of the 3D volume. Consequently, these methods produce point clouds with sparse local regions because they lack the memory required to encode a high volume of information. Here, we apply the atrous spatial pyramid pooling (ASPP) module to MVS methods to obtain dense feature maps with multiscale, long-range contextual information using high receptive fields. For a given 3D volume with the same spatial resolution as in the MVS methods, the dense feature maps from the ASPP module, encoded with superior information, can produce dense point clouds without a high memory footprint. Furthermore, we propose a 3D loss for training the MVS networks, which improves the predicted depth values by 24.44%. The ASPP module provides state-of-the-art qualitative results by constructing relatively dense point clouds, improving the DTU MVS dataset benchmarks by 2.25% over the previous MVS methods.
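
The core of ASPP is atrous (dilated) convolution: the same kernel is applied at several dilation rates in parallel, so a fixed parameter count covers a much larger receptive field, and the multi-rate responses are stacked into one feature map. A minimal 1D NumPy sketch of the idea (the `aspp_1d` helper and the rates are illustrative assumptions, not the paper's exact module):

```python
import numpy as np

def dilated_conv1d(x, kernel, rate):
    """'Atrous' convolution: kernel taps are spaced `rate` samples apart,
    enlarging the receptive field without adding parameters."""
    k, n = len(kernel), len(x)
    span = (k - 1) * rate
    out = np.zeros(n - span)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * rate] for j in range(k))
    return out

def aspp_1d(x, kernel, rates=(1, 2, 4)):
    """ASPP idea: run the same kernel at several dilation rates in
    parallel and stack the cropped, aligned responses."""
    outs = [dilated_conv1d(x, kernel, r) for r in rates]
    m = min(len(o) for o in outs)
    return np.stack([o[:m] for o in outs])   # (len(rates), m) feature map

x = np.arange(16, dtype=float)
feats = aspp_1d(x, np.array([1.0, -1.0]), rates=(1, 2, 4))
print(feats.shape)   # → (3, 12): one response row per dilation rate
```

On the linear ramp input, the difference kernel at rate r responds with -r everywhere, showing how each rate "sees" a different spatial scale of the same signal.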

Dense Thermal 3D Point Cloud Generation of Building Envelope by Drone-based Photogrammetry

  • Jo, Hyeon Jeong;Jang, Yeong Jae;Lee, Jae Wang;Oh, Jae Hong
    • 한국측량학회지 / Vol. 39, No. 2 / pp.73-79 / 2021
  • Recently, there has been growing interest in energy conservation and emission reduction. In the fields of architecture and civil engineering, energy monitoring of structures is required to respond to these energy issues. For thermal monitoring, thermal images are popular for their rich visual information. With the rapid development of drone platforms, aerial thermal images acquired by drone can be used to monitor not only a part of a structure but a wider coverage. In addition, the stereo photogrammetric process is expected to generate a 3D point cloud with thermal information. However, thermal images have very poor resolution and a narrow field of view, which limits the use of drone-based thermal photogrammetry. In this study, we aimed to generate a 3D thermal point cloud using visible and thermal images together. The visible images have high spatial resolution and can generate precise and dense point clouds; we then extract thermal information from the thermal images and assign it to the point cloud by precisely establishing photogrammetric collinearity between the point cloud and the thermal images. In the experiment, we successfully generated a dense 3D thermal point cloud showing the 3D thermal distribution over the building structure.
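
Assigning thermal values by collinearity, as described above, amounts to projecting each 3D point into the thermal image with a pinhole/collinearity model and sampling the pixel it lands on. A minimal sketch under assumed intrinsics `K` and pose `(R, t)`; nearest-neighbour sampling, no occlusion or lens-distortion handling, so this is a simplified stand-in for the paper's photogrammetric pipeline:

```python
import numpy as np

def project_points(points, K, R, t):
    """Collinearity/pinhole projection of world points into image pixels.
    K: 3x3 intrinsics; R, t: world -> thermal-camera pose."""
    cam = (R @ points.T).T + t           # points in the camera frame
    uvw = (K @ cam.T).T
    return uvw[:, :2] / uvw[:, 2:3]      # pixel coordinates (u, v)

def colorize_with_thermal(points, thermal, K, R, t):
    """Assign each 3D point the temperature of the pixel it projects to;
    points projecting outside the image get NaN."""
    uv = np.rint(project_points(points, K, R, t)).astype(int)
    h, w = thermal.shape
    temps = np.full(len(points), np.nan)
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    temps[ok] = thermal[uv[ok, 1], uv[ok, 0]]
    return temps

# Toy setup: identity pose, simple intrinsics, a 4x4 "thermal image".
K = np.array([[2.0, 0, 2], [0, 2.0, 2], [0, 0, 1]])
pts = np.array([[0.0, 0, 1], [0.5, 0.5, 1], [10.0, 0, 1]])
img = np.arange(16, dtype=float).reshape(4, 4)
temps = colorize_with_thermal(pts, img, K, np.eye(3), np.zeros(3))
```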

포인트 클라우드 자료의 도심지 Geo-Referencing 방안 연구 (Research on Geo-Referencing Methodology of Point Clouds Data in Urban Area)

  • 조형식;손홍규;한수희;황새미나
    • 한국측량학회:학술대회논문집 / 2010 Spring Conference Proceedings / pp.285-287 / 2010
  • The necessity of 3D spatial information models in urban areas has recently grown, and the use of terrestrial LiDAR for building them has increased accordingly. Point clouds acquired by terrestrial LiDAR have relative coordinates, and GPS surveying is carried out to transform them into absolute coordinates. However, geo-referencing point clouds using GPS is difficult in urban areas because of tall buildings and facilities. This study suggests a methodology for geo-referencing point clouds acquired by terrestrial LiDAR in urban areas and verifies its accuracy. To geo-reference the point clouds acquired at the Engineering building of Yonsei Univ., control points were set out through GPS surveying to obtain absolute coordinates of the real building. Using these coordinates, the point clouds were geo-referenced, and the accuracy was verified between check points and the geo-referenced point clouds. As a result, the RMSE at the check points from the GPS surveying was 6.9~8.0 cm.
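
Geo-referencing as described, i.e. transforming relative scanner coordinates into absolute coordinates using GPS-surveyed control points, is commonly solved as a rigid-body least-squares fit (the Kabsch algorithm). A minimal sketch with synthetic control points; the rotation, translation, and point layout are illustrative, not the paper's survey data:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ≈ src @ R.T + t
    (Kabsch algorithm on matched control points, rows are points)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflection
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

# Control points in the scanner's relative frame and their GPS-surveyed
# absolute coordinates (synthetic: a known rotation + translation).
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([100.0, 200.0, 5.0])
rel = np.array([[0.0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]])
abs_pts = rel @ R_true.T + t_true

R, t = rigid_transform(rel, abs_pts)
georef = rel @ R.T + t                   # geo-referenced point cloud
rmse = np.sqrt(((georef - abs_pts) ** 2).sum(axis=1).mean())
```

In practice the same recovered `(R, t)` is then applied to the full point cloud, and the residuals at independent check points give the RMSE the paper reports.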

Reconstruction of polygonal prisms from point-clouds of engineering facilities

  • Chida, Akisato;Masuda, Hiroshi
    • Journal of Computational Design and Engineering / Vol. 3, No. 4 / pp.322-329 / 2016
  • The advent of high-performance terrestrial laser scanners has made it possible to capture dense point-clouds of engineering facilities. 3D shape acquisition from engineering facilities is useful for supporting maintenance and repair tasks. In this paper, we discuss methods to reconstruct box shapes and polygonal prisms from large-scale point-clouds. Since many faces may be partly occluded by other objects in engineering plants, we estimate possible box shapes and polygonal prisms and verify their compatibility with measured point-clouds. We evaluate our method using actual point-clouds of engineering plants.
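
A basic building block for estimating the faces of such box shapes and polygonal prisms is least-squares plane fitting on a point subset, e.g. via the covariance eigenvectors. A minimal sketch on a synthetic noisy plane (the full method must additionally hypothesize prisms from several fitted faces and verify their compatibility with the measured points):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a point set: the normal is the
    eigenvector of the covariance with the smallest eigenvalue."""
    c = points.mean(axis=0)
    cov = np.cov((points - c).T)
    w, v = np.linalg.eigh(cov)    # eigenvalues in ascending order
    n = v[:, 0]                   # direction of least variance = normal
    return n, c                   # plane: n . (x - c) = 0

# Noisy samples from the plane z = 0.
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(-1, 1, 200),
                       rng.uniform(-1, 1, 200),
                       rng.normal(0, 1e-3, 200)])
n, c = fit_plane(pts)             # |n| ≈ (0, 0, ±1)
```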

Long-term shape sensing of bridge girders using automated ROI extraction of LiDAR point clouds

  • Ganesh Kolappan Geetha;Sahyeon Lee;Junhwa Lee;Sung-Han Sim
    • Smart Structures and Systems / Vol. 33, No. 6 / pp.399-414 / 2024
  • This study discusses the long-term deformation monitoring and shape sensing of bridge girder surfaces with an automated extraction scheme for point clouds in the Region Of Interest (ROI), invariant to the position of the Light Detection And Ranging (LiDAR) system. Advanced smart construction necessitates continuous monitoring of the deformation and shape of bridge girders during the construction phase. An automated scheme is proposed for reconstructing a geometric model of the ROI in the presence of a noisy, non-stationary background. The proposed scheme involves (i) denoising irrelevant background point clouds using dimensions from the design model, (ii) extracting the outer boundaries of the bridge girder by transforming and processing the point cloud data in a two-dimensional image space, (iii) extracting the topology of pre-defined targets using the modified Otsu method, (iv) registering the point clouds to a common reference frame or design coordinate system using the extracted pre-defined targets placed outside the ROI, and (v) defining the bounding box in the point clouds using the corresponding dimensional information of the bridge girder and abutments from the design model. The surface-fitted reconstructed geometric model in the ROI is superposed consistently over a long period to monitor the bridge shape and derive deflection during the construction phase, with highly correlated results. The proposed scheme of combining 2D-3D processing with the design model overcomes the sensitivity of 3D point cloud registration to the initial match, which often leads to a local extremum.
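
Step (iii) above relies on Otsu thresholding to segment the pre-defined targets. The classical (unmodified) Otsu method picks the threshold that maximizes the between-class variance of the two resulting intensity groups; a minimal sketch on synthetic bimodal intensities (the paper's modified variant may differ in detail):

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Classical Otsu: choose the threshold that maximizes the
    between-class variance w0 * w1 * (m0 - m1)^2 of the two groups."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for k in range(1, bins):
        w0, w1 = p[:k].sum(), p[k:].sum()    # class weights
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:k] * centers[:k]).sum() / w0  # class means
        m1 = (p[k:] * centers[k:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[k]
    return best_t

# Bimodal intensities: dark background vs. bright retro-reflective targets.
rng = np.random.default_rng(2)
vals = np.concatenate([rng.normal(0.2, 0.03, 500),
                       rng.normal(0.8, 0.03, 100)])
t = otsu_threshold(vals)   # lands in the valley between the two modes
```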

가우시안 혼합모델 기반 3차원 차량 모델을 이용한 복잡한 도시환경에서의 정확한 주차 차량 검출 방법 (Accurate Parked Vehicle Detection using GMM-based 3D Vehicle Model in Complex Urban Environments)

  • 조영근;노현철;정명진
    • 로봇학회논문지 / Vol. 10, No. 1 / pp.33-41 / 2015
  • Recent developments in robotics and the intelligent vehicle area have brought growing interest in autonomous driving and advanced driver assistance systems. Fully automatic parking in particular is one of the key issues for intelligent vehicles, and accurate detection of parked vehicles is essential for it. In previous research, many types of sensors have been used for detecting vehicles; 2D LiDAR is popular since it offers accurate range information without preprocessing. The L-shape feature is the most popular 2D feature for vehicle detection; however, it is ambiguous for other objects such as buildings and bushes, which causes misdetections. We therefore propose an accurate vehicle detection method that uses a complete 3D vehicle model in 3D point clouds acquired from a front-inclined 2D LiDAR. The proposed method is decomposed into two steps: vehicle candidate extraction and vehicle detection. By combining the L-shape feature and point cloud segmentation, we extract the objects that are highly related to vehicles and apply the 3D model to detect vehicles accurately. The method guarantees high detection performance and provides plentiful information for autonomous parking. To evaluate the method, we use data from various parking situations in complex urban scenes. Experimental results demonstrate the qualitative and quantitative performance of the method.
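
The L-shape feature mentioned above is often extracted by search-based rectangle fitting: rotate the 2D cluster over candidate headings and keep the heading whose axis-aligned bounding box has minimum area. A minimal sketch on a synthetic L-shaped cluster; the 1-degree search step and the 4 m x 2 m vehicle dimensions are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def l_shape_heading(points, step_deg=1.0):
    """Search-based rectangle fitting for a vehicle 'L-shape': try candidate
    headings in [0, 90) degrees and keep the one whose rotated bounding
    box has minimum area."""
    best_theta, best_area = 0.0, np.inf
    for deg in np.arange(0.0, 90.0, step_deg):
        th = np.deg2rad(deg)
        c, s = np.cos(th), np.sin(th)
        x = points[:, 0] * c + points[:, 1] * s     # rotate by -theta
        y = -points[:, 0] * s + points[:, 1] * c
        area = (x.max() - x.min()) * (y.max() - y.min())
        if area < best_area:
            best_area, best_theta = area, deg
    return best_theta

# Synthetic L-shape: two perpendicular visible edges of a parked vehicle
# (4 m side, 2 m front), rotated by 30 degrees in the sensor frame.
edge1 = np.column_stack([np.linspace(0, 4, 40), np.zeros(40)])
edge2 = np.column_stack([np.zeros(20), np.linspace(0, 2, 20)])
L = np.vstack([edge1, edge2])
th = np.deg2rad(30.0)
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
heading = l_shape_heading(L @ R.T)   # recovers ≈ 30 degrees
```

The abstract's point is that this 2D fit alone is ambiguous (walls and hedges also produce right-angle corners), which is why the candidates it yields are then verified against a full 3D vehicle model.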