• Title/Summary/Keyword: LiDAR point cloud data

Automatic 3D Object Digitizing and Its Accuracy Using Point Cloud Data (점군집 데이터에 의한 3차원 객체도화의 자동화와 정확도)

  • Yoo, Eun-Jin;Yun, Seong-Goo;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.30 no.1 / pp.1-10 / 2012
  • Recent spatial information technology has brought innovative improvements in both efficiency and accuracy. In particular, the airborne LiDAR system (ALS) is one of the most practical sensors for obtaining 3D spatial information. Constructing a reliable 3D spatial data infrastructure is a worldwide issue, and most of its significant tasks involve modeling man-made objects. This study creates a test data set for developing automatic building modeling methods by simulating point cloud data. The data simulate various roof types, including gable, pyramid, dome, and combined polyhedron shapes. A robust bottom-up method that segments surface patches and determines the model key points of the objects was proposed for generating building models automatically. The results show that building roofs composed of the segmented patches could be modeled by appropriate mathematical functions together with the model key points. Thus, 3D digitizing of man-made objects could be automated for digital mapping purposes.
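
A minimal illustrative sketch of the plane-fitting step that a bottom-up roof-patch segmentation relies on is given below, assuming a simple SVD-based least-squares fit; the function name and the simulated roof-face data are illustrative only and are not the paper's procedure.

```python
# Illustrative sketch only: least-squares plane fitting, a typical building block
# of bottom-up roof-patch segmentation; not the authors' exact algorithm.
import numpy as np

def fit_plane(points):
    """Fit a plane to an (N, 3) array of roof points.

    Returns the centroid, unit normal, and residual RMS; a small RMS indicates
    the patch is well described by a planar (e.g., gable) face.
    """
    centroid = points.mean(axis=0)
    # The smallest right singular vector of the centered points is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    residuals = (points - centroid) @ normal
    rms = np.sqrt(np.mean(residuals ** 2))
    return centroid, normal, rms

# Example with simulated points on a tilted roof face plus small noise.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(200, 2))
z = 0.3 * xy[:, 0] + 0.1 * xy[:, 1] + 5.0 + rng.normal(0, 0.02, 200)
pts = np.column_stack([xy, z])
c, n, rms = fit_plane(pts)
print("normal:", np.round(n, 3), "RMS residual:", round(float(rms), 4))
```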

Scaling Attack Method for Misalignment Error of Camera-LiDAR Calibration Model (카메라-라이다 융합 모델의 오류 유발을 위한 스케일링 공격 방법)

  • Yi-ji Im;Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology / v.33 no.6 / pp.1099-1110 / 2023
  • The recognition systems of autonomous driving and robot navigation perform vision tasks such as object recognition, tracking, and lane detection after multi-sensor fusion to improve performance. Research on deep learning models based on the fusion of a camera and a LiDAR sensor is currently being conducted actively. However, deep learning models are vulnerable to adversarial attacks that modulate the input data. Existing attacks on multi-sensor-based autonomous driving recognition systems focus on suppressing obstacle detection by lowering the confidence score of the object recognition model, but such attacks work only on the targeted model. An attack on the sensor fusion stage, in contrast, can cascade errors into the vision tasks performed after fusion, and this risk needs to be considered; moreover, an attack on LiDAR point cloud data, which is difficult to judge visually, makes it hard to determine whether an attack has occurred. In this study, we propose an image-scaling-based attack method that reduces the accuracy of LCCNet, a camera-LiDAR calibration (fusion) model. The proposed method performs a scaling attack on the input LiDAR points. Experiments on attack performance for different scaling factors showed that the attack induced fusion errors of more than 77% on average.
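
The core perturbation the abstract describes is a geometric scaling of the LiDAR input; a minimal sketch is shown below, assuming a naive scaling about the point-cloud centroid. The scaling factor, function name, and random scan are assumptions, not the paper's implementation or parameters.

```python
# Illustrative sketch only: a naive coordinate-scaling perturbation of a LiDAR
# point cloud, in the spirit of the scaling attack described above.
import numpy as np

def scale_points(points, factor):
    """Scale an (N, 3) LiDAR point cloud about its centroid.

    A calibration model such as LCCNet consumes the projected points, so even a
    small geometric scaling can shift the predicted extrinsic parameters.
    """
    centroid = points.mean(axis=0)
    return centroid + factor * (points - centroid)

# Example: shrink a random scan by 5% and measure the displacement it causes.
rng = np.random.default_rng(1)
scan = rng.uniform(-20, 20, size=(1000, 3))
attacked = scale_points(scan, 0.95)
displacement = np.linalg.norm(attacked - scan, axis=1).mean()
print("mean point displacement [m]:", round(float(displacement), 3))
```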

Effect of Learning Data on the Semantic Segmentation of Railroad Tunnel Using Deep Learning (딥러닝을 활용한 철도 터널 객체 분할에 학습 데이터가 미치는 영향)

  • Ryu, Young-Moo;Kim, Byung-Kyu;Park, Jeongjun
    • Journal of the Korean Geotechnical Society / v.37 no.11 / pp.107-118 / 2021
  • Scan-to-BIM can precisely model structures by measuring them with Light Detection and Ranging (LiDAR) and building a 3D BIM (Building Information Modeling) model from the data, but it has the limitation of consuming considerable manpower, time, and cost. To overcome these limitations, studies have applied deep learning algorithms to the semantic segmentation of 3D point cloud data, but studies on how the segmentation result changes depending on the learning data are insufficient. In this study, a parametric study was conducted to determine how the size and track type of the railroad tunnels constituting the learning data affect the semantic segmentation of railroad tunnels through deep learning. The parametric study showed that the more similar the sizes of the tunnels used for training and testing, the higher the segmentation accuracy, and that training on a double-track tunnel produced better results than training on a single-track tunnel. In addition, when the training data comprised two or more tunnels, overall accuracy (OA) and mean intersection over union (MIoU) increased by 10% to 50%, confirming that various configurations of learning data can contribute to efficient learning.
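
The two metrics the study reports, OA and mIoU, can be computed directly from per-point labels; a minimal sketch follows. The class labels and arrays are made up for the example and this is not the authors' evaluation code.

```python
# Illustrative sketch only: overall accuracy (OA) and mean IoU for per-point
# semantic segmentation labels.
import numpy as np

def overall_accuracy(y_true, y_pred):
    return np.mean(y_true == y_pred)

def mean_iou(y_true, y_pred, num_classes):
    ious = []
    for c in range(num_classes):
        inter = np.sum((y_true == c) & (y_pred == c))
        union = np.sum((y_true == c) | (y_pred == c))
        if union > 0:                      # ignore classes absent from both
            ious.append(inter / union)
    return np.mean(ious)

# Example with 3 hypothetical classes (e.g., lining, track, other) on 10 points.
y_true = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2, 0])
y_pred = np.array([0, 1, 1, 1, 2, 2, 2, 2, 0, 0])
print("OA:", round(float(overall_accuracy(y_true, y_pred)), 2),
      "mIoU:", round(float(mean_iou(y_true, y_pred, 3)), 3))
```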

Edge Extraction Algorithm for Mesh Data Based on Graph-cut Method and Principal Component Analysis (Graph-cut 과 주성분 분석을 이용한 Mesh 의 Edge 추출 알고리즘)

  • Han, HyeonDeok;Kim, HaeKwang;Han, Jong-Ki
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2021.06a / pp.204-207 / 2021
  • Point clouds and meshes generated with LiDAR equipment or with SfM and MVS methods always contain noise. To remove this noise, edges must be distinguished from noise effectively. Many studies first separate the edges from the mesh and then apply different filters to the edge regions and the planar regions, but they fail to identify edges reliably in meshes containing strong noise. Because the performance of the edge-detection step strongly affects the performance of the overall denoising algorithm, an algorithm that can identify edges even under strong noise is needed. In this paper, we propose an algorithm that extracts edge regions from meshes containing strong noise using PCA and graph-cut.
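
A minimal sketch of a PCA-based per-vertex "surface variation" score, the kind of local feature that could seed the graph-cut edge labeling, is shown below; the neighborhood radius, function name, and synthetic data are assumptions, and the graph-cut step itself is omitted.

```python
# Illustrative sketch only: PCA surface-variation score per vertex. Values near 0
# indicate locally planar regions; larger values indicate edges, corners, or noise.
import numpy as np
from scipy.spatial import cKDTree

def surface_variation(vertices, radius=0.5):
    """Return lambda_min / (lambda_1 + lambda_2 + lambda_3) for each vertex."""
    tree = cKDTree(vertices)
    scores = np.zeros(len(vertices))
    for i, v in enumerate(vertices):
        idx = tree.query_ball_point(v, radius)
        nbrs = vertices[idx]
        if len(nbrs) < 4:                  # not enough neighbors for a covariance
            continue
        cov = np.cov(nbrs.T)
        eigvals = np.sort(np.linalg.eigvalsh(cov))
        scores[i] = eigvals[0] / eigvals.sum()
    return scores

# Example: points on a flat square score near zero.
rng = np.random.default_rng(2)
flat = np.column_stack([rng.uniform(0, 2, (500, 2)), np.zeros(500)])
print("median variation on a plane:", round(float(np.median(surface_variation(flat))), 4))
```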

Update of Digital Map by using The Terrestrial LiDAR Data and Modified RANSAC (수정된 RANSAC 알고리즘과 지상라이다 데이터를 이용한 수치지도 건물레이어 갱신)

  • Kim, Sang Min;Jung, Jae Hoon;Lee, Jae Bin;Heo, Joon;Hong, Sung Chul;Cho, Hyoung Sig
    • Journal of Korean Society for Geospatial Information Science / v.22 no.4 / pp.3-11 / 2014
  • Recently, rapid urbanization has necessitated continuous updates of the digital map to provide the latest and most accurate information to users. However, conventional aerial photogrammetry has restrictions on the periodic update of small areas due to its high cost, and as-built drawings also bring problems with maintaining quality. Alternatively, this paper proposes a scheme for the efficient and accurate update of the digital map using point cloud data acquired by a Terrestrial Laser Scanner (TLS). First, the building sides are extracted from the whole point cloud and projected onto a 2D image to trace the 2D building footprints. To register the extracted footprints on the digital map, a 2D affine model is used. For affine parameter estimation, the centroids of the footprint groups are randomly chosen and matched by means of a modified RANSAC algorithm. Experimental results with the proposed algorithm showed that it is possible to update the digital map using building footprints extracted from TLS data.
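
Estimating a 2D affine transform from centroid correspondences with RANSAC can be sketched as below; this uses a plain RANSAC loop rather than the paper's modified variant, and the point sets, threshold, and iteration count are assumptions.

```python
# Illustrative sketch only: RANSAC estimation of a 2D affine transform between
# footprint centroids (plain RANSAC, not the paper's modified algorithm).
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine matrix mapping src -> dst (both (N, 2))."""
    A = np.hstack([src, np.ones((len(src), 1))])        # (N, 3)
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)    # (3, 2)
    return params.T                                      # (2, 3)

def ransac_affine(src, dst, n_iter=500, threshold=0.5, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(src), dtype=bool)
    ones = np.ones((len(src), 1))
    for _ in range(n_iter):
        sample = rng.choice(len(src), size=3, replace=False)
        M = fit_affine(src[sample], dst[sample])
        proj = np.hstack([src, ones]) @ M.T
        inliers = np.linalg.norm(proj - dst, axis=1) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_affine(src[best_inliers], dst[best_inliers]), best_inliers

# Example: dst is src rotated 10 degrees and shifted, with two simulated mismatches.
theta = np.deg2rad(10)
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
src = np.random.default_rng(3).uniform(0, 100, (20, 2))
dst = src @ R.T + np.array([5.0, -3.0])
dst[:2] += 40.0
M, inliers = ransac_affine(src, dst)
print("inliers:", int(inliers.sum()), "of", len(src))
```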

Identifying Considerations for Developing SLAM-based Mobile Scan Backpack System for Rapid Building Scanning (신속한 건축물 스캔을 위한 SLAM기반 이동형 스캔백팩 시스템 개발 고려사항 도출)

  • Kang, Tae-Wook
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.3 / pp.312-320 / 2020
  • 3D scanning originated in manufacturing. In the construction field, BIM (Building Information Modeling)-based 3D modeling environments built with 3D scanning technology are used throughout construction, for example in factory prefabrication, structural construction inspection, and the inspection of plant facilities, bridges, and tunnel structures. LiDAR scanners have higher accuracy and point density than mobile scanners but require longer registration and data-processing times. For interior building space management, on the other hand, such high accuracy is not needed, and a mobile scan system lets the user move around conveniently. This study derives considerations for developing a Simultaneous Localization and Mapping (SLAM)-based scan backpack system that moves freely and supports real-time point cloud registration. The paper proposes the mobile scan system, framework, and component structure needed to derive these considerations and improve scan productivity. Prototype development was carried out in two stages, SLAM and ScanBackpack, to derive the considerations and analyze the results.
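
SLAM pipelines of this kind repeatedly register consecutive scans against each other; a minimal sketch of one rigid-alignment step (SVD-based Kabsch alignment with correspondences assumed known) is shown below. It is not the backpack system's algorithm; a real SLAM system would also handle correspondence search, loop closure, and drift.

```python
# Illustrative sketch only: one rigid-registration step (Kabsch algorithm) of the
# kind a SLAM pipeline performs when aligning consecutive point cloud scans.
import numpy as np

def rigid_align(src, dst):
    """Return R (3x3) and t (3,) minimizing ||R @ src_i + t - dst_i||."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Example: recover a known 10-degree yaw and a small translation.
rng = np.random.default_rng(4)
scan = rng.uniform(-5, 5, (300, 3))
yaw = np.deg2rad(10)
R_true = np.array([[np.cos(yaw), -np.sin(yaw), 0],
                   [np.sin(yaw),  np.cos(yaw), 0],
                   [0, 0, 1]])
moved = scan @ R_true.T + np.array([0.2, -0.1, 0.05])
R_est, t_est = rigid_align(scan, moved)
print("max rotation error:", round(float(np.abs(R_est - R_true).max()), 6))
```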

Automatic Extraction of River Levee Slope Using MMS Point Cloud Data (MMS 포인트 클라우드를 활용한 하천제방 경사도 자동 추출에 관한 연구)

  • Kim, Cheolhwan;Lee, Jisang;Choi, Wonjun;Kim, Wondae;Sohn, Hong-Gyoo
    • Korean Journal of Remote Sensing / v.37 no.5_3 / pp.1425-1434 / 2021
  • Continuous and periodic data acquisition must come first to maintain and manage river facilities effectively. Existing river surveying methods such as terrestrial laser scanners, total stations, and the Global Navigation Satellite System (GNSS) are limited in terms of cost, manpower, and time for acquiring spatial information, since river facilities are distributed over wide and long areas. The Mobile Mapping System (MMS), on the other hand, has a comparative advantage in acquiring river facility data because it constructs three-dimensional spatial information while moving. Using the MMS, 184,646,009 points were acquired for a 4 km stretch of the Anyang stream in only 20 minutes. The levee points were divided at intervals of 10 m, producing about 378 levee cross sections. In addition, the maximum and average waterside slopes could be calculated automatically by separating the slope plane from the levee point cloud, and their accuracy was confirmed in terms of RMSE by comparison with manually calculated slopes. The reference slope was calculated manually by plotting the point cloud of the levee slope plane and selecting two points whose location information was used to compute the slope. A comparison of the waterside slope with the slope standard in the basic river plan for the Anyang stream confirmed that inspecting river facilities with the MMS point cloud is preferable to the existing river survey methods.
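
The manual reference slope described above reduces to rise over horizontal run between two picked points; a minimal sketch follows, with made-up coordinates (the paper extracts the slope plane automatically from the MMS point cloud).

```python
# Illustrative sketch only: slope between two manually picked points on a levee
# cross section, i.e., vertical rise divided by horizontal run.
import numpy as np

def waterside_slope(p_top, p_toe):
    """Return the slope ratio and the slope angle in degrees between two 3D points."""
    dz = p_top[2] - p_toe[2]
    horizontal = np.hypot(p_top[0] - p_toe[0], p_top[1] - p_toe[1])
    ratio = dz / horizontal
    return ratio, np.degrees(np.arctan(ratio))

top = np.array([10.0, 2.0, 8.5])      # hypothetical crest point (x, y, z) in meters
toe = np.array([4.0, 2.5, 5.0])       # hypothetical toe of the slope
print("slope ratio %.2f, angle %.1f deg" % waterside_slope(top, toe))
```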

Vehicle Detection Method Based on Object-Based Point Cloud Analysis Using Vertical Elevation Data (OBPCA 기반의 수직단면 이용 차량 추출 기법)

  • Jeon, Junbeom;Lee, Heezin;Oh, Sangyoon;Lee, Minsu
    • KIPS Transactions on Software and Data Engineering / v.5 no.8 / pp.369-376 / 2016
  • Among various vehicle extraction techniques, OBPCA (Object-Based Point Cloud Analysis) calculates features quickly from coarse-grained rectangles fitted to the top view of vehicle candidates. However, because it uses only a top-view rectangle to detect a vehicle, it struggles to distinguish vehicles from other rectangular objects of similar size. For this reason, accuracy issues have been raised about the OBPCA method, which affect DEM generation and traffic monitoring tasks. In this paper, we propose a novel method that uses the most distinguishing vertical elevations to calculate additional features. The proposed method uses the same features as the top-view analysis, determines new thresholds, and decides whether a candidate is a vehicle. We compared the accuracy and execution time of the original OBPCA and the proposed method. The experimental results show that our method yields a 6.61% increase in precision and a 13.96% decrease in false-positive rate with only a marginal increase in execution time, indicating that the proposed method can reduce misclassification.
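
A minimal sketch of rectangle-style top-view features plus a vertical-elevation feature for a candidate cluster is given below, in the spirit of OBPCA; the specific thresholds and the synthetic candidate are invented for the example and are not the paper's values.

```python
# Illustrative sketch only: oriented top-view extents and height range of a
# candidate point cluster, followed by a simple threshold check.
import numpy as np

def candidate_features(points):
    """Return (length, width, height_range) of an (N, 3) candidate cluster."""
    xy = points[:, :2] - points[:, :2].mean(axis=0)
    # Principal axes of the top view give the oriented rectangle extents.
    _, _, vt = np.linalg.svd(xy, full_matrices=False)
    proj = xy @ vt.T
    length, width = proj.max(axis=0) - proj.min(axis=0)
    height_range = points[:, 2].max() - points[:, 2].min()
    return length, width, height_range

def is_vehicle(points, length_rng=(3.0, 6.0), width_rng=(1.4, 2.6), max_height=2.5):
    length, width, height = candidate_features(points)
    return (length_rng[0] <= length <= length_rng[1]
            and width_rng[0] <= width <= width_rng[1]
            and height <= max_height)

# Example: a 4.5 m x 1.8 m x 1.5 m box-like cluster passes the checks.
rng = np.random.default_rng(5)
car_like = rng.uniform([0, 0, 0], [4.5, 1.8, 1.5], size=(800, 3))
print(is_vehicle(car_like))
```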

Considerations for Developing a SLAM System for Real-time Remote Scanning of Building Facilities (건축물 실시간 원격 스캔을 위한 SLAM 시스템 개발 시 고려사항)

  • Kang, Tae-Wook
    • Journal of KIBIM / v.10 no.1 / pp.1-8 / 2020
  • In managing building facilities, spatial information is the basic data for decision making, but acquiring it is not easy. In many cases, the site differs from the drawings because of changes to the facilities over time after construction, and the site must then be scanned to obtain spatial information. The scan data contain the spatial information and are a great help in making space-related decisions. To obtain scan data, however, an expensive LiDAR (Light Detection and Ranging) device must be purchased, along with special software for processing the data obtained from it. Recently, SLAM (Simultaneous Localization and Mapping), an advanced map generation technology, has been spreading in the field of robotics. Using SLAM, 3D spatial information can be obtained quickly and in real time without a separate matching process. This study develops a prototype and tests whether SLAM technology can be used to obtain spatial information for facility management, and from this derives considerations for developing a SLAM device for real-time remote scanning. The study focuses on the system development method for acquiring the spatial information needed for facility management through SLAM technology: we develop a prototype, analyze its pros and cons, and then suggest considerations for developing a SLAM system.

Classification of Objects using CNN-Based Vision and Lidar Fusion in Autonomous Vehicle Environment

  • G. Komali;A. Sri Nagesh
    • International Journal of Computer Science & Network Security / v.23 no.11 / pp.67-72 / 2023
  • In the past decade, autonomous vehicle systems (AVS) have advanced at an exponential rate, particularly due to improvements in artificial intelligence, which have had a significant impact on society as well as on road safety and the future of transportation systems. The real-time fusion of light detection and ranging (LiDAR) and camera data is a crucial process in many applications, such as autonomous driving, industrial automation, and robotics. Especially in autonomous vehicles, the efficient fusion of data from these two types of sensors is important for estimating the depth of objects as well as for classifying objects at short and long distances. This paper presents the classification of objects using CNN-based vision and LiDAR fusion in an autonomous vehicle environment. The method is based on a convolutional neural network (CNN) and image upsampling theory. By upsampling the LiDAR point cloud and converting it into pixel-level depth information, the depth information is combined with red-green-blue (RGB) data and fed into a deep CNN. The proposed method obtains informative feature representations for object classification in the autonomous vehicle environment using the integrated vision and LiDAR data, and is designed to guarantee both object classification accuracy and minimal loss. Experimental results show the effectiveness and efficiency of the presented approach for object classification.
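
The fusion step the abstract describes amounts to stacking an upsampled depth map onto the RGB channels before the CNN; a minimal sketch is shown below, assuming PyTorch. The layer sizes, class count, and model name are arbitrary and this is not the paper's network.

```python
# Illustrative sketch only: a 4-channel (RGB + upsampled LiDAR depth) input to a
# small CNN classifier.
import torch
import torch.nn as nn

class RGBDepthClassifier(nn.Module):
    def __init__(self, num_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, rgb, depth):
        # rgb: (B, 3, H, W); depth: (B, 1, H, W) from the upsampled point cloud
        x = torch.cat([rgb, depth], dim=1)
        return self.classifier(self.features(x).flatten(1))

# Example forward pass with random tensors standing in for camera + LiDAR input.
model = RGBDepthClassifier()
logits = model(torch.rand(2, 3, 64, 64), torch.rand(2, 1, 64, 64))
print(logits.shape)   # torch.Size([2, 5])
```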