• Title/Summary/Keyword: LiDAR-Inertial SLAM


Tightly-Coupled GNSS-LiDAR-Inertial State Estimator for Mapping and Autonomous Driving (비정형 환경 내 지도 작성과 자율주행을 위한 GNSS-라이다-관성 상태 추정 시스템)

  • Hyeonjae Gil;Dongjae Lee;Gwanhyeong Song;Seunguk Ahn;Ayoung Kim
    • The Journal of Korea Robotics Society / v.18 no.1 / pp.72-81 / 2023
  • We introduce a tightly-coupled GNSS-LiDAR-Inertial state estimator capable of SLAM (Simultaneous Localization and Mapping) and autonomous driving. Long-term drift is one of the main sources of estimation error, and some LiDAR SLAM frameworks use loop closure to overcome it. However, when a loop-closing event occurs, the current state estimate can change abruptly, which poses safety issues for drivers. Directly using GNSS (Global Navigation Satellite System) positioning information can help alleviate this problem, but accurate information is not always available, and inaccurate vertical positioning remains an issue. We therefore propose a method that tightly couples raw GNSS measurements into a LiDAR-Inertial SLAM framework and can handle satellite positioning information regardless of its uncertainty. In addition, by handling NLOS (Non-Line-of-Sight) satellite signals, we can estimate the states more smoothly and accurately. Through several autonomous driving tests on an AGV (Autonomous Ground Vehicle), we verified that our method can be applied to real-world problems.
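
The abstract describes tightly coupling raw GNSS measurements, with NLOS handling, into a LiDAR-inertial SLAM framework. Below is a minimal Python sketch of that general idea, assuming a simple elevation/C-N0 gate for NLOS rejection and an information-weighted fusion of the GNSS fix with the LiDAR-inertial position; the thresholds and the fusion step are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

ELEV_MASK_DEG = 15.0   # assumed elevation mask for NLOS/multipath rejection
CN0_MIN_DBHZ = 35.0    # assumed minimum carrier-to-noise density ratio

def filter_nlos(satellites):
    """Keep satellites that pass simple elevation and C/N0 gates (assumed criteria)."""
    return [s for s in satellites
            if s["elev_deg"] >= ELEV_MASK_DEG and s["cn0_dbhz"] >= CN0_MIN_DBHZ]

def fuse_position(slam_pos, slam_cov, gnss_pos, gnss_cov):
    """Information-weighted fusion of a GNSS fix with the LiDAR-inertial position.

    A large gnss_cov (poor geometry, few usable satellites) lets the
    LiDAR-inertial estimate dominate, so the fused state does not jump abruptly.
    """
    slam_pos, gnss_pos = np.asarray(slam_pos, float), np.asarray(gnss_pos, float)
    W_slam = np.linalg.inv(np.asarray(slam_cov, float))
    W_gnss = np.linalg.inv(np.asarray(gnss_cov, float))
    fused_cov = np.linalg.inv(W_slam + W_gnss)
    fused_pos = fused_cov @ (W_slam @ slam_pos + W_gnss @ gnss_pos)
    return fused_pos, fused_cov
```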

Comparison of Characteristics of Drone LiDAR for Construction of Geospatial Information in Large-scale Development Project Area (대규모 개발지역의 공간정보 구축을 위한 드론 라이다의 특징 비교)

  • Park, Joon-Kyu;Lee, Keun-Wang
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.1 / pp.768-773 / 2020
  • In large-scale land development for the rational use and management of national land resources, geospatial information is essential for the efficient management of projects. Recently, drone LiDAR (Light Detection And Ranging) has attracted attention as an effective geospatial information construction technique for large-scale development areas such as housing site construction and open-pit mines. Drone LiDAR can be classified into a method using SLAM (Simultaneous Localization And Mapping) technology and a GNSS (Global Navigation Satellite System)/IMU (Inertial Measurement Unit) method. However, there is a lack of analytical research on the application of drone LiDAR and the characteristics of each method. Therefore, in this study, data acquisition, processing, and analysis using SLAM-type and GNSS/IMU-type drone LiDAR were performed, and the characteristics and utility of each were evaluated. As a result, the height-direction accuracy of the drone LiDAR was -0.052 to 0.044 m, which satisfies the allowable accuracy of geospatial information for mapping. In addition, the characteristics of each method were presented through a comparison of data acquisition and processing. Geospatial information constructed with drone LiDAR can be used in several ways, such as measuring distance, area, and slope. Based on such information, the safety of large-scale development areas can be evaluated, and the method is expected to be widely utilized in the future.
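
The reported height-direction accuracy of -0.052 to 0.044 m comes from comparing LiDAR-derived heights against surveyed checkpoints. A minimal sketch of such an evaluation is shown below; the checkpoint values are placeholders, and the statistics chosen (min, max, RMSE) are an assumption about the paper's workflow.

```python
import numpy as np

def vertical_errors(checkpoint_z, lidar_z):
    """Height differences (LiDAR minus survey) and summary statistics, in metres."""
    diff = np.asarray(lidar_z, float) - np.asarray(checkpoint_z, float)
    return {"min_m": float(diff.min()),
            "max_m": float(diff.max()),
            "rmse_m": float(np.sqrt(np.mean(diff ** 2)))}

# Example with placeholder checkpoint heights (metres)
surveyed = [52.310, 48.955, 61.472, 57.208]
lidar    = [52.291, 48.999, 61.430, 57.230]
print(vertical_errors(surveyed, lidar))
```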

Development of 3D Point Cloud Mapping System Using 2D LiDAR and Commercial Visual-inertial Odometry Sensor (2차원 라이다와 상업용 영상-관성 기반 주행 거리 기록계를 이용한 3차원 점 구름 지도 작성 시스템 개발)

  • Moon, Jongsik;Lee, Byung-Yoon
    • IEMEK Journal of Embedded Systems and Applications / v.16 no.3 / pp.107-111 / 2021
  • A 3D point cloud map is an essential element in various fields, including precise autonomous navigation systems. However, generating a 3D point cloud map with a single sensor is limited by the high price of such sensors. To solve this problem, we propose a precise 3D mapping system based on low-cost sensor fusion. Generating a point cloud map requires estimating the current position and attitude and describing the surrounding environment. In this paper, we use a commercial visual-inertial odometry sensor to estimate the current position and attitude states. Based on these state estimates, the 2D LiDAR measurements describe the surrounding environment to create the point cloud map. To analyze the performance of the proposed algorithm, we compared it with a 3D LiDAR-based SLAM (Simultaneous Localization and Mapping) algorithm. As a result, it was confirmed that a precise 3D point cloud map can be generated with the low-cost sensor-fusion system proposed in this paper.
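
The mapping pipeline described here transforms each 2D LiDAR scan by the pose reported by the visual-inertial odometry sensor and accumulates the result into a 3D cloud. A minimal sketch of that projection-and-accumulation step follows; frame conventions and function names are assumptions, not the authors' code.

```python
import numpy as np

def scan_to_points(ranges, angles):
    """Convert a 2D scan (range, bearing) into 3D points in the sensor frame."""
    r = np.asarray(ranges, float)
    a = np.asarray(angles, float)
    return np.stack([r * np.cos(a), r * np.sin(a), np.zeros_like(r)], axis=1)

def accumulate(cloud, scan_points, R_world_sensor, t_world_sensor):
    """Transform sensor-frame points by the VIO pose and append them to the map."""
    R = np.asarray(R_world_sensor, float)
    t = np.asarray(t_world_sensor, float)
    cloud.append(scan_points @ R.T + t)
    return cloud

# Usage: start with cloud = []; for every (scan, R, t) call
# accumulate(cloud, scan_to_points(ranges, angles), R, t);
# np.vstack(cloud) then gives the full 3D point cloud map.
```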

Development and Performance Evaluation of Multi-sensor Module for Use in Disaster Sites of Mobile Robot (조사로봇의 재난현장 활용을 위한 다중센서모듈 개발 및 성능평가에 관한 연구)

  • Jung, Yonghan;Hong, Junwooh;Han, Soohee;Shin, Dongyoon;Lim, Eontaek;Kim, Seongsam
    • Korean Journal of Remote Sensing / v.38 no.6_3 / pp.1827-1836 / 2022
  • Disasters occur unexpectedly and are difficult to predict, and their scale and damage are increasing compared to the past; one disaster can sometimes develop into another. Among the four stages of disaster management, search and rescue are carried out in the response stage when an emergency occurs, so personnel such as firefighters deployed to the scene face considerable risk. In this respect, robots are a technology with high potential to reduce damage to human life and property during the initial response at a disaster site. In addition, Light Detection And Ranging (LiDAR) can acquire 3D information over a relatively wide range using a laser, and its high accuracy and precision make it a very useful sensor given the characteristics of a disaster site. Therefore, in this study, development and experiments were conducted so that a robot could perform real-time monitoring at a disaster site. A multi-sensor module was developed by combining a LiDAR, an Inertial Measurement Unit (IMU) sensor, and a computing board. This module was mounted on the robot, and a customized Simultaneous Localization and Mapping (SLAM) algorithm was developed. A method for stably mounting the multi-sensor module on the robot to maintain optimal accuracy at disaster sites was also studied. To check the performance of the module, SLAM was tested inside a disaster building, and various SLAM algorithms and distance comparisons were evaluated. As a result, PackSLAM, developed in this study, showed lower error than the other algorithms, demonstrating its potential for application at disaster sites. In the future, to further enhance usability at disaster sites, various experiments will be conducted in a rough-terrain environment with many obstacles.
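
The evaluation compares distances measured in the SLAM point cloud against reference distances inside the test building. A minimal sketch of that kind of distance comparison is given below; the reference values, picked points, and RMSE summary are illustrative assumptions rather than the study's actual data.

```python
import numpy as np

def cloud_distance(p1, p2):
    """Euclidean distance between two points picked from the SLAM map (metres)."""
    return float(np.linalg.norm(np.asarray(p1, float) - np.asarray(p2, float)))

def distance_errors(reference_m, measured_m):
    """Per-pair errors and RMSE used to compare the SLAM algorithms."""
    err = np.asarray(measured_m, float) - np.asarray(reference_m, float)
    return err, float(np.sqrt(np.mean(err ** 2)))

# Example with placeholder reference (tape-measured) and map-derived distances
reference = [5.00, 12.40, 8.75]
measured  = [5.03, 12.31, 8.79]
print(distance_errors(reference, measured))
```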

Intensity and Ambient Enhanced Lidar-Inertial SLAM for Unstructured Construction Environment (비정형의 건설환경 매핑을 위한 레이저 반사광 강도와 주변광을 활용한 향상된 라이다-관성 슬램)

  • Jung, Minwoo;Jung, Sangwoo;Jang, Hyesu;Kim, Ayoung
    • The Journal of Korea Robotics Society / v.16 no.3 / pp.179-188 / 2021
  • Construction monitoring is one of the key modules in smart construction. Unlike the structured urban environment, construction-site mapping is challenging due to the characteristics of an unstructured environment; for example, irregular feature points and unreliable matching hinder creating a map for management. To tackle this issue, we propose a system for data acquisition in unstructured environments and a framework, Intensity and Ambient Enhanced Lidar Inertial Odometry via Smoothing and Mapping (IA-LIO-SAM), that achieves highly accurate robot trajectories and mapping. IA-LIO-SAM uses the same factor graph as Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping (LIO-SAM). Enhancing the existing LIO-SAM, IA-LIO-SAM leverages each point's intensity and ambient values to remove unnecessary feature points. These additional values also serve as extra dimensions in the K-Nearest Neighbor (KNN) algorithm, allowing accurate comparisons between stored points and scanned points. The performance was verified in three different environments and compared with LIO-SAM.
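
IA-LIO-SAM augments each point with intensity and ambient values and uses them in the KNN association between stored and scanned points. A minimal sketch of a KNN search over such augmented descriptors follows; the channel weights and distance gate are assumptions, not values from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

W_GEOM, W_INT, W_AMB = 1.0, 0.1, 0.1  # assumed relative weights per channel

def augment(points_xyz, intensity, ambient):
    """Stack [x, y, z, intensity, ambient] into one weighted descriptor per point."""
    xyz = np.asarray(points_xyz, float)
    i = np.asarray(intensity, float)[:, None]
    a = np.asarray(ambient, float)[:, None]
    return np.hstack([W_GEOM * xyz, W_INT * i, W_AMB * a])

def associate(map_desc, scan_desc, max_dist=1.0):
    """Nearest-neighbour association in the augmented 5-D space.

    Pairs whose combined geometric/appearance distance exceeds max_dist are
    rejected, so points with very different intensity or ambient values are
    not matched even if they are spatially close.
    """
    tree = cKDTree(map_desc)
    dist, idx = tree.query(scan_desc, k=1)
    keep = dist < max_dist
    return np.nonzero(keep)[0], idx[keep]   # (scan indices, matching map indices)
```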

Method to Improve Localization and Mapping Accuracy on the Urban Road Using GPS, Monocular Camera and HD Map (GPS와 단안카메라, HD Map을 이용한 도심 도로상에서의 위치측정 및 맵핑 정확도 향상 방안)

  • Kim, Young-Hun;Kim, Jae-Myeong;Kim, Gi-Chang;Choi, Yun-Soo
    • Korean Journal of Remote Sensing / v.37 no.5_1 / pp.1095-1109 / 2021
  • The technology used to recognize the location and surroundings of an autonomous vehicle is called SLAM. SLAM stands for Simultaneous Localization and Mapping and has recently been actively utilized in research on autonomous vehicles, starting from robotics research. Expensive GPS, INS, LiDAR, RADAR, and wheel odometry allow precise self-localization and mapping at the centimeter level. However, if similar accuracy can be secured using cheaper cameras and GPS data, it will contribute to advancing the era of autonomous driving. In this paper, we present a method for fusing a monocular camera with RTK-enabled GPS data to perform localization and mapping with an RMSE of 33.7 cm on urban roads.
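
The 33.7 cm figure is an RMSE of the fused camera/RTK-GPS positions against reference positions. A minimal sketch of computing a horizontal RMSE over a trajectory is shown below; the coordinates are placeholders, and the 2D (horizontal) error definition is an assumption.

```python
import numpy as np

def horizontal_rmse(est_xy, ref_xy):
    """Root-mean-square of 2D position errors, in metres."""
    err = np.asarray(est_xy, float) - np.asarray(ref_xy, float)
    return float(np.sqrt(np.mean(np.sum(err ** 2, axis=1))))

# Example with placeholder trajectories (metres, local map frame)
estimated = [[0.00, 0.00], [10.12, 0.21], [20.31, 0.08]]
reference = [[0.00, 0.00], [10.00, 0.00], [20.00, 0.00]]
print(f"RMSE: {horizontal_rmse(estimated, reference):.3f} m")
```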