• Title/Summary/Keyword: LiDAR system

Search Results: 269

3-Dimensional Building Reconstruction with Airborne LiDAR Data

  • Lee, Dong-Cheon;Yom, Jae-Hong;Kwon, Jay-Hyoun;We, Gwang-Jae
    • Korean Journal of Geomatics / v.2 no.2 / pp.123-130 / 2002
  • LiDAR (Light Detection And Ranging) systems have had a profound impact on geoinformatics. Laser mapping is now recognized as a viable way to produce digital surface models rapidly and efficiently, and the number of its applications and users has grown at a surprising rate in recent years. Interest is now focused on the reconstruction of buildings in urban areas from LiDAR data. Although present technology can extract and reconstruct objects automatically from LiDAR data, the geometric accuracy of the results is still a major concern. It would be enormously beneficial to the geoinformatics industry if geometrically accurate models of the topographic surface, including man-made objects, could be produced automatically. The objectives of this study are to reconstruct buildings using airborne LiDAR data and to evaluate the accuracy of the result. To this end, systematic errors of the ALS (Airborne Laser Scanning) system are first introduced. Second, the overall LiDAR data quality was estimated against ground check points, and the laser points were then classified. Buildings were reconstructed from the point clouds classified as buildings: the most likely planar surfaces were estimated by the least-squares method from the laser points classified as planes, and the intersection lines of the planes were then computed and defined as the building boundaries. Finally, the quality of the reconstructed buildings was evaluated.

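The two geometric steps named in this abstract, least-squares estimation of planar roof surfaces and intersection of those planes to obtain building boundary lines, can be sketched as follows. This is a minimal NumPy illustration with synthetic roof points, not the authors' implementation:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns unit normal n and offset d with n.x = d."""
    centroid = points.mean(axis=0)
    # The singular vector of the smallest singular value of the centered
    # points is the direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    return n, n @ centroid

def plane_intersection(n1, d1, n2, d2):
    """Intersection line of two planes: a point on the line and its direction."""
    direction = np.cross(n1, n2)
    # Minimum-norm point satisfying both plane equations n1.p = d1, n2.p = d2.
    A = np.array([n1, n2])
    p = np.linalg.lstsq(A, np.array([d1, d2]), rcond=None)[0]
    return p, direction / np.linalg.norm(direction)

# Two synthetic roof faces meeting at a ridge along the y-axis at z = 1.
xs, ys = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
left  = np.c_[-xs.ravel(), ys.ravel(), (1 - xs).ravel()]   # plane z = 1 + x
right = np.c_[ xs.ravel(), ys.ravel(), (1 - xs).ravel()]   # plane z = 1 - x
p, d = plane_intersection(*fit_plane(left), *fit_plane(right))
# p lies on the ridge (x = 0, z = 1) and d points along the y-axis.
```

In a real pipeline the fit would be run per classified roof segment, typically with outlier rejection before the least-squares step.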

Timing Jitter Analysis and Improvement Method using Single-Shot LiDAR system (Single-Shot LiDAR system을 이용한 Timing Jitter 분석 및 개선 방안)

  • Han, Mun-hyun;Choi, Gyu-dong;Song, Min-hyup;Seo, Hong-seok;Mheen, Bong-ki
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2016.10a / pp.172-175 / 2016
  • Time-of-Flight (ToF) LiDAR (Light Detection And Ranging) technology has been used for distance measurement and object detection by measuring ToF time information. The technology has evolved into higher-precision measurement fields such as autonomous driving and terrain analysis, where the retrieval of exact ToF time information is of prime importance. In this paper, timing jitter, an accuracy indicator of the ToF time information, was measured and analyzed with a Single-Shot LiDAR system (SSLs) consisting mainly of a 1.5 um wavelength MOPA laser and an InGaAs Avalanche Photodiode (APD) in a 31 m free-space environment. Additionally, we applied spline interpolation and a multiple-shot averaging method to the data measured with the SSLs to improve the ToF timing information.

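As a rough illustration of the multiple-shot averaging idea, one can simulate how averaging jittered return pulses tightens the timing estimate. All pulse parameters here are assumed rather than taken from the paper, and plain linear interpolation stands in for the paper's spline interpolation:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1e9                          # 1 GS/s sampling rate (assumed)
t = np.arange(0, 400e-9, 1 / fs)  # 400 ns record
true_tof = 2 * 31 / 3e8           # round-trip ToF for a 31 m path

def return_pulse(jitter_std=0.5e-9):
    """One noisy Gaussian return pulse with random timing jitter."""
    t0 = true_tof + rng.normal(0, jitter_std)
    return np.exp(-(((t - t0) / 2e-9) ** 2)) + rng.normal(0, 0.05, t.size)

def edge_time(sig, thresh=0.5):
    """Sub-sample threshold-crossing time via linear interpolation."""
    i = int(np.argmax(sig > thresh))
    frac = (thresh - sig[i - 1]) / (sig[i] - sig[i - 1])
    return t[i - 1] + frac / fs

# Timing jitter (std of edge times): single shots vs. 16-shot averages.
single = np.std([edge_time(return_pulse()) for _ in range(200)])
avg16 = np.std([edge_time(np.mean([return_pulse() for _ in range(16)], axis=0))
                for _ in range(200)])
# avg16 comes out well below single: averaging N shots reduces the
# jitter of the edge estimate roughly by a factor of sqrt(N).
```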

3D Reconstruction of Structure Fusion-Based on UAS and Terrestrial LiDAR (UAS 및 지상 LiDAR 융합기반 건축물의 3D 재현)

  • Han, Seung-Hee;Kang, Joon-Oh;Oh, Seong-Jong;Lee, Yong-Chang
    • Journal of Urban Science / v.7 no.2 / pp.53-60 / 2018
  • A digital twin is a technology that creates a replica of real-world objects on a computer, analyzes past and present operational status by fusing the structure, context, and operation of various physical systems with property information, and predicts countermeasures for the future. In particular, 3D reconstruction technologies (UAS, LiDAR, GNSS, etc.) are core technologies for digital twins, so research and applications have been actively pursued in industry in recent years. However, both UAS (Unmanned Aerial System) and LiDAR (Light Detection And Ranging) leave blind spots that are not reconstructed, depending on the object shape, and these must be compensated. Terrestrial LiDAR can acquire the point cloud of an object precisely and quickly at short range, but a blind spot arises at the upper part of the object, which restricts straightforward digital twin modeling. A UAS can model a specific range of objects with high accuracy using high-resolution images at low altitude, and has the advantage of generating a high-density point cloud based on SfM (Structure-from-Motion) image analysis. However, it is farther from the target than terrestrial LiDAR, image analysis takes time, the accuracy of side surfaces is reduced, and blind spots must be compensated. By fusing the UAS and terrestrial LiDAR data and re-optimizing, the residual error of each modeling method was compensated and a mutually corrected result was obtained. The accuracy of the fusion-based 3D model is better than 1 cm, and it is expected to be useful for digital twin construction.

SYNTHESIS OF STEREO-MATE THROUGH THE FUSION OF A SINGLE AERIAL PHOTO AND LIDAR DATA

  • Chang, Ho-Wook;Choi, Jae-Wan;Kim, Hye-Jin;Lee, Jae-Bin;Yu, Ki-Yun
    • Proceedings of the KSRS Conference / v.1 / pp.508-511 / 2006
  • Generally, a stereo pair of images is necessary for 3D viewing. In the absence of a quality stereo pair, it is possible to synthesize a stereo-mate suitable for 3D viewing from a single image and a depth map. In remote sensing, a DEM is usually used as the depth map. In this paper, LiDAR data was used instead of a DEM to make a stereo pair from a single aerial photo. Each LiDAR point was assigned a brightness value from the original single image by registering the image and the LiDAR data. Then, an imaginary exposure station and image plane were assumed, and the LiDAR points with assigned brightness values were back-projected onto the imaginary plane to synthesize the stereo-mate. The imaginary exposure station and image plane were defined to have only a horizontal shift from the original image's exposure station and plane. As a result, the stereo-mate synthesized in this paper satisfied the epipolar geometry and, together with the original image, yielded an easily perceivable 3D viewing effect. The 3D viewing effect was tested with an anaglyph.

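The back-projection geometry described in this abstract, an imaginary exposure station shifted only horizontally so that the synthesized view differs from the original purely by x-parallax, can be sketched for an idealized nadir-looking camera. The focal length, baseline, and point coordinates below are assumed values, not the paper's parameters:

```python
import numpy as np

f = 0.15   # focal length [m] (assumed)
B = 0.5    # horizontal shift of the imaginary exposure station [m] (assumed)

def project(points, station):
    """Back-project ground points into the image plane of a vertical,
    un-rotated camera at `station` (simplified collinearity equations)."""
    X, Y, Z = points.T
    X0, Y0, Z0 = station
    x = -f * (X - X0) / (Z - Z0)
    y = -f * (Y - Y0) / (Z - Z0)
    return np.c_[x, y]

# LiDAR points (X, Y, elevation Z) with brightness already assigned
# from the original photo in the registration step.
pts = np.array([[10.0, 5.0, 50.0], [12.0, 6.0, 80.0]])
orig = project(pts, station=(0.0, 0.0, 1000.0))
mate = project(pts, station=(B,   0.0, 1000.0))
# Only x changes between the two views (epipolar lines are image rows),
# and the x-parallax depends only on elevation: p = f * B / (Z0 - Z).
parallax = orig[:, 0] - mate[:, 0]
```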

Educational Indoor Autonomous Mobile Robot System Using a LiDAR and a RGB-D Camera (라이다와 RGB-D 카메라를 이용하는 교육용 실내 자율 주행 로봇 시스템)

  • Lee, Soo-Young;Kim, Jae-Young;Cho, Se-Hyoung;Shin, Chang-yong
    • Journal of IKEEE / v.23 no.1 / pp.44-52 / 2019
  • We implement an educational indoor autonomous mobile robot system that integrates LiDAR sensing information with RGB-D camera image information and exploits the integrated information. The system acquires LiDAR sensing information with the existing sensing method employing a LiDAR with a small number of scan channels. To remedy the weakness of this LiDAR sensing method, we propose a 3D structure recognition technique that uses depth images from an RGB-D camera together with a deep-learning-based object recognition algorithm, and apply the proposed technique to the system.

Development of Left Turn Response System Based on LiDAR for Traffic Signal Control

  • Park, Jeong-In
    • Journal of the Korea Society of Computer and Information / v.27 no.11 / pp.181-190 / 2022
  • In this paper, unlike existing image-based or loop-based left-turn detection systems, we use a LiDAR sensor and a video camera to detect left-turn waiting vehicles in two complementary ways, and introduce a system that can efficiently assign a left-turn traffic signal corresponding to the queue length in the left-turn lane. Left-turn waiting vehicles are detected in real time from the signal transmitted and received by the LiDAR sensor, while the video from the camera is analyzed in real time or at regular intervals, reducing unnecessary computational processing and enabling time-sensitive processing. In a performance test run for 5 hours every day for one week on an intersection simulation with an actual signal processor, a detection rate of 99.9% was recorded, an improvement of 3% to 5% over the existing methods. The advantage is that 99.9% of vehicles waiting to turn left are detected by the LiDAR sensor, and even if a detection omission occurs, an immediate response is possible through self-correction using the video; the excessive waiting time of left-turn vehicles is thus controlled and the traffic flow of all lanes in the intersection is guided smoothly. In addition, when applied to outlying intersections where left-turning vehicles are rare, service reliability and efficiency can be improved by eliminating unnecessary signal cycles.

Extraction of 3D Objects Around Roads Using MMS LiDAR Data (MMS LiDAR 자료를 이용한 도로 주변 3차원 객체 추출)

  • CHOUNG, Yun-Jae
    • Journal of the Korean Association of Geographic Information Studies / v.20 no.1 / pp.152-161 / 2017
  • Making precise 3D maps with Mobile Mapping System (MMS) sensors is essential for the development of self-driving cars. This paper extracts 3D objects around roads from the point cloud acquired by the MMS Light Detection and Ranging (LiDAR) sensor through the following steps. First, a digital surface model (DSM) is generated from the MMS LiDAR data, and a slope map is then derived from the DSM. Next, the 3D objects around the roads are identified using the slope information. Finally, 97% of the 3D objects around the roads are extracted using a morphological filtering technique. This research contributes to the application of automated driving technology by extracting the 3D objects around roads from spatial data acquired by the MMS sensor.
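A toy version of the DSM, slope map, and morphological-filtering steps might look like the following NumPy sketch. The grid values, the 45-degree slope threshold, the simple height cue, and the 3x3 opening are all assumptions for illustration; the paper's actual thresholds and structuring element are not given:

```python
import numpy as np

def slope_map(dsm, cell=0.5):
    """Slope in degrees from a DSM grid via finite differences."""
    dz_dy, dz_dx = np.gradient(dsm, cell)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

def binary_opening(mask):
    """3x3 erosion followed by dilation: drops isolated single-cell noise."""
    def shifts(m):
        p = np.pad(m, 1)
        return np.stack([p[i:i + m.shape[0], j:j + m.shape[1]]
                         for i in range(3) for j in range(3)])
    eroded = shifts(mask).all(axis=0)
    return shifts(eroded).any(axis=0)

# Toy DSM: flat road (0 m), a 3 m-high object, and one single-cell noise spike.
dsm = np.zeros((20, 20))
dsm[5:10, 5:10] = 3.0   # building-like object
dsm[15, 15] = 2.0       # noise spike

steep = slope_map(dsm) > 45.0   # object boundaries show up as steep cells
elevated = dsm > 1.0            # simple stand-in for the object cue
objects = binary_opening(elevated)  # morphological filtering removes the spike
```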

Analysis of Factors Influencing the Measurement Error of Ground-based LiDAR (지상기반 라이다의 측정 오차에 영향을 미치는 요인 분석)

  • Kang, Dong-Bum;Huh, Jong-Chul;Ko, Kyung-Nam
    • Journal of the Korean Solar Energy Society / v.37 no.6 / pp.25-37 / 2017
  • A study on the factors influencing the measurement error of a ground-based LiDAR (Light Detection And Ranging) system was conducted at the Kimnyeong wind turbine test site on Jeju Island. Three wind properties, the inclined angle, turbulence intensity, and power law exponent, were considered as factors influencing the measurement error. To calculate the LiDAR measurement error, 2.5 months of wind speed data collected with a LiDAR (WindCube v2) were compared with concurrent data from the anemometer on a nearby 120 m meteorological mast. In addition, the data were filtered using criteria based on findings from previous research. As a result, at 100 m above ground level, the absolute LiDAR error rate versus the absolute inclined angle ranged from 4.58% to 13.40% with a coefficient of determination, $R^2$, of 0.77; versus turbulence intensity it ranged from 3.58% to 23.94% with an $R^2$ of 0.93; and versus the power law exponent it ranged from 4.71% to 9.53% with an $R^2$ of 0.41. It was therefore confirmed that the LiDAR measurement error was strongly affected by the inclined angle and turbulence intensity, but depended little on the power law exponent.

Development of SWIR 3D Lidar System with Low Optical Power Using 1 Channel Single Photon Detector (1채널 단일광자검출기를 이용한 낮은 광출력의 SWIR(Short Wave Infrared) 3D 라이다 시스템 개발)

  • Kwon, Oh-Soung;Lee, Seung-Pil;Shin, Seung-Min;Park, Min-Young;Ban, Chang-Woo
    • Journal of the Korean Society of Industry Convergence / v.25 no.6_3 / pp.1147-1154 / 2022
  • Now that the development of autonomous driving is progressing, LiDAR has become an indispensable element. However, LiDAR is a device that uses lasers, and laser side effects may occur. One of them is the much-discussed eye-safety issue, which developers have addressed through laser characteristics and operating methods. But eye safety is just one of the problems lasers pose. For example, irradiating a laser at or above a specific energy level in a dusty environment can cause the dust particles to deteriorate, leading to a sudden explosion. For this reason, dust-ignition-proof regulations state that "a source with a pulse period of less than 5 seconds is considered a continuous light source, and the average energy must not exceed 5 mJ/mm² or 35 mW" [2]. The output optical power is thus limited by regulation. Since the manufacturer cannot define the usage environment of a LiDAR, developing a LiDAR that can be used in such environments can increase its ripple effect in application fields. In this paper, we develop a LiDAR with low optical power that can be used in environments where high-power lasers can cause problems, and evaluate its performance. We also discuss and present one direction for the development of LiDARs whose laser power is limited by dust-ignition-proof regulations.
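For reference, the quoted 35 mW average-power limit can be checked with simple arithmetic: a pulsed source treated as continuous has an average power equal to pulse energy times repetition rate. The pulse energy and repetition rate below are illustrative values, not those of the paper's system:

```python
# Average-power check against the quoted 35 mW dust-ignition-proof limit.
pulse_energy = 10e-9   # 10 nJ per pulse (assumed)
rep_rate = 100e3       # 100 kHz repetition rate (assumed)

avg_power = pulse_energy * rep_rate   # W; here 1e-3 W = 1 mW
within_limit = avg_power <= 35e-3     # True for these example values
```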

Loosely Coupled LiDAR-visual Mapping and Navigation of AMR in Logistic Environments (실내 물류 환경에서 라이다-카메라 약결합 기반 맵핑 및 위치인식과 네비게이션 방법)

  • Choi, Byunghee;Kang, Gyeongsu;Roh, Yejin;Cho, Younggun
    • The Journal of Korea Robotics Society / v.17 no.4 / pp.397-406 / 2022
  • This paper presents an autonomous mobile robot (AMR) system and operation algorithms for logistics and factory facilities that require no magnetic guide-line installation. Unlike widely used AMR systems, we propose an EKF-based loosely coupled fusion of LiDAR measurements and visual markers. Our method first constructs an occupancy grid map and a visual marker map in the mapping process and uses the prebuilt maps for precise localization. We also developed a waypoint-based navigation pipeline for robust autonomous operation in unconstrained environments. The proposed system estimates the robot pose by updating the state with the fused visual marker and LiDAR measurements. Finally, we tested the proposed method in indoor environments and existing factory facilities. The experimental results report the performance of our system compared with a well-known LiDAR-based localization and navigation system.
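A minimal sketch of an EKF-style loosely coupled update, odometry prediction corrected independently by a LiDAR-derived pose and a visual-marker pose, each with its own noise, might look like this. All noise values and measurements are invented for illustration; the paper's actual state vector and models may differ:

```python
import numpy as np

x = np.zeros(3)                   # robot pose state [x, y, yaw]
P = np.eye(3) * 0.1               # state covariance
Q = np.diag([0.02, 0.02, 0.01])   # odometry (process) noise (assumed)
R_lidar  = np.diag([0.05, 0.05, 0.02])    # LiDAR pose noise (assumed)
R_marker = np.diag([0.01, 0.01, 0.005])   # visual-marker pose noise (assumed)

def predict(x, P, u):
    """Propagate pose with a body-frame odometry increment u = [dx, dy, dyaw]."""
    c, s = np.cos(x[2]), np.sin(x[2])
    x_new = x + np.array([c * u[0] - s * u[1], s * u[0] + c * u[1], u[2]])
    F = np.array([[1, 0, -s * u[0] - c * u[1]],
                  [0, 1,  c * u[0] - s * u[1]],
                  [0, 0, 1]])              # Jacobian of the motion model
    return x_new, F @ P @ F.T + Q

def update(x, P, z, R):
    """Correct with a full pose measurement (H = I for a direct pose fix)."""
    K = P @ np.linalg.inv(P + R)           # Kalman gain
    return x + K @ (z - x), (np.eye(3) - K) @ P

# One prediction, then two independent (loosely coupled) corrections.
x, P = predict(x, P, u=[0.5, 0.0, 0.1])
x, P = update(x, P, z=np.array([0.52, 0.01, 0.09]), R=R_lidar)   # LiDAR fix
x, P = update(x, P, z=np.array([0.50, 0.00, 0.10]), R=R_marker)  # marker fix
```

The loose coupling is the key design choice here: each sensor produces its own pose estimate and is fused at the state level, so either correction can be skipped when its sensor drops out.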