• Title/Summary/Keyword: 3D LiDAR sensor


Key Point Extraction from LiDAR Data for 3D Modeling (3차원 모델링을 위한 라이다 데이터로부터 특징점 추출 방법)

  • Lee, Dae Geon;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.34 no.5, pp.479-493, 2016
  • LiDAR (Light Detection and Ranging) data acquired by ALS (Airborne Laser Scanning) have been used intensively to reconstruct object models. In particular, research on 3D modeling from LiDAR data has aimed to establish high-quality spatial information, such as precise 3D city models and true orthoimages, efficiently. Reconstructing object models from irregularly distributed LiDAR point clouds requires sensor calibration, noise removal, and filtering to separate objects from the ground surface as pre-processing steps. Classification and segmentation based on the geometric homogeneity of features, grouping and representation of the segmented surfaces, topological analysis of the surface patches, and accuracy assessment accompany the modeling procedure. While many modeling methods are based on a segmentation process, this paper proposes extracting key points directly for building modeling without segmentation. The method was applied to simulated and real data sets with various roof shapes, and the accuracy analysis demonstrates its feasibility.
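
The abstract does not spell out the key-point extraction algorithm itself, but the pre-processing it lists (noise removal and separating objects from the ground surface) can be sketched with Open3D as below. This is an illustration only, not the paper's method; the input file name, voxel size, and thresholds are assumptions.

```python
# Minimal ALS pre-processing sketch: statistical outlier removal followed by
# RANSAC ground-plane separation. "building_als.pcd" and all parameters are
# illustrative assumptions, not values from the paper.
import open3d as o3d

pcd = o3d.io.read_point_cloud("building_als.pcd")      # assumed ALS point cloud file
pcd = pcd.voxel_down_sample(voxel_size=0.2)            # thin the irregular points

# Remove isolated noise points
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Separate the ground surface from objects with a RANSAC plane fit
plane_model, inliers = pcd.segment_plane(distance_threshold=0.3,
                                         ransac_n=3,
                                         num_iterations=1000)
ground = pcd.select_by_index(inliers)
objects = pcd.select_by_index(inliers, invert=True)    # roofs, walls, vegetation
print(f"ground: {len(ground.points)} pts, objects: {len(objects.points)} pts")
```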

A LiDAR-based Visual Sensor System for Automatic Mooring of a Ship (선박 자동계류를 위한 LiDAR기반 시각센서 시스템 개발)

  • Kim, Jin-Man;Nam, Taek-Kun;Kim, Heon-Hui
    • Journal of the Korean Society of Marine Environment & Safety, v.28 no.6, pp.1036-1043, 2022
  • This paper discusses the development of a visual sensor that can be installed in an automatic mooring device to detect the berthing condition of a vessel. Although a ship's speed and position are controlled and checked to prevent accidents during berthing, collisions still occur at piers every year, causing great economic and environmental damage. It is therefore important to develop a visual system that can quickly obtain information on the speed and location of a vessel to ensure its safety while berthing. In this study, a visual sensor was developed to observe a ship through images during berthing and to properly assess its status under the surrounding environmental conditions. To establish the requirements of the sensor, its characteristics were analyzed against the information provided by existing sensors, namely detection range, real-time capability, accuracy, and precision. Based on this analysis, we developed a 3D visual module that can acquire object information in real time through the conceptual design of a LiDAR (Light Detection and Ranging) type 3D visual system, a driving mechanism, and a position and force controller for the motion-tilting system. Finally, a performance evaluation of the control system and a scan-speed test were carried out, and the effectiveness of the developed system was confirmed through experiments.
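
As a toy illustration of the kind of information such a sensor provides (berthing distance and approach speed), the sketch below estimates both from two successive LiDAR scans. It is not the paper's implementation; the scan arrays and scan interval are placeholder assumptions.

```python
# Illustrative only: berthing distance and approach speed from two LiDAR scans.
import numpy as np

def hull_distance(scan_xyz: np.ndarray) -> float:
    """Nearest horizontal range to the hull in one scan."""
    return float(np.min(np.linalg.norm(scan_xyz[:, :2], axis=1)))

dt = 0.1                                        # assumed scan interval [s]
scan_t0 = np.random.rand(1000, 3) * 20 + 5      # placeholder point clouds
scan_t1 = scan_t0 - np.array([0.05, 0.0, 0.0])  # hull slightly closer in next scan

d0, d1 = hull_distance(scan_t0), hull_distance(scan_t1)
approach_speed = (d0 - d1) / dt                 # positive while the ship closes in
print(f"distance: {d1:.2f} m, approach speed: {approach_speed:.2f} m/s")
```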

EMOS: Enhanced moving object detection and classification via sensor fusion and noise filtering

  • Dongjin Lee;Seung-Jun Han;Kyoung-Wook Min;Jungdan Choi;Cheong Hee Park
    • ETRI Journal, v.45 no.5, pp.847-861, 2023
  • Dynamic object detection is essential for safe and reliable autonomous driving. Recently, light detection and ranging (LiDAR)-based object detection has been introduced and has shown excellent performance on various benchmarks. Although LiDAR sensors estimate distance with excellent accuracy, they lack texture or color information and have a lower resolution than conventional cameras. In addition, performance degrades when a LiDAR-based object detection model is applied to different driving environments or when sensors from different LiDAR manufacturers are used, owing to the domain gap phenomenon. To address these issues, a sensor-fusion-based object detection and classification method is proposed. The proposed method operates in real time, making it suitable for integration into autonomous vehicles. It performs well on our custom dataset and on publicly available datasets, demonstrating its effectiveness in real-world road environments. In addition, we will make available a novel three-dimensional moving object detection dataset called ETRI 3D MOD.
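
A common building block of camera-LiDAR fusion, which the abstract refers to only in general terms, is projecting LiDAR points into the camera image so that color and texture can be attached to them. The sketch below shows that step with assumed intrinsic (K) and extrinsic (T) matrices; it does not reproduce the paper's fusion pipeline.

```python
# Hedged sketch: project LiDAR points into a camera image. K and T are assumed
# calibration matrices, and the point cloud is a placeholder.
import numpy as np

def project_lidar_to_image(points_xyz, K, T):
    """Return pixel coordinates and depths of LiDAR points in front of the camera."""
    pts_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])  # homogeneous
    cam = (T @ pts_h.T).T[:, :3]                                    # LiDAR -> camera frame
    cam = cam[cam[:, 2] > 0]                                        # keep points ahead of camera
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                                     # perspective divide
    return uv, cam[:, 2]

K = np.array([[700.0, 0.0, 640.0],
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])               # assumed camera intrinsics
T = np.eye(4)                                 # assumed LiDAR-to-camera extrinsics
points = np.random.rand(100, 3) * 30          # placeholder LiDAR points
uv, depth = project_lidar_to_image(points, K, T)
```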

A Study on the Method of Non-Standard Cargo Volume Calculation Based on LiDar Sensor for Cargo Loading Optimization (화물 선적 최적화를 위한 LiDar 센서 기반 비규격 화물 체적산출 방법 연구)

  • Jeon, Young Joon;Kim, Ye Seul;Ahn, Sun Kyu;Jeong, Seok Chan
    • Journal of Korea Multimedia Society, v.25 no.4, pp.559-567, 2022
  • The optimal loading location is determined by measuring the volume and weight of cargo loaded onto non-standard cargo carriers. Currently, workers measure cargo volume manually, and automating this task would remove the inefficiency of that work. In this paper, we propose a method for real-time volume calculation using a LiDAR sensor to automate the measurement of non-standard cargo. We use statistical techniques for data preprocessing and volume calculation, and apply a voxel grid filter to reduce the amount of data, which is appropriate for real-time calculation. We implemented normal vector estimation and triangle meshing to generate surfaces, and used the alpha shapes algorithm for 3D modeling.
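
The processing chain named in the abstract (voxel grid filtering, normal estimation, triangle meshing via alpha shapes, volume calculation) can be assembled from Open3D primitives, as sketched below. The input file and all parameter values are assumptions, not the authors' settings.

```python
# Sketch of voxel downsampling -> normal estimation -> alpha-shape surface -> volume.
# "cargo_scan.pcd", the voxel size, and alpha are illustrative assumptions.
import open3d as o3d

pcd = o3d.io.read_point_cloud("cargo_scan.pcd")        # assumed LiDAR scan of one cargo item
pcd = pcd.voxel_down_sample(voxel_size=0.02)           # lighten the data for real-time use
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_alpha_shape(pcd, alpha=0.05)
mesh.compute_vertex_normals()
if mesh.is_watertight():
    print(f"estimated cargo volume: {mesh.get_volume():.4f} m^3")
else:
    print("mesh not watertight; adjust alpha before computing volume")
```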

3D Stereo Display of Spatial Data from Various Sensors (다양한 센서로부터 획득한 공간데이터의 3D 입체 디스플레이)

  • Park, So-Young;Yun, Seong-Goo;Lee, Young-Wook;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.28 no.6, pp.669-676, 2010
  • Visualization is required for effective analysis of the spatial data collected by various sensors. The best way to convey 3D digital spatial information, which models the real world, to users is realistic 3D visualization and display technology. Since most displays are based on 2D or 2.5D projection onto a plane, they are limited in representing the real world in 3D space. In this paper, data from airborne LiDAR for topographic mapping, flash LiDAR as an emerging sensor with great potential for 3D data acquisition, and a multibeam echo sounder for underwater measurement were visualized stereoscopically. 3D monitors are becoming popular and could serve as an information medium and platform in geoinformatics. Therefore, research on creating 3D stereoscopic content from spatial information is essential for the new technology of stereo viewing systems.

A Study on the Application Technique of 3-D Spatial Information by integration of Aerial photos and Laser data (항공사진과 레이져 데이터의 통합에 의한 3 차원 공간정보 활용기술연구)

  • Yeon, Sang-Ho
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.28 no.3, pp.385-392, 2010
  • LiDAR has the merit that surveyors can quickly obtain a large number of high-precision measurements. Aerial photos and satellite sensor images are used to generate 3D spatial images that are matched with map coordinates and elevation data from digital topographic files. Those images are also matched with 3D spatial image content through perspective views composed along designated roads until the corresponding location is reached. Recently, 3D aerial images can be generated from various digital data. Advanced geographical methods for guidance along the road to a destination are tested in a GIS environment. Additional information and access guidance are provided through multimedia content on the Internet or at public tour information desks using the simulated images. The LiDAR-based height data are transformed into a DEM, and vector data from digital image mapping and raster data from elevation extraction are unified in real time to trace the generated 3D models of downtown buildings over long distances for 3D tract model generation.
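
The abstract mentions transforming LiDAR height data into a DEM but does not detail the workflow. Below is a minimal gridding sketch, under an assumed cell size, that bins scattered (x, y, z) returns into a raster; the data are placeholders.

```python
# Minimal sketch: grid LiDAR height points into a raster by cell-wise maximum
# (a surface-model style grid). Cell size and input points are assumptions.
import numpy as np

def points_to_dem(xyz: np.ndarray, cell: float = 1.0) -> np.ndarray:
    """Grid scattered (x, y, z) points into a raster with cell size `cell` metres."""
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    cols = ((x - x.min()) / cell).astype(int)
    rows = ((y - y.min()) / cell).astype(int)
    dem = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    for r, c, h in zip(rows, cols, z):
        if np.isnan(dem[r, c]) or h > dem[r, c]:   # keep the highest return per cell
            dem[r, c] = h
    return dem

lidar_points = np.random.rand(5000, 3) * [100, 100, 30]   # placeholder data
dem = points_to_dem(lidar_points, cell=1.0)
```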

AUTOMATIC ROAD NETWORK EXTRACTION USING LIDAR RANGE AND INTENSITY DATA

  • Kim, Moon-Gie;Cho, Woo-Sug
    • Proceedings of the KSRS Conference, 2005.10a, pp.79-82, 2005
  • The need for road data continues to grow in industrial society, and roads are being repaired and newly constructed in many areas. As national, city, and regional development progresses, updating and acquiring road data for GIS (Geographic Information System) is essential. In this study, range data (3D ground coordinate data) and intensity data from stand-alone LiDAR are fused for road extraction, after which digital image processing methods are applied. LiDAR intensity data have only recently begun to be studied, and this study shows the feasibility of road extraction using intensity data. Because intensity and range data are acquired at the same time, LiDAR avoids the problems of multi-sensor data fusion. A further advantage of the intensity data is that they are already geocoded, at the same scale as the real world, and can be used to produce orthophotos. Lastly, quantitative and qualitative analyses are presented by comparing the extracted road image with a 1:1,000 digital map.
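
The paper's exact image processing chain is not given in the abstract, but intensity-based road extraction is commonly sketched as thresholding the low reflectance of asphalt in a geocoded intensity raster and cleaning the result morphologically. The thresholds and grid below are assumptions for illustration only.

```python
# Illustrative intensity-threshold road mask with morphological clean-up.
# Thresholds and the intensity raster are assumed values.
import numpy as np
from scipy import ndimage

def extract_road_mask(intensity_grid: np.ndarray,
                      low: float = 5.0, high: float = 40.0) -> np.ndarray:
    """Return a binary road mask from a geocoded intensity raster."""
    candidate = (intensity_grid >= low) & (intensity_grid <= high)
    cleaned = ndimage.binary_opening(candidate, structure=np.ones((3, 3)))
    cleaned = ndimage.binary_closing(cleaned, structure=np.ones((5, 5)))
    return cleaned

intensity = np.random.rand(200, 200) * 255      # placeholder intensity raster
road_mask = extract_road_mask(intensity)
```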


Spherical Point Tracing for Synthetic Vehicle Data Generation with 3D LiDAR Point Cloud Data (3차원 LiDAR 점군 데이터에서의 가상 차량 데이터 생성을 위한 구면 점 추적 기법)

  • Sangjun Lee;Hakil Kim
    • Journal of Broadcast Engineering, v.28 no.3, pp.329-332, 2023
  • 3D object detection using deep neural networks has been developed extensively for obstacle detection in autonomous vehicles because it can recognize not only the class of a target object but also the distance to it. However, 3D object detection models detect distant objects less reliably than nearby objects, which is a critical issue for autonomous vehicles. In this paper, we introduce a technique that improves the performance of 3D object detection models, particularly in recognizing distant objects, by generating virtual 3D vehicle data and adding it to the training dataset. We used a spherical point tracing method that leverages the characteristics of 3D LiDAR sensor data to create virtual vehicles that closely resemble real ones, and we demonstrated the validity of the virtual data by using it in model training to improve recognition performance for objects at all distances.
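
The core idea described in the abstract, representing points in sensor-centred spherical coordinates so a template vehicle can be re-placed along the sensor's discrete beams, can be sketched as below. The resampling and occlusion handling of the actual method are omitted; the angular resolutions, shift, and template points are assumptions.

```python
# Hedged sketch: move a template vehicle and snap its points onto an assumed
# LiDAR beam grid in spherical coordinates. All values are illustrative.
import numpy as np

def to_spherical(xyz):
    r = np.linalg.norm(xyz, axis=1)
    az = np.arctan2(xyz[:, 1], xyz[:, 0])
    el = np.arcsin(xyz[:, 2] / r)
    return r, az, el

def to_cartesian(r, az, el):
    return np.stack([r * np.cos(el) * np.cos(az),
                     r * np.cos(el) * np.sin(az),
                     r * np.sin(el)], axis=1)

def place_virtual_vehicle(template_xyz, shift, az_res=0.2, el_res=0.4):
    """Translate a template vehicle and quantize its points to an assumed
    sensor grid with azimuth/elevation resolution in degrees."""
    moved = template_xyz + np.asarray(shift)
    r, az, el = to_spherical(moved)
    az_q = np.round(np.degrees(az) / az_res) * az_res
    el_q = np.round(np.degrees(el) / el_res) * el_res
    return to_cartesian(r, np.radians(az_q), np.radians(el_q))

template_vehicle = np.random.rand(500, 3) + [10.0, 0.0, 0.0]   # placeholder nearby vehicle
virtual_far = place_virtual_vehicle(template_vehicle, shift=[40.0, 0.0, 0.0])
```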

A New Object Region Detection and Classification Method using Multiple Sensors on the Driving Environment (다중 센서를 사용한 주행 환경에서의 객체 검출 및 분류 방법)

  • Kim, Jung-Un;Kang, Hang-Bong
    • Journal of Korea Multimedia Society, v.20 no.8, pp.1271-1281, 2017
  • Collecting and analyzing information on targets around the vehicle is essential for autonomous driving. Based on this analysis, environmental information such as location and direction must be derived in real time to control the vehicle. In particular, occlusion or truncation of objects in the image must be handled to provide accurate information about the vehicle's environment and to facilitate safe operation. In this paper, we propose a method that simultaneously generates 2D and 3D bounding box proposals using a LiDAR edge obtained by filtering LiDAR sensor data. We classify each proposal by connecting it to a Region-based Fully Convolutional Network (R-FCN), a deep-learning-based object classifier that takes two-dimensional images as input. Each 3D box is then refined using the class label and subcategory information to complete the 3D bounding box corresponding to the object. Because the 3D bounding boxes are created in 3D space, object information such as spatial coordinates and object size is obtained at once, and the 2D bounding boxes associated with the 3D boxes do not suffer from problems such as occlusion.
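
One way to tie a 3D proposal to a 2D proposal, as the abstract describes in outline, is to project the eight corners of the 3D box into the image and take their bounding rectangle. The sketch below shows that step with assumed calibration matrices; it is not the paper's pipeline.

```python
# Minimal sketch: associate a 3D box with a 2D box by projecting its corners.
# K, T, and the box dimensions are assumed values.
import itertools
import numpy as np

def box3d_corners(center, size):
    """Eight corners of an axis-aligned 3D box (center and size in metres)."""
    offsets = np.array(list(itertools.product([-0.5, 0.5], repeat=3)))
    return center + offsets * size

def project_to_2d_box(corners, K, T):
    pts_h = np.hstack([corners, np.ones((8, 1))])
    cam = (T @ pts_h.T).T[:, :3]                   # camera looks along +z here
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    return uv.min(axis=0), uv.max(axis=0)          # (u_min, v_min), (u_max, v_max)

K = np.array([[700.0, 0.0, 640.0], [0.0, 700.0, 360.0], [0.0, 0.0, 1.0]])
T = np.eye(4)                                      # assumed LiDAR-to-camera transform
corners = box3d_corners(np.array([1.0, 0.5, 12.0]), np.array([1.8, 1.6, 4.2]))
top_left, bottom_right = project_to_2d_box(corners, K, T)
```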

Development of small multi-copter system for indoor collision avoidance flight (실내 비행용 소형 충돌회피 멀티콥터 시스템 개발)

  • Moon, Jung-Ho
    • Journal of Aerospace System Engineering, v.15 no.1, pp.102-110, 2021
  • Recently, multi-copters equipped with various collision-avoidance sensors have been introduced to improve flight stability. LiDAR is used to recognize a three-dimensional position, and multiple cameras with real-time SLAM technology are also used to calculate the relative position to obstacles, as are three-dimensional depth sensors with a small processor and camera. In this study, a small collision-avoidance multi-copter system capable of indoor flight was developed as a platform for developing collision-avoidance software. The multi-copter was equipped with a LiDAR, a 3D depth sensor, and a small image processing board. Object recognition and collision-avoidance functions based on the YOLO algorithm were verified through flight tests. This paper covers recent trends in drone collision-avoidance technology, the system design and manufacturing process, and the flight test results.
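
As a toy illustration of a range-based avoidance rule of the kind such a platform might use (the system's actual controller is not described in the abstract), the sketch below issues a stop or turn command when any LiDAR return inside a forward sector falls below a threshold. Sector width, threshold, and the scan are assumed values.

```python
# Illustrative only: simple forward-sector collision-avoidance decision.
import numpy as np

def avoid_command(ranges_m, angles_deg, sector=30.0, stop_dist=1.5):
    """Return 'stop', 'turn', or 'forward' from a 2D LiDAR scan."""
    ranges_m = np.asarray(ranges_m)
    angles_deg = np.asarray(angles_deg)
    ahead = np.abs(angles_deg) <= sector / 2      # returns within the forward sector
    if not ahead.any():
        return "forward"
    nearest = ranges_m[ahead].min()
    if nearest < stop_dist:
        return "stop"
    if nearest < 2 * stop_dist:
        return "turn"
    return "forward"

angles = np.linspace(-180, 180, 360)
ranges = np.full(360, 8.0)
ranges[175:185] = 1.0                 # obstacle straight ahead in this toy scan
print(avoid_command(ranges, angles))  # -> "stop"
```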