• Title/Summary/Keyword: LIDAR sensor

108 search results

A study on the alignment of data from different sensors using aerial images and lidar data (항공영상과 라이다 자료를 이용한 이종센서 자료간의 alignment에 관한 연구)

  • 곽태석;이재빈;조현기;김용일
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2004.11a / pp.257-262 / 2004
  • The purpose of data fusion is to obtain the maximum amount of information by combining data acquired from two or more sensor systems of the same or different kinds. Fusion of data from the same kind of sensor system, such as optical imagery, has long been the focus, but recently LIDAR has emerged as a new technology for rapidly capturing data on physical surfaces, with highly accurate results derived from the LIDAR data. Considering the nature of aerial imagery and LIDAR data, it is clear that the two systems provide complementary information. Data fusion consists of two steps, alignment and matching; the complementary information can only be fully utilized after successful alignment of the aerial imagery and LIDAR data. In this research, the centroids of buildings extracted from LIDAR data are used as control information for estimating the exterior orientation parameters of aerial imagery relative to the LIDAR reference frame.
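
    The control primitive described above, the centroid of a building extracted from LIDAR returns, can be sketched as a simple mean over the segmented roof points. This is a minimal illustration assuming the segmentation is already given, not the authors' implementation:

    ```python
    def centroid(points):
        """Mean of a list of (x, y, z) LIDAR points, usable as a control
        point relating the image and LIDAR reference frames."""
        n = len(points)
        if n == 0:
            raise ValueError("empty point set")
        sx = sum(p[0] for p in points)
        sy = sum(p[1] for p in points)
        sz = sum(p[2] for p in points)
        return (sx / n, sy / n, sz / n)

    # Hypothetical example: four corner returns of a flat roof at 30 m elevation
    roof = [(0.0, 0.0, 30.0), (10.0, 0.0, 30.0), (10.0, 8.0, 30.0), (0.0, 8.0, 30.0)]
    print(centroid(roof))  # (5.0, 4.0, 30.0)
    ```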


Map Error Measuring Mechanism Design and Algorithm Robust to Lidar Sparsity (라이다 점군 밀도에 강인한 맵 오차 측정 기구 설계 및 알고리즘)

  • Jung, Sangwoo; Jung, Minwoo; Kim, Ayoung
    • The Journal of Korea Robotics Society / v.16 no.3 / pp.189-198 / 2021
  • In this paper, we introduce a software/hardware system that can reliably calculate the distance from the sensor to a model regardless of point cloud density. As 3D point cloud maps are widely adopted for SLAM and computer vision, the accuracy of the point cloud map is of great importance. However, the 3D point cloud map obtained from LIDAR may exhibit different point cloud densities depending on the choice of sensor, the measurement distance, and the object shape. Currently, when measuring map accuracy, highly reflective bands are used to generate specific points in the point cloud map, at which distances are measured manually. This manual process is time- and labor-consuming and is highly affected by the LIDAR sparsity level. To overcome these problems, this paper presents a hardware design that leverages high-intensity points from three planar surfaces. Furthermore, by calculating the distance from the sensor to the device, we verified through tests with an RGB-D camera and LIDAR that the automated method is much faster than the manual procedure and robust to sparsity. Experiments using a LIDAR sensor in an outdoor environment further show that the system's performance is not limited to indoor environments.
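
    The automated measurement described, picking out high-intensity returns on a reflective planar target and computing the sensor-to-target distance, can be sketched as follows. The intensity threshold and data layout are assumptions for illustration:

    ```python
    import math
    from statistics import mean

    def target_distance(points, intensity_threshold=200.0):
        """Average Euclidean distance from the sensor origin to the
        high-intensity returns of a reflective planar target.
        points: iterable of (x, y, z, intensity) tuples."""
        ranges = [math.dist((0, 0, 0), p[:3])
                  for p in points if p[3] >= intensity_threshold]
        if not ranges:
            raise ValueError("no high-intensity returns found")
        return mean(ranges)

    # Three bright returns on a target ~2 m away, one dim background return
    scan = [(2.0, 0.1, 0.0, 250), (2.0, -0.1, 0.0, 240),
            (2.0, 0.0, 0.1, 255), (5.0, 1.0, 0.0, 30)]
    print(round(target_distance(scan), 3))
    ```

    The dim background return is excluded by the intensity filter, so the estimate depends only on the target points, which is what makes the method robust to overall cloud sparsity.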

Error Analysis and Modeling of Airborne LIDAR System (항공라이다시스템의 오차분석 및 모델링)

  • Yoo Byoung-Min; Lee Im-Pyeong; Kim Seong-Joon; Kang In-Ku
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2006.04a / pp.199-204 / 2006
  • Airborne LIDAR systems have been increasingly used in various applications as an effective surveying means that can complement or replace traditional methods based on aerial photos. A LIDAR system is a multi-sensor system consisting of GPS, an INS, and a laser scanner, and hence the errors in LIDAR data can be significantly affected not only by the errors of each individual sensor but also by the errors involved in combining these sensors. These errors have been analyzed by some researchers, but the results are not yet sufficient to contribute critically to accurate calibration of LIDAR data. In this study, we therefore analyze these error sources, derive their mathematical models, and perform a sensitivity analysis to assess how significantly each error affects the LIDAR data. The results of this sensitivity analysis in particular can be used effectively to determine the main parameters modelling the systematic errors in LIDAR data for their calibration.
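
    One element of such a sensitivity analysis, how a small attitude (boresight) angle error propagates into horizontal ground-point error of roughly range × angle, can be illustrated with a simplified 2D georeferencing equation. This is a toy model, not the paper's full error model:

    ```python
    import math

    def ground_point(sensor_xy, heading_rad, slant_range):
        """Simplified 2D LIDAR georeferencing: sensor position plus the
        slant range along the attitude-derived look direction."""
        x0, y0 = sensor_xy
        return (x0 + slant_range * math.cos(heading_rad),
                y0 + slant_range * math.sin(heading_rad))

    # Sensitivity to a small angular error, by finite difference
    r = 1000.0                    # 1 km slant range
    d_theta = math.radians(0.01)  # 0.01 degree attitude error
    p_true = ground_point((0.0, 0.0), 0.0, r)
    p_err = ground_point((0.0, 0.0), d_theta, r)
    shift = math.dist(p_true, p_err)
    print(f"horizontal shift: {shift:.3f} m")  # ≈ r * d_theta ≈ 0.175 m
    ```

    Because the shift scales with range, the same attitude error matters far more at high flying heights, which is why attitude terms tend to dominate such sensitivity analyses.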


A Low-Cost Lidar Sensor based Glass Feature Extraction Method for an Accurate Map Representation using Statistical Moments (통계적 모멘트를 이용한 정확한 환경 지도 표현을 위한 저가 라이다 센서 기반 유리 특징점 추출 기법)

  • An, Ye Chan; Lee, Seung Hwan
    • The Journal of Korea Robotics Society / v.16 no.2 / pp.103-111 / 2021
  • This study addresses a low-cost lidar sensor-based glass feature extraction method for accurate map representation using statistical moments, i.e., the mean and variance. Since a low-cost lidar sensor produces range-only data without intensity or multi-echo data, there are difficulties in detecting glass-like objects. In this study, the principle that glass is detectable when the incidence angle of a ray emitted from the lidar with respect to the glass surface is close to zero degrees is exploited. In addition, all sensor data are preprocessed and clustered, and each cluster is represented using statistical moments as a glass feature candidate. Glass features are selected among the candidates according to several conditions based on this principle and on geometric relations in the global coordinate system. The accumulated glass features are classified according to distance and finally represented on the map. Several experiments were conducted in glass environments. The results showed that the proposed method accurately extracted and represented glass windows using proper parameters. The parameters were empirically designed and carefully analyzed. In future work, we will implement and evaluate conventional SLAM algorithms combined with our glass feature extraction method in glass environments.
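
    The statistical-moment representation of a clustered scan segment, the mean and variance of its measured ranges, can be sketched like this. The thresholds are illustrative assumptions, not the paper's tuned parameters:

    ```python
    from statistics import mean, pvariance

    def cluster_moments(ranges):
        """Represent a cluster of consecutive range readings by its
        statistical moments (mean, variance)."""
        return mean(ranges), pvariance(ranges)

    def is_glass_candidate(ranges, max_variance=0.001):
        """A glass surface seen near zero incidence angle tends to return a
        short, flat run of ranges, i.e. low variance across the cluster."""
        _, var = cluster_moments(ranges)
        return len(ranges) >= 3 and var <= max_variance

    wall = [2.00, 2.05, 2.11, 2.18, 2.26]  # oblique wall: ranges drift
    glass = [1.500, 1.501, 1.499, 1.500]   # near-perpendicular glass: flat
    print(is_glass_candidate(wall), is_glass_candidate(glass))  # False True
    ```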

Error Correction Technique of Distance Measurement for ToF LIDAR Sensor

  • Moon, Yeon-Kug; Shim, Young Bo; Song, Hyoung-Kyu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.2 / pp.960-973 / 2018
  • This paper presents the design of an error-correcting algorithm for the time-of-flight (ToF) detection value in a light detection and ranging (LIDAR) sensor system. The walk error of the ToF value is generated by changes in the received signal power depending on the distance between the LIDAR sensor and the object. The proposed method efficiently compensates for the ToF error by calculating the ToF independently from the received signal using both the rising point and the falling point. A constant error of ~0.05 m is obtained after the walk error correction, whereas an error increasing up to ~1 m is obtained with the conventional method.
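
    The core idea, that the midpoint of the rising and falling threshold crossings stays fixed while a single-edge crossing walks with received amplitude, can be illustrated with a simulated symmetric return pulse. The triangular pulse shape and threshold are illustrative assumptions:

    ```python
    def crossings(t_center, amplitude, half_width, threshold):
        """Rising/falling times where a symmetric triangular pulse of the
        given amplitude crosses a fixed detection threshold.
        Solves amplitude * (1 - |t - t_center| / half_width) = threshold."""
        dt = half_width * (1.0 - threshold / amplitude)
        return t_center - dt, t_center + dt

    threshold = 0.2
    # Same true ToF (100 ns), different received powers (near vs far target)
    for amp in (1.0, 0.3):
        t_rise, t_fall = crossings(100.0, amp, 5.0, threshold)
        conventional = t_rise               # rising edge only: walks with amplitude
        corrected = 0.5 * (t_rise + t_fall) # midpoint: amplitude-independent
        print(amp, round(conventional, 3), round(corrected, 3))
    ```

    The rising-edge estimate moves from 96.0 ns to ~98.3 ns as the pulse weakens, while the midpoint stays at 100 ns, which is the walk cancellation the paper exploits.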

A Development of Effective Object Detection System Using Multi-Device LiDAR Sensor in Vehicle Driving Environment (차량주행 환경에서 다중라이다센서를 이용한 효과적인 검출 시스템 개발)

  • Kwon, Jin-San; Kim, Dong-Sun; Hwang, Tae-Ho; Park, Hyun-Moon
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.13 no.2 / pp.313-320 / 2018
  • The importance of sensors on a self-driving vehicle has risen, since they act as the vehicle's eyes. Lidar sensors based on laser technology tend to yield better image quality with more laser channels, and thus higher detection accuracy for obstacles, pedestrians, terrain, and other vehicles. However, incorporating more laser channels raises the unit price more than tenfold, which is a major obstacle to using high-channel lidar sensors on vehicles for the consumer market. To overcome this drawback, we propose a method of integrating multiple low-channel, low-cost lidar sensors so that they act as one high-channel sensor. The result uses four small-form-factor 16-channel lidar sensors acting as one bulky 64-channel sensor, which in turn improves the vehicle's cosmetic aspects and helps spread the use of lidar technology in the market.
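
    Fusing several low-channel sensors into one virtual high-channel sensor amounts to transforming each unit's points by its mounting extrinsics (rotation plus translation) into a common vehicle frame and concatenating. A minimal 2D-yaw sketch, with made-up mounting poses:

    ```python
    import math

    def to_vehicle_frame(points, yaw_rad, tx, ty):
        """Transform sensor-frame (x, y, z) points into the vehicle frame
        given the sensor's mounting yaw and planar offset."""
        c, s = math.cos(yaw_rad), math.sin(yaw_rad)
        return [(c * x - s * y + tx, s * x + c * y + ty, z)
                for x, y, z in points]

    def merge_clouds(sensor_data):
        """sensor_data: list of (points, yaw_rad, tx, ty) per LiDAR unit."""
        merged = []
        for points, yaw, tx, ty in sensor_data:
            merged.extend(to_vehicle_frame(points, yaw, tx, ty))
        return merged

    # Hypothetical setup: two 16-channel units on the front corners, yawed ±45°
    front_left = ([(1.0, 0.0, 0.2)], math.radians(45), 1.5, 0.8)
    front_right = ([(1.0, 0.0, 0.2)], math.radians(-45), 1.5, -0.8)
    cloud = merge_clouds([front_left, front_right])
    print(len(cloud))  # 2
    ```

    In practice the extrinsics come from calibration and the transform is a full 3D rotation, but the merge itself is exactly this per-sensor transform-and-concatenate.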

Depthmap Generation with Registration of LIDAR and Color Images with Different Field-of-View (다른 화각을 가진 라이다와 칼라 영상 정보의 정합 및 깊이맵 생성)

  • Choi, Jaehoon; Lee, Deokwoo
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.6 / pp.28-34 / 2020
  • This paper proposes an approach to the fusion of two heterogeneous sensors with different fields of view (FOV): a LIDAR and an RGB camera. Registration between the data captured by the LIDAR and the RGB camera provides the fusion result, and registration is complete once a depthmap corresponding to the 2-dimensional RGB image has been generated. For this fusion, an RPLIDAR-A3 (manufactured by Slamtec) and a general digital camera were used to acquire depth and image data, respectively. The LIDAR sensor provides distance information between the sensor and objects in the nearby scene, and the RGB camera provides a 2-dimensional image with color information. Fusing the 2D image with depth information enables better performance in applications such as object detection and tracking; automatic driver assistance systems, robotics, or other systems that require visual information processing may find this work useful. Since the LIDAR only provides depth values, processing to generate a depthmap that corresponds to the RGB image is needed. Experimental results are provided to validate the proposed approach.
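
    Generating a depthmap registered to the RGB image boils down to projecting each LIDAR point through the camera's pinhole intrinsics. A minimal sketch, in which the intrinsics and the assumed prior extrinsic alignment into the camera frame are hypothetical:

    ```python
    def project_to_depthmap(points, fx, fy, cx, cy, width, height):
        """Project camera-frame 3D points (X right, Y down, Z forward, metres)
        into a sparse depthmap keyed by integer pixel (u, v)."""
        depthmap = {}
        for X, Y, Z in points:
            if Z <= 0:
                continue  # behind the camera
            u = int(round(fx * X / Z + cx))
            v = int(round(fy * Y / Z + cy))
            if 0 <= u < width and 0 <= v < height:
                # keep the nearest return per pixel
                if (u, v) not in depthmap or Z < depthmap[(u, v)]:
                    depthmap[(u, v)] = Z
        return depthmap

    # A point 4 m ahead and 1 m to the right, with toy intrinsics
    dm = project_to_depthmap([(1.0, 0.0, 4.0)], fx=500, fy=500,
                             cx=320, cy=240, width=640, height=480)
    print(dm)  # {(445, 240): 4.0}
    ```

    The resulting map is sparse because a scanning LIDAR covers far fewer samples than the image has pixels; dense depthmaps require interpolation on top of this projection.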

Automatic Building Extraction Using LIDAR and Aerial Image (LIDAR 데이터와 수치항공사진을 이용한 건물 자동추출)

  • Jeong, Jae-Wook; Jang, Hwi-Jeong; Kim, Yu-Seok; Cho, Woo-Sug
    • Journal of Korean Society for Geospatial Information Science / v.13 no.3 s.33 / pp.59-67 / 2005
  • Building information is a primary source in many applications such as mapping, telecommunication, car navigation, and virtual city modeling. While aerial CCD images, captured by a passive sensor (digital camera), provide horizontal positioning with high accuracy, it is far more difficult to process them automatically due to inherent properties such as perspective projection and occlusion. On the other hand, a LIDAR system offers 3D information about each surface rapidly and accurately in the form of irregularly distributed point clouds; in contrast to optical images, however, it is much more difficult to obtain semantic information such as building boundaries and object segmentation. Photogrammetry and LIDAR each have major advantages and drawbacks for reconstructing the earth's surface. The purpose of this investigation is to obtain spatial information about 3D buildings automatically by fusing LIDAR data with aerial CCD images. The experimental results show that most of the complex buildings are efficiently extracted by the proposed method, and indicate that fusing LIDAR data and aerial CCD images improves the feasibility of automatic building detection and extraction.


Implementation of an Obstacle Avoidance System Based on a Low-cost LiDAR Sensor for Autonomous Navigation of an Unmanned Ship (무인선박의 자율운항을 위한 저가형 LiDAR센서 기반의 장애물 회피 시스템 구현)

  • Song, HyunWoo; Lee, Kwangkook; Kim, Dong Hun
    • The Transactions of The Korean Institute of Electrical Engineers / v.68 no.3 / pp.480-488 / 2019
  • In this paper, we propose an obstacle avoidance system that allows an unmanned ship to navigate safely in dynamic environments. A low-cost one-dimensional lidar sensor is used, and a servo motor sweeps it so that it covers a two-dimensional space; the distance and direction of an obstacle are measured through this two-dimensional lidar scan. The unmanned ship is controlled through an application on a tablet PC: the user inputs the coordinates of the destination in Google Maps, and the position of the unmanned ship is compared with the position of the destination using GPS and a geomagnetic sensor. If the unmanned ship encounters obstacles while moving to its destination, it avoids them through a fuzzy control-based algorithm. The experimental results show that an obstacle avoidance system for an unmanned ship can be constructed effectively with a low-cost LiDAR sensor using fuzzy control.
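
    Sweeping a one-dimensional range finder with a servo yields polar samples (servo angle, range) that convert to 2D obstacle coordinates in the ship frame; the avoidance controller then reacts to the nearest obstacle. A minimal sketch of that front end (the fuzzy controller itself is not reproduced here, and the range limit is an assumption):

    ```python
    import math

    def sweep_to_points(samples, max_range=10.0):
        """Convert (servo_angle_deg, range_m) samples from a servo-swept 1D
        LiDAR into 2D (x, y) obstacle points; out-of-range returns are dropped."""
        points = []
        for angle_deg, r in samples:
            if 0.0 < r < max_range:
                a = math.radians(angle_deg)
                points.append((r * math.cos(a), r * math.sin(a)))
        return points

    def nearest_obstacle(points):
        """Closest obstacle, i.e. the one the avoidance logic reacts to first."""
        return min(points, key=lambda p: math.hypot(p[0], p[1]))

    samples = [(-30, 6.0), (0, 3.0), (30, 12.0)]  # 12 m reading exceeds max range
    pts = sweep_to_points(samples)
    print(nearest_obstacle(pts))  # (3.0, 0.0)
    ```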

TROPICAL TREE MORPHOLOGY USING AIRBORNE LIDAR DATA

  • JANG, Jae-Dong; Yoon, Hong-Joo
    • Proceedings of the KSRS Conference / v.2 / pp.676-679 / 2006
  • Mangrove crowns were delineated from active-sensor LIDAR (LIght Detection And Ranging) data using a crown delineation model developed in this study. The LIDAR data were acquired from an airborne survey by helicopter over the estuary of Macouria on the northeast coast of French Guiana. A canopy height image was derived from the LIDAR vector data by calculating the difference between ground and non-ground data. The mangrove site in the study area was classified into three sectors by the time of mangrove settlement: Mangrove 1986, 2002, and 2003. The estimated crowns of Mangrove 1986 were reliably delineated in size, number, and volume because of the larger crown sizes and larger variation in crown height. The tree crown sizes of Mangrove 2002 and 2003 were overestimated by the model, and the number of trees was much underestimated; the estimated crowns represented not single crowns but crown groups, due to the homogeneous crown heights and the spatial resolution of the LIDAR data. Nevertheless, the canopy height image derived from the LIDAR data provided three-dimensional information about the mangroves.
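
    The canopy height image, the difference between non-ground (canopy) and ground returns, can be sketched as a per-grid-cell subtraction. The grid size and data layout are assumptions for illustration:

    ```python
    from collections import defaultdict

    def canopy_height_model(ground_pts, canopy_pts, cell=1.0):
        """Per-cell canopy height: highest non-ground return minus lowest
        ground return in the same grid cell. Points are (x, y, z) tuples."""
        def cell_of(x, y):
            return (int(x // cell), int(y // cell))

        ground = defaultdict(list)
        for x, y, z in ground_pts:
            ground[cell_of(x, y)].append(z)
        canopy = defaultdict(list)
        for x, y, z in canopy_pts:
            canopy[cell_of(x, y)].append(z)
        chm = {}
        for key, zs in canopy.items():
            if key in ground:
                chm[key] = max(zs) - min(ground[key])
        return chm

    ground = [(0.5, 0.5, 2.0)]                     # bare-earth return at 2 m
    canopy = [(0.4, 0.6, 17.0), (0.7, 0.3, 15.5)]  # crown returns
    print(canopy_height_model(ground, canopy))  # {(0, 0): 15.0}
    ```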
