• Title/Abstract/Keywords: LIDAR Data


Reconstruction of Buildings from Satellite Image and LIDAR Data

  • Guo, T.; Yasuoka, Y.
    • Korean Society of Remote Sensing Conference Proceedings / Proceedings of ACRS 2003 ISRS / pp.519-521 / 2003
  • This paper presents an approach for the automatic extraction and reconstruction of buildings in urban built-up areas based on the fusion of high-resolution satellite imagery and LIDAR data. The data fusion scheme is motivated by the fact that image and range data are highly complementary. Raised urban objects are first segmented from the terrain surface in the LIDAR data using the spectral signature derived from the satellite image; candidate building regions are then detected in a hierarchical scheme. A novel 3D building reconstruction model is also presented, based on the assumption that most buildings can be approximately decomposed into polyhedral patches. Under the constraints of the presented building model, 3D edges are used to generate hypotheses that are then verified, and a subsequent logical processing of the primitive geometric patches leads to a 3D reconstruction of buildings with good shape detail. The approach was applied to test sites and showed good performance; an evaluation is also described in the paper.


Localization of Unmanned Ground Vehicle based on Matching of Ortho-edge Images of 3D Range Data and DSM

  • 박순용; 최성인
    • KIPS Transactions on Software and Data Engineering / Vol. 1, No. 1 / pp.43-54 / 2012
  • This paper proposes a technique for estimating the current position of an unmanned ground vehicle (UGV) operating in an outdoor environment by generating and matching ortho-edge feature images derived from 3D LIDAR (Light Detection and Ranging) range data acquired by the robot and from a 3D DSM (Digital Surface Map) of the terrain it traverses. Recent research on UGV localization increasingly fuses positioning sensors such as GPS (Global Positioning System), IMU (Inertial Measurement Unit), and LIDAR. In particular, techniques that estimate the robot's position by ICP (Iterative Closest Point)-based geometric registration of LIDAR range data have been developed. However, the sensing direction of the data acquired by a mobile robot differs greatly from that of a DSM, which makes it difficult to apply conventional geometric registration. This paper therefore proposes a new technique that generates ortho-edge feature images from 3D LIDAR range data and from a DSM acquired from different sensing directions and matches them to estimate the robot's position. We describe how to generate an ortho-edge image of the current viewpoint from the DSM, how to generate an ortho-edge image from an omnidirectional LIDAR range sensor, and how to match the two ortho-edge images. Experiments analyze the localization error over various driving paths and demonstrate the superior performance of the proposed technique.
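The ortho-edge matching step above can be illustrated with a deliberately simplified sketch: a brute-force search for the image translation that maximizes edge overlap between two binary ortho-edge images. The function names, the search window, and the restriction to pure translation are all assumptions for illustration; the paper's actual matching is more elaborate.

```python
import numpy as np

def match_ortho_edge(dsm_edges, lidar_edges, search=5):
    """Hypothetical sketch: find the (dx, dy) shift of the LIDAR ortho-edge
    image that maximizes overlap with the DSM ortho-edge image."""
    best_score, best_offset = -1, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(lidar_edges, dy, axis=0), dx, axis=1)
            score = int(np.logical_and(dsm_edges, shifted).sum())
            if score > best_score:
                best_score, best_offset = score, (dx, dy)
    return best_offset, best_score
```

The recovered offset would then correct the robot's pose estimate in the DSM's horizontal frame.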

A Comparison of Offshore Met-mast and Lidar Wind Measurements at Various Heights

  • 김지영; 김민석
    • Journal of Korean Society of Coastal and Ocean Engineers / Vol. 29, No. 1 / pp.12-19 / 2017
  • Because offshore met-masts for wind-power development incur high initial installation and maintenance costs, there is a need to replace them with remote sensing instruments such as wind lidar. In this study, a wind lidar was operated simultaneously at an offshore met-mast, and the collected wind-speed and wind-direction measurements were cross-compared to verify the applicability of the wind lidar. The height-by-height comparison showed almost no difference between the two datasets in statistical characteristics such as magnitude and trend; moreover, while the met-mast measurements contain errors caused by the shading effect of the structure, the wind lidar was confirmed to provide more accurate data free of such errors.
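The height-by-height comparison described above typically reduces to a few summary statistics per height; a minimal sketch (function and statistic names are illustrative, not taken from the paper):

```python
import numpy as np

def compare_wind(mast, lidar):
    """Illustrative comparison statistics between met-mast and wind-lidar
    speeds at one height: bias, RMSE, correlation, and a zero-intercept
    regression slope."""
    mast = np.asarray(mast, dtype=float)
    lidar = np.asarray(lidar, dtype=float)
    bias = float(np.mean(lidar - mast))
    rmse = float(np.sqrt(np.mean((lidar - mast) ** 2)))
    r = float(np.corrcoef(mast, lidar)[0, 1])
    slope = float(np.sum(mast * lidar) / np.sum(mast ** 2))
    return {"bias": bias, "rmse": rmse, "r": r, "slope": slope}
```

A slope near 1 with high correlation and small bias is the usual acceptance criterion in such validation campaigns.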

Complexity Estimation Based Work Load Balancing for a Parallel Lidar Waveform Decomposition Algorithm

  • Jung, Jin-Ha; Crawford, Melba M.; Lee, Sang-Hoon
    • Korean Journal of Remote Sensing / Vol. 25, No. 6 / pp.547-557 / 2009
  • LIDAR (LIght Detection And Ranging) is an active remote sensing technology that provides 3D coordinates of the Earth's surface by performing range measurements from the sensor. Early small-footprint LIDAR systems recorded multiple discrete returns from the back-scattered energy; recent advances in LIDAR hardware make it possible to record full digital waveforms of the returned energy. LIDAR waveform decomposition involves separating the return waveform into a mixture of components, which are then used to characterize the original data. The most common statistical mixture model used for this process is the Gaussian mixture. Waveform decomposition plays an important role in LIDAR waveform processing, since the resulting components are expected to represent reflection surfaces within waveform footprints; the decomposition results therefore ultimately affect the interpretation of LIDAR waveform data. The computational requirements of waveform decomposition result from two factors: (1) estimating the number of components in a mixture and the resulting parameter estimates are inter-related and cannot be solved separately, and (2) parameter optimization has no closed-form solution and must be solved iteratively. The current state-of-the-art airborne LIDAR system acquires more than 50,000 waveforms per second, so decomposing this enormous number of waveforms is challenging on a traditional single-processor architecture. To tackle this issue, four parallel LIDAR waveform decomposition algorithms with different work-load balancing schemes - (1) no weighting, (2) decomposition-results-based linear weighting, (3) decomposition-results-based squared weighting, and (4) decomposition-time-based linear weighting - were developed and tested with varying numbers of processors (8-256), and the results were compared in terms of efficiency. Overall, the decomposition-time-based linear weighting approach yielded the best performance among the four approaches.
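Any of the weighting schemes above ultimately feeds an assignment of waveforms to processors. As an illustration only (not the paper's exact scheme), a common greedy strategy assigns each waveform, in decreasing order of its estimated decomposition time, to the currently least-loaded processor:

```python
import heapq

def balance_by_weight(weights, n_proc):
    """Longest-processing-time-first greedy load balancing (illustrative):
    weights[i] is the estimated decomposition time of waveform i; returns
    assignment[i] = processor id for waveform i."""
    heap = [(0.0, p) for p in range(n_proc)]  # (current load, processor id)
    heapq.heapify(heap)
    assignment = [None] * len(weights)
    # Place the heaviest waveforms first to keep final loads even.
    for i in sorted(range(len(weights)), key=lambda i: weights[i], reverse=True):
        load, p = heapq.heappop(heap)
        assignment[i] = p
        heapq.heappush(heap, (load + weights[i], p))
    return assignment
```

With accurate per-waveform time estimates, this keeps processor loads nearly equal, which is exactly what the time-based linear weighting aims for.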

Automatic Extraction of Fractures and Their Characteristics in Rock Masses by LIDAR System and the Split-FX Software

  • 김치환
    • Tunnel and Underground Space / Vol. 19, No. 1 / pp.1-10 / 2009
  • When constructing a structure in a rock mass, the characteristics of the rock mass are investigated to evaluate its mechanical stability. These characteristics are mainly governed by the fractures (joints) in the rock mass. Until now, fracture characteristics have been surveyed by approaching exposed rock slopes or outcrops and observing them directly with the naked eye. This approach suffers from limitations such as difficult access to steep slopes, work-safety issues, long survey times, the small amount of information obtained relative to the time spent, problems reproducing the information, and measurement errors. To overcome these problems, a point cloud obtained by scanning the rock mass with LIDAR (light detection and ranging) was processed with the Split-FX software; as a result, fracture characteristics such as orientation, spacing, and surface roughness could be analyzed accurately and efficiently.

Footprint extraction of urban buildings with LIDAR data

  • Kanniah, Kasturi Devi; Gunaratnam, Kasturi; Mohd, Mohd Ibrahim Seeni
    • Korean Society of Remote Sensing Conference Proceedings / Proceedings of ACRS 2003 ISRS / pp.113-119 / 2003
  • Building information is extremely important for many applications within the urban environment, and efficient techniques and user-friendly tools for information extraction from remotely sensed imagery are urgently needed. This paper presents automatic and manual approaches for extracting the footprints of buildings in urban areas from airborne Light Detection and Ranging (LIDAR) data. First, a digital surface model (DSM) was generated from the LIDAR point data. Then, objects higher than the ground surface were extracted using the generated DSM. Based on general knowledge of the study area and field visits, buildings were separated from other objects. The automatic technique for extracting building footprints was based on different window sizes and different values of image add-backs, while the manual technique was based on image segmentation. The two techniques were then compared to see how precisely each detects and extracts building footprints. Finally, the results were compared with manually digitized building reference data for an accuracy assessment, which shows that LIDAR data provide a better shape characterization of each building.
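The "objects higher than the ground surface" step can be sketched as a simple normalized-DSM threshold. The 2.5 m threshold and the function name are assumptions for illustration, not values from the paper:

```python
import numpy as np

def candidate_object_mask(dsm, dtm, min_height=2.5):
    """Flag raster cells where the surface stands more than `min_height`
    metres above the terrain (normalized DSM = DSM - DTM)."""
    ndsm = np.asarray(dsm, dtype=float) - np.asarray(dtm, dtype=float)
    return ndsm > min_height
```

Subsequent steps (window-based filtering, segmentation) would then separate buildings from trees and other raised objects within this mask.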


Classification of Objects using CNN-Based Vision and Lidar Fusion in Autonomous Vehicle Environment

  • G. Komali; A. Sri Nagesh
    • International Journal of Computer Science & Network Security / Vol. 23, No. 11 / pp.67-72 / 2023
  • In the past decade, Autonomous Vehicle Systems (AVS) have advanced at an exponential rate, particularly due to improvements in artificial intelligence, which have had a significant impact on road safety, society, and the future of transportation systems. Real-time fusion of light detection and ranging (LiDAR) and camera data is known to be a crucial process in many applications, such as autonomous driving, industrial automation, and robotics. In autonomous vehicles especially, efficient fusion of data from these two sensor types is important for estimating the depth of objects as well as classifying objects at short and long distances. This paper presents the classification of objects using CNN-based vision and LiDAR fusion in an autonomous-vehicle environment. The method is based on a convolutional neural network (CNN) and image upsampling theory. By upsampling the LiDAR point cloud and converting it into pixel-level depth information, the depth is combined with the Red-Green-Blue data and fed into a deep CNN. The proposed method obtains an informative feature representation for object classification using the integrated vision and LiDAR data, and is adopted to guarantee both object classification accuracy and minimal loss. Experimental results show the effectiveness and efficiency of the presented approach for object classification.
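The fusion input described above (sparse LiDAR depth densified and stacked onto RGB) can be sketched very simply. The nearest-valid-value row fill below is a naive stand-in for the paper's upsampling, and all names are illustrative:

```python
import numpy as np

def fuse_rgb_depth(rgb, sparse_depth):
    """Build a 4-channel RGB-D tensor for a CNN: fill empty depth pixels
    (value 0) with the last valid value along each row, then stack the
    dense depth as a fourth channel onto the RGB image."""
    depth = np.asarray(sparse_depth, dtype=float).copy()
    h, w = depth.shape
    for y in range(h):
        last = 0.0
        for x in range(w):
            if depth[y, x] > 0:
                last = depth[y, x]
            else:
                depth[y, x] = last  # naive hole filling
    return np.dstack([np.asarray(rgb, dtype=float), depth])  # (H, W, 4)
```

A real pipeline would use a proper depth-completion or interpolation scheme, but the resulting (H, W, 4) tensor is the same kind of CNN input the abstract describes.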

Building Extraction from Lidar Data and Aerial Imagery using Domain Knowledge about Building Structures

  • Seo, Su-Young
    • Korean Journal of Remote Sensing / Vol. 23, No. 3 / pp.199-209 / 2007
  • Traditionally, aerial images have been used as the main source for compiling topographic maps. In recent years, lidar data has been exploited as another type of mapping data. Regarding their performance, aerial imagery can delineate object boundaries but omits many of these boundaries during feature extraction, while lidar provides direct information about the heights of object surfaces but has limitations with respect to boundary localization. Considering these sensor characteristics, this paper proposes an approach to extracting buildings from lidar and aerial imagery that builds on the complementary characteristics of optical and range sensors. To detect building regions, relationships among elevation contours are represented as directed graphs, which are searched for the contours corresponding to the external boundaries of buildings. To generate building models, a wing model is proposed for assembling roof-surface patches into a complete building model. Building models are then projected and checked against features in the aerial images. Experimental results show that the proposed approach provides an efficient and accurate way to extract building models.

A Study on Automatic Extraction of Buildings Using LIDAR with Aerial Imagery

  • Lee, Young-Jin; Cho, Woo-Sug; Jeong, Soo; Kim, Kyung-Ok
    • Korean Society of Remote Sensing Conference Proceedings / Proceedings of ACRS 2003 ISRS / pp.241-243 / 2003
  • This paper presents an algorithm that automatically extracts buildings from among the many different features on the Earth's surface by fusing LIDAR data with panchromatic aerial images. The proposed algorithm consists of three stages: a point-level process, a polygon-level process, and a parameter-space-level process. At the first stage, gross errors are eliminated and a local-maxima filter is applied to detect building candidate points in the raw laser-scanning data. A grouping procedure then segments the raw LIDAR data, and the segmented data are polygonized by the encasing-polygon algorithm developed in this research. At the second stage, non-building polygons are eliminated using several constraints such as area and circularity. At the last stage, all polygons generated at the second stage are projected onto the aerial stereo images through collinearity condition equations. Finally, the projected encasing polygons are fused with edges detected by image processing to refine the building segments. The experimental results showed that the RMSEs of building corners in X, Y, and Z were ±8.1 cm, ±24.7 cm, and ±35.9 cm, respectively.


Outlier Detection from High Sensitive Geiger Mode Imaging LIDAR Data retaining a High Outlier Ratio

  • 김성준; 이임평; 이영철; 조민식
    • Korean Journal of Remote Sensing / Vol. 28, No. 5 / pp.573-586 / 2012
  • Point clouds acquired with a LIDAR sensor contain outliers that do not lie on actual physical surfaces, and these outliers must be removed before any subsequent processing. In particular, data acquired with a LIDAR that uses a highly sensitive Geiger-mode detector contain a high proportion of outliers, which makes it difficult for existing algorithms to detect them successfully. This study therefore proposes a method for removing outliers from point clouds with a high outlier ratio acquired by Geiger-mode imaging LIDAR. The proposed method exploits the fact that a meaningful target surface is detected by two or more neighboring pixels on the detector, and that the range values output by those neighboring pixels are similar. The developed removal technique was applied to simulated data with various point densities and outlier ratios, and its performance was analyzed with respect to threshold values and data characteristics. In most cases, an outlier detection performance of about 99% or higher was obtained, and the method proved robust to data characteristics and not highly sensitive to the threshold. The proposed method is expected to be effective for online real-time processing or post-processing of Geiger-mode LIDAR data.
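The neighbor-consistency idea described above can be sketched on a range image (one range value per detector pixel). The tolerance, neighbor count, and function name are assumptions for illustration, not the paper's values:

```python
import numpy as np

def remove_outliers(range_img, range_tol=0.5, min_neighbors=1):
    """Keep a return only if at least `min_neighbors` of its 8 neighboring
    detector pixels report a similar range (within `range_tol`); pixels
    with value <= 0 are treated as having no return."""
    h, w = range_img.shape
    keep = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            r = range_img[y, x]
            if r <= 0:  # no return on this pixel
                continue
            similar = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dy == 0 and dx == 0:
                        continue
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        rn = range_img[ny, nx]
                        if rn > 0 and abs(rn - r) <= range_tol:
                            similar += 1
            keep[y, x] = similar >= min_neighbors
    return keep
```

Isolated returns with no range-consistent neighbor are dropped as noise, while returns clustered on a real surface survive.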