• Title/Summary/Keyword: LIDAR-based


Generation of 3D Campus Models using Multi-Sensor Data (다중센서데이터를 이용한 캠퍼스 3차원 모델의 구축)

  • Choi Kyoung-Ah;Kang Moon-Kwon;Shin Hyo-Sung;Lee Im-Pyeong
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference
    • /
    • 2006.04a
    • /
    • pp.205-210
    • /
    • 2006
  • With the development of recent technologies such as telematics, LBS, and ubiquitous computing, applications of 3D GIS have increased rapidly. Since 3D GIS is mainly based on urban models, that is, realistic digital models of the objects existing in an urban area, demand for urban models and their continuous update is expected to increase drastically. The purpose of this study is thus to propose more efficient and precise methods to construct urban models, verified experimentally. Applying the proposed methods, terrain and detailed building models were constructed for an area of $270,600m^2$ containing 23 buildings at the University of Seoul. Airborne imagery and LIDAR data were used for the terrain models, while ground imagery was mainly used for the building models. The generated models were found to reflect the correct geometry of the buildings and the terrain surface. However, the building surface textures, generated automatically using a projective transformation, were not well constructed because they were blotted out and shaded by objects such as trees, nearby buildings, and other obstacles. Consequently, the texture extraction algorithms should be improved to construct more realistic 3D models. Furthermore, building interiors should be modeled for various potential applications in the future.

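The projective transformation used for texture extraction can be sketched as a homography estimated from four corner correspondences via the Direct Linear Transform; the pixel coordinates and texture size below are illustrative, not values from the paper.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 projective transformation mapping src -> dst from
    four (or more) point correspondences via the Direct Linear Transform."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)   # null vector of A holds the homography entries
    return H / H[2, 2]

def project(H, pts):
    """Apply a homography to 2D points, normalizing the homogeneous scale."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:]

# Illustrative: map the photographed corners of a facade to a 512x512 texture.
facade_px = np.array([[120, 80], [610, 95], [600, 400], [130, 390]], float)
texture_uv = np.array([[0, 0], [512, 0], [512, 512], [0, 512]], float)
H = homography_dlt(facade_px, texture_uv)
```

With exactly four correspondences the homography is exact, which is why occluding objects (trees, nearby buildings) corrupt the sampled texture rather than the mapping itself.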

Implementation and Evaluation of a Robot Operating System-based Virtual Lidar Driver (로봇운영체제 기반의 가상 라이다 드라이버 구현 및 평가)

  • Hwang, Inho;Kim, Kanghee
    • KIISE Transactions on Computing Practices
    • /
    • v.23 no.10
    • /
    • pp.588-593
    • /
    • 2017
  • In this paper, we propose a LiDAR (Light Detection and Ranging) driver that virtualizes multiple inexpensive LiDARs with a smaller number of scan channels on an autonomous vehicle, replacing a single expensive LiDAR with a larger number of scan channels. As a result, existing SLAM (Simultaneous Localization and Mapping) algorithms, developed assuming a single LiDAR, can be used without modification. The proposed driver was implemented on the Robot Operating System and evaluated with an existing SLAM algorithm. The results show that the proposed driver, combined with a filter to control the density of points in a 3D map, is compatible with the existing algorithm.
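The virtualization idea, merging several low-channel scans into one cloud in a common vehicle frame and thinning it with a density-control filter, can be sketched as follows; the sensor poses and voxel size are illustrative assumptions, not values from the paper.

```python
import numpy as np

def merge_clouds(clouds, poses):
    """Fuse point clouds from several low-channel LiDARs into one virtual
    cloud in a common vehicle frame.
    clouds: list of (N_i, 3) point arrays in each sensor's frame.
    poses:  list of 4x4 sensor-to-vehicle homogeneous transforms."""
    merged = []
    for pts, T in zip(clouds, poses):
        pts_h = np.hstack([pts, np.ones((len(pts), 1))])
        merged.append((pts_h @ T.T)[:, :3])
    return np.vstack(merged)

def voxel_filter(points, voxel=0.1):
    """Control point density by keeping one point per voxel, in the spirit
    of the density-control filter mentioned in the abstract."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

# Illustrative: two sensors, the second mounted 0.5 m to the right.
T_left = np.eye(4)
T_right = np.eye(4)
T_right[:3, 3] = [0.0, -0.5, 0.0]
cloud = merge_clouds([np.zeros((3, 3)), np.ones((2, 3))], [T_left, T_right])
```

A downstream SLAM node then sees a single cloud, which is what lets it run unmodified.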

Mapping Vegetation Volume in Urban Environments by Fusing LiDAR and Multispectral Data

  • Jung, Jinha;Pijanowski, Bryan
    • Korean Journal of Remote Sensing
    • /
    • v.28 no.6
    • /
    • pp.661-670
    • /
    • 2012
  • Urban forests provide great ecosystem services to populations in metropolitan areas, even though they occupy only small green spaces in a vast gray landscape. Unfortunately, urbanization inherently threatens this green infrastructure, and recent urbanization trends have drawn the attention of scientists and policy makers to how green infrastructure in metropolitan areas can be preserved or restored. For this reason, mapping the spatial distribution of green infrastructure is important in urban environments, since the resulting map helps identify green hot spots and supports long-term planning for preserving or restoring green infrastructure. As a preliminary step toward mapping green infrastructure using multi-source remote sensing data in urban environments, the objective of this study is to map vegetation volume by fusing LiDAR and multispectral data. Multispectral imagery is used to identify the two-dimensional distribution of green infrastructure, while LiDAR data are used to characterize the vertical structure of the identified green areas. Vegetation volume was calculated over the metropolitan Chicago area and summarized over the 16 NLCD classes. The experimental results indicated that vegetation volume varies greatly even within the same land cover class, and that a traditional land-cover-map-based aboveground biomass estimation approach may introduce bias into the estimation results.
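A minimal sketch of the fusion described above, assuming vegetation is masked by an NDVI threshold from the multispectral data and heights come from a LiDAR-derived normalized DSM; the threshold and grid values are illustrative, not from the paper.

```python
import numpy as np

def vegetation_volume(ndvi, ndsm, cell_area=1.0, ndvi_threshold=0.3):
    """Per-pixel vegetation volume: pixels classified as vegetation by an
    NDVI threshold (multispectral) contribute canopy height (LiDAR nDSM,
    metres) times the cell area (square metres)."""
    veg_mask = ndvi >= ndvi_threshold
    heights = np.where(veg_mask, np.maximum(ndsm, 0.0), 0.0)
    return heights * cell_area  # cubic metres per pixel

# Illustrative 2x2 grid: NDVI classification and LiDAR canopy heights.
ndvi = np.array([[0.6, 0.1], [0.4, 0.8]])
ndsm = np.array([[10.0, 5.0], [2.0, -0.5]])   # negative heights clipped
vol = vegetation_volume(ndvi, ndsm)
```

Summing such per-pixel volumes within each land cover polygon is what exposes the within-class variability the paper reports.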

A Study on the technique of DEM Generation from LiDAR Data (LIDAR 데이터를 이용한 DEM 생성 기법에 관한 연구)

  • Lee, Jeong-Ho;Yu, Ki-Yun
    • Proceedings of the Korean Spatial Information System Society Conference
    • /
    • 2004.12a
    • /
    • pp.125-131
    • /
    • 2004
  • Filtering of LiDAR data is the process of removing non-ground points, such as buildings and trees, from the raw data; a DEM can be generated through this filtering. Representative filtering methods include linear prediction using variance, slope-based methods using the slope relationship with neighboring points, morphology filters, and local maxima filters, and research to overcome the shortcomings of these existing methods is actively underway. Most filtering methods require the user to set parameters, such as the filter (window) size, appropriately for the target area. Moreover, the parameters must be adjusted to handle areas with complex terrain and features; in particular, filters of variable size are needed for areas containing buildings of various sizes. This paper therefore proposes a computation technique that can filter areas containing buildings of various sizes without changing the filter size. First, a morphology filter with a small fixed window is applied to remove small objects such as trees and cars. Then, points on large objects such as buildings are recognized from their elevation difference with neighboring points and replaced with neighboring ground points, with each updated value reflected immediately in the next computation. Finally, errors are corrected by performing independent line-by-line computations in four directions (up, down, left, right) and comparing the results.

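The two-stage filtering described in the abstract, a small fixed-window morphological opening followed by elevation-jump replacement for large objects, might be sketched on a 1D elevation profile as follows; the window size and height threshold are illustrative assumptions.

```python
import numpy as np

def opening_1d(z, w=3):
    """Morphological opening (erosion then dilation) with a small fixed
    window: removes objects narrower than the window, e.g. trees or cars."""
    half = w // 2
    pad = np.pad(z, half, mode='edge')
    eroded = np.array([pad[i:i + w].min() for i in range(len(z))])
    pad2 = np.pad(eroded, half, mode='edge')
    return np.array([pad2[i:i + w].max() for i in range(len(z))])

def replace_large_objects(z, dz=2.0):
    """Left-to-right scan: a point rising more than dz above the last
    accepted ground point is treated as a large object (e.g. a building)
    and replaced by that ground elevation, so each replacement feeds the
    next comparison, mirroring the paper's sequential update."""
    out = z.copy()
    last_ground = out[0]
    for i in range(len(out)):
        if out[i] - last_ground > dz:
            out[i] = last_ground
        else:
            last_ground = out[i]
    return out
```

In the paper the directional scans are run in four directions and compared to correct errors; only one direction is shown here.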

Field Experiment of a LiDAR Sensor-based Small Autonomous Driving Robot in an Underground Mine (라이다 센서 기반 소형 자율주행 로봇의 지하광산 현장실험)

  • Kim, Heonmoo;Choi, Yosoon
    • Tunnel and Underground Space
    • /
    • v.30 no.1
    • /
    • pp.76-86
    • /
    • 2020
  • In this study, a small autonomous driving robot was developed for underground mines using a Light Detection and Ranging (LiDAR) sensor. The developed robot measures the distances to the left and right wall surfaces using the LiDAR sensor and automatically controls its steering to drive along the centerline of the mine tunnel. A field experiment was conducted in an underground amethyst mine to test the driving performance of the developed robot. During five repeated driving tests, the robot showed stable driving performance overall, with no collisions with the tunnel walls.
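The wall-following behavior described above amounts to proportional steering on the left-right distance difference. A minimal sketch, assuming positive steering means turning left, with an illustrative gain and steering limit (not values from the paper):

```python
def steering_command(d_left, d_right, k_p=0.5, max_steer=30.0):
    """Proportional centerline keeping: steer toward the side with more
    room. Positive output = steer left (sign convention assumed here)."""
    error = d_left - d_right          # metres; zero on the centerline
    steer = k_p * error               # degrees, via the illustrative gain
    return max(-max_steer, min(max_steer, steer))
```

Called every scan, this drives the distance error toward zero, which is what keeps the robot on the tunnel centerline.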

A New Application of Unsupervised Learning to Nighttime Sea Fog Detection

  • Shin, Daegeun;Kim, Jae-Hwan
    • Asia-Pacific Journal of Atmospheric Sciences
    • /
    • v.54 no.4
    • /
    • pp.527-544
    • /
    • 2018
  • This paper presents a nighttime sea fog detection algorithm incorporating an unsupervised learning technique. The algorithm is based on data sets that combine brightness temperatures from the $3.7{\mu}m$ and $10.8{\mu}m$ channels of the meteorological imager (MI) onboard the Communication, Ocean and Meteorological Satellite (COMS) with sea surface temperature from the Operational Sea Surface Temperature and Sea Ice Analysis (OSTIA). Previous algorithms generally employed threshold values, including the brightness temperature difference between the near-infrared and infrared, determined from climatological analysis or model simulation. Although this method using predetermined thresholds is simple and effective in detecting low cloud, it has difficulty distinguishing fog from stratus because the two share similar particle sizes and altitudes. To improve on this, an unsupervised learning approach, which allows a more effective interpretation of insufficient information, has been utilized. The unsupervised learning method employed in this paper is the expectation-maximization (EM) algorithm, widely used in incomplete-data problems; it identifies distinguishing features of the data by organizing and optimizing them. This allows optimal threshold values for fog detection to be applied by considering the characteristics of a specific domain. The algorithm has been evaluated using the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) vertical profile products and showed promising results within a local domain, with a probability of detection (POD) of 0.753 and a critical success index (CSI) of 0.477.
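The EM step can be illustrated with a two-component 1D Gaussian mixture, the simplest incomplete-data setting in which EM yields a data-driven threshold between two classes (fog-like versus fog-free in this context); the synthetic data below are illustrative, not the paper's satellite measurements.

```python
import numpy as np

def em_two_gaussians(x, iters=50):
    """Fit a two-component 1D Gaussian mixture with EM. The fitted
    components give a data-driven decision threshold instead of a
    predetermined one."""
    mu = np.array([x.min(), x.max()], float)      # spread-out initialization
    var = np.array([x.var(), x.var()]) + 1e-6
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        pdf = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) \
              / np.sqrt(2 * np.pi * var)
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-6
    return w, mu, var

# Illustrative synthetic data: two well-separated populations.
rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(0.0, 1.0, 500),
                          rng.normal(5.0, 1.0, 500)])
w, mu, var = em_two_gaussians(samples)
```

The crossing point of the two fitted densities then serves as the detection threshold for the local domain.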

Development of Smart Mobility System for Persons with Disabilities (장애인을 위한 스마트 모빌리티 시스템 개발)

  • Yu, Yeong Jun;Park, Se Eun;An, Tae Jun;Yang, Ji Ho;Lee, Myeong-Gyu;Lee, Chul-Hee
    • Journal of Drive and Control
    • /
    • v.19 no.4
    • /
    • pp.97-103
    • /
    • 2022
  • Low fertility rates and increased life expectancy further accelerate population aging, which is reflected in the gradually increasing proportion of vulnerable groups in the population. The demand for improved mobility among vulnerable groups such as the elderly or the disabled has greatly driven the growth of the electrically assisted mobility device market. However, such mobility devices generally require a certain operating capability, which limits the range of vulnerable users who can operate them and increases the cost of learning. Therefore, autonomous driving technology needs to be introduced to make mobility easier for a wider range of vulnerable groups, meeting their needs for work and leisure in different environments. This study uses an Odyssey mini PC, a Velodyne VLP-16 LiDAR, electronic devices, and the Linux-based ROS framework to implement environment recognition, simultaneous localization, map generation, and navigation for an electrically powered mobility device for vulnerable groups. This autonomous mobility device is expected to be of great help to vulnerable users who lack an immediate response in dangerous situations.

Autonomous Vehicles as Safety and Security Agents in Real-Life Environments

  • Al-Absi, Ahmed Abdulhakim
    • International journal of advanced smart convergence
    • /
    • v.11 no.2
    • /
    • pp.7-12
    • /
    • 2022
  • Safety and security are the topmost priorities in every environment. With the aid of Artificial Intelligence (AI), many objects are becoming more intelligent, conscious, and aware of their surroundings. Recent scientific breakthroughs in autonomous vehicle design and development, powered by AI, networks of sensors, and the rapid growth of the Internet of Things (IoT), could be utilized to maintain safety and security in our environments. AI based on deep learning architectures and models, such as Deep Neural Networks (DNNs), is being applied worldwide in automotive design fields such as computer vision, natural language processing, sensor fusion, object recognition, and autonomous driving, and these techniques are well known for their identification, detection, and tracking abilities. With sensors, cameras, GPS, RADAR, LIDAR, and on-board computers embedded in many of the autonomous vehicles being developed, these vehicles can accurately map their positions and their proximity to everything around them. In this paper, we explore in detail several ways in which the capabilities embedded in these autonomous vehicles, such as sensor fusion, computer vision and image processing, natural language processing, and activity awareness, could be tapped and utilized to safeguard our lives and environment.

Vision and Lidar Sensor Fusion for VRU Classification and Tracking in the Urban Environment (카메라-라이다 센서 융합을 통한 VRU 분류 및 추적 알고리즘 개발)

  • Kim, Yujin;Lee, Hojun;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association
    • /
    • v.13 no.4
    • /
    • pp.7-13
    • /
    • 2021
  • This paper presents a vulnerable road user (VRU) classification and tracking algorithm using a vision and LiDAR sensor fusion method for urban autonomous driving. Classification and tracking of vulnerable road users such as pedestrians, bicycles, and motorcycles are essential for autonomous driving in complex urban environments. In this paper, a real-time object detection algorithm called YOLO and an object tracking algorithm based on the LiDAR point cloud are fused at a high level. The proposed algorithm consists of four parts. First, the object bounding boxes in pixel coordinates, obtained from YOLO, are transformed into the local coordinates of the subject vehicle using a homography matrix. Second, the LiDAR point cloud is clustered based on Euclidean distance and the clusters are associated using GNN. In addition, the states of the clusters, including position, heading angle, velocity, and acceleration, are estimated in real time using a geometric model free approach (GMFA). Finally, each LiDAR track is matched with a vision track using the angle information of the transformed vision track and assigned a classification ID. The proposed fusion algorithm is evaluated via real-vehicle tests in an urban environment.
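Two of the four parts, projecting a YOLO box footprint into the vehicle frame via a homography and matching it to a LiDAR track by bearing angle, might be sketched as follows; the homography, track positions, and angle tolerance are illustrative, and angle wraparound at ±180° is ignored.

```python
import numpy as np

def pixel_to_ground(H, u, v):
    """Project an image point (e.g. the bottom-center of a YOLO box) onto
    the ground plane of the vehicle frame with a precalibrated homography."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]

def match_by_angle(vision_xy, lidar_tracks, max_diff_deg=5.0):
    """Assign the vision detection to the LiDAR track with the closest
    bearing angle, within a tolerance; returns None if no track is close."""
    ang_v = np.degrees(np.arctan2(vision_xy[1], vision_xy[0]))
    best, best_diff = None, max_diff_deg
    for track_id, (x, y) in lidar_tracks.items():
        diff = abs(np.degrees(np.arctan2(y, x)) - ang_v)
        if diff < best_diff:
            best, best_diff = track_id, diff
    return best
```

The matched LiDAR track then inherits the vision classification ID, combining LiDAR geometry with camera semantics.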

Localization of hotspots via a lightweight system combining Compton imaging with a 3D lidar camera

  • Mattias Simons;David De Schepper;Eric Demeester;Wouter Schroeyers
    • Nuclear Engineering and Technology
    • /
    • v.56 no.8
    • /
    • pp.3188-3198
    • /
    • 2024
  • Efficient and secure decommissioning of nuclear facilities demands advanced technologies. In this context, gamma-ray detection and imaging are crucial for identifying radioactive hotspots and monitoring radiation levels. Our study is dedicated to developing a gamma-ray detection system tailored for integration into robotic platforms for nuclear decommissioning, offering a safe and automated solution for this intricate task and ensuring the safety of human operators by mitigating radiation exposure and streamlining hotspot localization. Our approach integrates a Compton-camera-based 3D reconstruction algorithm with a single Timepix3 detector, which eliminates the need for a second detector and significantly reduces system weight and cost. Additionally, combining a 3D camera with the setup enhances hotspot visualization and interpretation, rendering it an ideal solution for practical nuclear decommissioning applications. In a proof-of-concept measurement using a 137Cs source, our system accurately localized and visualized the source in 3D with an angular error of 1° and estimated its activity with a 3% relative error. This promising result underscores the system's potential for deployment in real-world decommissioning settings. Future work will expand the technology's applications in authentic decommissioning scenarios and optimize its integration with robotic platforms. The outcomes of our study contribute to heightened safety and accuracy in nuclear decommissioning work through the advancement of cost-effective and efficient gamma-ray detection systems.