• Title/Summary/Keyword: 3D PointCloud


Aerial Object Detection and Tracking based on Fusion of Vision and Lidar Sensors using Kalman Filter for UAV

  • Park, Cheonman;Lee, Seongbong;Kim, Hyeji;Lee, Dongjin
    • International journal of advanced smart convergence / v.9 no.3 / pp.232-238 / 2020
  • In this paper, we study an aerial object detection and position estimation algorithm for the safety of UAVs flying beyond visual line of sight (BVLOS). We use a vision sensor and a LiDAR to detect objects: the CNN-based YOLOv2 architecture detects objects in the 2D image, and a clustering method detects objects in the point cloud data acquired from the LiDAR. When only a single sensor is used, the detection rate can degrade in specific situations depending on the characteristics of that sensor, so when the detection result from a single sensor is absent or false, the detection accuracy needs to be complemented. To do so, we use a Kalman filter and fuse the results of the individual sensors to improve detection accuracy. We estimate the 3D position of the object using the pixel position of the object and the distance measured by the LiDAR. We verified the performance of the proposed fusion algorithm through simulation using the Gazebo simulator.
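A minimal sketch of the sensor-fusion idea described above: a constant-velocity Kalman filter that predicts the object state and sequentially fuses the camera-derived and LiDAR-derived 3D position measurements. The noise parameters and measurement values are illustrative assumptions, not the authors' settings.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal 3D constant-velocity Kalman filter; state = [x, y, z, vx, vy, vz]."""
    def __init__(self, dt=0.1, q=0.5):
        self.x = np.zeros(6)                       # state estimate
        self.P = np.eye(6) * 10.0                  # state covariance
        self.F = np.eye(6)                         # constant-velocity motion model
        self.F[:3, 3:] = np.eye(3) * dt
        self.Q = np.eye(6) * q                     # process noise (assumed)
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # we observe position only

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z, r):
        """Fuse one 3D position measurement z with scalar measurement noise r."""
        R = np.eye(3) * r
        y = z - self.H @ self.x                    # innovation
        S = self.H @ self.P @ self.H.T + R
        K = self.P @ self.H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P

kf = ConstantVelocityKF()
kf.predict()
kf.update(np.array([10.2, 3.1, 5.0]), r=1.0)   # camera-derived position (noisier, assumed)
kf.update(np.array([10.0, 3.0, 4.9]), r=0.2)   # LiDAR-derived position (more precise, assumed)
print(kf.x[:3])                                 # fused 3D position estimate
```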

Experiment of Computation of Ground Cutting Volume Using Terrestrial LiDAR Data (지상 LiDAR 자료의 절토량 산정 실험)

  • Kim, Jong-Hwa;Pyeon, Mu-Wook;Kim, Sang-Kuk;Hwang, Yeon-Soo;Kang, Nam-Gi
    • Journal of Korean Society for Geospatial Information Science / v.17 no.2 / pp.11-17 / 2009
  • Terrestrial LiDAR can measure large volumes of 3D topographic coordinates and is being applied to various public works such as tunnel surveying and facility deformation surveying. This experiment concerns how to calculate ground cutting volume, since the earthwork stage consumes much of the money and time in civil engineering projects. The cutting area is surveyed with terrestrial LiDAR, and the cutting volume within the planned area is calculated by comparing the pre-construction cross sections, the planned sections, and the LiDAR data. In addition, the ground cutting volumes calculated from LiDAR data at three different resolutions are compared and analyzed.
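One common way to compute a cut volume from terrestrial LiDAR data, sketched below with NumPy, is to grid the scanned points into a simple DEM and integrate the height above the planned surface cell by cell. This is an illustrative stand-in for the comparison of scanned and planned sections described in the abstract; the cell size and synthetic data are assumptions.

```python
import numpy as np

def cut_volume(points, planned_z, cell=0.5):
    """Estimate cut volume (m^3) between a scanned surface and a planned elevation.
    points: (N, 3) terrestrial LiDAR points; planned_z: design level; cell: grid size (m)."""
    xy_min = points[:, :2].min(axis=0)
    ij = np.floor((points[:, :2] - xy_min) / cell).astype(int)
    nx, ny = ij.max(axis=0) + 1
    dem = np.full((nx, ny), np.nan)
    for (i, j), z in zip(ij, points[:, 2]):
        if np.isnan(dem[i, j]) or z > dem[i, j]:
            dem[i, j] = z                      # keep the highest return per cell (crude DEM)
    # integrate positive height differences over the grid
    return np.nansum(np.clip(dem - planned_z, 0.0, None)) * cell * cell

# synthetic example: a 10 m x 10 m footprint sitting 1 m above a planned level of 0 m
pts = np.random.rand(20000, 3) * [10, 10, 0]
pts[:, 2] = 1.0
print(cut_volume(pts, planned_z=0.0))          # roughly 100 m^3
```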


Considerations for Developing a SLAM System for Real-time Remote Scanning of Building Facilities (건축물 실시간 원격 스캔을 위한 SLAM 시스템 개발 시 고려사항)

  • Kang, Tae-Wook
    • Journal of KIBIM / v.10 no.1 / pp.1-8 / 2020
  • In managing building facilities, spatial information is the basic data for decision making. However, acquiring spatial information is not easy. In many cases, the site and the drawings differ because of changes to the facilities over time after construction. In such cases, the site must be scanned to obtain spatial information. Scan data contains the actual spatial information, which is a great help in making space-related decisions. However, to obtain scan data, an expensive LiDAR (Light Detection and Ranging) device must be purchased, and special software for processing the data obtained from the device must be available. Recently, SLAM (Simultaneous Localization and Mapping), an advanced map generation technology, has been spreading in the field of robotics. Using SLAM, 3D spatial information can be obtained quickly in real time without a separate registration process. This study develops and tests whether SLAM technology can be used to obtain spatial information for facility management, and from this draws considerations for developing a SLAM device for real-time remote scanning. The study focuses on a system development method that acquires the spatial information necessary for facility management through SLAM technology. To this end, we develop a prototype, analyze its pros and cons, and then suggest considerations for developing a SLAM system.
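For orientation, the sketch below shows the frame-to-frame registration that underlies a LiDAR SLAM front end, using Open3D's ICP. It is not the prototype described in the paper: a real SLAM system would add an IMU, loop closure, and global optimization, and the voxel size and correspondence distance here are arbitrary assumptions.

```python
import copy
import numpy as np
import open3d as o3d

def build_map(scans, voxel=0.05, max_dist=0.2):
    """Register a sequence of (N, 3) scans into one map via consecutive ICP.
    A toy stand-in for a SLAM front end: odometry only, no loop closure."""
    pose = np.eye(4)                                    # accumulated sensor pose in map frame
    map_cloud = o3d.geometry.PointCloud()
    prev = None
    for pts in scans:
        cur = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts)).voxel_down_sample(voxel)
        if prev is not None:
            # relative motion between consecutive scans; identity init assumes small motion
            reg = o3d.pipelines.registration.registration_icp(
                cur, prev, max_dist, np.eye(4),
                o3d.pipelines.registration.TransformationEstimationPointToPoint())
            pose = pose @ reg.transformation
        map_cloud += copy.deepcopy(cur).transform(pose)  # place the scan into the map frame
        prev = cur
    return map_cloud.voxel_down_sample(voxel)
```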

3D Multi-floor Precision Mapping and Localization for Indoor Autonomous Robots (실내 자율주행 로봇을 위한 3차원 다층 정밀 지도 구축 및 위치 추정 알고리즘)

  • Kang, Gyuree;Lee, Daegyu;Shim, Hyunchul
    • The Journal of Korea Robotics Society / v.17 no.1 / pp.25-31 / 2022
  • Moving among multiple floors is one of the most challenging tasks for indoor autonomous robots. Most previous research on indoor mapping and localization has focused on single-floor environments. In this paper, we present an algorithm that creates a multi-floor map from 3D point clouds and performs localization within that map using a LiDAR and an IMU. Our algorithm builds the multi-floor map by constructing a single-floor map with a LOAM-based algorithm and stacking the floors through a global registration that aligns the sections common to the map of each floor. Localization in the multi-floor map is performed by adding height information to an NDT (Normal Distributions Transform)-based registration method. The mean error of the multi-floor map was 0.29 m and 0.43 m along the x- and y-axes, respectively; the mean yaw error was 1.00°; and the height error rate was 0.063. A real-world localization test performed on the third floor showed a mean squared error of 0.116 m and an average differential time of 0.01 sec. This study should help indoor autonomous robots operate across multiple floors.
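The floor-stacking step can be illustrated with Open3D: crop the section shared by two single-floor maps (for example, a stairwell), register it, and apply the resulting transform to the whole upper-floor map. ICP is used here as a stand-in for the registration (the paper's localization uses NDT, which Open3D does not provide), and the file names and bounding box are hypothetical.

```python
import numpy as np
import open3d as o3d

def stack_floor(lower_map, upper_map, common_box, max_dist=0.5):
    """Align upper_map onto lower_map using only the region they share (e.g., a stairwell).
    common_box: AxisAlignedBoundingBox roughly enclosing the shared section in both maps."""
    src = upper_map.crop(common_box)      # shared section as seen in the upper-floor map
    tgt = lower_map.crop(common_box)      # shared section as seen in the lower-floor map
    reg = o3d.pipelines.registration.registration_icp(
        src, tgt, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    # apply the transform to the whole upper floor, then merge the two maps
    return upper_map.transform(reg.transformation) + lower_map

# hypothetical usage
lower = o3d.io.read_point_cloud("floor1.pcd")
upper = o3d.io.read_point_cloud("floor2.pcd")
box = o3d.geometry.AxisAlignedBoundingBox(min_bound=[-2.0, -2.0, 0.0], max_bound=[2.0, 2.0, 6.0])
multi_floor_map = stack_floor(lower, upper, box)
```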

Design of a foot shape extraction system for foot parameter measurement (발 고유 변인 측정을 위한 발 형상 추출 시스템 설계)

  • Yun, Jeongrok;Kim, Hoemin;Kim, Unyong;Chun, Sungkuk
    • Proceedings of the Korean Society of Computer Information Conference / 2020.07a / pp.421-422 / 2020
  • Measuring foot-specific parameters and collecting the data are necessary for manufacturing shoes that support consumers' foot health. Since the need to revise shoe design indices has also been raised, research on measuring foot parameters and acquiring such data is increasingly important. This paper describes a foot shape extraction system that can automatically measure foot-specific parameters, in order to produce customized insoles and shoes suited to individual users and to derive shoe design indices. To this end, a scanning stage for measuring the user's foot parameters was designed and built, and three depth cameras were installed. To remove noise and background, the foreground region is separated using Gaussian background modeling to obtain the foot point cloud data, and the point clouds are then registered through a Euclidean transformation. The experimental results show the acquired foot point cloud, the extracted ground-contact shape, and the extracted foot parameters.
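The registration step above aligns the three depth-camera point clouds with a rigid (Euclidean) transformation. A minimal NumPy sketch of estimating such a transform from corresponding points via the Kabsch/Procrustes approach is shown below; it illustrates the general technique, not necessarily the authors' exact implementation.

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t so that R @ src_i + t ~= dst_i.
    src, dst: (N, 3) corresponding points from two depth cameras."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)              # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                         # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# toy check: recover a known rotation about z and a translation
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
src = np.random.rand(100, 3)
dst = src @ R_true.T + np.array([0.1, -0.2, 0.05])
R, t = rigid_transform(src, dst)
print(np.allclose(R, R_true, atol=1e-6), np.round(t, 3))
```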


AR-Based Character Tracking Navigation System Development (AR기반 캐릭터 트래킹 네비게이션 시스템 개발)

  • Lee, SeokHwan;Lee, JungKeum;Sim, Hyun
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.2 / pp.325-332 / 2022
  • In this study, a real-time character navigation system is developed using the AR lenses developed by Nreal. Real-time character navigation is not possible with typical marker-based AR, because the NPC character must guide the user while moving through unconstrained space. To address this, a markerless AR system was developed using digital twin technology. Existing markerless AR operates on hardware such as GPS, gyroscopes, and magnetic sensors, so its location accuracy is low and its processing time is long, which results in low reliability in a real-time AR environment. To solve this problem, the SLAM technique is used to reconstruct the space as a 3D object and to build markerless AR based on point locations, so that AR can be implemented without additional hardware in a real-time environment. This real-time AR configuration made it possible to implement a navigation system using characters at tourist attractions such as the Suncheon Bay Garden and the Suncheon Drama Filming Site.
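As a rough illustration of anchoring a guide character at a fixed point of the scanned (digital-twin) space, the sketch below projects a world-fixed anchor into the current camera view given a device pose reported by SLAM. The pose, intrinsics, and anchor point are hypothetical values for illustration only.

```python
import numpy as np

def project_anchor(T_world_device, anchor_world, K):
    """Project a world-fixed anchor point into the device camera image.
    T_world_device: 4x4 device pose from SLAM (device frame expressed in the map frame).
    anchor_world: (3,) point in the scanned map where the NPC should stand.
    K: 3x3 pinhole camera intrinsics."""
    T_device_world = np.linalg.inv(T_world_device)
    p = T_device_world @ np.append(anchor_world, 1.0)   # anchor in the camera frame
    if p[2] <= 0:
        return None                                      # behind the camera: don't render
    uv = K @ (p[:3] / p[2])
    return uv[:2]                                        # pixel where the character is drawn

K = np.array([[800, 0, 640], [0, 800, 360], [0, 0, 1]], float)
pose = np.eye(4)
pose[:3, 3] = [0.0, 0.0, -2.0]                           # device 2 m back from the anchor
print(project_anchor(pose, np.array([0.0, 0.0, 0.0]), K))  # -> image center
```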

The Study on Selection of human Model for Controllability Evaluation According to Working Postures

  • Kim, Do-Hoon;Park, Sung-Joon;Lim, Young-Jae;Jung, Eui-S.
    • Journal of the Ergonomics Society of Korea / v.31 no.3 / pp.437-444 / 2012
  • The purpose of this study was to suggest an appropriate human model for ergonomic evaluation considering working postures in 3D space. Background: Traditionally, extreme design rules have been widely used at the product design stage. Body sizes at the 5th and 95th percentiles of stature have generally been selected for controllability and clearance evaluation, respectively. However, these rules have limitations in reflecting working posture in ergonomic evaluation. Method: To define working posture in 3D space, not only the sagittal plane but also the lateral plane was considered. A kinematic linkage body model was used to represent working posture. Using the anthropometric data of 2,836 South Korean males, the point cloud of the end points of the linkage models was derived. The individuals lacking certain controllability were selected as human models for the evaluation. Result: For the standing posture, the conventional approach was found to be proper for all controllability evaluations. In contrast, in the sitting posture, tall people had less controllability for control locations below the shoulder point. Conclusion: From the derived proper range of controllability, an ergonomic evaluation rule was suggested according to working posture, especially for standing and sitting. Application: The results of this study are expected to aid in selecting an appropriate human model for ergonomic evaluation and to improve the usability of products and work spaces.
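The end-point point cloud mentioned above comes from forward kinematics of a linkage model driven by anthropometric link lengths. The sketch below uses a deliberately simplified two-link arm with illustrative joint ranges; it is not the study's actual linkage definition.

```python
import numpy as np

def reach_points(upper_arm, forearm, n=2000, seed=0):
    """Sample hand end points of a simplified two-link arm anchored at the shoulder.
    upper_arm, forearm: link lengths (m) taken from anthropometric data."""
    rng = np.random.default_rng(seed)
    flex  = rng.uniform(0, np.pi, n)          # shoulder flexion (sagittal plane)
    abd   = rng.uniform(0, np.pi / 2, n)      # shoulder abduction (lateral plane)
    elbow = rng.uniform(0, 2.4, n)            # elbow flexion (~140 deg max, assumed)
    # elbow position from the two shoulder angles
    ex = upper_arm * np.cos(flex) * np.cos(abd)
    ey = upper_arm * np.cos(flex) * np.sin(abd)
    ez = upper_arm * np.sin(flex)
    # hand position: forearm rotated further by the elbow angle in the same plane (simplified)
    hx = ex + forearm * np.cos(flex + elbow) * np.cos(abd)
    hy = ey + forearm * np.cos(flex + elbow) * np.sin(abd)
    hz = ez + forearm * np.sin(flex + elbow)
    return np.column_stack([hx, hy, hz])      # point cloud of reachable end points

pts = reach_points(upper_arm=0.33, forearm=0.27)
print(pts.shape, pts.min(axis=0), pts.max(axis=0))
```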

Development of 3D Crop Segmentation Model in Open-field Based on Supervised Machine Learning Algorithm (지도학습 알고리즘 기반 3D 노지 작물 구분 모델 개발)

  • Jeong, Young-Joon;Lee, Jong-Hyuk;Lee, Sang-Ik;Oh, Bu-Yeong;Ahmed, Fawzy;Seo, Byung-Hun;Kim, Dong-Su;Seo, Ye-Jin;Choi, Won
    • Journal of The Korean Society of Agricultural Engineers / v.64 no.1 / pp.15-26 / 2022
  • A 3D open-field farm model developed from UAV (Unmanned Aerial Vehicle) data could make crop monitoring easier and could also be an important dataset for fields like remote sensing or precision agriculture. It is essential to separate crops from the non-crop area automatically, because manual labeling is extremely laborious and not appropriate for continuous monitoring. We therefore built a 3D open-field farm model from UAV images and developed a crop segmentation model using a supervised machine learning algorithm. We compared the performance of models using different data features, such as color or geographic coordinates, and two supervised learning algorithms, SVM (Support Vector Machine) and KNN (K-Nearest Neighbors). The best approach was trained with two-dimensional features, ExGR (Excess Green minus Excess Red) and the z coordinate, using the KNN algorithm; its accuracy, precision, recall, and F1 score were 97.85%, 96.51%, 88.54%, and 92.35%, respectively. We also compared our model's performance with similar previous work. Our approach showed slightly better accuracy and detected the actual crop better than the previous approach, although it also classified some actual non-crop points (e.g., weeds) as crops.
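A small scikit-learn sketch of the best-performing feature/classifier combination reported above, ExGR plus the z coordinate with a KNN classifier. The ExGR formula follows the common definition (ExG = 2g - r - b, ExR = 1.4r - g, on chromatic coordinates), and the data here are random placeholders, not the study's point cloud.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

def exgr(rgb):
    """Excess Green minus Excess Red, computed on chromatic (normalized) r, g, b."""
    s = rgb.sum(axis=1, keepdims=True) + 1e-8
    r, g, b = (rgb / s).T
    exg = 2 * g - r - b
    exr = 1.4 * r - g
    return exg - exr

# hypothetical point cloud: per-point RGB, z coordinate, and a crop/non-crop label
rgb = np.random.rand(5000, 3)
z = np.random.rand(5000)
labels = (np.random.rand(5000) > 0.5).astype(int)

X = np.column_stack([exgr(rgb), z])            # the 2D feature set used by the best model
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print(f1_score(y_te, clf.predict(X_te)))
```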

Geometric and structural assessment and reverse engineering of a steel-framed building using 3D laser scanning

  • Arum Jang;Sanggi Jeong;Hunhee Cho;Donghwi Jung;Young K. Ju;Ji-sang Kim;Donghyuk Jung
    • Computers and Concrete / v.33 no.5 / pp.595-603 / 2024
  • In the construction industry, there has been a surge in the implementation of high-tech equipment in recent years, and various technologies are being considered as potential solutions for future construction projects. Building information modeling (BIM), which utilizes such advanced equipment, is a promising solution among these technologies. The need for safety inspection has also increased as structures age. Nevertheless, traditional safety inspection falls short of meeting this demand because it relies heavily on the subjective judgment of workers, which highlights the need to advance existing maintenance technology. Research on building safety inspection using 3D laser scanners has notably increased. Laser scanners using light detection and ranging (LiDAR) can quickly and accurately acquire building information, which can be realized through reverse engineering by modeling the point cloud data. This study introduces an evaluation system for building safety using a 3D laser scanner. The system was used to assess the safety of an existing three-story building by applying a reverse engineering technique. The 3D digital data obtained from the scanner are used to detect defects and deflections inside and outside the building and to create an as-built BIM. Subsequently, the as-built structural model of the building was generated using the reverse engineering approach and used for structural analysis. The acquired information, including deformations and dimensions, is compared with the expected values to evaluate the effectiveness of the proposed technique.
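One simple form of the deviation check described above is to compare the registered as-built scan against points sampled from the design geometry and flag points whose distance exceeds a tolerance. The Open3D sketch below uses hypothetical file names and an illustrative 20 mm tolerance; it is not the authors' evaluation system.

```python
import numpy as np
import open3d as o3d

scan = o3d.io.read_point_cloud("asbuilt_scan.pcd")           # registered laser scan
design = o3d.io.read_triangle_mesh("design_model.ply")        # BIM/design geometry
design_pts = design.sample_points_uniformly(number_of_points=500_000)

# distance from every scanned point to the nearest sampled design point
dist = np.asarray(scan.compute_point_cloud_distance(design_pts))
tolerance = 0.02                                               # 20 mm, illustrative only
deviated = dist > tolerance
print(f"{deviated.mean():.1%} of scanned points deviate by more than {tolerance*1000:.0f} mm")

# color the deviating points red for visual inspection
colors = np.tile([0.6, 0.6, 0.6], (len(dist), 1))
colors[deviated] = [1.0, 0.0, 0.0]
scan.colors = o3d.utility.Vector3dVector(colors)
o3d.io.write_point_cloud("deviation_map.pcd", scan)
```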

A Deep Convolutional Neural Network Based 6-DOF Relocalization with Sensor Fusion System (센서 융합 시스템을 이용한 심층 컨벌루션 신경망 기반 6자유도 위치 재인식)

  • Jo, HyungGi;Cho, Hae Min;Lee, Seongwon;Kim, Euntai
    • The Journal of Korea Robotics Society / v.14 no.2 / pp.87-93 / 2019
  • This paper presents 6-DOF relocalization using a 3D laser scanner and a monocular camera. The relocalization problem in robotics is to estimate the pose of the sensor when a robot revisits an area. A deep convolutional neural network (CNN) is designed to regress the 6-DOF sensor pose and is trained end-to-end using both RGB images and 3D point cloud information. We generate a new input that consists of RGB and range information. After the training step, the relocalization system outputs the pose of the sensor corresponding to each new input it receives. In most cases, however, a mobile robot navigation system has successive sensor measurements. To improve localization performance, the output of the CNN is used as the measurement of a particle filter that smooths the trajectory. We evaluate our relocalization method on real-world datasets using a mobile robot platform.
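A minimal PyTorch sketch of the kind of network described above: a small CNN that takes a 4-channel RGB-plus-range image and regresses a translation and a unit quaternion. The architecture and sizes are illustrative, not the paper's; in the paper this pose output then serves as the measurement for a particle filter that smooths the trajectory.

```python
import torch
import torch.nn as nn

class PoseNetLite(nn.Module):
    """Toy 6-DOF pose regressor: 4-channel input (RGB + range) -> translation + quaternion."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.trans = nn.Linear(128, 3)    # x, y, z
        self.rot = nn.Linear(128, 4)      # quaternion, normalized in forward()

    def forward(self, x):
        f = self.features(x)
        q = self.rot(f)
        return self.trans(f), q / q.norm(dim=1, keepdim=True)

net = PoseNetLite()
rgb_range = torch.randn(2, 4, 240, 320)    # batch of stacked RGB + range images
t, q = net(rgb_range)
print(t.shape, q.shape)                     # torch.Size([2, 3]) torch.Size([2, 4])
```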