• Title/Summary/Keyword: Image-Based Point Cloud


LiDAR Chip for Automated Geo-referencing of High-Resolution Satellite Imagery (라이다 칩을 이용한 고해상도 위성영상의 자동좌표등록)

  • Lee, Chang No;Oh, Jae Hong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.32 no.4_1
    • /
    • pp.319-326
    • /
    • 2014
  • Accurate geo-referencing with ground control points is a prerequisite for the effective end use of HRSI (high-resolution satellite imagery). Since conventional control point acquisition by a human operator takes a long time, automated matching to existing reference data has been gaining popularity. Among the many candidate reference data, airborne LiDAR (Light Detection And Ranging) data shows high potential due to its high spatial resolution and vertical accuracy. In addition, it is a 3-dimensional point cloud free from relief displacement. Recently, a matching method between LiDAR data and HRSI was proposed that projects the whole LiDAR dataset into the HRSI domain; however, importing and processing such a large amount of LiDAR data is time-consuming. Therefore, we were motivated to propose local LiDAR chip generation for HRSI geo-referencing. In the procedure, a LiDAR point cloud is rasterized into an ortho image with the digital elevation model. We then select local areas containing a meaningful amount of edge information to create LiDAR chips of small data size. We tested the LiDAR chips for fully automated geo-referencing with Kompsat-2 and Kompsat-3 data, and the experimental results showed one-pixel-level mean accuracy.
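As an illustration only (not code from the paper), the rasterize-then-select-chips step described in the abstract can be sketched as follows; the function names, cell size, tile size, and edge threshold are all assumptions for the sketch:

```python
import numpy as np

def rasterize_to_ortho(points, cell=1.0):
    """Rasterize a LiDAR point cloud (N x 3 array of x, y, z) into a
    simple elevation grid, keeping the highest return per cell."""
    xy = points[:, :2]
    mins = xy.min(axis=0)
    idx = np.floor((xy - mins) / cell).astype(int)
    h, w = idx[:, 1].max() + 1, idx[:, 0].max() + 1
    grid = np.full((h, w), np.nan)
    for (c, r), z in zip(idx, points[:, 2]):
        if np.isnan(grid[r, c]) or z > grid[r, c]:
            grid[r, c] = z
    return grid

def edge_score(tile):
    """Mean gradient magnitude of a tile; higher means more edge content."""
    gy, gx = np.gradient(np.nan_to_num(tile))
    return float(np.hypot(gx, gy).mean())

def select_chips(grid, tile=8, threshold=0.5):
    """Return (row, col) offsets of tiles whose edge score exceeds threshold,
    i.e. local areas worth keeping as small LiDAR chips."""
    chips = []
    for r in range(0, grid.shape[0] - tile + 1, tile):
        for c in range(0, grid.shape[1] - tile + 1, tile):
            if edge_score(grid[r:r + tile, c:c + tile]) > threshold:
                chips.append((r, c))
    return chips
```

In this sketch, only tiles that straddle an elevation discontinuity (e.g. a building edge) score high enough to become chips, which mirrors the abstract's goal of keeping just the informative local areas.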

A New Calibration of 3D Point Cloud using 3D Skeleton (3D 스켈레톤을 이용한 3D 포인트 클라우드의 캘리브레이션)

  • Park, Byung-Seo;Kang, Ji-Won;Lee, Sol;Park, Jung-Tak;Choi, Jang-Hwan;Kim, Dong-Wook;Seo, Young-Ho
    • Journal of Broadcast Engineering
    • /
    • v.26 no.3
    • /
    • pp.247-257
    • /
    • 2021
  • This paper proposes a new technique for calibrating a multi-view RGB-D camera using a 3D (three-dimensional) skeleton. Calibrating a multi-view camera requires consistent feature points, and a high-accuracy calibration result requires accurate feature points. We use the human skeleton, which can be easily obtained with state-of-the-art pose estimation algorithms, as the feature points for calibrating the multi-view camera. Specifically, we propose an RGB-D-based calibration algorithm that uses the joint coordinates of the 3D skeleton obtained through the pose estimation algorithm as feature points. Since the human body information captured by the multi-view camera may be incomplete, the skeletons predicted from the acquired images may also be incomplete. After efficiently integrating a large number of incomplete skeletons into one skeleton, the multi-view cameras can be calibrated by using the integrated skeleton to obtain a camera transformation matrix. To increase the calibration accuracy, multiple skeletons are used for optimization through temporal iterations. We demonstrate through experiments that a multi-view camera can be calibrated using a large number of incomplete skeletons.
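The camera transformation matrix mentioned above is conventionally estimated from corresponding 3D points (here, skeleton joints seen by two cameras) with the SVD-based Kabsch/Procrustes method. A minimal sketch, not taken from the paper and with an assumed function name:

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate the rotation R and translation t that align src joints to
    dst joints (both N x 3, in correspondence) via the Kabsch method."""
    src_c = src - src.mean(axis=0)       # center both joint sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Given joints of the integrated skeleton expressed in two camera frames, `R` and `t` map one frame into the other; iterating this over many frames is one way to realize the temporal optimization the abstract describes.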

Non-contact mobile inspection system for tunnels: a review (터널의 비접촉 이동식 상태점검 장비: 리뷰)

  • Chulhee Lee;Donggyou Kim
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.25 no.3
    • /
    • pp.245-259
    • /
    • 2023
  • The purpose of this paper is to examine the most recent tunnel scanning systems to obtain insights for the development of a non-contact mobile inspection system. Tunnel scanning systems are mostly developed around two main technologies: laser scanning and image scanning. Laser scanning has the advantage of accurately recreating the geometric characteristics of tunnel linings from point clouds. Image scanning, on the other hand, employs computer vision to readily identify damage such as fine cracks and leaks on the tunnel lining surface. The analysis suggests that image scanning is more suitable for detecting damage on tunnel linings. A camera-based tunnel scanning system under development should include components such as lighting, data storage, a power supply, and an image-capturing controller synchronized with vehicle speed.

Aerial Object Detection and Tracking based on Fusion of Vision and Lidar Sensors using Kalman Filter for UAV

  • Park, Cheonman;Lee, Seongbong;Kim, Hyeji;Lee, Dongjin
    • International journal of advanced smart convergence
    • /
    • v.9 no.3
    • /
    • pp.232-238
    • /
    • 2020
  • In this paper, we study an aerial object detection and position estimation algorithm for the safety of UAVs that fly BVLOS (beyond visual line of sight). We use a vision sensor and LiDAR to detect objects: a CNN-based YOLOv2 architecture detects objects in the 2D image, and a clustering method detects objects in the point cloud data acquired from the LiDAR. When a single sensor is used, the detection rate can degrade in specific situations depending on the sensor's characteristics, so if the detection result from a single sensor is absent or false, the detection accuracy must be complemented. To do so, we use a Kalman filter and fuse the results of the individual sensors to improve detection accuracy. We estimate the 3D position of an object using its pixel position and the distance measured by the LiDAR. We verified the performance of the proposed fusion algorithm through simulation using the Gazebo simulator.
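To make the fusion idea concrete, here is a deliberately minimal scalar Kalman filter sketch (not the paper's implementation; the class name, noise values, and measurements are illustrative): two measurements of the same object coordinate, one noisier (vision) and one more precise (LiDAR), are blended according to their noise levels.

```python
class Kalman1D:
    """Minimal scalar Kalman filter with a constant-position model.

    q is the process noise added at each predict step; each sensor update
    weights its measurement z by the sensor noise variance r."""

    def __init__(self, x0, p0, q):
        self.x, self.p, self.q = x0, p0, q

    def predict(self):
        self.p += self.q                   # uncertainty grows over time
        return self.x

    def update(self, z, r):
        k = self.p / (self.p + r)          # Kalman gain
        self.x += k * (z - self.x)         # pull estimate toward measurement
        self.p *= (1.0 - k)                # uncertainty shrinks after update
        return self.x

kf = Kalman1D(x0=0.0, p0=1.0, q=0.01)
kf.predict()
kf.update(10.0, r=1.0)    # vision measurement, higher noise
kf.update(10.4, r=0.25)   # LiDAR measurement, lower noise
```

After both updates the estimate sits between the two measurements, closer to the more trusted LiDAR value, and the variance `kf.p` is smaller than either single sensor would give alone.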

DISTANCE MEASUREMENT IN THE AEC/FM INDUSTRY: AN OVERVIEW OF TECHNOLOGIES

  • Jasmine Hines;Abbas Rashidi;Ioannis Brilakis
    • International conference on construction engineering and project management
    • /
    • 2013.01a
    • /
    • pp.616-623
    • /
    • 2013
  • One of the oldest and most common engineering problems is measuring the dimensions of objects and the distances between locations. In AEC/FM, related uses vary from large-scale applications such as measuring distances between cities to small-scale applications such as measuring the depth of a crack or the width of a welded joint. Within the last few years, advances in applying new technologies have prompted the development of new measuring devices such as ultrasound and laser-based measurers. Because of wide variations in type, associated costs, and levels of accuracy, selecting an optimal measuring technology is challenging for construction engineers and facility managers. To tackle this issue, we present an overview of various measuring technologies adopted by experts in the area of AEC/FM. As the next step, to evaluate the performance of these technologies, we select one indoor and one outdoor case and measure several dimensions using six categories of technologies: tapes, total stations, laser measurers, ultrasound devices, laser scanners, and image-based technologies. We then evaluate and compare the results according to metrics such as accuracy, ease of use, operation time, and associated costs, and recommend optimal technologies for specific applications. The results also revealed that in most applications, computer vision-based technologies outperform traditional devices in terms of ease of use, associated costs, and accuracy.


Classification of Objects using CNN-Based Vision and Lidar Fusion in Autonomous Vehicle Environment

  • G. Komali;A. Sri Nagesh
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.11
    • /
    • pp.67-72
    • /
    • 2023
  • In the past decade, Autonomous Vehicle Systems (AVS) have advanced at an exponential rate, particularly due to improvements in artificial intelligence, which have had a significant impact on society as well as on road safety and the future of transportation systems. Real-time fusion of light detection and ranging (LiDAR) and camera data is a crucial process in many applications, such as autonomous driving, industrial automation, and robotics. Especially for autonomous vehicles, efficient fusion of data from these two sensor types is important for estimating the depth of objects as well as classifying objects at short and long distances. This paper presents classification of objects using CNN-based vision and LiDAR fusion in an autonomous vehicle environment. The method is based on a convolutional neural network (CNN) and image upsampling theory. By upsampling the LiDAR point cloud and converting it into pixel-level depth information, the depth information is combined with the red-green-blue data and fed into a deep CNN. The proposed method obtains an informative feature representation for object classification in the autonomous vehicle environment using the integrated vision and LiDAR data, and is designed to guarantee both classification accuracy and minimal loss. Experimental results show the effectiveness and efficiency of the presented approach for object classification.
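The upsample-and-concatenate step described in the abstract can be sketched as below. This is an assumption-laden illustration, not the paper's pipeline: nearest-neighbor upsampling stands in for whatever upsampling theory the authors use, and the function names are invented.

```python
import numpy as np

def upsample_depth(depth_lr, out_hw):
    """Nearest-neighbor upsampling of a low-resolution depth map to the
    RGB image resolution (a stand-in for the paper's upsampling step)."""
    h, w = depth_lr.shape
    H, W = out_hw
    rows = np.arange(H) * h // H
    cols = np.arange(W) * w // W
    return depth_lr[np.ix_(rows, cols)]

def fuse_rgbd(rgb, depth_lr):
    """Concatenate an RGB image (H x W x 3) with the upsampled depth map
    into an H x W x 4 tensor ready to feed a deep CNN."""
    depth = upsample_depth(depth_lr, rgb.shape[:2])
    return np.concatenate([rgb, depth[..., None]], axis=-1)
```

The resulting 4-channel tensor is what "depth information combined with red-green-blue data" means in practice: the network sees per-pixel color and distance together.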

Registration Technique of Partial 3D Point Clouds Acquired from a Multi-view Camera for Indoor Scene Reconstruction (실내환경 복원을 위한 다시점 카메라로 획득된 부분적 3차원 점군의 정합 기법)

  • Kim Sehwan;Woo Woontack
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.42 no.3 s.303
    • /
    • pp.39-52
    • /
    • 2005
  • In this paper, a registration method is presented for registering partial 3D point clouds, acquired from a multi-view camera, for 3D reconstruction of an indoor environment. In general, conventional registration methods require high computational complexity and much registration time, and they are not robust for 3D point clouds with comparatively low precision. To overcome these drawbacks, a projection-based registration method is proposed. First, depth images are refined using a temporal property, by excluding 3D points with large variation, and a spatial property, by filling holes with reference to neighboring 3D points. Second, 3D point clouds acquired from two views are projected onto the same image plane, and a two-step integer mapping is applied to enable a modified KLT (Kanade-Lucas-Tomasi) tracker to find correspondences. Fine registration is then carried out by minimizing distance errors over an adaptive search range. Finally, we calculate final colors by referring to the colors of corresponding points, and reconstruct an indoor environment by applying the above procedure to consecutive scenes. The proposed method not only reduces computational complexity by searching for correspondences on a 2D image plane, but also enables effective registration even for 3D points with low precision. Furthermore, only a few color and depth images are needed to reconstruct an indoor environment.
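The key trick above, searching for correspondences in 2D rather than 3D, rests on projecting camera-frame 3D points onto the image plane. A minimal pinhole-model sketch (illustrative names and intrinsics, not the paper's code):

```python
import numpy as np

def project_points(points, fx, fy, cx, cy):
    """Pinhole projection of camera-frame 3D points (N x 3, z > 0) onto
    the image plane, so correspondences can be searched in 2D."""
    z = points[:, 2]
    u = fx * points[:, 0] / z + cx   # horizontal pixel coordinate
    v = fy * points[:, 1] / z + cy   # vertical pixel coordinate
    return np.stack([u, v], axis=1)
```

Once both partial clouds are projected with the same intrinsics, a 2D tracker such as KLT can match pixels, and each 2D match maps back to a 3D point pair for the registration step.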

Comparison of Open Source based Algorithms and Filtering Methods for UAS Image Processing (오픈소스 기반 UAS 영상 재현 알고리즘 및 필터링 기법 비교)

  • Kim, Tae Hee;Lee, Yong Chang
    • Journal of Cadastre & Land InformatiX
    • /
    • v.50 no.2
    • /
    • pp.155-168
    • /
    • 2020
  • Open source is a key growth engine of the 4th industrial revolution, and the continuous development and use of various algorithms for image processing is expected. The purpose of this study is to examine the effectiveness of open-source-based algorithms for UAS image processing by comparing and analyzing water reproduction, moving-object filtering, and the time required for data processing in 3D reconstruction. Five matching algorithms were compared on recall and processing speed through the 'ANN-Benchmarks' program, and the HNSW (Hierarchical Navigable Small World) matching algorithm was judged to be the best. Based on this, 108 image processing algorithms were constructed by combining methods for triangulation, point cloud densification, and surface generation. The 3D reconstruction quality and data processing time of the 108 algorithms were then studied for UAS (Unmanned Aerial System) images of a park adjacent to the sea, and compared against the commercial image processing software 'Pix4D Mapper'. As a result, the algorithms that performed well at reproducing water and filtering moving objects during 3D reconstruction were identified, the algorithm with the lowest required time was selected, and its effectiveness was verified by comparison with the 'Pix4D Mapper' results.
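The recall figure used to rank the matching algorithms is simply the fraction of the true k nearest neighbors that the approximate search returned. A minimal sketch of that metric (illustrative, with an assumed function name):

```python
def recall_at_k(approx_ids, exact_ids, k=10):
    """Fraction of the true k nearest neighbors (exact_ids) that the
    approximate search (approx_ids) actually returned."""
    hits = len(set(approx_ids[:k]) & set(exact_ids[:k]))
    return hits / k
```

A recall of 1.0 means the approximate matcher (e.g. HNSW) found every true neighbor; benchmarks like ANN-Benchmarks plot this recall against queries per second to expose the speed/accuracy trade-off.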

A Study on Improving the Quality of DIBR Intermediate Images Using Meshes (메쉬를 활용한 DIBR 기반 중간 영상 화질 향상 방법 연구)

  • Kim, Jiseong;Kim, Minyoung;Cho, Yongjoo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2014.10a
    • /
    • pp.822-823
    • /
    • 2014
  • The usual method of generating images for a multiview display system is to acquire a color image and depth information from a reference camera, generate intermediate images at a number of different viewpoints using the DIBR method, and compose them into a multiview image. When such intermediate views are generated, holes appear because regions hidden in the reference view become visible from the new viewpoints. Previous research tried to solve this problem by creating new hole-filling algorithms or by enhancing the depth information. This paper describes a new method of enhancing the intermediate view images by applying the Ball Pivoting algorithm, which constructs meshes from a point cloud. When the new method is applied to Microsoft's "Ballet" and "Break Dancer" data sets, PSNR comparison shows an improvement of about 0.18~1.19 dB. This paper explains the new algorithm along with the experimental method and results.

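The PSNR figure cited above is a standard image-quality metric. For reference, a minimal implementation (not from the paper; the peak value assumes 8-bit images):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    synthesized intermediate view; higher is better."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")              # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

Comparing the rendered intermediate view against a ground-truth camera image with this metric is how improvements like the 0.18~1.19 dB gain reported above are measured.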

Comparative Analysis of Filtering Techniques for Vegetation Points Removal from Photogrammetric Point Clouds at the Stream Levee (하천 제방의 영상 점군에서 식생 점 제거 필터링 기법 비교 분석)

  • Park, Heeseong;Lee, Du Han
    • Ecology and Resilient Infrastructure
    • /
    • v.8 no.4
    • /
    • pp.233-244
    • /
    • 2021
  • This study investigated the application of terrestrial light detection and ranging (LiDAR) to inspect defects of vegetated levees. The accuracy of vegetation filtering techniques was compared by applying them to photogrammetric point clouds of a vegetated levee generated by terrestrial LiDAR. Ten representative vegetation filters, namely CIVE, ExG, ExGR, ExR, MExG, NGRDI, VEG, VVI, ATIN, and ISL, were applied to point cloud data of the Imjin River levee. Based on the results, the accuracy order of the 10 techniques was ISL, ATIN, ExR, NGRDI, ExGR, ExG, MExG, VVI, VEG, and CIVE. The color filters show certain limitations in classifying vegetation and ground, and classify grass flower images as ground. The morphological filters show high classification accuracy, but classify rocks as vegetation. Overall, morphological filters are superior to color filters; however, they take 10 times more computation time. To improve vegetation removal, combined color and morphological filters should be studied.
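To illustrate how the color-index filters in the comparison above work, here is a sketch of the Excess Green (ExG = 2g - r - b) filter on normalized chromaticity coordinates. This is a generic illustration with an assumed threshold, not the study's implementation:

```python
import numpy as np

def exg_mask(rgb, threshold=0.1):
    """Excess Green vegetation mask: normalize RGB to chromaticity
    coordinates, compute ExG = 2g - r - b, and flag values above the
    threshold as vegetation points."""
    rgb = rgb.astype(float)
    s = rgb.sum(axis=-1, keepdims=True)
    s[s == 0] = 1.0                      # avoid division by zero
    r, g, b = np.moveaxis(rgb / s, -1, 0)
    return (2 * g - r - b) > threshold
```

Green vegetation scores high on ExG while gray ground, rock, and (as the study notes) non-green grass flowers score low, which is exactly the failure mode that motivates combining color filters with morphological ones such as ATIN or ISL.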