Title/Summary/Keyword: Point Cloud Fusion

Common Optical System for the Fusion of Three-dimensional Images and Infrared Images

  • Kim, Duck-Lae; Jung, Bo Hee; Kong, Hyun-Bae; Ok, Chang-Min; Lee, Seung-Tae
    • Current Optics and Photonics, v.3 no.1, pp.8-15, 2019
  • We describe a common optical system that merges a LADAR system, which generates a point cloud, and a more traditional imaging system operating in the LWIR, which generates image data. The optimum diameter of the entrance pupil was determined by analysis of detection ranges of the LADAR sensor, and the result was applied to design a common optical system using LADAR sensors and LWIR sensors; the performance of these sensors was then evaluated. The minimum detectable signal of the 128×128-pixel LADAR detector was calculated as 20.5 nW. The detection range of the LADAR optical system was calculated to be 1,000 m, and according to the results, the optimum diameter of the entrance pupil was determined to be 15.7 cm. The modulation transfer function (MTF) in relation to the diffraction limit of the designed common optical system was analyzed and, according to the results, the MTF of the LADAR optical system was 98.8% at the spatial frequency of 5 cycles per millimeter, while that of the LWIR optical system was 92.4% at the spatial frequency of 29 cycles per millimeter. The detection, recognition, and identification distances of the LWIR optical system were determined to be 5.12, 2.82, and 1.96 km, respectively.
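
The MTF figures quoted above are stated relative to the diffraction limit. As a rough numerical companion, below is a minimal Python sketch of the standard diffraction-limited MTF formula for a circular pupil; the wavelengths and f-number in the example are illustrative assumptions, not design parameters from the paper.

```python
import numpy as np

def diffraction_mtf(nu_cyc_per_mm, wavelength_mm, f_number):
    """Diffraction-limited MTF of a circular aperture at spatial
    frequency nu: MTF = (2/pi) * (arccos(x) - x*sqrt(1 - x^2)),
    where x = nu / nu_c and nu_c = 1 / (wavelength * f_number)."""
    nu_c = 1.0 / (wavelength_mm * f_number)        # cutoff frequency, cycles/mm
    x = np.clip(nu_cyc_per_mm / nu_c, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x**2))

# Illustrative values only: the 1.55 um LADAR wavelength, 10 um LWIR
# wavelength, and f/2 aperture are assumptions, not the paper's values.
print(diffraction_mtf(5.0, 1.55e-3, 2.0))    # LADAR band at 5 cycles/mm
print(diffraction_mtf(29.0, 10.0e-3, 2.0))   # LWIR band at 29 cycles/mm
```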

Bird's Eye View Semantic Segmentation based on Improved Transformer for Automatic Annotation

  • Tianjiao Liang; Weiguo Pan; Hong Bao; Xinyue Fan; Han Li
    • KSII Transactions on Internet and Information Systems (TIIS), v.17 no.8, pp.1996-2015, 2023
  • High-definition (HD) maps can provide precise road information that enables an autonomous driving system to effectively navigate a vehicle. Recent research has focused on leveraging semantic segmentation to achieve automatic annotation of HD maps. However, existing methods suffer from low recognition accuracy in autonomous driving scenarios, leading to inefficient annotation processes. In this paper, we propose a novel semantic segmentation method for automatic HD map annotation. Our approach introduces a new encoder, the convolutional transformer hybrid encoder, to enhance the model's feature extraction capabilities. Additionally, we propose a multi-level fusion module that enables the model to aggregate different levels of detail and semantic information. Furthermore, we present a novel decoupled boundary joint decoder to improve the model's ability to handle boundaries between categories. To evaluate our method, we conducted experiments on the Bird's Eye View point cloud images dataset and the Cityscapes dataset. Comparative analysis against state-of-the-art methods demonstrates that our model achieves the highest performance. Specifically, our model achieves an mIoU of 56.26%, surpassing SegFormer by 1.47% mIoU. This innovation promises to significantly enhance the efficiency of automatic HD map annotation.
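
The abstract names a multi-level fusion module that aggregates different levels of detail and semantic information. The paper's exact design is not given here, so the following PyTorch sketch shows only the generic pattern such a module typically follows: project each feature level to a common channel width, upsample to the finest resolution, and merge. All layer sizes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiLevelFusion(nn.Module):
    """Generic multi-scale feature fusion: project each level to a common
    width with 1x1 convolutions, upsample everything to the finest
    resolution, and merge by summation."""
    def __init__(self, in_channels=(64, 128, 256), out_channels=128):
        super().__init__()
        self.proj = nn.ModuleList(
            [nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels]
        )

    def forward(self, features):
        target_size = features[0].shape[-2:]          # finest level
        fused = 0
        for f, proj in zip(features, self.proj):
            x = proj(f)
            x = F.interpolate(x, size=target_size, mode="bilinear",
                              align_corners=False)
            fused = fused + x
        return fused

# Toy usage with three feature maps at decreasing resolution.
feats = [torch.randn(1, 64, 64, 64),
         torch.randn(1, 128, 32, 32),
         torch.randn(1, 256, 16, 16)]
print(MultiLevelFusion()(feats).shape)  # torch.Size([1, 128, 64, 64])
```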

Object Classification Using Point Cloud and True Ortho-image by Applying Random Forest and Support Vector Machine Techniques

  • Seo, Hong Deok; Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.37 no.6, pp.405-416, 2019
  • Due to the development of information and communication technology, data are being produced and processed ever faster. For object classification with machine learning, a field of artificial intelligence, the data required for training can now be collected easily thanks to advances in internet and geospatial information technology. In the field of geospatial information, machine learning is likewise being applied to classify or recognize objects from images and point clouds. In this study, the problem of manually constructing training data was addressed by using the existing digital map version 1.0, and a technique for classifying roads, buildings, and vegetation from images and point clouds was proposed. In the experiments, roads, buildings, and vegetation whose colors were clearly distinguishable could be classified using a true ortho-image with only RGB (red, green, blue) bands. However, when the colors of the objects to be classified were similar, the objects were classified poorly. To overcome this limitation, random forest and support vector machine techniques were applied after band fusion of the true ortho-image and the normalized digital surface model, and roads, buildings, and vegetation were classified with more than 85% accuracy.
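
The band-fusion step amounts to stacking the RGB bands of the true ortho-image with the normalized digital surface model (nDSM) height band as per-pixel features, then training the two classifiers. A minimal scikit-learn sketch of that workflow, on random stand-in data rather than real rasters, looks like this:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical per-pixel samples: R, G, B from the true ortho-image plus
# an nDSM height band; labels 0=road, 1=building, 2=vegetation.
# Random data stands in for real rasters here.
rng = np.random.default_rng(0)
X = rng.random((3000, 4))       # columns: R, G, B, nDSM
y = rng.integers(0, 3, 3000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)

for name, clf in [("random forest", RandomForestClassifier(n_estimators=100)),
                  ("SVM", SVC(kernel="rbf"))]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```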

Improved Parameter Inference for Low-Cost 3D LiDAR-Based Object Detection on Clustering Algorithms

  • Kim, Da-hyeon; Ahn, Jun-ho
    • Journal of Internet Computing and Services, v.23 no.6, pp.71-78, 2022
  • This paper proposes an algorithm for 3D object detection that processes the point cloud data of a 3D LiDAR. Unlike 2D LiDAR data, 3D LiDAR data are vast and difficult to process in three dimensions. This paper reviews various studies based on 3D LiDAR and describes 3D LiDAR data processing. We propose a method of processing 3D LiDAR data with clustering techniques for object detection, and design an algorithm that fuses the LiDAR with cameras for clear and accurate 3D object detection. In addition, we evaluate models for clustering 3D LiDAR data and the hyperparameter values appropriate to each model. When clustering 3D LiDAR data, the DBSCAN algorithm showed the most accurate results, and the hyperparameter values of DBSCAN were compared and analyzed. This study should be helpful for future object-detection research using 3D LiDAR.
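
For readers unfamiliar with DBSCAN's two main hyperparameters, eps (the neighborhood radius) and min_samples (the density threshold), here is a minimal sketch on a synthetic stand-in for a LiDAR frame; the parameter values are illustrative, not the ones inferred in the paper.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Synthetic stand-in for a 3D LiDAR frame: two object-like blobs plus
# scattered background noise. eps and min_samples are the hyperparameters
# the paper tunes; the values below are illustrative only.
rng = np.random.default_rng(1)
obj_a = rng.normal([5.0, 0.0, 1.0], 0.2, size=(200, 3))
obj_b = rng.normal([8.0, 3.0, 1.0], 0.2, size=(200, 3))
noise = rng.uniform([0.0, -5.0, 0.0], [12.0, 8.0, 2.0], size=(100, 3))
points = np.vstack([obj_a, obj_b, noise])

labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(points)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(f"{n_clusters} clusters, {np.sum(labels == -1)} noise points")
```

Larger eps merges nearby objects into one cluster, while larger min_samples discards sparse returns as noise, which is why the two values have to be tuned jointly against the sensor's point density.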

Estimation of Fresh Weight for Chinese Cabbage Using the Kinect Sensor

  • Lee, Sukin; Kim, Kwang Soo
    • Korean Journal of Agricultural and Forest Meteorology, v.20 no.2, pp.205-213, 2018
  • Development and validation of crop models often require measurements of biomass for the crop of interest. Considerable effort is needed to obtain a reasonable amount of biomass data because destructive sampling of the crop is usually used. The Kinect sensor, which combines image and depth sensors, can be used to estimate crop biomass without a destructive sampling approach, which could provide more data sets for model development and validation. The objective of this study was to examine the applicability of the Kinect sensor for estimating the fresh weight of Chinese cabbage. The fresh weight of five Chinese cabbages was measured and compared with estimates obtained using the Kinect sensor. The estimates were obtained by scanning each Chinese cabbage to create a point cloud, removing noise, and building a three-dimensional model with a set of free software tools. The 3D model created using the Kinect sensor explained about 98.7% of the variation in fresh weight of Chinese cabbage. Furthermore, the correlation coefficient between estimates and measurements was highly significant, which suggests that the Kinect sensor is applicable to estimating the fresh weight of Chinese cabbage. Our results demonstrate that a depth sensor allows a non-destructive sampling approach, making it possible to collect observation data on crop fresh weight over time. This would help the development and validation of crop models using a large number of reliable data sets, which merits further studies on applying various depth sensors to crop dry-weight measurements.
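
The calibration step, regressing a volume proxy derived from the point cloud against measured fresh weight, can be sketched as follows. The convex-hull volume and the linear fit are simplifying assumptions made here for illustration; the paper builds full 3D models with dedicated software, and the weights in the example are made up.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Hypothetical pipeline stage: once a denoised point cloud of a single
# cabbage exists, a simple volume proxy can be regressed against measured
# fresh weight. A convex hull is a crude stand-in for the paper's 3D
# models, used here only to illustrate the calibration step.
def hull_volume(points):
    return ConvexHull(points).volume

rng = np.random.default_rng(2)
# Five fake "cabbage" clouds of increasing size.
clouds = [rng.normal(0.0, 0.1 * (i + 1), size=(500, 3)) for i in range(5)]
volumes = np.array([hull_volume(c) for c in clouds])
weights = np.array([0.4, 0.9, 1.6, 2.4, 3.1])   # made-up fresh weights, kg

slope, intercept = np.polyfit(volumes, weights, 1)  # linear calibration
print(f"weight ~= {slope:.2f} * volume + {intercept:.2f}")
```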