• Title/Summary/Keyword: Cloud point extraction

Water Depth and Riverbed Surveying Using Airborne Bathymetric LiDAR System - A Case Study at the Gokgyo River (항공수심라이다를 활용한 하천 수심 및 하상 측량에 관한 연구 - 곡교천 사례를 중심으로)

  • Lee, Jae Bin;Kim, Hye Jin;Kim, Jae Hak;Wie, Gwang Jae
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.39 no.4 / pp.235-243 / 2021
  • River surveying is conducted to acquire basic geographic data for river master plans and various river maintenance works, and it is also used to predict changes after river maintenance construction. The ABL (Airborne Bathymetric LiDAR) system is a cutting-edge surveying technology that can simultaneously observe the water surface and riverbed using a green laser, and it offers many advantages for river surveying. To use ABL data for river surveying, a prerequisite step is to segment and extract the water surface and riverbed points from the original point cloud data. In this study, point cloud segmentation was performed by applying the ATIN (Adaptive Triangular Irregular Network) ground filtering technique to the ABL data, and the water surface and riverbed point clouds were then extracted sequentially. The experiment was conducted in the Gokgyo River area, Chungcheongnam-do, with a dataset obtained using the Leica Chiroptera 4X sensor. As a result, the overall classification accuracy for the water surface and riverbed was 88.8% and the Kappa coefficient was 0.825, confirming that ABL data can be used effectively for river surveying.
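As a rough illustration of the ATIN-style ground filtering step described above, the following Python sketch seeds a TIN with the lowest point per grid cell and iteratively densifies it. The grid size, distance threshold, and the simplified accept test (mean vertex height rather than a true plane and angle test) are illustrative assumptions, not the paper's parameters.

```python
# Minimal ATIN-style progressive TIN densification sketch (assumed parameters).
import numpy as np
from scipy.spatial import Delaunay

def atin_filter(points, cell=10.0, dist_thresh=0.3, n_iter=5):
    """points: (N, 3) x, y, z array. Returns a boolean 'surface' mask."""
    xy, z = points[:, :2], points[:, 2]
    # Seed the TIN with the lowest point in each coarse grid cell.
    keys = np.floor(xy / cell).astype(int)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    ground = np.zeros(len(points), bool)
    for c in range(inv.max() + 1):
        idx = np.where(inv == c)[0]
        ground[idx[np.argmin(z[idx])]] = True
    for _ in range(n_iter):
        tri = Delaunay(xy[ground])
        gz = z[ground]
        cand = ~ground
        simplex = tri.find_simplex(xy[cand])
        ok = simplex >= 0                      # candidate lies inside the TIN
        verts = tri.simplices[simplex[ok]]
        ref_z = gz[verts].mean(axis=1)         # simplified height reference
        close = np.abs(z[cand][ok] - ref_z) < dist_thresh
        new_idx = np.where(cand)[0][np.where(ok)[0][close]]
        if len(new_idx) == 0:
            break
        ground[new_idx] = True                 # densify the TIN and repeat
    return ground
```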

3D Extraction Method Using a Low Cost Line Laser (라인레이저를 이용한 3D 모델 추출 방법)

  • Yun, Chun Ho;Kim, Tae Gi;Cho, Yong Wook;Nam, Gi Won;Yim, Choong Hyuk
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.26 no.1 / pp.108-113 / 2017
  • In this paper, we propose a three-dimensional (3D) scanning system based on a laser vision technique for 3D model reconstruction. The proposed scanning system consists of a line laser, a camera, and a turntable. Although the system was implemented with low-quality components, we greatly reduced 3D data reconstruction errors using two methods. First, we developed a maximum brightness detection algorithm, which extracts the maximum brightness of the line laser to obtain the shape of the object. Second, we designed a new laser control device, which helps adjust the relative position of the turntable and the line laser. These two methods greatly reduce measurement noise. As a result, point cloud data can be obtained without complicated calculations.
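A minimal sketch of the maximum-brightness idea the abstract describes: for each image row, take the brightest column as the laser hit and refine it with an intensity-weighted centroid. The threshold, window width, and subpixel refinement are illustrative assumptions.

```python
# Minimal per-row laser stripe peak extraction sketch (assumed threshold).
import numpy as np

def stripe_centers(gray, min_intensity=50, win=2):
    """gray: (H, W) image. Returns (row, subpixel column) of the stripe."""
    peaks = []
    for r, row in enumerate(gray.astype(float)):
        c = int(np.argmax(row))              # brightest column in this row
        if row[c] < min_intensity:           # no laser hit on this row
            continue
        lo, hi = max(c - win, 0), min(c + win + 1, len(row))
        w = row[lo:hi]
        # Intensity-weighted centroid around the peak for subpixel accuracy.
        peaks.append((r, float((w * np.arange(lo, hi)).sum() / w.sum())))
    return peaks
```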

Multi-facet 3D Scanner Based on Stripe Laser Light Image (선형 레이저 광 영상기반 다면 3 차원 스캐너)

  • Ko, Young-Jun;Yi, Soo-Yeong
    • Journal of Institute of Control, Robotics and Systems / v.22 no.10 / pp.811-816 / 2016
  • In light of recently developed 3D printers for rapid prototyping, the 3D scanner is attracting increasing attention as a 3D data acquisition system for existing objects. This paper presents a prototypical 3D scanner based on a striped laser light image. To solve the problem of shadowed areas, the proposed 3D scanner has two cameras with one laser light source. By using a horizontal rotation table and a rotational arm rotating about the latitudinal axis, the scanner is able to scan in all directions. To avoid an additional optical filter for extracting laser light pixels from an image, we adopted a differential image method with laser light modulation. Experimental results show that the scanner's 3D data acquisition exhibited less than 0.2 mm of measurement error. The scanner thus demonstrates that an object's 3D surface can be reconstructed as point cloud data and the object reproduced using a commercially available 3D printer.
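The differential image method mentioned above can be sketched in a few lines: with the laser modulated on and off between consecutive frames, subtracting the off frame cancels ambient light so the stripe pixels can be thresholded without an optical filter. The threshold value is an assumption.

```python
# Minimal differential image sketch: laser-on minus laser-off frame.
import numpy as np

def stripe_mask(frame_on, frame_off, thresh=30):
    """Grayscale frames with the laser modulated on/off. Returns stripe mask."""
    diff = frame_on.astype(np.int16) - frame_off.astype(np.int16)
    return diff > thresh          # ambient light cancels; the stripe remains
```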

3D Object Detection with Low-Density 4D Imaging Radar PCD Data Clustering and Voxel Feature Extraction for Each Cluster (4D 이미징 레이더의 저밀도 PCD 데이터 군집화와 각 군집에 복셀 특징 추출 기법을 적용한 3D 객체 인식 기법)

  • Oh, Cha-Young;Gwon, Soon-Jae;Jung, Hyun-Jung;Jeong, Gu-Min
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.15 no.6 / pp.471-476 / 2022
  • In this paper, we propose an object detection method using a 4D imaging radar, which was developed to address the weaknesses of cameras and LiDAR in bad weather. When data are measured and collected with a 4D imaging radar, the density of the point cloud data is low compared to LiDAR data. Exploiting the wide distances between objects that result from this low density, we propose a technique that clusters objects and extracts the features of each object through voxels within the cluster. Furthermore, we propose object detection using the extracted features.
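A minimal sketch of the proposed two-stage idea, assuming DBSCAN as the clustering step and a binary voxel occupancy grid as the per-cluster feature; eps, min_samples, and the grid size are illustrative, not the paper's settings.

```python
# Minimal clustering + per-cluster voxel feature sketch (assumed parameters).
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_voxel_features(points, eps=1.5, min_samples=3, grid=8):
    """points: (N, 3) radar cloud. Returns (cluster_points, voxel_feature)."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    out = []
    for k in set(labels) - {-1}:            # label -1 is DBSCAN noise
        pts = points[labels == k]
        mins, maxs = pts.min(0), pts.max(0)
        scale = np.where(maxs > mins, maxs - mins, 1.0)
        idx = np.minimum(((pts - mins) / scale * grid).astype(int), grid - 1)
        vox = np.zeros((grid, grid, grid), np.float32)
        vox[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0   # binary occupancy
        out.append((pts, vox.ravel()))
    return out
```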

Bird's Eye View Semantic Segmentation based on Improved Transformer for Automatic Annotation

  • Tianjiao Liang;Weiguo Pan;Hong Bao;Xinyue Fan;Han Li
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.8 / pp.1996-2015 / 2023
  • High-definition (HD) maps can provide precise road information that enables an autonomous driving system to effectively navigate a vehicle. Recent research has focused on leveraging semantic segmentation to achieve automatic annotation of HD maps. However, existing methods suffer from low recognition accuracy in automatic driving scenarios, leading to inefficient annotation processes. In this paper, we propose a novel semantic segmentation method for automatic HD map annotation. Our approach introduces a new encoder, known as the convolutional transformer hybrid encoder, to enhance the model's feature extraction capabilities. Additionally, we propose a multi-level fusion module that enables the model to aggregate different levels of detail and semantic information. Furthermore, we present a novel decoupled boundary joint decoder to improve the model's ability to handle the boundaries between categories. To evaluate our method, we conducted experiments using the Bird's Eye View point cloud images dataset and the Cityscapes dataset. Comparative analysis against state-of-the-art methods demonstrates that our model achieves the highest performance, reaching an mIoU of 56.26% and surpassing SegFormer by 1.47% mIoU. This innovation promises to significantly enhance the efficiency of HD map automatic annotation.
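The abstract names a convolutional transformer hybrid encoder; the PyTorch block below sketches one plausible pairing of a local convolutional branch with a global self-attention branch. The layer sizes and the additive fusion are assumptions, not the paper's actual design.

```python
# Minimal convolution + self-attention hybrid block sketch (assumed design).
import torch
import torch.nn as nn

class ConvTransformerBlock(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.conv = nn.Sequential(              # local detail branch
            nn.Conv2d(dim, dim, 3, padding=1), nn.BatchNorm2d(dim), nn.ReLU())
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                       # x: (B, C, H, W)
        local = self.conv(x)
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)      # (B, H*W, C) tokens
        glob, _ = self.attn(seq, seq, seq)      # global context branch
        glob = self.norm(glob + seq).transpose(1, 2).reshape(b, c, h, w)
        return local + glob                     # fuse local and global features
```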

Design of a foot shape extraction system for foot parameter measurement (발 고유 변인 측정을 위한 발 형상 추출 시스템 설계)

  • Yun, Jeongrok;Kim, Hoemin;Kim, Unyong;Chun, Sungkuk
    • Proceedings of the Korean Society of Computer Information Conference / 2020.07a / pp.421-422 / 2020
  • Measuring foot-specific parameters and collecting the resulting data are necessary for manufacturing shoes that support consumers' foot health. As the need to revise shoe design standards has also been raised, research on measuring foot-specific parameters and acquiring such data is becoming increasingly important. In this paper, we describe a foot shape extraction system that automatically measures foot-specific parameters, so that customized insoles and shoes suited to each user can be manufactured from the computed foot shape data and shoe design standards can be derived. To this end, we designed and built a scanning stage for measuring the user's foot parameters and installed three depth cameras on it. To remove noise and background, the foreground region is separated using Gaussian background modeling to obtain foot point cloud data, and the individual point clouds are then registered via a Euclidean transformation. The experimental results show the acquired foot shape point cloud data, the ground-contact surface shape, and the extracted foot parameters.
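A minimal sketch of the two steps the abstract describes, assuming a per-pixel Gaussian background model for foreground separation and pre-calibrated rigid (Euclidean) camera-to-world transforms for merging the three cameras' clouds; the threshold k and the calibration matrices are assumptions.

```python
# Minimal foreground separation + rigid merge sketch (assumed calibration).
import numpy as np

def foreground_mask(depth, bg_mean, bg_std, k=3.0):
    """Per-pixel Gaussian background model: > k sigma from it is foreground."""
    return np.abs(depth - bg_mean) > k * bg_std

def merge_clouds(clouds, transforms):
    """clouds: list of (N_i, 3); transforms: 4x4 camera-to-world matrices."""
    merged = []
    for pts, T in zip(clouds, transforms):
        homo = np.hstack([pts, np.ones((len(pts), 1))])
        merged.append((homo @ T.T)[:, :3])   # apply the Euclidean transform
    return np.vstack(merged)
```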

Extraction of 3D Objects Around Roads Using MMS LiDAR Data (MMS LiDAR 자료를 이용한 도로 주변 3차원 객체 추출)

  • CHOUNG, Yun-Jae
    • Journal of the Korean Association of Geographic Information Studies / v.20 no.1 / pp.152-161 / 2017
  • Making precise 3D maps using Mobile Mapping System (MMS) sensors is essential for the development of self-driving cars. This paper extracts 3D objects around roads using the point cloud acquired by the MMS Light Detection and Ranging (LiDAR) sensor through the following steps. First, a digital surface model (DSM) is generated from the MMS LiDAR data, and a slope map is then derived from the DSM. Next, the 3D objects around the roads are identified using the slope information. Finally, 97% of the 3D objects around the roads are extracted using a morphological filtering technique. This research contributes to the application of automated driving technology by extracting 3D objects around roads from spatial data acquired by an MMS sensor.
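A minimal sketch of the DSM-to-slope-to-morphology chain described above; the cell size, slope threshold, and structuring element are illustrative assumptions.

```python
# Minimal DSM -> slope map -> morphological filtering sketch (assumed values).
import numpy as np
from scipy import ndimage

def extract_objects(dsm, cell=0.5, slope_thresh=30.0, struct=3):
    """dsm: (H, W) elevation grid with 'cell' metre spacing. Object mask out."""
    gy, gx = np.gradient(dsm, cell)
    slope = np.degrees(np.arctan(np.hypot(gx, gy)))   # slope map in degrees
    mask = slope > slope_thresh                       # steep cells mark objects
    se = np.ones((struct, struct))
    mask = ndimage.binary_closing(mask, structure=se) # fill small gaps
    return ndimage.binary_opening(mask, structure=se) # drop isolated speckle
```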

Robust Hand Region Extraction Using a Joint-based Model (관절 기반의 모델을 활용한 강인한 손 영역 추출)

  • Jang, Seok-Woo;Kim, Sul-Ho;Kim, Gye-Young
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.9 / pp.525-531 / 2019
  • Efforts to utilize human gestures to implement a more natural and interactive interface between humans and computers have been ongoing in recent years. In this paper, we propose a new algorithm that accepts consecutive three-dimensional (3D) depth images, defines a hand model, and robustly extracts the human hand region based on six palm joints and 15 finger joints. The 3D depth images are adaptively binarized to exclude non-interest areas, such as the background, and to accurately extract only the person's hand, which is the area of interest. Experimental results show that the presented algorithm detects the human hand region 2.4% more accurately than the existing method. The hand region extraction algorithm proposed in this paper is expected to be useful in various practical applications related to computer vision and image processing, such as gesture recognition, virtual reality implementation, 3D motion games, and sign recognition.
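A minimal sketch of the adaptive depth binarization step, under the assumption that the hand is the object nearest the camera and that the largest connected component within a depth band behind it is the hand region; the band width is illustrative.

```python
# Minimal depth-band hand segmentation sketch (assumed band width).
import numpy as np
from scipy import ndimage

def extract_hand(depth, band=150):
    """depth: (H, W) depth image in mm, 0 = invalid. Returns a hand mask."""
    valid = depth > 0
    if not valid.any():
        return valid
    near = depth[valid].min()             # assume the hand is nearest the camera
    mask = valid & (depth < near + band)  # adaptive band behind the nearest point
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))   # keep the largest blob
```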

Robust Estimation of Hand Poses Based on Learning (학습을 이용한 손 자세의 강인한 추정)

  • Kim, Sul-Ho;Jang, Seok-Woo;Kim, Gye-Young
    • Journal of the Korea Institute of Information and Communication Engineering / v.23 no.12 / pp.1528-1534 / 2019
  • Recently, with the popularization of 3D depth cameras, new research opportunities have opened up beyond work conducted on RGB images, but the estimation of human hand pose is still regarded as a difficult problem. In this paper, we propose a method for robustly estimating human hand pose from various input 3D depth images using a learning algorithm. The proposed approach first generates a skeleton-based hand model and then aligns the generated hand model with three-dimensional point cloud data. Using a random forest-based learning algorithm, the hand pose is then robustly estimated from the aligned hand model. Experimental results show that the proposed hierarchical approach provides robust and fast estimation of human hand posture from input depth images captured in various indoor and outdoor environments.
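A minimal sketch of the learning stage, assuming a scikit-learn random forest regressor and a flattened voxel occupancy grid as the feature for the aligned hand cloud; the feature choice and the 21-joint output layout are assumptions, not the paper's design.

```python
# Minimal random-forest pose regression sketch (assumed feature and layout).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def voxel_feature(points, grid=16):
    """Normalize an aligned hand cloud into a flattened occupancy grid."""
    mins, maxs = points.min(0), points.max(0)
    scale = np.where(maxs > mins, maxs - mins, 1.0)
    idx = np.minimum(((points - mins) / scale * grid).astype(int), grid - 1)
    vox = np.zeros((grid, grid, grid), np.float32)
    vox[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return vox.ravel()

# X: one feature row per training cloud; y: e.g. 21 joints x 3 coordinates.
# forest = RandomForestRegressor(n_estimators=100).fit(X, y)
# pose = forest.predict(voxel_feature(test_cloud)[None])   # (1, 63) estimate
```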

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.1-25 / 2020
  • In this paper, we suggest an application system architecture which provides an accurate, fast, and efficient automatic gasometer reading function. The system captures a gasometer image using a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount by selective optical character recognition based on deep learning. In general, an image contains many types of characters, and optical character recognition extracts all of them, but some applications need to ignore character types that are not of interest and focus only on specific ones. For example, an automatic gasometer reading system only needs to extract the device ID and gas usage amount from gasometer images to send bills to users; strings such as the device type, manufacturer, manufacturing date, and specification are not valuable to the application. The application therefore has to analyze only the region of interest and specific character types to extract valuable information. We adopted CNN (Convolutional Neural Network) based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, which analyzes only the region of interest for selective character information extraction. We built three neural networks for the application system: the first is a convolutional neural network which detects the regions of interest containing the gas usage amount and device ID character strings, the second is another convolutional neural network which transforms the spatial information of a region of interest into spatially sequential feature vectors, and the third is a bidirectional long short-term memory network which converts the sequential information into character strings through time-series analysis that maps feature vectors to characters. In this research, the character strings of interest are the device ID, which consists of 12 Arabic numeral characters, and the gas usage amount, which consists of 4-5 Arabic numeral characters. All system components are implemented in the Amazon Web Services cloud with Intel Xeon E5-2686 v4 CPUs and an NVIDIA Tesla V100 GPU. The system architecture adopts a master-slave processing structure for efficient and fast parallel processing, coping with about 700,000 requests per day. The mobile device captures the gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes reading requests from mobile devices to an input queue with a FIFO (First In First Out) structure. The slave process consists of the three deep neural networks which conduct the character recognition and runs on the NVIDIA GPU module. The slave process continuously polls the input queue for recognition requests; when a request arrives, it converts the image into the device ID character string, the gas usage amount character string, and the position information of the strings, returns this information to an output queue, and switches to idle mode to poll the input queue again. The master process gets the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation, and testing of the three deep neural networks: 22,985 images were used for training and validation, and 4,135 images were used for testing. We randomly split the 22,985 images with an 8:2 ratio into training and validation sets for each training epoch. The 4,135 test images were categorized into five types (normal, noise, reflex, scale, and slant): normal means clean images, noise means images with noise signals, reflex means images with light reflections in the gasometer region, scale means images with small object sizes due to long-distance capture, and slant means images which are not horizontally level. The final character string recognition accuracies for the device ID and gas usage amount on normal data are 0.960 and 0.864, respectively.
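The master-slave queue structure described above can be sketched with standard Python primitives: the master pushes requests into a FIFO input queue, and slave workers poll it, run the recognition pipeline, and post results to an output queue. The recognize() stub stands in for the paper's three-network pipeline.

```python
# Minimal master/slave FIFO queue sketch of the described architecture.
import queue
import threading

input_q, output_q = queue.Queue(), queue.Queue()   # Queue is FIFO by default

def recognize(image):
    # Placeholder for: ROI detection -> CRNN features -> bidirectional LSTM.
    return {"device_id": "...", "usage": "..."}

def slave_worker():
    while True:
        req_id, image = input_q.get()        # blocks until a request arrives
        output_q.put((req_id, recognize(image)))
        input_q.task_done()                  # back to polling the input queue

for _ in range(3):                           # e.g., one worker per GPU slave
    threading.Thread(target=slave_worker, daemon=True).start()

# Master side: input_q.put((req_id, image)); later, output_q.get() returns
# the recognized strings, which are delivered back to the mobile device.
```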