• Title/Summary/Keyword: Image-Based Point Cloud

Point Cloud Registration Algorithm Based on RGB-D Camera for Shooting Volumetric Objects (체적형 객체 촬영을 위한 RGB-D 카메라 기반의 포인트 클라우드 정합 알고리즘)

  • Kim, Kyung-Jin; Park, Byung-Seo; Kim, Dong-Wook; Seo, Young-Ho
    • Journal of Broadcast Engineering, v.24 no.5, pp.765-774, 2019
  • In this paper, we propose a point cloud registration algorithm for multiple RGB-D cameras. In general, computer vision is concerned with precisely estimating camera position. Existing 3D model generation methods require a large number of cameras or expensive 3D cameras, and the conventional approach of estimating camera extrinsic parameters from two-dimensional images carries a large estimation error. We propose a method that obtains coordinate transformation parameters with an error within a valid range, using depth images and a function optimization method, in order to generate an omnidirectional three-dimensional model from eight low-cost RGB-D cameras.
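
The abstract does not give the optimization details, so as a point of reference only, here is a minimal sketch of the underlying subproblem: estimating the rigid transform between two depth-derived point clouds with known correspondences, via the closed-form Kabsch/SVD solution. All names are illustrative; this is not the paper's algorithm.

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t minimizing ||R @ src + t - dst||
    for corresponding Nx3 point sets (Kabsch algorithm via SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# toy check: recover a known transform
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 3))
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(R_true) < 0:                     # ensure a proper rotation
    R_true[:, 0] *= -1
dst = src @ R_true.T + np.array([0.1, -0.2, 0.3])
R, t = rigid_transform(src, dst)
print(np.allclose(R @ src.T + t[:, None], dst.T, atol=1e-6))
```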

Entropy-Based 6 Degrees of Freedom Extraction for the W-band Synthetic Aperture Radar Image Reconstruction (W-band Synthetic Aperture Radar 영상 복원을 위한 엔트로피 기반의 6 Degrees of Freedom 추출)

  • Hyokbeen Lee; Duk-jin Kim; Junwoo Kim; Juyoung Song
    • Korean Journal of Remote Sensing, v.39 no.6_1, pp.1245-1254, 2023
  • Significant research has been conducted on the W-band synthetic aperture radar (SAR) system that utilizes the 77 GHz frequency-modulated continuous wave (FMCW) radar. To reconstruct a high-resolution W-band SAR image, the point cloud acquired from stereo cameras or LiDAR must be transformed along 6 degrees of freedom (DOF) and applied to the SAR signal processing. However, matching images acquired from different sensors is difficult because of their different geometric structures. In this study, we present a method to extract an optimized depth map by obtaining the 6 DOF of the point cloud using a gradient descent method based on the entropy of the SAR image. An experiment was conducted to reconstruct a tree, a major road-environment object, using the constructed W-band SAR system. Compared to SAR images reconstructed from radar coordinates, the SAR image reconstructed using the entropy-based gradient descent method showed a decrease of 53.2828 in mean square error and an increase of 0.5529 in the structural similarity index.
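
The core loop described here, minimizing SAR image entropy over a 6-DOF pose by gradient descent, can be sketched as below. This is a toy illustration only: render_image is a placeholder whose blur grows with pose error (the real system would rerun SAR focusing on the transformed point cloud), and all step sizes are arbitrary assumptions.

```python
import numpy as np

def image_entropy(img, bins=64):
    """Shannon entropy of the image intensity histogram."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

def render_image(pose):
    """Toy stand-in for SAR focusing: a blob whose blur grows with the
    pose error, so entropy is minimized near pose == 0."""
    x = np.linspace(-1.0, 1.0, 64)
    xx, yy = np.meshgrid(x, x)
    sigma = 0.05 + 0.2 * np.linalg.norm(pose)
    return np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))

def entropy_descent(pose0, lr=0.1, eps=0.05, iters=100):
    """Finite-difference gradient descent on image entropy over the
    6-DOF vector (tx, ty, tz, roll, pitch, yaw)."""
    pose = np.asarray(pose0, dtype=float)
    for _ in range(iters):
        grad = np.zeros(6)
        for i in range(6):
            d = np.zeros(6); d[i] = eps
            grad[i] = (image_entropy(render_image(pose + d)) -
                       image_entropy(render_image(pose - d))) / (2 * eps)
        pose -= lr * grad          # sharper (focused) image = lower entropy
    return pose

print(entropy_descent(np.full(6, 0.5)))
```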

Gradient field based method for segmenting 3D point cloud (Gradient Field 기반 3D 포인트 클라우드 지면분할 기법)

  • Vu, Hoang; Chu, Phuong; Cho, Seoungjae; Zhang, Weiqiang; Wen, Mingyun; Sim, Sungdae; Kwak, Kiho; Cho, Kyungeun
    • Proceedings of the Korea Information Processing Society Conference, 2016.10a, pp.733-734, 2016
  • This study proposes a novel approach for ground segmentation of a 3D point cloud, combining two techniques: gradient-threshold segmentation and mean-height evaluation. The acquired 3D point cloud is represented as a graph data structure by exploiting the structure of the 2D reference image. Ground parts near the sensor position are segmented with the gradient-threshold technique; for sparse regions, ground and non-ground points are separated by mean-height evaluation. The main contribution of this study is a ground segmentation algorithm that works well with 3D point clouds from various environments, with a processing time low enough to run in real time.
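
A minimal sketch of the two combined ideas on an organized (range-image) point cloud follows; the thresholds, grid layout, and the assumption that row 0 is nearest the sensor are illustrative, not from the paper.

```python
import numpy as np

def segment_ground(points, grad_thresh=0.15, height_margin=0.2):
    """points: (H, W, 3) organized point cloud from a 2D reference image.
    A cell is ground if the height gradient along the range direction
    stays below grad_thresh; remaining/sparse cells are labeled by
    comparison with the mean ground height."""
    z = points[..., 2]
    dist = np.linalg.norm(points[..., :2], axis=-1)
    dz = np.diff(z, axis=0)                    # height change row-to-row
    dd = np.diff(dist, axis=0)                 # planar distance change
    grad = np.abs(dz) / np.maximum(np.abs(dd), 1e-6)
    ground = np.zeros_like(z, dtype=bool)
    ground[0, :] = True                        # assume row 0 nearest sensor
    ground[1:, :] = grad < grad_thresh
    mean_h = z[ground].mean()                  # fallback for sparse regions
    ground |= np.abs(z - mean_h) < height_margin
    return ground
```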

LiDAR Sensor based Object Classification System for Delivery Robot Applications (배달 로봇 응용을 위한 LiDAR 센서 기반 객체 분류 시스템)

  • Woo-Jin Park; Jeong-Gyu Lee; Chae-woon Park; Yunho Jung
    • Journal of IKEEE, v.28 no.3, pp.375-381, 2024
  • In this paper, we propose a lightweight object classification system using a LiDAR sensor for delivery service robots. The 3D point cloud data is encoded into a 2D pseudo image using a Pillar Feature Network (PFN) and then passed through a lightweight classification network based on depthwise separable convolutional neural networks (DS-CNN). The implementation results show that the designed classification network has 9.08K parameters and 3.49M multiply-accumulate (MAC) operations while achieving a classification accuracy of 94.94%.
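
A minimal PyTorch sketch of the DS-CNN building block and a tiny classifier head of the kind described follows; channel counts and the number of classes are illustrative, and the pillar-encoding step is omitted.

```python
import torch
import torch.nn as nn

class DSConv(nn.Module):
    """Depthwise separable convolution: per-channel spatial conv followed
    by a 1x1 pointwise conv, cutting parameters and MACs versus a
    standard convolution."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)
        self.act = nn.ReLU(inplace=True)
    def forward(self, x):
        return self.act(self.pointwise(self.depthwise(x)))

class TinyDSCNN(nn.Module):
    """Illustrative classifier over a 2D pseudo image (e.g. the output of
    a pillar encoder); sizes are not those of the paper."""
    def __init__(self, in_ch=8, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            DSConv(in_ch, 16, stride=2),
            DSConv(16, 32, stride=2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, num_classes)
    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

model = TinyDSCNN()
print(sum(p.numel() for p in model.parameters()))  # parameter count
```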

A Study on the Improvement of UAV based 3D Point Cloud Spatial Object Location Accuracy using Road Information (도로정보를 활용한 UAV 기반 3D 포인트 클라우드 공간객체의 위치정확도 향상 방안)

  • Lee, Jaehee; Kang, Jihun; Lee, Sewon
    • Korean Journal of Remote Sensing, v.35 no.5_1, pp.705-714, 2019
  • Precise positioning is necessary for the various uses of high-resolution UAV images. Ground control points (GCPs) are normally used for this purpose, but in emergency situations, or when selecting GCPs is difficult, the data must be obtained without them. This study proposes a method of improving the positional accuracy of the x, y coordinates of UAV-based 3D point cloud data generated without GCPs. A road vector file from public data (Open Data Portal) was used as the reference for improving location accuracy. Geometric correction of the 2D ortho-mosaic image was performed first, and the transform matrix produced in this process was then applied to the 3D point cloud data. The straight-line distance difference of 34.54 m before correction was reduced to 1.21 m after correction. Since the location accuracy of UAV images acquired without GCPs can be improved, the scope of use of 3D spatial objects generated from point clouds is expected to expand through connection and compatibility with other spatial information data.
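
The core step, fitting a 2D affine transform from matched (image, reference-road) point pairs and applying it to the x, y of the point cloud, can be sketched as below; the function names are illustrative.

```python
import numpy as np

def fit_affine_2d(src, dst):
    """Least-squares 2D affine transform mapping src (Nx2) onto dst (Nx2).
    Solves [x y 1] @ M = [x' y'] for the 3x2 matrix M."""
    A = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M                                   # shape (3, 2)

def apply_affine_2d(points_xyz, M):
    """Apply the fitted transform to x, y of a point cloud; z is kept."""
    xy1 = np.hstack([points_xyz[:, :2], np.ones((len(points_xyz), 1))])
    out = points_xyz.copy()
    out[:, :2] = xy1 @ M
    return out
```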

Advanced 360-Degree Integral-Floating Display Using a Hidden Point Removal Operator and a Hexagonal Lens Array

  • Erdenebat, Munkh-Uchral; Kwon, Ki-Chul; Dashdavaa, Erkhembaatar; Piao, Yan-Ling; Yoo, Kwan-Hee; Baasantseren, Ganbat; Kim, Youngmin; Kim, Nam
    • Journal of the Optical Society of Korea, v.18 no.6, pp.706-713, 2014
  • An enhanced 360-degree integral-floating three-dimensional display system using a hexagonal lens array and a hidden point removal operator is proposed. For each rotating step of the anamorphic optics system, only the visible points of the chosen three-dimensional point cloud model are detected by the hidden point removal operator, and elemental image arrays are generated for the detected visible points from the corresponding viewpoint. Each elemental image of the array is generated on a hexagonal grid because it is captured through a hexagonal lens array. The hidden point removal operator eliminates the overlap between points in front of and behind the model, and the hexagonal lens array captures the elemental image arrays with a more accurate approximation, so the quality of the displayed image is improved. In an experiment, an anamorphic-optics-system-based 360-degree integral-floating display with improved image quality is demonstrated.
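
Hidden point removal for point clouds is commonly implemented by spherical flipping followed by a convex hull (Katz et al.'s formulation); assuming that is the operator meant here, a sketch with SciPy follows, with the radius factor chosen heuristically.

```python
import numpy as np
from scipy.spatial import ConvexHull

def hidden_point_removal(points, viewpoint, radius_factor=100.0):
    """Return indices of points visible from `viewpoint`.
    Spherical flipping mirrors each point about a sphere centered at the
    viewpoint; points landing on the convex hull of the flipped set
    (plus the viewpoint itself) are the visible ones."""
    p = points - viewpoint                         # viewpoint at origin
    norms = np.linalg.norm(p, axis=1, keepdims=True)
    R = norms.max() * radius_factor                # flipping sphere radius
    flipped = p + 2.0 * (R - norms) * (p / norms)  # spherical flip
    hull = ConvexHull(np.vstack([flipped, np.zeros(3)]))
    return hull.vertices[hull.vertices < len(points)]  # drop viewpoint vertex
```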

SAR(Synthetic Aperture Radar) 3-Dimensional Scatterers Point Cloud Target Model and Experiments on Bridge Area (영상레이더(SAR)용 3차원 산란점 점구름 표적모델의 교량 지역에 대한 적용)

  • Jong Hoo Park; Sang Chul Park
    • Journal of the Korea Society for Simulation, v.32 no.3, pp.1-8, 2023
  • Modeling of artificial targets in synthetic aperture radar (SAR) mainly simulates radar signals reflected from the faces and edges of a 3D computer-aided design (CAD) model with a ray-tracing method, while modeling of clutter on the Earth's surface typically distinguishes types with similar distribution characteristics through statistical analysis of the SAR image itself. In this paper, man-made targets on the surface and background clutter on the terrain are integrated into a three-dimensional (3D) point cloud scatterer model, and SAR images are created through computational signal processing. The SAR stripmap images generated by an actual automobile-based SAR system, and results analyzed using EM modeling or statistical distribution models, are compared with this 3D point cloud scatterer model. A bridge is selected as the modeling target because its surroundings contain both water surface and ground terrain, and because it is a target of great interest in both military and civilian use.
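
To illustrate how a point-scatterer model feeds signal processing, here is a toy range-profile simulation summing complex returns from 3D scatterers for a stepped-frequency system; the carrier, bandwidth, and scene are assumptions, not the paper's setup.

```python
import numpy as np

C = 3e8            # speed of light [m/s]
FC = 9.6e9         # illustrative carrier frequency [Hz]

def range_profile(scatterers, amplitudes, sensor_pos, freqs):
    """Sum complex returns exp(-j*4*pi*f*R/c) over point scatterers,
    then IFFT the frequency response into a range profile."""
    ranges = np.linalg.norm(scatterers - sensor_pos, axis=1)     # (N,)
    H = (amplitudes[None, :] *
         np.exp(-1j * 4 * np.pi * freqs[:, None] * ranges[None, :] / C)
         ).sum(axis=1)
    return np.abs(np.fft.ifft(H))

# toy scene: three scatterers (e.g. bridge deck, pier, water surface)
pts = np.array([[100.0, 0.0, 10.0], [120.0, 5.0, 0.0], [150.0, -3.0, 1.0]])
amp = np.array([1.0, 0.6, 0.3])
freqs = FC + np.linspace(0, 150e6, 256)    # 150 MHz sweep -> ~1 m resolution
profile = range_profile(pts, amp, np.zeros(3), freqs)
print(profile.argmax())                    # range bin of strongest return
```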

3D Shape Descriptor for Segmenting Point Cloud Data

  • Park, So Young; Yoo, Eun Jin; Lee, Dong-Cheon; Lee, Yong Wook
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.30 no.6_2, pp.643-651, 2012
  • Object recognition belongs to high-level processing, one of the difficult and challenging tasks in computer vision. Digital photogrammetry based on the computer vision paradigm began to emerge in the mid-1980s. However, the ultimate goal of digital photogrammetry, intelligent and autonomous processing of surface reconstruction, has not yet been achieved. Object recognition requires a robust shape description of objects, but most shape descriptors are designed for the 2D space of image data. Such descriptors therefore have to be extended to handle 3D data such as LiDAR (Light Detection and Ranging) data obtained from an ALS (Airborne Laser Scanner) system. This paper introduces an extension of the chain code to 3D object space, with a hierarchical approach for segmenting point cloud data. The experiment demonstrates the effectiveness and robustness of the proposed method for shape description and point cloud segmentation. Geometric characteristics of various roof types are well described, which will eventually serve as the basis for object modeling. Segmentation accuracy on the simulated data was evaluated by measuring the coordinates of corners on the segmented patch boundaries; the overall RMSE (Root Mean Square Error) is equivalent to the average distance between points, i.e., the GSD (Ground Sampling Distance).
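
The classic 2D chain code quantizes the direction between successive boundary pixels into 8 codes; a natural 3D extension quantizes direction vectors into the 26-neighborhood, as in the illustrative sketch below (this is the generic idea, not the paper's exact hierarchical scheme).

```python
import numpy as np
from itertools import product

# the 26 unit-grid directions of a 3D voxel neighborhood
DIRS = np.array([d for d in product((-1, 0, 1), repeat=3) if d != (0, 0, 0)],
                dtype=float)
DIRS /= np.linalg.norm(DIRS, axis=1, keepdims=True)

def chain_code_3d(points):
    """Encode an ordered 3D point sequence as indices into the 26
    quantized directions (a 3D generalization of the 2D chain code)."""
    codes = []
    for a, b in zip(points[:-1], points[1:]):
        v = b - a
        n = np.linalg.norm(v)
        if n == 0:
            continue
        codes.append(int(np.argmax(DIRS @ (v / n))))  # nearest direction
    return codes

pts = np.array([[0, 0, 0], [1, 0, 0], [2, 1, 0], [2, 1, 1]], dtype=float)
print(chain_code_3d(pts))
```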

Updating Smartphone's Exterior Orientation Parameters by Image-based Localization Method Using Geo-tagged Image Datasets and 3D Point Cloud as References

  • Wang, Ying Hsuan; Hong, Seunghwan; Bae, Junsu; Choi, Yoonjo; Sohn, Hong-Gyoo
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.37 no.5, pp.331-341, 2019
  • With the popularity of sensor-rich environments, smartphones have become one of the major platforms for obtaining and sharing information. Since GNSS (Global Navigation Satellite System) is difficult to use in areas with many buildings, smartphone localization there is a challenging task. To resolve it, a four-step image-based localization method is proposed, using MMS (Mobile Mapping System) data and Google Street View as references to improve the localization accuracy of smartphone datasets. First, candidate matching images are searched for the smartphone's query image using its GNSS observation. Second, SURF (Speeded-Up Robust Features) matching between the smartphone image and the reference dataset is performed, and wrong matching points are eliminated. Third, geometric correction is performed on the matching points with a 2D affine transformation. Finally, the smartphone location and attitude are estimated by the PnP (Perspective-n-Point) algorithm. The smartphone location error is reduced from the original 10.204 m (GNSS only) to a mean of 3.575 m. The attitude estimation error is below 25 degrees for 92.4% of the adjusted images, with an average of 5.1973 degrees.
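
A sketch of the matching and pose-estimation steps with OpenCV follows, using ORB in place of SURF (SURF lives in the non-free contrib module) and omitting the affine correction step; K, ref_pts3d, and the function name are placeholders under those assumptions.

```python
import cv2
import numpy as np

def localize(query_img, ref_img, ref_pts3d, K):
    """Match a query image to a geo-referenced image, filter matches,
    then estimate camera pose with PnP. ref_pts3d[i] is the 3D point
    associated with keypoint i of the reference image (e.g. sampled
    from an MMS point cloud)."""
    orb = cv2.ORB_create(2000)                     # stand-in for SURF
    kq, dq = orb.detectAndCompute(query_img, None)
    kr, dr = orb.detectAndCompute(ref_img, None)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(bf.match(dq, dr), key=lambda m: m.distance)[:200]
    img_pts = np.float32([kq[m.queryIdx].pt for m in matches])
    obj_pts = np.float32([ref_pts3d[m.trainIdx] for m in matches])
    # RANSAC inside solvePnPRansac discards remaining wrong matches
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj_pts, img_pts, K, None)
    return rvec, tvec                              # attitude and location
```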

Development of Mean Stand Height Module Using Image-Based Point Cloud and FUSION S/W (영상 기반 3차원 점군과 FUSION S/W 기반의 임분고 분석 모듈 개발)

  • KIM, Kyoung-Min
    • Journal of the Korean Association of Geographic Information Studies, v.19 no.4, pp.169-185, 2016
  • Recently, mean stand height has been added as a new attribute to forest type maps, but it is often too costly and time-consuming to manually measure 9,100,000 points from countrywide stereo aerial photos. In addition, tree heights are frequently measured around tombs and forest edges, which poorly represent the interior of the stand. This work proposes estimating mean stand height from an image-based point cloud extracted from stereo aerial photos with FUSION S/W. A digital terrain model (DTM) was created by filtering the digital surface model (DSM) point cloud, and subtracting the DTM from the DSM produced the nDSM, which represents object heights (buildings, trees, etc.). The RMSE was calculated to compare differences between observed tree heights and those extracted from the nDSM; the resulting RMSE of average total plot height was 0.96 m. Individual tree heights over the whole study site were extracted using the USDA Forest Service's FUSION S/W. Finally, mean stand height was produced by averaging individual tree heights within each stand polygon of the forest type map. To automate mean stand height extraction using photogrammetric methods, the module was developed as an ArcGIS add-in toolbox.
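
The height arithmetic reduces to nDSM = DSM − DTM followed by per-stand averaging; a minimal sketch over raster grids follows, where the arrays and the stand-label raster are placeholders.

```python
import numpy as np

def mean_stand_heights(dsm, dtm, stand_ids):
    """nDSM = DSM - DTM gives object heights above ground; average the
    nDSM per stand polygon (stand_ids: integer raster of stand labels,
    0 = outside any stand)."""
    ndsm = dsm - dtm
    heights = {}
    for sid in np.unique(stand_ids):
        if sid == 0:
            continue
        heights[int(sid)] = float(ndsm[stand_ids == sid].mean())
    return heights

# toy 3x3 example: one stand with a 20 m canopy over 100 m terrain
dsm = np.full((3, 3), 120.0)
dtm = np.full((3, 3), 100.0)
stands = np.ones((3, 3), dtype=int)
print(mean_stand_heights(dsm, dtm, stands))   # {1: 20.0}
```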