• Title/Summary/Keyword: RGB-D images


High-quality Texture Extraction for Point Clouds Reconstructed from RGB-D Images (RGB-D 영상으로 복원한 점 집합을 위한 고화질 텍스쳐 추출)

  • Seo, Woong;Park, Sang Uk;Ihm, Insung
    • Journal of the Korea Computer Graphics Society
    • /
    • v.24 no.3
    • /
    • pp.61-71
    • /
    • 2018
  • When triangular meshes are generated from point clouds reconstructed in global space through camera pose estimation against captured RGB-D streams, the quality of the resulting meshes improves as more triangles are used. However, beyond some size threshold, reconstructed 3D models begin to suffer from unsightly artefacts caused by the limited precision of RGB-D sensors, as well as from significant memory requirements and rendering costs. In this paper, for the generation of 3D models suitable for real-time applications, we propose an effective technique that extracts high-quality textures for moderate-sized meshes from the captured colors associated with the reconstructed point sets. In particular, we show that textures can be generated effectively for 3D models reconstructed from captured RGB-D image streams via a simple method based on the mapping between the 3D global space resulting from camera pose estimation and the 2D texture space.
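
The mapping from the reconstructed 3D global space into 2D texture space described above can be sketched as a standard pinhole projection of each reconstructed point into the RGB frame that observed it. The function below is a minimal illustration only; the pose convention, the intrinsic parameters (`fx`, `fy`, `cx`, `cy`), and all names are assumptions, not the authors' code:

```python
def project_point(point, pose, fx, fy, cx, cy, width, height):
    """Project a 3D point (global space) into normalized 2D texture
    coordinates of the RGB frame whose estimated camera pose is given.

    pose: (R, t) mapping global coordinates into camera coordinates,
          R as a 3x3 nested list, t as a 3-vector.
    Returns (u, v) in [0, 1], or None if the point is not visible.
    """
    R, t = pose
    # world -> camera coordinates
    pc = [sum(R[i][j] * point[j] for j in range(3)) + t[i] for i in range(3)]
    if pc[2] <= 0:  # behind the camera
        return None
    # pinhole projection to pixel coordinates
    px = fx * pc[0] / pc[2] + cx
    py = fy * pc[1] / pc[2] + cy
    if not (0 <= px < width and 0 <= py < height):
        return None
    return (px / width, py / height)  # normalized texture coordinates
```

Sampling the captured color at the returned (u, v) for every mesh vertex or texel is the basic operation from which such a texture atlas is assembled.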

Estimation of channel morphology using RGB orthomosaic images from drone - focusing on the Naesung stream - (드론 RGB 정사영상 기반 하도 지형 공간 추정 방법 - 내성천 중심으로 -)

  • Woo-Chul, KANG;Kyng-Su, LEE;Eun-Kyung, JANG
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.25 no.4
    • /
    • pp.136-150
    • /
    • 2022
  • In this study, a comparative review was conducted on how RGB images can be used to obtain river topographic information, one of the most essential data sources for eco-friendly river management and flood level analysis. Among river topographic data, the topography of the flow section is among the most difficult to obtain, so this study focused on estimating it from RGB images. The river topography was surveyed directly using ADCP and RTK-GPS, and, at the same time, an orthomosaic image was created from high-resolution images obtained by drone photography. Previously developed regression equations were then applied to the ADCP channel topography survey results and the band values of the RGB images, and the channel bathymetry in the study area was estimated using the regression equation that showed the best predictability. In addition, CCHE2D flow modeling was performed to comparatively verify the topographic information. The model using the image-based topographic information produced better water depth and flow velocity results than the model using the directly measured topographic information, for which the sub-section had not been measured. We conclude that river topographic information can be obtained from RGB images and that, with additional research, this could become an efficient method of acquiring river topographic information for river management.
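
The abstract does not reproduce the regression equations that were applied. A commonly used form for image-based bathymetry is a linear regression on the log ratio of two bands (the Stumpf-style ratio model); the sketch below fits such a model to paired band values and depth soundings under that assumption, with all names mine:

```python
import math

def fit_depth_regression(samples):
    """Fit depth = a * ln(blue / green) + b by ordinary least squares.

    samples: list of (blue, green, measured_depth) tuples, e.g. paired
    orthomosaic band values and ADCP / RTK-GPS depth soundings.
    Returns the coefficients (a, b).
    """
    xs = [math.log(b / g) for b, g, _ in samples]
    ys = [d for _, _, d in samples]
    n = len(samples)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

def predict_depth(a, b, blue, green):
    """Apply the fitted regression to new band values."""
    return a * math.log(blue / green) + b
```

Several candidate band pairs (or single bands) would be fitted this way, and the equation with the best predictability retained, as the study describes.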

Efficient 3D Scene Labeling using Object Detectors & Location Prior Maps (물체 탐지기와 위치 사전 확률 지도를 이용한 효율적인 3차원 장면 레이블링)

  • Kim, Joo-Hee;Kim, In-Cheol
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.11
    • /
    • pp.996-1002
    • /
    • 2015
  • In this paper, we present an effective system for the 3D scene labeling of objects from RGB-D videos. Our system uses a Markov Random Field (MRF) over a voxel representation of the 3D scene. In order to estimate the correct label of each voxel, the probabilistic graphical model integrates scores both from sliding window-based object detectors and from object location prior maps. Both the object detectors and the location prior maps are pre-trained from manually labeled RGB-D images. In addition, the model incorporates geometric constraints between adjacent voxels into the label estimation. We show excellent experimental results on the RGB-D Scenes Dataset built by the University of Washington, in which each indoor scene contains tabletop objects.
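
The voxel-wise label estimation, combining per-voxel detector scores and location priors with a smoothness constraint between adjacent voxels, can be illustrated with a simple inference scheme such as iterated conditional modes (ICM). The 1D sketch below is a stand-in for the paper's actual MRF solver, with assumed names and a Potts pairwise term:

```python
def icm_label(unary, smooth_weight, n_iters=10):
    """Iterated conditional modes on a 1D chain of voxels.

    unary[i][l]: cost of giving voxel i label l (e.g. the negative log of
    detector score times location prior).  A Potts pairwise term adds
    smooth_weight whenever two neighboring voxels disagree.
    """
    n = len(unary)
    # initialize each voxel with its cheapest unary label
    labels = [min(range(len(unary[i])), key=lambda l: unary[i][l]) for i in range(n)]
    for _ in range(n_iters):
        changed = False
        for i in range(n):
            def cost(l):
                c = unary[i][l]
                if i > 0 and labels[i - 1] != l:
                    c += smooth_weight
                if i < n - 1 and labels[i + 1] != l:
                    c += smooth_weight
                return c
            best = min(range(len(unary[i])), key=cost)
            if best != labels[i]:
                labels[i] = best
                changed = True
        if not changed:
            break
    return labels
```

On a real voxel grid each voxel has six neighbors rather than two, but the principle is the same: a voxel with a weak, ambiguous detector score is pulled toward the label of its geometric neighborhood.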

Real-time 3D Volumetric Model Generation using Multiview RGB-D Camera (다시점 RGB-D 카메라를 이용한 실시간 3차원 체적 모델의 생성)

  • Kim, Kyung-Jin;Park, Byung-Seo;Kim, Dong-Wook;Kwon, Soon-Chul;Seo, Young-Ho
    • Journal of Broadcast Engineering
    • /
    • v.25 no.3
    • /
    • pp.439-448
    • /
    • 2020
  • In this paper, we propose a modified optimization algorithm for point cloud matching of multi-view RGB-D cameras. In the computer vision field, accurately estimating the position of the camera is very important. The 3D model generation methods proposed in previous research require a large number of cameras or expensive 3D cameras, and methods that obtain the external parameters of the camera from 2D images suffer from large errors. In this paper, we propose a matching technique for generating a 3D point cloud and mesh model that can provide an omnidirectional free viewpoint using eight low-cost RGB-D cameras. The proposed method applies depth map-based function optimization together with the RGB images and obtains coordinate transformation parameters that can generate a high-quality 3D model without requiring initial parameters.
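
The coordinate transformation parameters mentioned above amount to rigid transforms between the camera frames. The abstract does not detail the depth-map-based optimization itself, so as an illustration of the underlying subproblem only, here is a closed-form rigid alignment of matched point pairs; the reduction to 2D and all names are my simplifications, not the authors' method:

```python
import math

def align_2d(src, dst):
    """Closed-form rigid alignment (rotation + translation) of two matched
    2D point sets, the core subproblem of multi-camera point cloud matching.

    Returns (theta, tx, ty) such that rotating src by theta and translating
    by (tx, ty) best matches dst in the least-squares sense.
    """
    n = len(src)
    # centroids of both sets
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    num = den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy   # centered source point
        bx, by = dx - cdx, dy - cdy   # centered destination point
        num += ax * by - ay * bx      # cross terms -> sin component
        den += ax * bx + ay * by      # dot terms   -> cos component
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    # translation maps the rotated source centroid onto the destination one
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, tx, ty
```

In 3D the rotation is usually recovered with an SVD-based (Kabsch/Umeyama) step instead of the `atan2` above, but the centroid-and-centered-correlation structure is identical.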

Educational Indoor Autonomous Mobile Robot System Using a LiDAR and a RGB-D Camera (라이다와 RGB-D 카메라를 이용하는 교육용 실내 자율 주행 로봇 시스템)

  • Lee, Soo-Young;Kim, Jae-Young;Cho, Se-Hyoung;Shin, Chang-yong
    • Journal of IKEEE
    • /
    • v.23 no.1
    • /
    • pp.44-52
    • /
    • 2019
  • We implement an educational indoor autonomous mobile robot system that integrates LiDAR sensing information with RGB-D camera image information and exploits the integrated information. The system acquires LiDAR sensing information with the existing sensing method, which employs a LiDAR with a small number of scan channels. To remedy the weakness of this LiDAR sensing method, we propose a 3D structure recognition technique that uses depth images from an RGB-D camera together with a deep learning-based object recognition algorithm, and we apply the proposed technique to the system.

Computational generation method of elemental images using a Kinect sensor in 3D depth-priority integral imaging (3D 깊이우선 집적영상 디스플레이에서의 키넥트 센서를 이용한 컴퓨터적인 요소영상 생성방법)

  • Ryu, Tae-Kyung;Oh, Yongseok;Jeong, Shin-Il
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.20 no.1
    • /
    • pp.167-174
    • /
    • 2016
  • In this paper, we propose a method for generating 2D elemental images of 3D objects using a Kinect sensor in a 3D depth-priority integral imaging (DPII) display. First, we analyze the principle of picking up elemental images based on ray optics. Based on this analysis, elemental images are generated from both the RGB image and the depth image recorded by the Kinect. We reconstruct 3D images from the elemental images with the computational integral imaging reconstruction technique and then compare various perspective images. To show the usefulness of the proposed method, we carried out preliminary experiments. The experimental results reveal that our method can provide correct 3D images with full parallax.

Microsoft Kinect-based Indoor Building Information Model Acquisition (Kinect(RGB-Depth Camera)를 활용한 실내 공간 정보 모델(BIM) 획득)

  • Kim, Junhee;Yoo, Sae-Woung;Min, Kyung-Won
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.31 no.4
    • /
    • pp.207-213
    • /
    • 2018
  • This paper investigates the applicability of the Microsoft Kinect® RGB-depth camera for implementing a 3D image and spatial information of a sensed target. The relationship between the Kinect camera image and the pixel coordinate system is formulated. Calibration of the camera provides the depth and RGB information of the target. The intrinsic parameters are calculated through a checkerboard experiment, yielding the focal length, principal point, and distortion coefficients. The extrinsic parameters, which describe the relationship between the two Kinect cameras, consist of a rotation matrix and a translation vector. The 2D projection-space images are converted to 3D images, resulting in spatial information based on the depth and RGB information. The measurement is verified through comparison with the length and location of the target structure in the 2D images.
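
The pixel-to-3D relationship formulated in the paper follows standard pinhole back-projection, with the extrinsic rotation and translation carrying points between the two Kinect frames. A minimal sketch, with assumed intrinsic values rather than the calibrated ones from the paper:

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Convert a depth pixel (u, v) with depth in meters into a 3D
    camera-space point using intrinsics from a checkerboard calibration
    (lens distortion is ignored in this sketch)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def transform(point, R, t):
    """Map a 3D point from one camera's frame into the other using the
    extrinsic rotation matrix R (3x3 nested list) and translation t."""
    return tuple(sum(R[i][j] * point[j] for j in range(3)) + t[i]
                 for i in range(3))
```

Back-projecting every depth pixel and transforming the result into a common frame is what turns the two 2D projection-space images into one 3D spatial model.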

3D Augmented Reality Streaming System Based on a Lamina Display

  • Baek, Hogil;Park, Jinwoo;Kim, Youngrok;Park, Sungwoong;Choi, Hee-Jin;Min, Sung-Wook
    • Current Optics and Photonics
    • /
    • v.5 no.1
    • /
    • pp.32-39
    • /
    • 2021
  • We propose a three-dimensional (3D) streaming system based on a lamina display that can convey field information in real-time by creating floating 3D images that can satisfy the accommodation cue. The proposed system is mainly composed of three parts, namely: a 3D vision camera unit to obtain and provide RGB and depth data in real-time, a 3D image engine unit to realize the 3D volume with a fast response time by using the RGB and depth data, and an optical floating unit to bring the implemented 3D image out of the system and consequently increase the sense of presence. Furthermore, we devise the streaming method required for implementing augmented reality (AR) images by using a multilayered image, and the proposed method for implementing AR 3D video in real-time non-face-to-face communication has been experimentally verified.

High-performance of Deep learning Colorization With Wavelet fusion (웨이블릿 퓨전에 의한 딥러닝 색상화의 성능 향상)

  • Kim, Young-Back;Choi, Hyun;Cho, Joong-Hwee
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.13 no.6
    • /
    • pp.313-319
    • /
    • 2018
  • We propose a post-processing algorithm to improve the quality of RGB images generated by deep learning-based colorization from the gray-scale image of an infrared camera. Wavelet fusion is used to generate a new luminance component from the luminance component of the RGB image produced by the deep learning model and the luminance component of the infrared camera image. Applying the proposed algorithm to RGB images generated by two deep learning models, SegNet and DCGAN, increased the PSNR for all experimental images. For the SegNet model, the average PSNR improved by 1.3906 dB at level 1 of the Haar wavelet method; for the DCGAN model, the PSNR improved by 0.0759 dB on average at level 5 of the Daubechies wavelet method. We also confirmed that the post-processing emphasizes edge components and improves visibility.
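
The luminance fusion step can be illustrated with a one-level 1D Haar transform. The fusion rule below (average the approximation coefficients, keep the stronger detail coefficient so infrared edges survive) is a common choice but an assumption on my part, since the abstract does not state the exact rule used:

```python
def haar1(signal):
    """One-level 1D Haar decomposition -> (approximation, detail)."""
    a = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    d = [(signal[2 * i] - signal[2 * i + 1]) / 2 for i in range(len(signal) // 2)]
    return a, d

def ihaar1(a, d):
    """Inverse of haar1: interleave reconstructed sample pairs."""
    out = []
    for av, dv in zip(a, d):
        out += [av + dv, av - dv]
    return out

def fuse_luminance(colorized_y, infrared_y):
    """Fuse two luminance channels in the Haar domain: average the
    approximations, keep the max-absolute detail coefficient."""
    a1, d1 = haar1(colorized_y)
    a2, d2 = haar1(infrared_y)
    a = [(x + y) / 2 for x, y in zip(a1, a2)]
    d = [x if abs(x) >= abs(y) else y for x, y in zip(d1, d2)]
    return ihaar1(a, d)
```

A real implementation would apply this separably to 2D luminance images at the stated decomposition levels (level 1 for Haar, level 5 for Daubechies), but the max-detail selection is what makes the fused result keep the sharper edge of either source.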

Design of the 3D Object Recognition System with Hierarchical Feature Learning (계층적 특징 학습을 이용한 3차원 물체 인식 시스템의 설계)

  • Kim, Joohee;Kim, Dongha;Kim, Incheol
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.1
    • /
    • pp.13-20
    • /
    • 2016
  • In this paper, we propose an object recognition system that, using hierarchical feature learning, can effectively determine the category, instance name, and several attributes of an object from its color and depth images. In the preprocessing stage, our system transforms the depth images of the object into surface normal vectors, which represent the shape information of the object more precisely. In the feature learning stage, it extracts a set of patch features and image features from a pair of the color image and the surface normal vectors through two-layered learning. The system then trains a set of independent classification models with a set of labeled feature vectors and the SVM learning algorithm. Through experiments with the UW RGB-D Object Dataset, we verify the performance of the proposed object recognition system.
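
The depth-to-surface-normal preprocessing can be sketched as follows; treating the depth image as a height field and using central differences with unit pixel spacing are my simplifying assumptions, not necessarily the paper's exact formulation:

```python
def surface_normal(depth, i, j):
    """Estimate the unit surface normal at pixel (i, j) of a depth image
    from the local depth gradients (central differences).  The normal of
    the height field z = depth(x, y) is proportional to (-dz/dx, -dz/dy, 1).
    """
    dzdx = (depth[i][j + 1] - depth[i][j - 1]) / 2.0
    dzdy = (depth[i + 1][j] - depth[i - 1][j]) / 2.0
    nx, ny, nz = -dzdx, -dzdy, 1.0
    norm = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx / norm, ny / norm, nz / norm)
```

Replacing raw depth with these three-channel normal maps gives the feature-learning stage a representation that is invariant to the absolute distance of the object from the sensor.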