• Title/Summary/Keyword: RGBD images

Search results: 5

360 RGBD Image Synthesis from a Sparse Set of Images with Narrow Field-of-View (소수의 협소화각 RGBD 영상으로부터 360 RGBD 영상 합성)

  • Kim, Soojie;Park, In Kyu
    • Journal of Broadcast Engineering / v.27 no.4 / pp.487-498 / 2022
  • A depth map is an image that encodes 3D distance information on a 2D plane and is used in various 3D vision tasks. Many existing depth estimation studies mainly use narrow-FoV images, in which a significant portion of the scene is lost. In this paper, we propose a technique for generating 360° omnidirectional RGBD images from a sparse set of narrow-FoV images. The proposed generative-adversarial-network-based image generation model estimates the relative FoV within the entire panorama from a small number of non-overlapping images and produces 360° RGB and depth images simultaneously. In addition, it achieves improved performance by configuring the network to reflect the spherical characteristics of 360° images.
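The geometric step behind such synthesis can be made concrete: the sketch below maps a pixel of a narrow-FoV pinhole image onto equirectangular panorama coordinates. The pinhole model and the yaw-only camera pose are illustrative assumptions, not taken from the paper.

```python
import math

def pinhole_to_equirect(u, v, width, height, fov_deg, yaw_deg=0.0):
    """Map pixel (u, v) of a narrow-FoV pinhole image onto equirectangular
    (longitude, latitude) panorama coordinates. The camera model here is
    an illustrative assumption."""
    # Focal length in pixels, from the horizontal field of view.
    f = (width / 2.0) / math.tan(math.radians(fov_deg) / 2.0)
    # Ray direction in camera coordinates (z forward, y down).
    x = u - width / 2.0
    y = v - height / 2.0
    z = f
    norm = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / norm, y / norm, z / norm
    # Rotate by the camera's yaw to place the view in the panorama.
    yaw = math.radians(yaw_deg)
    xr = x * math.cos(yaw) + z * math.sin(yaw)
    zr = -x * math.sin(yaw) + z * math.cos(yaw)
    lon = math.atan2(xr, zr)   # longitude in [-pi, pi]
    lat = math.asin(y)         # latitude in [-pi/2, pi/2]
    return lon, lat

# The image centre maps to the camera's viewing direction (here, yaw 45°).
lon, lat = pinhole_to_equirect(320, 240, 640, 480, fov_deg=90, yaw_deg=45)
```

Running this mapping in reverse over all panorama pixels is what lets a network place each narrow-FoV input at its estimated relative position on the sphere.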

Transformations and Their Analysis from a RGBD Image to Elemental Image Array for 3D Integral Imaging and Coding

  • Yoo, Hoon
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.5 / pp.2273-2286 / 2018
  • This paper describes transformations between elemental image arrays and an RGBD image for three-dimensional integral imaging and transmission systems. Two transformations are introduced and analyzed. An RGBD image is commonly used for efficient 3D data transmission, although 3D imaging and display remain restricted; a pixel-to-pixel mapping is therefore required to obtain an elemental image array from an RGBD image. However, these transformations and their analysis have received little attention in computational integral imaging and transmission. In this paper, we introduce two different mapping methods, called the forward and backward mapping methods, and analyze and compare them in terms of complexity and visual quality. In addition, a special condition, named the hole-free condition, is proposed to allow the methods to be understood analytically. To verify our analysis, we carry out experiments on test images; the results confirm the proposed methods and their analysis in terms of computational cost and visual quality.
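As a rough illustration of the forward-mapping idea, the toy sketch below scatters each RGBD pixel into per-lenslet elemental images using a simplified pinhole-per-lenslet geometry. The lens pitch, gap, and disparity model are assumptions for illustration, not the paper's derivation; pixels that land outside an elemental image are dropped, which is exactly where holes (and hence a hole-free condition) become relevant.

```python
def forward_map(rgbd, lens_pitch, gap, num_lenses, ei_size):
    """Forward mapping: scatter each RGBD pixel into every elemental image.
    Uses a simplified pinhole-per-lenslet model (an assumption, not the
    paper's exact geometry). Unfilled pixels remain None, i.e. holes."""
    h, w = len(rgbd), len(rgbd[0])
    # One ei_size x ei_size elemental image per lenslet, initialised to holes.
    eia = [[[None] * ei_size for _ in range(ei_size)]
           for _ in range(num_lenses * num_lenses)]
    for y in range(h):
        for x in range(w):
            color, depth = rgbd[y][x]
            for ly in range(num_lenses):
                for lx in range(num_lenses):
                    # Projected offset shrinks with depth (disparity model).
                    cx = lx * lens_pitch
                    cy = ly * lens_pitch
                    u = int(round((x - cx) * gap / depth)) + ei_size // 2
                    v = int(round((y - cy) * gap / depth)) + ei_size // 2
                    if 0 <= u < ei_size and 0 <= v < ei_size:
                        eia[ly * num_lenses + lx][v][u] = color
    return eia
```

The backward mapping would instead iterate over elemental-image pixels and gather from the RGBD image, trading the scatter's holes for per-pixel lookups.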

Stencil-based 3D facial relief creation from RGBD images for 3D printing

  • Jung, Soonchul;Choi, Yoon-Seok;Kim, Jin-Seo
    • ETRI Journal / v.42 no.2 / pp.272-281 / 2020
  • Three-dimensional (3D) selfie services, one of the major 3D printing services, print 3D models of an individual's face obtained by scanning. However, most of these services require expensive full-color 3D printers, and the high cost of such printers poses a challenge in launching a variety of 3D printing application services. This paper presents a stencil-based 3D facial relief creation method that employs a low-cost RGBD sensor and a 3D printer. A stencil-based 3D facial relief is an artwork in which some parts are holes, as in a stencil, while other parts stand out, as in a relief. The proposed method creates a new type of relief by combining existing stencil and relief techniques. As a result, the 3D printed product resembles a two-colored object rather than a one-colored one, even when a monochrome 3D printer is used. Unlike existing personalization-based 3D printing services, the proposed method enables products to be printed and delivered to customers in a short period of time. Experimental results reveal that, compared to existing 3D selfie products printed by monochrome 3D printers, our products achieve a higher degree of similarity and are more profitable.
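The stencil-plus-relief combination can be sketched as a single depth-map operation: far pixels are cut out as stencil holes, and the remaining depths are rescaled into relief heights. The threshold and height values below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def stencil_relief(depth, hole_thresh, max_height):
    """Turn a face depth map into a stencil-style relief height field.
    Background pixels (depth beyond hole_thresh) become holes; foreground
    depths are rescaled so closer surfaces stand out more, as in a relief.
    Thresholds are illustrative assumptions."""
    depth = np.asarray(depth, dtype=float)
    holes = depth > hole_thresh                  # stencil: cut these out
    fg = np.where(holes, np.nan, depth)          # foreground depths only
    near, far = np.nanmin(fg), np.nanmax(fg)
    # Map [near, far] to [max_height, 0]: nearest surface is tallest.
    height = (far - fg) / max(far - near, 1e-9) * max_height
    return np.where(holes, 0.0, height), holes

# 2x2 toy depth map in metres; 2.0 is background and becomes a hole.
height, holes = stencil_relief([[0.4, 0.5], [0.6, 2.0]],
                               hole_thresh=1.0, max_height=5.0)
```

The resulting height field plus hole mask is what a monochrome printer needs to produce the two-tone stencil/relief effect.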

Fall Detection Based on 2-Stacked Bi-LSTM and Human-Skeleton Keypoints of RGBD Camera (RGBD 카메라 기반의 Human-Skeleton Keypoints와 2-Stacked Bi-LSTM 모델을 이용한 낙상 탐지)

  • Shin, Byung Geun;Kim, Uung Ho;Lee, Sang Woo;Yang, Jae Young;Kim, Wongyum
    • KIPS Transactions on Software and Data Engineering / v.10 no.11 / pp.491-500 / 2021
  • In this study, we propose a method for detecting falls using human-skeleton keypoints from an MS Kinect v2 RGBD camera and a 2-Stacked Bi-LSTM model. In previous studies, skeletal information was extracted from RGB images using a deep learning model such as OpenPose, and recognition was then performed with a recurrent neural network model such as an LSTM or GRU. The proposed method instead receives skeletal information directly from the camera, extracts two time-series features, namely acceleration and distance, and then recognizes fall behavior using the 2-Stacked Bi-LSTM model. A central joint was computed from major skeletal parts such as the shoulders, spine, and pelvis, and its movement acceleration and distance from the floor were proposed as features. Using these features, the model was compared against variants such as Stacked LSTM and Bi-LSTM, and experiments demonstrated improved detection performance over existing LSTM- and GRU-based studies.
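The two per-frame features can be sketched as follows, assuming the floor plane sits at y = 0 and using a second finite difference for acceleration; both are simplifications of the paper's Kinect-based setup.

```python
def fall_features(track, dt):
    """Per-frame features for a fall detector: vertical acceleration of a
    central joint and its height above the floor. `track` is a list of
    (x, y, z) central-joint positions per frame; the floor is assumed at
    y = 0 (a simplification of the Kinect floor-plane estimate)."""
    heights = [p[1] for p in track]          # distance from the floor
    accel = [0.0, 0.0]                       # no estimate for first 2 frames
    for t in range(2, len(track)):
        # Second finite difference approximates vertical acceleration.
        accel.append((heights[t] - 2 * heights[t - 1] + heights[t - 2]) / dt ** 2)
    return list(zip(accel, heights))
```

A fall would show up as a sharp downward acceleration followed by a sustained near-zero height, which is the temporal pattern the stacked Bi-LSTM is trained to recognize.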

Hybrid Model Representation for Progressive Indoor Scene Reconstruction (실내공간의 점진적 복원을 위한 하이브리드 모델 표현)

  • Jung, Jinwoong;Jeon, Junho;Yoo, Daehoon;Lee, Seungyong
    • Journal of the Korea Computer Graphics Society / v.21 no.5 / pp.37-44 / 2015
  • This paper presents a novel 3D model representation, called the hybrid model representation, to overcome the limitations of existing volume-based indoor scene reconstruction. In indoor 3D scene reconstruction, a volume-based representation can reconstruct a detailed 3D model of a small scene, but it cannot reconstruct a large-scale indoor scene because of its memory consumption. This paper presents a memory-efficient plane-hash representation that improves the scalability of indoor scene reconstruction. The proposed method uses the plane-hash representation to reconstruct large, structural planar objects while simultaneously using a volume-based representation to recover small, detailed regions. The method can be implemented on a GPU to accelerate computation and reconstruct indoor scenes in real time.
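A toy version of the hybrid idea pairs a plane hash for coarse structural surfaces with a sparse voxel hash for fine detail. The cell sizes and the one-plane-per-cell policy below are illustrative assumptions, not the paper's exact scheme.

```python
class HybridScene:
    """Toy hybrid representation: large planar surfaces are stored as plane
    coefficients in a hash keyed by a coarse cell, while detailed geometry
    falls back to a sparse voxel (TSDF-style) hash. Parameters are
    illustrative assumptions."""

    def __init__(self, cell=1.0, voxel=0.05):
        self.cell, self.voxel = cell, voxel
        self.planes = {}   # coarse cell index -> (a, b, c, d) plane coeffs
        self.voxels = {}   # fine voxel index  -> signed distance value

    def add_plane(self, point, coeffs):
        # A whole coarse cell is summarised by one plane: cheap in memory.
        key = tuple(int(c // self.cell) for c in point)
        self.planes[key] = coeffs

    def add_detail(self, point, sdf):
        # Dense, memory-hungry voxels are allocated only where detail exists.
        key = tuple(int(c // self.voxel) for c in point)
        self.voxels[key] = sdf
```

The memory saving comes from the asymmetry: a wall that would need thousands of voxels collapses to four plane coefficients, while the voxel hash is touched only near non-planar geometry.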