• Title/Summary/Keyword: 3d depth image


A Study of Generating Depth map for 3D Space Structure Recovery

  • Ban, Kyeong-Jin; Kim, Jong-Chan; Kim, Eung-Kon; Kim, Chee-Yong
    • Journal of Korea Multimedia Society, v.13 no.12, pp.1855-1862, 2010
  • In virtual reality, service technologies such as real-time interaction systems, three-dimensional content, 3D TV, and augmented reality are developing rapidly. These services have difficulty generating the depth values needed to recover 3D space and give existing content a sense of solidity, so research on generating effective depth maps from 2D images is necessary. This paper describes the shortcomings of existing depth-map generation methods for recovering 3D space from a 2D image and proposes an enhanced depth-map generation algorithm that addresses those shortcomings by defining the depth direction based on the vanishing point within the image.
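
The abstract above does not spell out the algorithm, so the following NumPy sketch only illustrates the general idea of deriving a depth direction from a vanishing point: each pixel is assigned a depth proportional to its normalized distance from an assumed vanishing-point coordinate. The coordinate and the linear falloff are assumptions for illustration, not the paper's method.

```python
import numpy as np

def vanishing_point_depth(height, width, vp_xy):
    """Toy depth map: pixels at the vanishing point are treated as farthest,
    pixels far from it as nearest.  vp_xy = (x, y) of an assumed vanishing point."""
    ys, xs = np.mgrid[0:height, 0:width]
    dist = np.hypot(xs - vp_xy[0], ys - vp_xy[1])
    return (1.0 - dist / dist.max()).astype(np.float32)   # 1.0 = farthest

depth_map = vanishing_point_depth(480, 640, vp_xy=(320, 180))  # hypothetical VP location
```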

Stereoscopic Video Compositing with a DSLR and Depth Information by Kinect (키넥트 깊이 정보와 DSLR을 이용한 스테레오스코픽 비디오 합성)

  • Kwon, Soon-Chul; Kang, Won-Young; Jeong, Yeong-Hu; Lee, Seung-Hyun
    • The Journal of Korean Institute of Communications and Information Sciences, v.38C no.10, pp.920-927, 2013
  • The chroma key technique, which composites images by separating an object from a background of a specific color, imposes restrictions on color and space. In particular, unlike general chroma keying, image composition for a stereoscopic 3D display requires a method that composites images naturally in 3D space. This paper attempts to composite images in 3D space using a depth-keying method based on high-resolution depth information. A high-resolution depth map was obtained through camera calibration between a DSLR and a Kinect sensor. A 3D mesh model was created from the high-resolution depth information and mapped with RGB color values, and the object was separated from its background according to depth and converted into a point cloud in 3D space. The composite of the 3D virtual background and the object was then obtained and played back as stereoscopic 3D images using a virtual camera.
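
The calibration, meshing, and point-cloud stages above are not reproduced here; as a minimal sketch of the core depth-keying idea, assuming an already-aligned RGB-D frame, the code below keeps pixels inside a depth range as the foreground object and composites them over a virtual background. The range bounds and array shapes are illustrative assumptions.

```python
import numpy as np

def depth_key_composite(rgb, depth, background, near=0.5, far=1.5):
    """Keep pixels whose depth (meters) lies in [near, far] as foreground
    and paste them over a virtual background of the same size."""
    mask = (depth > near) & (depth < far)   # boolean foreground mask from depth keying
    out = background.copy()
    out[mask] = rgb[mask]                   # composite the object over the background
    return out

# toy aligned RGB-D frame and a black virtual background
rgb = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
depth = np.random.uniform(0.3, 4.0, (480, 640)).astype(np.float32)
composited = depth_key_composite(rgb, depth, np.zeros_like(rgb))
```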

3D Augmented Reality Streaming System Based on a Lamina Display

  • Baek, Hogil; Park, Jinwoo; Kim, Youngrok; Park, Sungwoong; Choi, Hee-Jin; Min, Sung-Wook
    • Current Optics and Photonics, v.5 no.1, pp.32-39, 2021
  • We propose a three-dimensional (3D) streaming system based on a lamina display that can convey field information in real time by creating floating 3D images that satisfy the accommodation cue. The proposed system consists of three main parts: a 3D vision camera unit that obtains and provides RGB and depth data in real time, a 3D image engine unit that realizes the 3D volume with a fast response time using the RGB and depth data, and an optical floating unit that brings the implemented 3D image out of the system and thereby increases the sense of presence. Furthermore, we devise the streaming method required to implement augmented reality (AR) images using a multilayered image, and the proposed method for real-time, non-face-to-face AR 3D video communication has been experimentally verified.
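
The lamina-display optics cannot be captured in code, but the multilayered-image step can be sketched. Assuming an aligned RGB-D frame, the code below slices it into RGBA layers by uniform depth bins, which is one plausible form of the multilayered image mentioned above; the layer count and the uniform binning are assumptions.

```python
import numpy as np

def slice_into_layers(rgb, depth, num_layers=8):
    """Split an RGB-D frame into RGBA layers by uniform depth bins;
    each layer is transparent except where the pixel falls in its bin."""
    edges = np.linspace(depth.min(), depth.max(), num_layers + 1)
    layers = []
    for i in range(num_layers):
        hi = edges[i + 1] if i < num_layers - 1 else np.inf   # include max depth in last bin
        in_bin = (depth >= edges[i]) & (depth < hi)
        layer = np.zeros((*depth.shape, 4), dtype=np.uint8)
        layer[..., :3] = rgb
        layer[..., 3] = np.where(in_bin, 255, 0)              # alpha marks this depth slice
        layers.append(layer)
    return layers
```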

Depth Map Estimation Model Using 3D Feature Volume (3차원 특징볼륨을 이용한 깊이영상 생성 모델)

  • Shin, Soo-Yeon; Kim, Dong-Myung; Suh, Jae-Won
    • The Journal of the Korea Contents Association, v.18 no.11, pp.447-454, 2018
  • This paper proposes a depth-map generation algorithm for stereo images using a deep learning model composed of convolutional neural networks (CNNs). The proposed algorithm consists of a feature extraction unit, which extracts the main features of each view, and a depth learning unit, which learns disparity information from the extracted features. First, the feature extraction unit extracts a feature map for each view through the Xception module and the ASPP (atrous spatial pyramid pooling) module, which are composed of 2D CNN layers. Then, the per-view feature maps are stacked into a 3D volume according to disparity, and the depth image is estimated after the volume passes through the depth learning unit, which learns depth-estimation weights through a 3D CNN. The proposed algorithm estimates the depth of object regions more accurately than other algorithms.
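
As a rough, framework-level illustration of the two-stage structure (the 2D feature extractor is omitted and we start from already-extracted feature maps), the PyTorch sketch below stacks left and right feature maps into a disparity-indexed 3D feature volume and passes it through a toy 3D-CNN "depth learning unit". The channel counts, volume size, and argmax readout are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

def build_feature_volume(feat_left, feat_right, max_disp):
    """Concatenate left features with disparity-shifted right features
    into a (B, 2C, D, H, W) volume, one slice per candidate disparity."""
    b, c, h, w = feat_left.shape
    volume = feat_left.new_zeros(b, 2 * c, max_disp, h, w)
    for d in range(max_disp):
        if d == 0:
            volume[:, :c, d], volume[:, c:, d] = feat_left, feat_right
        else:
            volume[:, :c, d, :, d:] = feat_left[:, :, :, d:]
            volume[:, c:, d, :, d:] = feat_right[:, :, :, :-d]
    return volume

# toy 3D-CNN depth learning unit producing a matching score per disparity
depth_unit = nn.Sequential(
    nn.Conv3d(64, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.Conv3d(32, 1, kernel_size=3, padding=1),
)

feat_l = torch.randn(1, 32, 60, 80)                 # hypothetical per-view feature maps
feat_r = torch.randn(1, 32, 60, 80)
vol = build_feature_volume(feat_l, feat_r, 24)      # (1, 64, 24, 60, 80)
scores = depth_unit(vol).squeeze(1)                 # (1, 24, 60, 80)
disparity = scores.argmax(dim=1)                    # coarse per-pixel disparity estimate
```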

3D Image Construction Using Color and Depth Cameras (색상과 깊이 카메라를 이용한 3차원 영상 구성)

  • Jung, Ha-Hyoung; Kim, Tae-Yeon; Lyou, Joon
    • Journal of the Institute of Electronics Engineers of Korea SC, v.49 no.1, pp.1-7, 2012
  • This paper presents a method for 3D image construction using a hybrid (color and depth) camera system in which the drawbacks of each camera can be compensated for. Prior to image generation, the intrinsic and extrinsic parameters of each camera are extracted through experiments. The geometry between the two cameras is established with these parameters so as to match the color and depth images. After this preprocessing step, the relation between depth information and distance is derived experimentally as a simple linear function, and the 3D image is constructed by coordinate transformations of the matched images. The scheme has been realized using the Microsoft Kinect hybrid camera system, and experimental results for the 3D image and the distance measurements are given to evaluate the method.
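
A minimal sketch of the geometric step described above, assuming hypothetical calibration values: a depth pixel is back-projected to a 3D point with the depth camera intrinsics and an assumed linear raw-depth-to-distance mapping, moved into the color camera frame with the extrinsics, and reprojected to pick up a color value. None of the parameters below are calibrated values from the paper.

```python
import numpy as np

# hypothetical calibration results (placeholders, not the paper's values)
K_DEPTH = np.array([[580.0, 0.0, 320.0], [0.0, 580.0, 240.0], [0.0, 0.0, 1.0]])
K_COLOR = np.array([[525.0, 0.0, 320.0], [0.0, 525.0, 240.0], [0.0, 0.0, 1.0]])
R = np.eye(3)                        # rotation from depth frame to color frame
T = np.array([0.025, 0.0, 0.0])      # translation in meters

def raw_depth_to_meters(raw, a=1.0e-3, b=0.0):
    """Assumed simple linear mapping from raw depth units to meters."""
    return a * raw + b

def depth_pixel_to_color(u, v, raw_depth):
    """Back-project a depth pixel, transform it into the color frame, reproject."""
    z = raw_depth_to_meters(raw_depth)
    p_depth = z * (np.linalg.inv(K_DEPTH) @ np.array([u, v, 1.0]))  # 3D point, depth frame
    p_color = R @ p_depth + T                                       # 3D point, color frame
    uvw = K_COLOR @ p_color
    return uvw[0] / uvw[2], uvw[1] / uvw[2], p_color                # color pixel + 3D point

u_c, v_c, point3d = depth_pixel_to_color(100, 120, raw_depth=1800)
```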

3D Fingertip Estimation based on the TOF Camera for Virtual Touch Screen System (가상 터치스크린 시스템을 위한 TOF 카메라 기반 3차원 손 끝 추정)

  • Kim, Min-Wook; Ahn, Yang-Keun; Jung, Kwang-Mo; Lee, Chil-Woo
    • The KIPS Transactions: Part B, v.17B no.4, pp.287-294, 2010
  • Time of flight (TOF) is one technique for obtaining the 3D depth information of an object. However, the depth image has low resolution and the fingertip occupies a very small region, so it is difficult to find precise 3D fingertip information using only the depth image from a TOF camera. In this paper, we estimate the fingertip's 3D location using an arm model together with reliable 3D hand location information refined by modeling the hand as a hexahedron. With the proposed method, we obtain more precise 3D fingertip information than when using the depth image alone.
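
The arm model and the hexahedral hand model are not described in enough detail in the abstract to reproduce, so the sketch below is only a crude substitute for the back-projection step: it picks the closest valid depth pixel inside an assumed hand bounding box and converts it to a 3D point with hypothetical TOF intrinsics. It is a simplification, not the paper's estimation method.

```python
import numpy as np

FX, FY, CX, CY = 285.0, 285.0, 160.0, 120.0   # hypothetical TOF camera intrinsics

def naive_fingertip_3d(depth_m, hand_box):
    """Pick the closest valid depth pixel inside the hand bounding box
    (x0, y0, x1, y1) and back-project it to a 3D point in meters."""
    x0, y0, x1, y1 = hand_box
    roi = depth_m[y0:y1, x0:x1]
    candidates = np.where(roi > 0, roi, np.inf)   # ignore invalid (zero) depth
    if np.isinf(candidates).all():
        return None
    dy, dx = np.unravel_index(candidates.argmin(), roi.shape)
    v, u = y0 + dy, x0 + dx
    z = depth_m[v, u]
    return np.array([(u - CX) * z / FX, (v - CY) * z / FY, z])
```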

Foreground Extraction and Depth Map Creation Method based on Analyzing Focus/Defocus for 2D/3D Video Conversion (2D/3D 동영상 변환을 위한 초점/비초점 분석 기반의 전경 영역 추출과 깊이 정보 생성 기법)

  • Han, Hyun-Ho; Chung, Gye-Dong; Park, Young-Soo; Lee, Sang-Hun
    • Journal of Digital Convergence, v.11 no.1, pp.243-248, 2013
  • In this paper, a method of extracting the foreground and generating its depth from focus and motion information is proposed for 2D/3D video conversion. A candidate foreground image is generated from the focus information and the estimated motion of the image. The foreground region is then extracted by filling hole areas inside objects in the candidate foreground image using color analysis. To assign depth to the extracted foreground region, depth information is generated by analyzing the focus values in the current frame, and the assigned depth is weighted by motion information. The quality of the generated depth information is evaluated by comparing the proposed method with a previously proposed algorithm.
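
The full pipeline (motion-based candidate foreground, color-based hole filling, motion-weighted depth) is not reproduced; the sketch below only shows one common way to compute the per-region focus values analyzed above, the variance of the Laplacian per block via OpenCV. The block size and the specific focus measure are assumptions.

```python
import cv2
import numpy as np

def blockwise_focus_map(gray, block=16):
    """Variance of the Laplacian per block: higher values indicate sharper
    (in-focus) regions, which the method associates with the foreground."""
    lap = cv2.Laplacian(gray.astype(np.float64), cv2.CV_64F)
    h, w = gray.shape
    focus = np.zeros((h // block, w // block))
    for by in range(focus.shape[0]):
        for bx in range(focus.shape[1]):
            patch = lap[by * block:(by + 1) * block, bx * block:(bx + 1) * block]
            focus[by, bx] = patch.var()
    return focus

frame = cv2.imread("frame.png")                       # hypothetical video frame
focus_map = blockwise_focus_map(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
```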

A method of improving the quality of 3D images acquired from RGB-depth camera (깊이 영상 카메라로부터 획득된 3D 영상의 품질 향상 방법)

  • Park, Byung-Seo; Kim, Dong-Wook; Seo, Young-Ho
    • Journal of the Korea Institute of Information and Communication Engineering, v.25 no.5, pp.637-644, 2021
  • In the fields of computer vision, robotics, and augmented reality, 3D space and 3D object detection and recognition technologies have become increasingly important. In particular, since RGB and depth images can be acquired in real time through image sensors such as the Microsoft Kinect, studies on object detection, tracking, and recognition have changed considerably. In this paper, we propose a method to improve the quality of 3D reconstructed images by processing images acquired from depth-based (RGB-D) cameras in a multi-view camera system. We propose a method of removing noise outside an object by applying a mask obtained from the color image, and a method of applying a combined filtering operation based on the differences in depth between pixels inside the object. The experimental results confirm that the proposed methods effectively remove noise and improve the quality of the 3D reconstructed image.
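
As an illustration of the general approach rather than the paper's exact filter, the sketch below zeroes out depth samples that fall outside a color-derived object mask and then median-filters the interior to suppress depth noise. The mask source and the choice of median filtering are assumptions.

```python
import cv2
import numpy as np

def clean_depth_with_color_mask(depth_mm, color_mask, ksize=5):
    """Remove depth outside the object mask, then median-filter the interior.
    depth_mm: uint16 depth in millimeters; color_mask: uint8 (255 = object)."""
    masked = np.where(color_mask > 0, depth_mm, 0).astype(np.uint16)
    filtered = cv2.medianBlur(masked, ksize)          # suppress isolated depth noise
    return np.where(color_mask > 0, filtered, 0).astype(np.uint16)
```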

Conversion Method of 3D Point Cloud to Depth Image and Its Hardware Implementation (3차원 점군데이터의 깊이 영상 변환 방법 및 하드웨어 구현)

  • Jang, Kyounghoon; Jo, Gippeum; Kim, Geun-Jun; Kang, Bongsoon
    • Journal of the Korea Institute of Information and Communication Engineering, v.18 no.10, pp.2443-2450, 2014
  • In motion recognition systems that use depth images, the depth image is converted into real-world 3D point cloud data so that algorithms can be applied efficiently, and the output is then converted back into a depth image in projective space. However, this coordinate conversion introduces rounding errors, and the applied algorithms can cause data loss. In this paper, we propose an efficient method for converting 3D point cloud data into a depth image without rounding errors or data loss due to image size changes, together with its hardware implementation. The proposed system was developed using OpenCV in a Windows program and tested in real time with a Kinect. It was also designed in Verilog HDL and verified on a Xilinx Zynq-7000 FPGA board.
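
A minimal NumPy sketch of the projective conversion from 3D points back to a depth image under a pinhole model is shown below; the intrinsics, image size, and nearest-point z-buffering are assumptions, and the paper's rounding-error-free formulation and Verilog design are not reproduced.

```python
import numpy as np

FX, FY, CX, CY = 580.0, 580.0, 320.0, 240.0   # hypothetical Kinect-like intrinsics
W, H = 640, 480

def points_to_depth(points):
    """Project (N, 3) real-world points (meters) into a depth image (millimeters),
    keeping the nearest point per pixel."""
    depth = np.zeros((H, W), dtype=np.uint16)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    valid = z > 0
    u = np.round(FX * x[valid] / z[valid] + CX).astype(int)
    v = np.round(FY * y[valid] / z[valid] + CY).astype(int)
    zmm = np.round(z[valid] * 1000.0).astype(np.uint16)
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    for uu, vv, zz in zip(u[inside], v[inside], zmm[inside]):
        if depth[vv, uu] == 0 or zz < depth[vv, uu]:
            depth[vv, uu] = zz                        # z-buffer: keep the nearest point
    return depth
```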

Improvement of 3D Stereoscopic Perception Using Depth Map Transformation (깊이맵 변환을 이용한 3D 입체감 개선 방법)

  • Jang, Seong-Eun; Jung, Da-Un; Seo, Joo-Ha; Kim, Man-Bae
    • Journal of Broadcast Engineering, v.16 no.6, pp.916-926, 2011
  • It is well known that high-resolution 3D movie content frequently does not deliver the same 3D perception as low-resolution 3D images. To address this problem, we propose a novel method that produces a new stereoscopic image through a depth-map transformation based on the spatial complexity of the image. After analyzing the depth-map histogram, the depth map is decomposed into multiple depth planes, which are transformed according to the spatial complexity and then composited into a new depth map. Experimental results demonstrate that the lower the spatial complexity, the higher the perceived video quality and depth perception. A visual fatigue test also showed that the resulting stereoscopic images cause less visual fatigue.
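
The sketch below illustrates only the decomposition-and-rescaling idea: a normalized depth map is split into uniform depth planes, each plane's values are stretched around its center by a gain derived from a spatial-complexity score, and the planes are recombined. The uniform planes and the linear gain are assumptions, not the paper's transformation.

```python
import numpy as np

def transform_depth_planes(depth, complexity, num_planes=4, max_gain=1.5):
    """depth: float map normalized to [0, 1]; complexity: score in [0, 1]
    (0 = spatially simple scene).  Lower complexity -> stronger depth
    stretching, following the tendency reported above."""
    gain = 1.0 + (max_gain - 1.0) * (1.0 - complexity)
    edges = np.linspace(0.0, 1.0, num_planes + 1)
    out = np.zeros_like(depth)
    for i in range(num_planes):
        hi = edges[i + 1] if i < num_planes - 1 else 1.0 + 1e-9   # include depth == 1.0
        plane = (depth >= edges[i]) & (depth < hi)
        center = 0.5 * (edges[i] + edges[i + 1])
        out[plane] = center + gain * (depth[plane] - center)      # stretch around plane center
    return np.clip(out, 0.0, 1.0)
```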