• Title/Summary/Keyword: 3D Depth Image

A Study on Create Depth Map using Focus/Defocus in single frame (단일 프레임 영상에서 초점을 이용한 깊이정보 생성에 관한 연구)

  • Han, Hyeon-Ho;Lee, Gang-Seong;Lee, Sang-Hun
    • Journal of Digital Convergence
    • /
    • v.10 no.4
    • /
    • pp.191-197
    • /
    • 2012
  • In this paper, we present a method for creating a 3D image from a 2D image by extracting initial depth values from focus information. The initial depth values are derived from focal information obtained by comparing the original image with a Gaussian-filtered version of it. This initial depth is assigned to the object segments obtained with the normalized cut technique, and the depth of each object is then corrected to the average depth value within that object, so that a single object carries a uniform depth. The generated depth map is used to convert the image to 3D using DIBR (Depth Image Based Rendering), and the resulting 3D image is compared with images generated by other techniques.
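
A minimal sketch of the general idea in the abstract above (not the authors' implementation): the focus measure is taken as the per-pixel difference between the original and a Gaussian-blurred copy, and each segment (e.g. from normalized cuts) is flattened to its mean depth. The function names, the sigma value, and the use of NumPy/SciPy are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def initial_depth_from_defocus(gray, sigma=2.0):
    """Rough focus measure: in-focus (sharp) regions lose more detail under blur,
    so |original - blurred| is larger there."""
    g = gray.astype(np.float64)
    focus = np.abs(g - gaussian_filter(g, sigma=sigma))
    return focus / (focus.max() + 1e-8)          # normalize to [0, 1]

def flatten_depth_per_segment(depth, labels):
    """Replace the depth inside each object segment by the segment's mean,
    so a single object carries a single depth value."""
    out = np.zeros_like(depth)
    for lab in np.unique(labels):
        mask = labels == lab
        out[mask] = depth[mask].mean()
    return out
```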

3D Printing Based Patient-specific Orbital Implant Design and Production by Using A Depth Image (깊이 영상을 이용한 3D 프린팅 기반 환자 맞춤형 안와 임플란트의 설계 및 제작)

  • Seo, Udeok;Kim, Ku-Jin
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.8
    • /
    • pp.903-914
    • /
    • 2020
  • In this paper, we present a novel algorithm for generating a 3D model of a patient-specific orbital implant, which is then produced with a 3D printer. Given CT (computed tomography) scan data of a defective orbital wall or floor, we compose a depth image of the defect site using depth buffering, a computer graphics technique. From the depth image, we compute a 3D surface that fills the broken part by interpolating the points around it, and by thickening this surface we obtain the 3D volume mesh of the orbital implant. Our algorithm generates a patient-specific orbital implant whose shape accurately coincides with the broken part of the orbit, and it significantly reduces the time needed to manufacture the implant while offering high user convenience.
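
A hypothetical sketch of the hole-filling step described above, assuming the defect site is given as a boolean mask over a depth-buffer image; scipy.interpolate.griddata stands in for whatever interpolation scheme the authors actually use, and the surface-thickening step is only noted in a comment.

```python
import numpy as np
from scipy.interpolate import griddata

def fill_defect_surface(depth, defect_mask):
    """Estimate depth values inside the defect region by interpolating the
    surrounding intact pixels of the depth-buffer image."""
    valid = ~defect_mask
    pts = np.column_stack(np.nonzero(valid))          # (row, col) of intact pixels
    hole = np.column_stack(np.nonzero(defect_mask))   # (row, col) of defect pixels
    filled = depth.astype(np.float64).copy()
    filled[defect_mask] = griddata(pts, depth[valid], hole, method="linear")
    return filled   # thickening this surface would then yield the implant volume
```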

Optimized Multiple Description Lattice Vector Quantization Coding for 3D Depth Image

  • Zhang, Huiwen;Bai, Huihui;Liu, Meiqin;Zhao, Yao
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.3
    • /
    • pp.1140-1154
    • /
    • 2015
  • Multiple Description (MD) coding is a promising approach for the robust transmission of information over error-prone channels, and lattice vector quantization (LVQ) is a widely used way to design an MD image coder. However, unlike a traditional 2D texture image, a 3D depth image has special characteristics that should be taken into account for efficient compression. In this paper, an optimized MDLVQ scheme is proposed in view of these characteristics. First, owing to the sparsity of depth images, image blocks are classified into edge blocks and smooth blocks, which are encoded in different modes. Furthermore, according to the boundary content of each edge block, the LVQ step size is adapted block by block. Experimental results validate the effectiveness of the proposed scheme, which shows better rate-distortion performance than conventional MDLVQ.
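
A simplified, hypothetical sketch of the block-adaptive idea: blocks are labelled edge or smooth from their gradient energy, and edge blocks are quantized with a finer step to preserve depth boundaries. Plain uniform quantization stands in for the lattice vector quantizer here, and the threshold and step values are illustrative assumptions.

```python
import numpy as np

def classify_block(block, edge_thresh=4.0):
    """Label a depth block as 'edge' or 'smooth' from its mean gradient magnitude."""
    gy, gx = np.gradient(block.astype(np.float64))
    return "edge" if np.hypot(gx, gy).mean() > edge_thresh else "smooth"

def quantize_block(block, base_step=8.0):
    """Use a finer quantization step on edge blocks to preserve depth boundaries,
    and a coarser step on smooth blocks (uniform quantization stands in for LVQ)."""
    step = base_step / 2 if classify_block(block) == "edge" else base_step
    return np.round(block / step) * step
```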

2D to 3D Anaglyph Image Conversion using Linear Curve in HTML5 (HTML5에서 직선의 기울기를 이용한 2D to 3D 입체 이미지 변환)

  • Park, Young Soo
    • Journal of Digital Convergence
    • /
    • v.12 no.12
    • /
    • pp.521-528
    • /
    • 2014
  • In this paper, we propose a method for converting a 2D image into a 3D image using linear curves in HTML5. Only one image is used, without any additional depth-map information. The original image is filtered to extract the RGB colors for the left and right eyes. After selecting ready-made control points on linear curves, users can set and modify the depth values, and the values they select are reflected in the result. An anaglyph 3D image is then generated automatically from the global and local depth information. Since all of this work was designed and implemented for the Web environment in HTML5, it is easy and convenient, and end users can create any 3D image they want.
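
A rough sketch of how a red/cyan anaglyph can be assembled from a single image plus a user-supplied depth map, in the spirit of the abstract above; the per-pixel shift rule and max_shift value are assumptions, and the HTML5 canvas details are replaced by a NumPy illustration.

```python
import numpy as np

def anaglyph_from_depth(rgb, depth, max_shift=10):
    """Build a red/cyan anaglyph by shifting the red channel horizontally
    in proportion to per-pixel depth (nearer pixels shift more)."""
    h, w, _ = rgb.shape
    shift = (depth / (depth.max() + 1e-8) * max_shift).astype(int)
    out = rgb.copy()
    cols = np.arange(w)
    for y in range(h):
        src = np.clip(cols + shift[y], 0, w - 1)
        out[y, :, 0] = rgb[y, src, 0]     # red from the shifted (left-eye) view
    return out                            # green/blue stay as the right-eye view
```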

Image Feature-Based Real-Time RGB-D 3D SLAM with GPU Acceleration (GPU 가속화를 통한 이미지 특징점 기반 RGB-D 3차원 SLAM)

  • Lee, Donghwa;Kim, Hyongjin;Myung, Hyun
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.19 no.5
    • /
    • pp.457-461
    • /
    • 2013
  • This paper proposes an image-feature-based real-time RGB-D (Red-Green-Blue Depth) 3D SLAM (Simultaneous Localization and Mapping) system. RGB-D data from Kinect-style sensors contain a 2D image and per-pixel depth information. 6-DOF (Degree-of-Freedom) visual odometry is obtained through a 3D RANSAC (RANdom SAmple Consensus) algorithm applied to 2D image features and their depth data. To speed up feature extraction, the computation is parallelized with GPU acceleration. After a feature manager detects a loop closure, a graph-based SLAM algorithm optimizes the trajectory of the sensor and builds a 3D point-cloud map.
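
A compact, hypothetical sketch of the 3D RANSAC step on matched features: 2D keypoints are back-projected with their depths and the 6-DOF motion is estimated from the best inlier set (Kabsch alignment inside a RANSAC loop). The camera intrinsics, thresholds, and function names are assumptions; the GPU feature extraction and graph optimization stages are not shown.

```python
import numpy as np

def backproject(pts2d, depth, fx, fy, cx, cy):
    """Lift 2D feature locations (u, v) into 3D camera coordinates using depth."""
    u, v = pts2d[:, 0], pts2d[:, 1]
    z = depth[v.astype(int), u.astype(int)]
    return np.column_stack([(u - cx) * z / fx, (v - cy) * z / fy, z])

def rigid_transform(A, B):
    """Least-squares rotation/translation (Kabsch) mapping point set A onto B."""
    ca, cb = A.mean(0), B.mean(0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # fix an improper (reflected) rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def ransac_pose(A, B, iters=200, thresh=0.02):
    """Estimate 6-DOF motion between matched 3D point sets, rejecting outliers."""
    rng = np.random.default_rng(0)
    best = np.zeros(len(A), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(A), 3, replace=False)
        R, t = rigid_transform(A[idx], B[idx])
        inliers = np.linalg.norm(A @ R.T + t - B, axis=1) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return rigid_transform(A[best], B[best])    # refit on the best inlier set
```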

Enhancing Depth Accuracy on the Region of Interest in a Scene for Depth Image Based Rendering

  • Cho, Yongjoo;Seo, Kiyoung;Park, Kyoung Shin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.7
    • /
    • pp.2434-2448
    • /
    • 2014
  • This research proposes a domain-division depth-map quantization for multiview intermediate-image generation using Depth Image-Based Rendering (DIBR). The technique applies per-pixel depth quantization according to the percentage of depth bits assigned to each domain of the depth range. A comparative experiment was conducted to investigate the potential benefits of the proposed method against linear depth quantization for DIBR multiview intermediate-image generation. The experiment evaluated three quantization methods on computer-generated 3D scenes of varying complexity and background while varying the depth resolution. The results show that the proposed domain-division depth quantization outperforms the linear method on depth maps of 7 bits or fewer, especially in scenes containing large objects.
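
A toy sketch contrasting linear quantization with a two-domain split that spends more code levels on a region-of-interest depth range, in the spirit of the abstract above; the split point and level shares are invented for illustration and the depth map is assumed normalized to [0, 1].

```python
import numpy as np

def linear_quantize(depth, bits):
    """Uniform quantization of a depth map normalized to [0, 1]."""
    levels = 2 ** bits
    return np.round(depth * (levels - 1)) / (levels - 1)

def domain_division_quantize(depth, bits, near_cut=0.3, near_share=0.7):
    """Split the depth range at near_cut and give the near domain (assumed to
    hold the region of interest) near_share of the available levels."""
    levels = 2 ** bits
    n_near = max(int(levels * near_share), 2)
    n_far = max(levels - n_near, 2)
    out = np.empty_like(depth, dtype=np.float64)
    near = depth <= near_cut
    out[near] = np.round(depth[near] / near_cut * (n_near - 1)) / (n_near - 1) * near_cut
    far = depth[~near] - near_cut
    out[~near] = near_cut + np.round(far / (1 - near_cut) * (n_far - 1)) / (n_far - 1) * (1 - near_cut)
    return out
```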

RAY-SPACE INTERPOLATION BY WARPING DISPARITY MAPS

  • Mori, Yuji;Yendo, Tomohiro;Tanimoto, Masayuki;Fujii, Toshiaki
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.583-587
    • /
    • 2009
  • In this paper, we propose a new Depth-Image-Based Rendering (DIBR) method for Free-viewpoint TV (FTV). In the proposed method, virtual viewpoint images are rendered with 3D warping instead of view-dependent depth estimation, since depth estimation is usually costly and it is desirable to eliminate it from the rendering process. However, 3D warping causes problems that do not occur with view-dependent depth estimation, such as holes in the rendered image and depth discontinuities on object surfaces in the virtual image plane, which produce artifacts in the rendered image. In this paper, these problems are solved by reconstructing disparity information at the virtual camera position from the two neighboring real cameras. In the experiments, high-quality arbitrary-viewpoint images were obtained.
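
A minimal sketch of forward-warping a disparity map to a virtual viewpoint on the baseline, keeping the nearest surface where several source pixels collide; holes are only marked here, and filling them from the second real camera is left out. The alpha parameterization and loop structure are assumptions, not the authors' code.

```python
import numpy as np

def warp_disparity(disp, alpha):
    """Forward-warp a disparity map from a real camera to a virtual camera that
    sits at fraction `alpha` of the baseline (0 = reference, 1 = neighbor).
    Where several pixels land on the same target, keep the nearest (largest disparity)."""
    h, w = disp.shape
    warped = np.full((h, w), -1.0)
    cols = np.arange(w)
    for y in range(h):
        target = np.round(cols - alpha * disp[y]).astype(int)
        valid = (target >= 0) & (target < w)
        for x in cols[valid]:
            t = target[x]
            if disp[y, x] > warped[y, t]:
                warped[y, t] = disp[y, x]
    return warped   # -1 marks holes to be filled from the other real camera
```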

Design and Implementation of High-Resolution Integral Imaging Display System using Expanded Depth Image

  • Song, Min-Ho;Lim, Byung-Muk;Ryu, Ga-A;Ha, Jong-Sung;Yoo, Kwan-Hee
    • International Journal of Contents
    • /
    • v.14 no.3
    • /
    • pp.1-6
    • /
    • 2018
  • For 3D display applications, auto-stereoscopic display methods that provide 3D images without glasses have been actively developed. This paper is concerned with developing a display system for elemental images of real space using integral imaging. Unlike the conventional method, which reduces the color image to the resolution of the generated depth image, we minimize the loss of original color-image data by generating an enlarged depth image through interpolation. The method is implemented efficiently by applying GPU parallel processing with OpenCL to rapidly generate the large amount of elemental-image data. Experimental results show that the displayed integral imaging is of higher quality than that generated by previous methods.
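
A small sketch of the enlarged-depth-image idea: the depth map is upsampled by bilinear interpolation so the color image need not be reduced to the depth map's resolution. The pure-NumPy implementation here is only illustrative and stands in for the authors' OpenCL/GPU version.

```python
import numpy as np

def upscale_depth_bilinear(depth, scale):
    """Enlarge a depth image by bilinear interpolation so the color image
    can keep its original resolution instead of being reduced."""
    h, w = depth.shape
    H, W = int(h * scale), int(w * scale)
    ys = np.linspace(0, h - 1, H)
    xs = np.linspace(0, w - 1, W)
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    d = depth.astype(np.float64)
    top = d[np.ix_(y0, x0)] * (1 - wx) + d[np.ix_(y0, x1)] * wx
    bot = d[np.ix_(y1, x0)] * (1 - wx) + d[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```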

Applying differential techniques for 2D/3D video conversion to the objects grouped by depth information (2D/3D 동영상 변환을 위한 그룹화된 객체별 깊이 정보의 차등 적용 기법)

  • Han, Sung-Ho;Hong, Yeong-Pyo;Lee, Sang-Hun
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.13 no.3
    • /
    • pp.1302-1309
    • /
    • 2012
  • In this paper, we propose applying differential depth assignment to objects grouped by depth information for 2D/3D video conversion. One problem with converting 2D images to 3D images by tracking pixel motion is that objects that do not move between adjacent frames provide no depth information. This problem can be solved by applying a relative-height cue only to the objects that have no motion information between frames, after splitting the background and objects and extracting depth information from the motion vectors of the objects. With this technique, the background and every object obtain their own depth information. The proposed method is used to generate a depth map for producing 3D images with DIBR (Depth Image Based Rendering), and we verified that objects with no movement between frames also receive depth information.
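
An illustrative sketch of the grouping rule in the abstract: segments with measurable motion get depth from their mean motion magnitude, while static segments fall back to the relative-height cue (lower in the frame is assumed nearer). The threshold, normalization, and the segmentation itself are assumed inputs, not the authors' implementation.

```python
import numpy as np

def depth_for_grouped_objects(motion_mag, labels, static_thresh=1e-3):
    """Assign one depth per object segment: motion-based when the object moves,
    relative-height-based when it does not."""
    h, _ = motion_mag.shape
    rows = np.broadcast_to(np.arange(h)[:, None], motion_mag.shape)
    depth = np.zeros(motion_mag.shape, dtype=np.float64)
    for lab in np.unique(labels):
        mask = labels == lab
        m = motion_mag[mask].mean()
        if m > static_thresh:                   # moving object: faster apparent motion ≈ nearer
            depth[mask] = m
        else:                                   # static object: use vertical position instead
            depth[mask] = rows[mask].mean() / h
    return depth / (depth.max() + 1e-8)         # normalize to [0, 1] for DIBR
```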

Recent Technologies for the Acquisition and Processing of 3D Images Based on Deep Learning (딥러닝기반 입체 영상의 획득 및 처리 기술 동향)

  • Yoon, M.S.
    • Electronics and Telecommunications Trends
    • /
    • v.35 no.5
    • /
    • pp.112-122
    • /
    • 2020
  • In 3D computer graphics, a depth map is an image that provides information about the distance from the viewpoint to the subject's surface. Stereo sensors, depth cameras, and imaging systems that combine an active illumination source with a time-resolved detector can perform accurate depth measurements with their own light sources. The 3D information obtained through a depth map is useful in 3D modeling, autonomous vehicle navigation, object recognition and remote gesture detection, resolution-enhanced medical imaging, aviation and defense technology, and robotics. In addition, depth-map information is important for extracting and restoring multi-view images and for extracting the phase information required for digital hologram synthesis. This study reviews recent research trends in deep-learning-based 3D data analysis and in depth-map extraction using convolutional neural networks, and focuses on 3D image processing technology related to digital holograms and multi-view image extraction/reconstruction, which are becoming more popular as hardware computing power rapidly increases.