• Title/Summary/Keyword: Depth Map Image

Effects of Depth Map Quantization for Computer-Generated Multiview Images using Depth Image-Based Rendering

  • Kim, Min-Young;Cho, Yong-Joo;Choo, Hyon-Gon;Kim, Jin-Woong;Park, Kyoung-Shin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.5 no.11 / pp.2175-2190 / 2011
  • This paper presents the effects of depth map quantization on multiview intermediate image generation using depth image-based rendering (DIBR). DIBR synthesizes multiple virtual views of a 3D scene from a 2D image and its associated depth map. However, it needs precise depth information in order to generate reliable and accurate intermediate view images for use in multiview 3D display systems. Previous work has extensively studied pre-processing of the depth map, but little is known about depth map quantization. In this paper, we conduct an experiment to estimate the degree of depth map quantization that still affords acceptable image quality for generating DIBR-based multiview intermediate images. The experiment uses computer-generated 3D scenes, in which multiview images captured directly from the scene are compared to multiview intermediate images constructed by DIBR with a number of quantized depth maps. The results showed no significant effect of quantizing the depth map from 16-bit down to 7-bit (more specifically, 96 levels) on DIBR. Hence, a depth map of at least 7 bits is needed to maintain sufficient image quality for a DIBR-based multiview 3D system.
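
To make the quantization step concrete, here is a minimal NumPy sketch (not the authors' code) that reduces a 16-bit depth map to an arbitrary number of levels, e.g. 128 for the 7-bit case or 96 for the 96-scale case mentioned above; the function name and the synthetic ramp example are assumptions.

```python
import numpy as np

def quantize_depth(depth16, levels=96):
    """Quantize a 16-bit depth map to a given number of levels.

    depth16: 2-D uint16 array (0..65535).
    levels : number of quantization steps, e.g. 96 for the "96-scale" case,
             128 for 7-bit, 65536 for the original 16-bit precision.
    """
    d = depth16.astype(np.float64) / 65535.0          # normalize to [0, 1]
    q = np.round(d * (levels - 1)) / (levels - 1)     # snap to 'levels' steps
    return (q * 65535.0).astype(np.uint16)            # back to the 16-bit range

# Example: compare a synthetic depth ramp at 16-bit and 7-bit (128-level) precision
if __name__ == "__main__":
    ramp = np.tile(np.linspace(0, 65535, 640, dtype=np.uint16), (480, 1))
    q7 = quantize_depth(ramp, levels=128)
    print("max abs error (16-bit units):",
          int(np.max(np.abs(ramp.astype(int) - q7.astype(int)))))
```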

Bokeh Effect Algorithm using Defocus Map in Single Image (단일 영상에서 디포커스 맵을 활용한 보케 효과 알고리즘)

  • Lee, Yong-Hwan;Kim, Heung Jun
    • Journal of the Semiconductor & Display Technology / v.21 no.3 / pp.87-91 / 2022
  • The bokeh effect is a stylistic technique that blurs the background of a photograph. This paper implements a bokeh effect from a single image through post-processing. Generating a depth map is the key step in producing the bokeh effect; a depth map is an image that contains information about the distance of scene surfaces from a viewpoint. First, this work presents algorithms to determine the depth map from a single input image. We obtain a sparse defocus map from the gradient ratio between the input image and a blurred copy, and the full defocus map is then obtained by propagating the estimated values from the edges using the matting Laplacian. Finally, the image is segmented into foreground and background and the background is blurred, achieving the bokeh effect. The experimental results demonstrate an efficient image processing method that applies a bokeh effect using a single image.
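
As a hedged illustration of the final compositing step, the sketch below blurs the background of an image according to a precomputed defocus map; the thresholding scheme, parameter values, and OpenCV-based implementation are assumptions, not the paper's algorithm.

```python
import numpy as np
import cv2

def apply_bokeh(image, defocus_map, fg_threshold=0.3, blur_sigma=8.0):
    """Blur the background of 'image' according to 'defocus_map'.

    image       : HxWx3 uint8 BGR image.
    defocus_map : HxW float map in [0, 1]; larger = more defocused (background).
    fg_threshold, blur_sigma: hypothetical parameters, not from the paper.
    """
    blurred = cv2.GaussianBlur(image, (0, 0), blur_sigma)        # heavily blurred copy
    alpha = np.clip((defocus_map - fg_threshold) / (1.0 - fg_threshold), 0.0, 1.0)
    alpha = alpha[..., None]                                     # HxWx1 for broadcasting
    out = (1.0 - alpha) * image.astype(np.float32) + alpha * blurred.astype(np.float32)
    return out.astype(np.uint8)
```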

Real-Time 2D-to-3D Conversion for 3DTV using Time-Coherent Depth-Map Generation Method

  • Nam, Seung-Woo;Kim, Hye-Sun;Ban, Yun-Ji;Chien, Sung-Il
    • International Journal of Contents / v.10 no.3 / pp.9-16 / 2014
  • Depth-image-based rendering is generally used in real-time 2D-to-3D conversion for 3DTV. However, inaccurate depth maps cause flickering issues between image frames in a video sequence, resulting in eye fatigue while viewing 3DTV. To resolve this flickering issue, we propose a new 2D-to-3D conversion scheme based on fast and robust depth-map generation from a 2D video sequence. The proposed depth-map generation algorithm divides an input video sequence into several cuts using a color histogram. The initial depth of each cut is assigned based on a hypothesized depth-gradient model. The initial depth map of the current frame is refined using color and motion information. Thereafter, the depth map of the next frame is updated using the difference image to reduce depth flickering. The experimental results confirm that the proposed scheme performs real-time 2D-to-3D conversions effectively and reduces human eye fatigue.
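
A minimal sketch of the kind of histogram-based cut detection the abstract describes, assuming OpenCV and a Bhattacharyya-distance threshold; the parameter values are illustrative and not taken from the paper.

```python
import cv2

def detect_cuts(frames, bins=32, threshold=0.5):
    """Split a video into cuts by comparing color histograms of consecutive frames.

    frames    : iterable of HxWx3 uint8 BGR frames.
    bins      : histogram bins per channel (assumed value).
    threshold : Bhattacharyya distance above which a new cut starts (assumed).
    Returns the list of frame indices where a new cut begins.
    """
    cuts, prev_hist = [0], None
    for i, frame in enumerate(frames):
        hist = cv2.calcHist([frame], [0, 1, 2], None,
                            [bins, bins, bins], [0, 256] * 3)
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            dist = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
            if dist > threshold:
                cuts.append(i)              # large histogram change: new cut starts here
        prev_hist = hist
    return cuts
```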

Depth Map Generation Algorithm from Single Defocused Image (흐린 초점의 단일영상에서 깊이맵 생성 알고리즘)

  • Lee, Yong-Hwan;Kim, Youngseop
    • Journal of the Semiconductor & Display Technology / v.15 no.3 / pp.67-71 / 2016
  • This paper addresses the problem of recovering a defocus map from a single image. We describe a simple and effective approach to estimate the amount of defocus blur at the edge locations of the image. First, we re-blur the input image using a Gaussian function and calculate the ratio of gradient magnitudes between the input image and the re-blurred image, which reflects the amount of blur. Then we obtain a full defocus map by propagating the blur amounts from the edge locations. Experimental results show that our method provides a reliable estimate of the depth map and that the algorithm is robust to noise, inaccurate edge locations, and interference from neighboring edges in the input image.
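
The re-blurring idea can be sketched as follows: under a standard step-edge model, if an edge with unknown blur sigma is re-blurred with a known sigma0, the gradient-magnitude ratio R gives sigma = sigma0 / sqrt(R^2 - 1). The code below is an illustrative implementation under that assumption, not the authors' code; the edge-detection thresholds are arbitrary.

```python
import numpy as np
import cv2

def sparse_defocus_map(gray, sigma0=1.0, edge_low=50, edge_high=150):
    """Estimate defocus blur at edge pixels via the gradient-magnitude ratio."""
    gray = gray.astype(np.float64)
    reblurred = cv2.GaussianBlur(gray, (0, 0), sigma0)   # re-blur with known sigma0

    def grad_mag(img):
        gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
        return np.sqrt(gx ** 2 + gy ** 2)

    ratio = grad_mag(gray) / np.maximum(grad_mag(reblurred), 1e-6)

    edges = cv2.Canny(gray.astype(np.uint8), edge_low, edge_high) > 0
    sigma = np.zeros_like(gray)
    valid = edges & (ratio > 1.01)                        # ratio must exceed 1 at a real edge
    sigma[valid] = sigma0 / np.sqrt(ratio[valid] ** 2 - 1.0)
    return sigma                                          # sparse map: nonzero only at edges
```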

Enhancing Depth Measurements in Depth From Focus based on Mutual Structures (상호 구조에 기반한 초점으로부터의 깊이 측정 방법 개선)

  • Mahmood, Muhammad Tariq;Choi, Young Kyu
    • Journal of the Semiconductor & Display Technology / v.21 no.3 / pp.17-21 / 2022
  • A variety of techniques have been proposed in the literature for improving the depth maps produced by the depth-from-focus method. Unfortunately, these techniques over-smooth the depth maps over regions of depth discontinuity. In this paper, we propose a robust technique for improving the depth map by employing a nonconvex smoothness function that preserves depth edges. In addition, the proposed technique exploits the mutual structures between the depth map and a guidance map. This guidance map is obtained by taking the mean of the image intensities in the image sequence. The depth map is updated iteratively until the nonconvex objective function converges. Experiments performed on real, complex image sequences show the effectiveness of the proposed technique.
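
Since the paper's nonconvex objective is not reproduced here, the following sketch substitutes a simple guidance-weighted diffusion to show how a guidance map built from the mean of the focus stack can steer the smoothing while preserving edges; all parameter values are assumptions.

```python
import numpy as np

def refine_depth_with_guidance(depth, focus_stack, iters=50, lam=0.5, sigma_g=0.05):
    """Guidance-weighted smoothing of a depth-from-focus map.

    depth       : HxW initial depth map (e.g. the argmax of a focus measure).
    focus_stack : NxHxW image stack; its mean intensity is the guidance map,
                  as described in the abstract.
    """
    guide = focus_stack.mean(axis=0).astype(np.float64)
    guide = (guide - guide.min()) / (guide.max() - guide.min() + 1e-9)
    d = depth.astype(np.float64).copy()
    for _ in range(iters):
        acc = np.zeros_like(d)
        wsum = np.zeros_like(d)
        for shift in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            g_n = np.roll(guide, shift, axis=(0, 1))
            d_n = np.roll(d, shift, axis=(0, 1))
            w = np.exp(-((guide - g_n) ** 2) / (2 * sigma_g ** 2))  # low weight across guidance edges
            acc += w * d_n
            wsum += w
        d = (1.0 - lam) * d + lam * acc / np.maximum(wsum, 1e-9)    # smooth flat regions, keep edges
    return d
```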

A Study on 2D/3D image Conversion Method using Create Depth Map (2D/3D 변환을 위한 깊이정보 생성기법에 관한 연구)

  • Han, Hyeon-Ho;Lee, Gang-Seong;Lee, Sang-Hun
    • Journal of the Korea Academia-Industrial cooperation Society / v.12 no.4 / pp.1897-1903 / 2011
  • This paper discusses 2D/3D image conversion using techniques such as object extraction and depth-map creation. The general procedure for converting 2D images into a 3D image is to extract objects from the 2D image, estimate the distance of each point, generate the 3D image, and correct it to reduce noise. This paper proposes modified methods for creating a depth map from a 2D image and estimating the distances of the objects in it. Depth-map information, which determines the distance of objects, is the key data for creating a 3D image from 2D images. To obtain more accurate depth-map data, noise filtering is applied to the optical flow. With the proposed method, better depth-map information is calculated and a better 3D image is constructed.
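
A rough sketch of how optical flow plus noise filtering can yield a depth cue, as the abstract suggests; the Farneback flow parameters and median-filter kernel size are assumptions, and mapping flow magnitude to depth is only illustrative.

```python
import numpy as np
import cv2

def depth_from_motion(prev_gray, next_gray, ksize=5):
    """Rough depth cue from optical-flow magnitude between two grayscale frames.

    Larger apparent motion is treated as closer to the camera, and a median
    filter suppresses flow noise, loosely mirroring the noise filtering
    described in the abstract.
    """
    # args: flow, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.sqrt(flow[..., 0] ** 2 + flow[..., 1] ** 2)
    depth = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    return cv2.medianBlur(depth, ksize)
```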

Intermediate View Synthesis Method using Kinect Depth Camera (Kinect 깊이 카메라를 이용한 가상시점 영상생성 기술)

  • Lee, Sang-Beom;Ho, Yo-Sung
    • Smart Media Journal / v.1 no.3 / pp.29-35 / 2012
  • Depth image-based rendering (DIBR) is a technique for rendering virtual views from a color image and its corresponding depth map. The most important issue in DIBR is that the virtual view has no information in newly exposed areas, the so-called dis-occlusions. In this paper, we propose an intermediate view generation algorithm using the Kinect depth camera, which is based on infrared structured light. After capturing a color image and its corresponding depth map, we pre-process the depth map. The pre-processed depth map is warped to the virtual viewpoint and filtered by median filtering to reduce the truncation error. Then, the color image is back-projected to the virtual viewpoint using the warped depth map. In order to fill in the remaining holes caused by dis-occlusion, we perform a background-based image in-painting operation. Finally, we obtain a synthesized image without any dis-occlusion. Experimental results show that the proposed algorithm generates very natural images in real time.
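
A toy DIBR forward-warping sketch in the spirit of the pipeline above, assuming a simple linear disparity model; it omits the paper's depth pre-processing and median filtering of the warped depth, and uses generic OpenCV inpainting rather than the background-based in-painting described.

```python
import numpy as np
import cv2

def synthesize_view(color, depth, baseline_px=20.0):
    """Shift pixels horizontally by a depth-dependent disparity and inpaint holes.

    color : HxWx3 uint8 image, depth : HxW uint8 map (255 = near).
    The linear disparity model and baseline_px are assumptions.
    """
    h, w = depth.shape
    disparity = (baseline_px * depth.astype(np.float32) / 255.0).astype(np.int32)
    warped = np.zeros_like(color)
    written = np.zeros((h, w), dtype=bool)
    xs = np.arange(w)
    for y in range(h):
        tx = np.clip(xs + disparity[y], 0, w - 1)
        order = np.argsort(depth[y])          # write far-to-near so near pixels win collisions
        warped[y, tx[order]] = color[y, order]
        written[y, tx[order]] = True
    holes = (~written).astype(np.uint8)       # dis-occlusions: pixels never written to
    return cv2.inpaint(warped, holes, 3, cv2.INPAINT_TELEA)
```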

Implementing a Depth Map Generation Algorithm by Convolutional Neural Network (깊이맵 생성 알고리즘의 합성곱 신경망 구현)

  • Lee, Seungsoo;Kim, Hong Jin;Kim, Manbae
    • Journal of Broadcast Engineering / v.23 no.1 / pp.3-10 / 2018
  • Depth maps have been utilized in a variety of fields. Recently, research on generating depth maps with artificial neural networks (ANNs) has gained much interest. This paper validates the feasibility of implementing a ready-made depth map generation method with a convolutional neural network (CNN). First, for a given image, a depth map is generated as the weighted average of a saliency map and a motion history image. Then the CNN is trained on these images and depth maps. Objective and subjective experiments performed on the CNN show that it can replace the ready-made depth generation method.
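
A minimal sketch of the two ingredients the abstract names: a training target formed as a weighted average of a saliency map and a motion history image, and a small CNN regressor. The weight, layer sizes, and PyTorch framing are assumptions, not the paper's design.

```python
import numpy as np
import torch.nn as nn

def target_depth(saliency, mhi, w_saliency=0.5):
    """Training target as a weighted average of a saliency map and a motion
    history image (both HxW, normalized to [0, 1]); the weight is assumed."""
    return np.clip(w_saliency * saliency + (1.0 - w_saliency) * mhi, 0.0, 1.0)

class DepthCNN(nn.Module):
    """Minimal stand-in for the depth-generation CNN; layer sizes are
    illustrative, not the architecture from the paper."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),   # depth in [0, 1]
        )

    def forward(self, x):                                    # x: Nx3xHxW image batch
        return self.net(x)

# Training would minimize e.g. nn.MSELoss() between DepthCNN()(images) and
# the saliency/motion-history targets described in the abstract.
```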

Real-Time Stereoscopic Image Conversion Using Motion Detection and Region Segmentation (움직임 검출과 영역 분할을 이용한 실시간 입체 영상 변환)

  • Kwon, Byong-Heon;Seo, Burm-suk
    • Journal of Digital Contents Society / v.6 no.3 / pp.157-162 / 2005
  • In this paper, we propose real-time conversion methods that convert 2-D moving images into stereoscopic images using a depth map formed from motion detection and region segmentation of the image. The depth map, which represents the depth information of the image, and the proposed absolute parallax image are used as measures for qualitative evaluation. We compared depth information, parallax processing, and segmentation between objects at different depths for the proposed and conventional methods. As a result, we confirmed that the proposed method can offer a realistic stereoscopic effect for a moving image regardless of the direction and velocity of the moving objects.
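
One possible reading of the motion-plus-segmentation depth map, sketched with frame differencing and mean-shift filtering as stand-ins for the paper's motion detection and region segmentation; the parameters and the per-region averaging rule are assumptions.

```python
import numpy as np
import cv2

def depth_from_motion_and_regions(prev_bgr, curr_bgr, sp=15, sr=30):
    """Assign a per-region depth value from frame-difference motion.

    Regions with more motion receive larger depth values and are treated as
    nearer. sp and sr are assumed mean-shift parameters.
    """
    motion = cv2.absdiff(cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY),
                         cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)).astype(np.float32)
    seg = cv2.pyrMeanShiftFiltering(curr_bgr, sp, sr)
    q = (seg // 16).astype(np.int32)                   # quantize colors to label regions
    labels = q[..., 0] * 256 + q[..., 1] * 16 + q[..., 2]
    depth = np.zeros_like(motion)
    for lab in np.unique(labels):
        mask = labels == lab
        depth[mask] = motion[mask].mean()              # per-region average motion
    return cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```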

2D/3D conversion method using depth map based on haze and relative height cue (실안개와 상대적 높이 단서 기반의 깊이 지도를 이용한 2D/3D 변환 기법)

  • Han, Sung-Ho;Kim, Yo-Sup;Lee, Jong-Yong;Lee, Sang-Hun
    • Journal of Digital Convergence / v.10 no.9 / pp.351-356 / 2012
  • This paper presents a 2D/3D conversion technique using a depth map generated from haze and relative-height cues. When only the conventional haze information is used, errors can occur in images without haze. To reduce such errors, a new approach is proposed that combines the haze information with a depth map constructed from the relative-height cue. In addition, the gray-scale image obtained from mean-shift segmentation is combined with the haze-based depth map to sharpen object contour lines, improving the quality of the 3D image. Left and right view images are generated by DIBR (depth image-based rendering) using the input image and the final depth map, and these left and right images are then used to generate a red-cyan 3D image. The result is verified by measuring the PSNR between the depth maps.
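
An illustrative combination of the two cues named above: a dark-channel-style haze measure and a relative-height ramp. The dark-channel formulation, patch size, and mixing weight are assumptions rather than the paper's exact depth model.

```python
import numpy as np
import cv2

def depth_from_haze_and_height(bgr, patch=15, w_haze=0.6):
    """Combine a haze cue (dark-channel brightness) with a relative-height cue.

    Brighter dark-channel values (more haze) and positions higher in the frame
    are both treated as farther away.
    """
    dark = cv2.erode(bgr.min(axis=2), np.ones((patch, patch), np.uint8))   # dark channel
    haze_depth = dark.astype(np.float32) / 255.0                           # more haze = farther
    h, w = haze_depth.shape
    height_depth = np.repeat(np.linspace(1.0, 0.0, h, dtype=np.float32)[:, None], w, axis=1)
    depth = w_haze * haze_depth + (1.0 - w_haze) * height_depth            # 1.0 = far, 0.0 = near
    return (depth * 255.0).astype(np.uint8)
```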