• Title/Summary/Keyword: Joint Bilateral Filter

Temporally-Consistent High-Resolution Depth Video Generation in Background Region (배경 영역의 시간적 일관성이 향상된 고해상도 깊이 동영상 생성 방법)

  • Shin, Dong-Won; Ho, Yo-Sung
    • Journal of Broadcast Engineering, v.20 no.3, pp.414-420, 2015
  • The quality of depth images is important in 3D video systems for representing complete 3D content. However, the original depth image from a depth camera has a low resolution and a flickering problem, in which depth values vibrate over time; this causes discomfort when viewing 3D content. To solve the low-resolution problem, we employ 3D warping and a depth-weighted joint bilateral filter. A temporal mean filter can be applied to solve the flickering problem, but it introduces a residual spectrum problem in the depth image. Thus, after classifying foreground and background regions, we use the upsampled depth image for the foreground region and the temporal mean image for the background region. Test results show that the proposed method generates a temporally consistent depth video with a high resolution.
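
For readers unfamiliar with the core operation, the following is a minimal sketch of joint bilateral upsampling, the building block behind the depth-weighted joint bilateral filter mentioned above: the low-resolution depth is resampled under weights that combine spatial distance with similarity in a high-resolution guidance image. The function name, the grayscale guidance, the sigma/radius parameters, and the zero-as-missing convention are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, guide_hr, scale, sigma_s=2.0, sigma_r=12.0, radius=4):
    """Upsample a low-resolution depth map using a high-resolution guidance
    intensity image (illustrative sketch; zero is treated as missing depth)."""
    H, W = guide_hr.shape
    depth_hr = np.zeros((H, W), dtype=np.float64)
    for y in range(H):
        for x in range(W):
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if not (0 <= yy < H and 0 <= xx < W):
                        continue
                    d = depth_lr[yy // scale, xx // scale]   # nearest low-res sample
                    if d == 0:                               # skip missing depth
                        continue
                    w_s = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma_s ** 2))
                    w_r = np.exp(-((float(guide_hr[y, x]) - float(guide_hr[yy, xx])) ** 2)
                                 / (2.0 * sigma_r ** 2))
                    num += w_s * w_r * d
                    den += w_s * w_r
            depth_hr[y, x] = num / den if den > 0 else 0.0
    return depth_hr
```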

Depth map temporal consistency compensation using motion estimation (움직임 추정을 통한 깊이 지도의 시간적 일관성 보상 기법)

  • Hyun, Jeeho; Yoo, Jisang
    • Journal of the Korea Institute of Information and Communication Engineering, v.17 no.2, pp.438-446, 2013
  • Generally, the camera is not located at the center of the display in a tele-presence system, which causes incorrect eye contact between speakers and reduces the sense of realism during conversation. To solve this eye-contact problem, we propose an intermediate-view reconstruction algorithm that uses both a color camera and a depth camera and applies the depth image based rendering (DIBR) algorithm. The proposed algorithm includes an efficient hole-filling method that uses the arithmetic mean of neighboring pixels and an efficient boundary-noise removal method that expands the edge region of the depth image. Experiments show that the generated eye-contacted images have good quality.
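
A minimal sketch of the hole-filling idea described above (replacing each hole pixel with the arithmetic mean of its valid neighbours) might look like the code below; the zero hole value, the 8-neighbourhood, and the repeated passes that let larger holes shrink inwards are assumptions made for illustration.

```python
import numpy as np

def fill_holes_with_neighbor_mean(depth, hole_value=0, passes=5):
    """Replace hole pixels with the arithmetic mean of their valid
    8-neighbours; repeated passes shrink larger holes from the border."""
    d = depth.astype(np.float64).copy()
    for _ in range(passes):
        holes = np.argwhere(d == hole_value)
        if holes.size == 0:
            break
        updated = d.copy()
        for y, x in holes:
            patch = d[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
            valid = patch[patch != hole_value]
            if valid.size:
                updated[y, x] = valid.mean()
        d = updated
    return d
```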

Real-Time Virtual-View Image Synthesis Algorithm Using Kinect Camera (키넥트 카메라를 이용한 실시간 가상 시점 영상 생성 기법)

  • Lee, Gyu-Cheol; Yoo, Jisang
    • The Journal of Korean Institute of Communications and Information Sciences, v.38C no.5, pp.409-419, 2013
  • Kinect, released by Microsoft in November 2010, is a motion-sensing camera for the Xbox 360 that provides depth and color images. However, because it uses an infrared pattern, the Kinect camera generates holes and noise around object boundaries in the captured images, and a boundary flickering phenomenon also occurs. We therefore propose a real-time virtual-view synthesis algorithm that produces a high-quality virtual view by solving these problems. In the proposed algorithm, holes around the boundary are filled using the joint bilateral filter. The color image is converted into an intensity image, and flickering pixels are then found by analyzing the variation of the intensity and depth images. Boundary flickering is reduced by replacing the values of flickering pixels with the maximum pixel value of the previous depth image, and virtual views are generated by applying a 3D warping technique. Holes in regions that are not part of the occlusion region are filled with the center pixel value of the most reliable block, after the final block reliability is computed by a block-based gradient searching algorithm with block reliability. The experimental results show that the proposed algorithm generates the virtual-view image in real time.
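
The flicker-handling step can be illustrated roughly as follows: pixels whose depth changes sharply between frames while the intensity barely changes are treated as boundary flicker and held at the previous frame's depth. Holding the previous depth is a simplification of the rule quoted in the abstract (using the maximum pixel value of the previous depth image), and the two thresholds are arbitrary assumptions.

```python
import numpy as np

def suppress_boundary_flicker(depth_cur, depth_prev, gray_cur, gray_prev,
                              depth_thresh=10, gray_thresh=5):
    """Where depth changes a lot but intensity barely changes, assume
    boundary flicker and keep the previous frame's depth value."""
    depth_diff = np.abs(depth_cur.astype(np.int32) - depth_prev.astype(np.int32))
    gray_diff = np.abs(gray_cur.astype(np.int32) - gray_prev.astype(np.int32))
    flicker = (depth_diff > depth_thresh) & (gray_diff < gray_thresh)
    out = depth_cur.copy()
    out[flicker] = depth_prev[flicker]
    return out
```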

Implementation of 3D Reconstruction using Multiple Kinect Cameras (다수의 Kinect 카메라를 이용한 3차원 객체 복원 구현)

  • Shin, Dong Won; Ho, Yo Sung
    • Smart Media Journal, v.3 no.4, pp.22-27, 2014
  • Three-dimensional reconstruction allows us to represent real objects in a virtual space and to observe them from arbitrary viewpoints. This technique can be used in various application areas such as education, culture, and art. In this paper, we propose an implementation method for high-quality three-dimensional object reconstruction using multiple Kinect cameras released by Microsoft. First, we acquire color and depth images from three Kinect cameras placed in front of the object in a convergent arrangement. Because the original depth images include areas with no depth values, we employ a joint bilateral filter to refine those areas. In addition to the depth problem, there is a color mismatch between the color images of the multiview system; to solve it, we apply a color correction method based on three-dimensional geometry. The experimental results show that the three-dimensional object reconstructed with the proposed method is represented more naturally, in terms of color and shape, than the original.
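
The inter-view color mismatch step could be sketched, under simplifying assumptions, as fitting a per-channel gain and offset that maps one camera's colors onto a reference view at corresponding points. The least-squares model below and the assumption that correspondences are already available (in the paper they come from the three-dimensional geometry) are illustrative, not the authors' exact method.

```python
import numpy as np

def fit_color_correction(src_samples, ref_samples):
    """Least-squares per-channel gain/offset mapping src colors to ref colors.
    src_samples, ref_samples: (N, 3) RGB values at corresponding points."""
    params = []
    for c in range(3):
        A = np.stack([src_samples[:, c].astype(np.float64),
                      np.ones(len(src_samples))], axis=1)
        gain, offset = np.linalg.lstsq(A, ref_samples[:, c].astype(np.float64),
                                       rcond=None)[0]
        params.append((gain, offset))
    return params

def apply_color_correction(image, params):
    """Apply the fitted gain/offset to every pixel of an 8-bit RGB image."""
    out = image.astype(np.float64)
    for c, (gain, offset) in enumerate(params):
        out[..., c] = np.clip(out[..., c] * gain + offset, 0, 255)
    return out.astype(np.uint8)
```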

Real-time Eye Contact System Using a Kinect Depth Camera for Realistic Telepresence (Kinect 깊이 카메라를 이용한 실감 원격 영상회의의 시선 맞춤 시스템)

  • Lee, Sang-Beom; Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences, v.37 no.4C, pp.277-282, 2012
  • In this paper, we present a real-time eye contact system for realistic telepresence using a Kinect depth camera. To generate the eye-contact image, we capture a pair of color and depth videos and separate the single foreground user from the background. Since the raw depth data contains several types of noise, we apply a joint bilateral filtering method, and then a discontinuity-adaptive depth filter to the filtered depth map to reduce the disocclusion area. From the color image and the preprocessed depth map, we construct a user mesh model at the virtual viewpoint. The entire system is implemented with GPU-based parallel programming for real-time processing. Experimental results show that the proposed system realizes eye contact efficiently, providing realistic telepresence.
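
One plausible reading of the discontinuity-adaptive depth filter step, sketched below, smooths the depth map only in a band around large depth discontinuities so that the disocclusion holes produced by 3D warping become smaller; the gradient threshold, the band width, and the Gaussian smoothing are assumptions rather than the paper's exact filter.

```python
import numpy as np
from scipy.ndimage import binary_dilation, gaussian_filter

def discontinuity_adaptive_smooth(depth, grad_thresh=20.0, band=5, sigma=3.0):
    """Smooth depth only in a band around large discontinuities, which
    shrinks the disocclusion areas produced by subsequent 3D warping."""
    d = depth.astype(np.float64)
    gy, gx = np.gradient(d)
    disc = np.hypot(gx, gy) > grad_thresh                # discontinuity mask
    band_mask = binary_dilation(disc, iterations=band)   # widen into a band
    smoothed = gaussian_filter(d, sigma=sigma)
    out = d.copy()
    out[band_mask] = smoothed[band_mask]
    return out
```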

Depth Upsampling Method Using Total Generalized Variation (일반적 총변이를 이용한 깊이맵 업샘플링 방법)

  • Hong, Su-Min; Ho, Yo-Sung
    • Journal of Broadcast Engineering, v.21 no.6, pp.957-964, 2016
  • Acquiring reliable depth maps is a critical requirement in many applications such as 3D video and free-viewpoint TV. Depth information can be obtained directly from the object using physical sensors, such as infrared (IR) sensors. Recently, Time-of-Flight (ToF) range cameras, including the KINECT depth camera, have become popular alternatives for dense depth sensing. Although ToF cameras can capture depth information of objects in real time, their measurements are noisy and of low resolution. Filter-based depth upsampling algorithms such as joint bilateral upsampling (JBU) and the noise-aware filter for depth upsampling (NAFDU) have been proposed to obtain high-quality depth information, but these methods often lead to texture copying in the upsampled depth map. To overcome this limitation, we formulate a convex optimization problem with higher-order regularization for depth map upsampling. We reduce texture copying in the upsampled depth map by using an edge weighting term chosen according to the edge information. Experimental results show that our scheme produces more reliable depth maps than previous methods.
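
As a rough illustration of edge-weighted variational refinement, the sketch below runs plain gradient descent on a data term plus an edge-weighted first-order total variation term; it is a simplified stand-in for the paper's second-order total generalized variation model, and the weighting function, step size, and iteration count are arbitrary assumptions.

```python
import numpy as np

def edge_weighted_tv_refine(depth_init, guide_gray, data_mask,
                            iters=200, lam=0.1, tau=0.05, beta=10.0):
    """Refine an initially upsampled depth map by gradient descent on
    lam/2 * ||mask * (u - depth_init)||^2 + sum(w * |grad u|),
    where w is small across guidance-image edges (first-order TV stand-in
    for the second-order TGV model)."""
    u = depth_init.astype(np.float64).copy()
    gy, gx = np.gradient(guide_gray.astype(np.float64))
    w = np.exp(-beta * np.hypot(gx, gy))        # low weight across color edges
    mask = data_mask.astype(np.float64)

    for _ in range(iters):
        ux = np.diff(u, axis=1, append=u[:, -1:])   # forward differences
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux ** 2 + uy ** 2) + 1e-8
        px, py = w * ux / mag, w * uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        grad = lam * mask * (u - depth_init) - div
        u -= tau * grad
    return u
```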

A Robust Depth Map Upsampling Against Camera Calibration Errors (카메라 보정 오류에 강건한 깊이맵 업샘플링 기술)

  • Kim, Jae-Kwang; Lee, Jae-Ho; Kim, Chang-Ick
    • Journal of the Institute of Electronics Engineers of Korea SP, v.48 no.6, pp.8-17, 2011
  • Recently, fusion camera systems consisting of depth sensors and color cameras have been widely developed with the advent of a new type of sensor, the time-of-flight (ToF) depth sensor. The physical limitations of depth sensors usually produce low-resolution images compared to the corresponding color images. Therefore, a pre-processing module, including camera calibration, three-dimensional warping, and hole filling, is necessary to generate a high-resolution depth map aligned with the image plane of the color image. However, the result of this pre-processing step is usually inaccurate due to errors from the camera calibration and the depth measurement. In this paper, we therefore present a depth map upsampling method that is robust to these errors. First, the confidence of each measured depth value is estimated from the interrelation between the color image and the pre-upsampled depth map. Then, a detailed depth map is generated by a modified kernel regression method that excludes depth values with low confidence. Our algorithm guarantees a high-quality result in the presence of camera calibration errors, and experimental comparison with other data fusion techniques shows the superiority of the proposed method.
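
A minimal sketch of confidence-gated kernel regression in the spirit of the final step is given below: each pixel is re-estimated as a kernel-weighted average of nearby depth samples, and samples whose confidence falls below a threshold are excluded. The zeroth-order (Nadaraya-Watson) form, the Gaussian kernel, and the threshold value stand in for the paper's modified kernel regression and are assumptions.

```python
import numpy as np

def confidence_gated_regression(depth, confidence, conf_thresh=0.5,
                                radius=3, sigma=1.5):
    """Re-estimate each pixel as a kernel-weighted average of nearby depth
    samples, skipping samples whose confidence is below conf_thresh."""
    H, W = depth.shape
    out = np.zeros((H, W), dtype=np.float64)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma ** 2))
    for y in range(H):
        for x in range(W):
            y0, y1 = max(y - radius, 0), min(y + radius + 1, H)
            x0, x1 = max(x - radius, 0), min(x + radius + 1, W)
            d = depth[y0:y1, x0:x1].astype(np.float64)
            c = confidence[y0:y1, x0:x1]
            k = spatial[y0 - y + radius:y1 - y + radius,
                        x0 - x + radius:x1 - x + radius]
            wgt = np.where(c >= conf_thresh, k * c, 0.0)
            s = wgt.sum()
            out[y, x] = (wgt * d).sum() / s if s > 0 else depth[y, x]
    return out
```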