• Title, Summary, Keyword: 깊이 (depth)

Search Results: 6,837

Joint Bilateral Upsampling using Variance (분산 값을 이용한 결합 양측 업샘플링)

  • Lee, Dong-Woo;Kim, Manbae
    • Proceedings of the Korean Society of Broadcast Engineers Conference / pp.398-400 / 2012
  • With the recent surge of interest in 3D, high-quality depth images are needed to produce high-quality 3D video, and research toward this goal is actively under way. Time-of-Flight (ToF) depth sensors are used to acquire depth images; they capture depth information in real time, but suffer from low resolution and noise. The low-resolution depth map produced by the depth sensor therefore has to be converted to high resolution. Joint Bilateral Upsampling (JBU) is commonly used to increase the resolution of depth images. This paper proposes a method that improves depth image quality by extending JBU to weight the reference image differently according to block-wise variance.

  • PDF
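The base JBU scheme the abstract builds on can be sketched as follows: each high-resolution depth pixel is a weighted average of nearby low-resolution depth samples, with weights combining spatial distance and color similarity in the high-resolution guide image. This is a minimal single-channel sketch (function name, sigmas, and window radius are illustrative); the paper's block-variance weighting would additionally modulate the range term, which is not reproduced here.

```python
import numpy as np

def jbu(depth_lr, color_hr, scale=2, sigma_s=1.0, sigma_r=0.1, radius=2):
    """Simplified joint bilateral upsampling of a low-res depth map guided
    by a high-res (grayscale) color image."""
    H, W = color_hr.shape
    h_lr, w_lr = depth_lr.shape
    depth_hr = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            yl, xl = y / scale, x / scale          # position on the low-res grid
            wsum = dsum = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = int(round(yl)) + dy, int(round(xl)) + dx
                    if 0 <= ny < h_lr and 0 <= nx < w_lr:
                        # spatial weight, measured in low-res coordinates
                        ws = np.exp(-((ny - yl)**2 + (nx - xl)**2) / (2 * sigma_s**2))
                        # range weight from the high-res color guide
                        gy, gx = min(ny * scale, H - 1), min(nx * scale, W - 1)
                        wr = np.exp(-((color_hr[gy, gx] - color_hr[y, x])**2) / (2 * sigma_r**2))
                        w = ws * wr
                        wsum += w
                        dsum += w * depth_lr[ny, nx]
            depth_hr[y, x] = dsum / wsum if wsum > 0 else 0.0
    return depth_hr
```

Because the range weight follows the guide image, depth edges snap to color edges instead of being blurred by plain interpolation.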

3D-HEVC Deblocking filter for Depth Video Coding (3D-HEVC 디블록킹 필터를 이용한 깊이 비디오 부호화)

  • Song, Yunseok;Ho, Yo-Sung
    • Proceedings of the Korean Society of Broadcast Engineers Conference / pp.464-465 / 2015
  • This paper proposes a deblocking filter that increases the efficiency of depth video coding in an HEVC (High Efficiency Video Coding) based 3D video encoder. The deblocking filter corrects blocking artifacts; because it was designed for the characteristics of color video, it is not used for depth video in conventional coding, nor is SAO (Sample Adaptive Offset), which serves a similar purpose. Based on preliminary statistics of deblocking filter behavior, the proposed method excludes the normal filter, whose contribution is low. In addition, the impulse response is modified to account for the characteristics of depth video. The modified deblocking filter is applied only to depth video coding, while the conventional deblocking filter is kept for color video coding. Experiments on the 3D-HTM (HEVC Test Model) 13.0 reference software show a 5.2% improvement in depth video coding performance over the conventional method. Because of inter color-depth references, the modified depth video coding could have affected color video coding efficiency, but color coding performance was maintained. The proposed method thus successfully increases the efficiency of depth video coding.

  • PDF
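For context, the "strong" filter the abstract keeps (as opposed to the excluded normal filter) operates on four samples on each side of a block edge. The following 1-D sketch uses the integer taps of the HEVC strong luma filter; it shows the standard filter shape only, not the paper's modified impulse response.

```python
def strong_filter(p, q):
    """HEVC-style strong deblocking across a block edge.
    p = [p0, p1, p2, p3]: samples on one side, p0 nearest the edge;
    q = [q0, q1, q2, q3]: samples on the other side.
    Returns the three filtered samples on each side."""
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    np0 = (p2 + 2*p1 + 2*p0 + 2*q0 + q1 + 4) >> 3
    np1 = (p2 + p1 + p0 + q0 + 2) >> 2
    np2 = (2*p3 + 3*p2 + p1 + p0 + q0 + 4) >> 3
    nq0 = (q2 + 2*q1 + 2*q0 + 2*p0 + p1 + 4) >> 3
    nq1 = (q2 + q1 + q0 + p0 + 2) >> 2
    nq2 = (2*q3 + 3*q2 + q1 + q0 + p0 + 4) >> 3
    return [np0, np1, np2], [nq0, nq1, nq2]
```

A flat region passes through unchanged, while a hard step across the edge is smoothed; for depth maps, where hard steps are real object boundaries, this is exactly why the filter needs depth-specific tuning.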

High-quality 3-D Video Generation using Scale Space (계위 공간을 이용한 고품질 3차원 비디오 생성 방법 -다단계 계위공간 개념을 이용해 깊이맵의 경계영역을 정제하는 고화질 복합형 카메라 시스템과 고품질 3차원 스캐너를 결합하여 고품질 깊이맵을 생성하는 방법-)

  • Lee, Eun-Kyung;Jung, Young-Ki;Ho, Yo-Sung
    • Proceedings of the HCI Society of Korea Conference / pp.620-624 / 2009
  • In this paper, we present a new camera system combining a high-quality 3-D scanner and a hybrid camera system to generate multiview video-plus-depth. To obtain the 3-D video from the hybrid camera system and the 3-D scanner, we first obtain depth information for the background region from the 3-D scanner. Then, we get the depth map for the foreground area from the hybrid camera system. Initial depths for each view image are estimated by performing 3-D warping with this depth information. Thereafter, multiview depth estimation using the initial depths is carried out to obtain an initial disparity map for each view. We correct the initial disparity maps using a belief propagation algorithm so that we can generate high-quality multiview disparity maps. Finally, we refine the depths at foreground boundaries using extracted edge information. Experimental results show that the proposed depth map generation method produces a 3-D video with more accurate multiview depths and supports more natural 3-D views than previous works.

  • PDF

Multi-Depth Map Fusion Technique from Depth Camera and Multi-View Images (깊이정보 카메라 및 다시점 영상으로부터의 다중깊이맵 융합기법)

  • 엄기문;안충현;이수인;김강연;이관행
    • Journal of Broadcast Engineering / v.9 no.3 / pp.185-195 / 2004
  • This paper presents a multi-depth map fusion method for 3-D scene reconstruction. It fuses depth maps obtained from a stereo matching technique and a depth camera. Traditional stereo matching techniques that estimate disparities between two images often produce inaccurate depth maps because of occlusion and homogeneous areas. The depth map obtained from the depth camera is globally accurate but noisy, and provides only a limited depth range. In order to obtain better depth estimates than either conventional technique alone, we propose a depth map fusion method that fuses the multiple depth maps from stereo matching and the depth camera. We first obtain two depth maps generated from stereo matching of 3-view images. In addition, a depth map is obtained from the depth camera for the center-view image. After preprocessing each depth map, we select a depth value for each pixel among them. Simulation results showed improvements in some background regions with the proposed fusion technique.
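The per-pixel selection step can be illustrated with a toy rule (this is a sketch, not the paper's exact selection criterion): where the two stereo-derived maps agree, trust their mean; where they disagree (e.g. occlusion or textureless areas), fall back to the depth-camera value. Function name and tolerance are hypothetical.

```python
import numpy as np

def fuse_depth(stereo_a, stereo_b, dcam, tol=0.1):
    """Per-pixel fusion of two stereo-matching depth maps with a
    depth-camera map. Where the stereo estimates agree within `tol`,
    use their mean; elsewhere use the (globally accurate but noisy)
    depth-camera value."""
    agree = np.abs(stereo_a - stereo_b) <= tol
    fused = np.where(agree, 0.5 * (stereo_a + stereo_b), dcam)
    return fused, agree
```

In practice each source map would first be preprocessed (denoised, aligned to the center view) as the abstract describes, before this selection runs.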

Multi-view Generation using High Resolution Stereoscopic Cameras and a Low Resolution Time-of-Flight Camera (고해상도 스테레오 카메라와 저해상도 깊이 카메라를 이용한 다시점 영상 생성)

  • Lee, Cheon;Song, Hyok;Choi, Byeong-Ho;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.4A / pp.239-249 / 2012
  • Recently, virtual view generation methods using depth data have been employed to support advanced stereoscopic and auto-stereoscopic displays. Although depth data is invisible to the user during 3D video rendering, its accuracy is very important since it determines the quality of the generated virtual view images. Many works address such depth enhancement by exploiting a time-of-flight (TOF) camera. In this paper, we propose a fast 3D scene capturing system using one TOF camera at the center and two high-resolution cameras at both sides. Since we need depth data for both color cameras, we obtain the two views' depth data from the center view using a 3D warping technique. Holes in the warped depth maps are filled by referring to the surrounding background depth values. In order to reduce mismatches of object boundaries between the depth and color images, we apply a joint bilateral filter to the warped depth data. Finally, using the two color images and depth maps, we generate 10 additional intermediate images. To realize a fast capturing system, we implemented it using multi-threading. Experimental results show that the proposed system captures color and depth video for the two viewpoints in real time and generates 10 additional views at 7 fps.
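The warp-then-fill pipeline described above can be sketched in 1-D (one image row). This is an illustrative simplification: depth is treated as "nearness" (larger = closer), disparity is a hypothetical linear function of it, occlusions keep the nearest value, and disocclusion holes are filled from the surrounding background (smaller) depth, as the abstract describes.

```python
import numpy as np

def warp_depth_row(depth_row, k=0.1):
    """Forward-warp one row of a center-view depth map to a virtual view.
    Disparity = round(k * depth); occluded targets keep the largest
    (nearest) depth; holes (-1) are filled with the smaller neighboring
    depth, i.e. the background."""
    n = len(depth_row)
    warped = -np.ones(n)
    for x in range(n):
        d = int(round(k * depth_row[x]))   # per-pixel disparity (illustrative k)
        nx = x + d                          # shift toward the virtual view
        if 0 <= nx < n and depth_row[x] > warped[nx]:
            warped[nx] = depth_row[x]
    # fill disocclusion holes from the surrounding background depth
    for x in range(n):
        if warped[x] < 0:
            left = warped[x - 1] if x > 0 and warped[x - 1] >= 0 else np.inf
            right = next((warped[j] for j in range(x + 1, n) if warped[j] >= 0), np.inf)
            warped[x] = min(left, right) if min(left, right) != np.inf else 0.0
    return warped
```

The hole behind the foreground object ends up with the background depth, which is the perceptually correct choice for disocclusions.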

Stereoscopic Depth from 3D Contents with Various Disparity (화면 시차로부터 지각되는 3D 컨텐츠의 입체시 깊이)

  • Kham, Keetaek
    • Journal of Broadcast Engineering / v.21 no.1 / pp.76-86 / 2016
  • This study investigated whether perceived depth changes depending on the measurement method. In the direct-comparison method, a virtual object with one of several binocular disparities was presented in frontal space together with the LEDs used for depth estimation of the binocular stimulus, while in the indirect-comparison method the visual object was presented in frontal space but the LEDs were placed 45 degrees rightward of the mid-sagittal line. In this setup, the depth of the binocular stimulus was matched directly to that of an LED in the direct-comparison condition. In the indirect-comparison condition, however, the observer estimated the depth of the binocular stimulus, turned his or her head rightward toward the LED array, and turned on the LED judged to lie at the same depth as the binocular stimulus. It was additionally investigated whether perceived depth differed with the observer's stereoacuity. The results showed that perceived depths measured by direct comparison were closer to the depth predicted from geometry than those measured by indirect comparison, and that perceived depths reported by observers with high stereoacuity were closer to the geometrically predicted depth than those reported by observers with low stereoacuity. These results indicate that the stereoscopic depth of binocular stimuli appears vivid and compelling when they are presented simultaneously with real objects in the same visual space, as in mixed reality.
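The "depth predicted from geometry" used as the baseline above is the standard small-angle relation between disparity, viewing distance, and interocular separation, depth ≈ η·D²/IPD (η in radians). A minimal sketch, with an assumed average IPD of 63 mm (the study's actual viewing parameters are not given in the abstract):

```python
import math

def predicted_depth(disparity_arcmin, viewing_distance_m=1.0, ipd_m=0.063):
    """Geometrically predicted depth (m) of a stereoscopic stimulus from
    its binocular disparity, via the small-angle approximation
    depth ~= disparity[rad] * D^2 / IPD."""
    disparity_rad = disparity_arcmin * math.pi / (180 * 60)
    return disparity_rad * viewing_distance_m**2 / ipd_m
```

At a 1 m viewing distance, 20 arcmin of disparity predicts roughly 9 cm of depth, which sets the scale against which the matched LED positions were compared.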

Intermediate View Synthesis Method using Kinect Depth Camera (Kinect 깊이 카메라를 이용한 가상시점 영상생성 기술)

  • Lee, Sang-Beom;Ho, Yo-Sung
    • Smart Media Journal / v.1 no.3 / pp.29-35 / 2012
  • The depth image-based rendering (DIBR) technique renders virtual views from a color image and the corresponding depth map. The most important issue in DIBR is that the virtual view has no information in newly exposed areas, called dis-occlusions. In this paper, we propose an intermediate view generation algorithm using the Kinect depth camera, which utilizes infrared structured light. After capturing a color image and its corresponding depth map, we pre-process the depth map. The pre-processed depth map is warped to the virtual viewpoint and filtered by median filtering to reduce truncation errors. Then, the color image is back-projected to the virtual viewpoint using the warped depth map. In order to fill the remaining holes caused by dis-occlusion, we perform a background-based image in-painting operation. Finally, we obtain a synthesized image without any dis-occlusion. Experimental results show that the proposed algorithm generates very natural images in real time.

  • PDF
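The median-filtering step mentioned above targets the one-pixel "cracks" that integer rounding leaves in the warped depth map. A minimal sketch of that step (hole marker and window size are illustrative):

```python
import numpy as np

def fill_cracks_median(warped_depth, hole_val=-1, k=3):
    """Close small warping cracks in a virtual-view depth map by replacing
    each hole pixel with the median of the valid pixels in its k x k
    neighborhood. Larger disocclusion holes need the separate in-painting
    step described in the abstract."""
    h, w = warped_depth.shape
    out = warped_depth.copy()
    r = k // 2
    for y in range(h):
        for x in range(w):
            if warped_depth[y, x] == hole_val:
                win = warped_depth[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
                valid = win[win != hole_val]
                if valid.size:
                    out[y, x] = np.median(valid)
    return out
```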

Depth Interpolation Method using Random Walk Probability Model (랜덤워크 확률 모델을 이용한 깊이 영상 보간 방법)

  • Lee, Gyo-Yoon;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.12C / pp.738-743 / 2011
  • Depth maps are important data for high-quality 3-D broadcasting. Although commercially available depth cameras capture high-accuracy depth maps in real time, their resolutions are much smaller than those of the corresponding color images due to technical limitations. In this paper, we propose a depth map up-sampling method that uses a high-resolution color image and a low-resolution depth map. We define a random walk probability model over an operation unit containing the nearest seed pixels. The proposed method is well suited to matching the boundaries of the color image and the depth map. Experimental results show that our method successfully enhances the depth map resolution.
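The random-walk idea can be shown in miniature: known low-resolution depth samples act as seeds, edge weights between neighboring pixels follow color similarity, and each unknown pixel takes the expected seed depth of a random walk, obtained by solving the combinatorial Dirichlet problem. This 1-D sketch (function name and `beta` are illustrative, and it is not the paper's exact operation-unit model) shows the mechanism:

```python
import numpy as np

def random_walker_1d(color, seeds, beta=50.0):
    """Assign depth to unlabeled pixels of a 1-D signal by random-walk
    probabilities. `seeds` maps pixel index -> known depth. Edge weights
    w = exp(-beta * (color difference)^2), so walks rarely cross color
    edges; solve L_u x_u = -B x_s for the unlabeled pixels."""
    n = len(color)
    w = np.exp(-beta * np.diff(np.asarray(color, float))**2)  # edge (i, i+1)
    L = np.zeros((n, n))                                      # graph Laplacian
    for i in range(n - 1):
        L[i, i] += w[i]; L[i + 1, i + 1] += w[i]
        L[i, i + 1] -= w[i]; L[i + 1, i] -= w[i]
    labeled = sorted(seeds)
    unlabeled = [i for i in range(n) if i not in seeds]
    xs = np.array([seeds[i] for i in labeled])
    Lu = L[np.ix_(unlabeled, unlabeled)]
    B = L[np.ix_(unlabeled, labeled)]
    xu = np.linalg.solve(Lu, -B @ xs)
    depth = np.zeros(n)
    for i, v in seeds.items():
        depth[i] = v
    depth[unlabeled] = xu
    return depth
```

Because the edge weight collapses across a color discontinuity, the interpolated depth snaps to the color boundary instead of blending across it.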

Kinect depth map enhancement using boundary flickering compensation (경계 흔들림 보정을 이용한 키넥트 깊이 영상의 품질 향상 기법)

  • Lee, Gyucheol;Kwon, Soonchan;Lim, Jongmyeong;Han, Jaeyoung;Yoo, Jisang
    • Proceedings of the Korean Society of Broadcast Engineers Conference / pp.25-28 / 2012
  • This paper proposes a method for improving the quality of depth images acquired with a Kinect. The Kinect, a camera released by Microsoft, can capture both depth and color images. However, due to limitations of its infrared-pattern depth acquisition, holes and noise appear around object boundaries, so improving depth image quality is essential for obtaining accurate depth. Holes in depth images are generally filled with techniques such as inpainting or the joint bilateral filter, but because these techniques use only a single image, they cannot compensate for the flickering around object boundaries. The proposed method first fills holes around boundaries using a fast Gaussian filter. The color image of the previous frame is converted to a gray image; by analyzing the changes in the gray and depth values, flickering pixels are detected and replaced with the maximum pixel value among the previous depth images, which reduces boundary flickering in the depth image. Experiments confirmed that the proposed method outperforms existing methods.

  • PDF
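The temporal compensation step can be sketched directly from the abstract's description: a pixel whose depth changed sharply between frames while its gray value barely changed is treated as flicker and replaced with the maximum depth at that pixel over the previous frames. Thresholds and function name here are illustrative, not the paper's values.

```python
import numpy as np

def compensate_flicker(depth_cur, depth_prev_stack, gray_cur, gray_prev,
                       d_thresh=10, g_thresh=5):
    """Boundary-flicker compensation sketch. `depth_prev_stack` is an
    array of previous depth frames, newest last. Flicker = large depth
    change with small gray change; flickering pixels take the per-pixel
    maximum over the previous depth frames."""
    depth_change = np.abs(depth_cur.astype(int) - depth_prev_stack[-1].astype(int))
    gray_change = np.abs(gray_cur.astype(int) - gray_prev.astype(int))
    flicker = (depth_change > d_thresh) & (gray_change < g_thresh)
    max_prev = np.max(depth_prev_stack, axis=0)
    out = np.where(flicker, max_prev, depth_cur)
    return out, flicker
```

Using the gray image as a gate prevents genuine motion (where color changes too) from being mistaken for sensor flicker.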

Depth Acquisition Techniques for 3D Contents Generation (3차원 콘텐츠 제작을 위한 깊이 정보 획득 기술)

  • Jang, Woo-Seok;Ho, Yo-Sung
    • Smart Media Journal
    • /
    • v.1 no.3
    • /
    • pp.15-21
    • /
    • 2012
  • Depth information is necessary for generating various kinds of three-dimensional content. Depth acquisition techniques can be broadly categorized into two approaches, active and passive depth sensing, depending on how depth information is obtained. In this paper, we review several depth acquisition methods. We present not only depth acquisition methods based on each approach, but also hybrid methods that combine both approaches to compensate for the drawbacks of each. Furthermore, we introduce several matching cost functions and post-processing techniques that enhance temporal consistency and reduce the flickering artifacts and user discomfort caused by inaccurate depth estimation in 3D video.

  • PDF
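As a concrete example of the matching cost functions such surveys cover, here is a minimal passive-stereo sketch: a winner-takes-all disparity map from a sum-of-absolute-differences (SAD) cost over a square window. SAD is only one of the costs discussed; window size and disparity range are illustrative.

```python
import numpy as np

def sad_disparity(left, right, max_disp=4, radius=1):
    """Winner-takes-all disparity from a SAD matching cost.
    For each left-image pixel, compare its window against the right
    image shifted by d = 0..max_disp and keep the cheapest shift."""
    h, w = left.shape
    disp = np.zeros((h, w), int)
    pl = np.pad(left.astype(float), radius, mode='edge')
    pr = np.pad(right.astype(float), radius, mode='edge')
    for y in range(h):
        for x in range(w):
            best, best_d = np.inf, 0
            for d in range(min(max_disp + 1, x + 1)):
                wl = pl[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
                wr = pr[y:y + 2 * radius + 1, x - d:x - d + 2 * radius + 1]
                cost = np.abs(wl - wr).sum()
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Costs like this fail in occluded and homogeneous areas, which is exactly where the hybrid methods and post-processing steps the survey describes come in.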