• Title/Summary/Keyword: depth-map

Multi-view Synthesis Algorithm for the Better Efficiency of Codec (부복호화기 효율을 고려한 다시점 영상 합성 기법)

  • Choi, In-kyu;Cheong, Won-sik;Lee, Gwangsoon;Yoo, Jisang
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.2 / pp.375-384 / 2016
  • In this paper, given a stereo image pair, satellite views, and their corresponding depth maps as input, we propose a new method that converts these data into a format suitable for compression and then synthesizes intermediate views from that format. At the transmitter, the depth maps are merged into a global depth map, and the satellite views are converted to residual images that cover hole regions such as out-of-frame areas and occlusion regions. These images are subsampled to reduce the amount of data and, together with the main-view stereo images, are encoded with the HEVC codec and transmitted. At the receiver, intermediate views between the stereo views, and between the stereo views and the satellite views, are synthesized using the decoded global depth map, residual images, and stereo images. Experiments confirm, both subjectively and objectively, that the intermediate views synthesized from the proposed format offer good quality relative to total bit-rate compared with intermediate views synthesized from the MVD format.
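
As a rough illustration of the residual-image idea described above, the following numpy sketch forward-warps the main view into a satellite viewpoint with a simple horizontal-disparity model and keeps only the satellite-view pixels that the warp cannot cover (out-of-frame and occlusion holes). The function name, the normalized-depth-to-disparity scaling, and the `baseline_px` parameter are assumptions for illustration; the paper's actual conversion and subsampling pipeline is not reproduced here.

```python
import numpy as np

def residual_from_satellite(main_view, depth, satellite_view, baseline_px=8.0):
    """Illustrative only: keep just the satellite-view pixels that the main view
    cannot predict (holes after depth-based warping). baseline_px scales the
    normalized depth into a horizontal disparity in pixels (an assumption)."""
    h, w, _ = main_view.shape
    warped = np.zeros_like(main_view)
    covered = np.zeros((h, w), dtype=bool)

    # Forward-warp the main view into the satellite viewpoint (horizontal shift).
    disparity = (depth.astype(np.float32) / 255.0) * baseline_px
    for y in range(h):
        for x in range(w):
            xs = int(round(x + disparity[y, x]))
            if 0 <= xs < w:
                warped[y, xs] = main_view[y, x]
                covered[y, xs] = True

    # Residual image: satellite pixels in hole regions (out-of-frame / occlusions).
    residual = np.where(covered[..., None], 0, satellite_view)
    return residual, covered
```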

Efficient Depth Map Generation for Various Stereo Camera Arrangements (다양한 스테레오 카메라 배열을 위한 효율적인 깊이 지도 생성 방법)

  • Jang, Woo-Seok;Lee, Cheon;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.6A / pp.458-463 / 2012
  • In this paper, we propose a direct depth map acquisition method for the convergence camera array as well as the parallel camera array. Conventional methods perform image rectification to reduce complexity and improve accuracy; however, image rectification may lead to unwanted consequences for the convergence camera array. Thus, the proposed method excludes image rectification and directly extracts depth values using the epipolar constraint. In order to acquire a more accurate depth map, occlusion detection and handling processes are added: reasonable depth values are assigned to the detected occlusion regions based on distance and color differences from neighboring pixels. Experimental results show that the proposed method has fewer limitations than conventional methods and generates more accurate depth maps stably.
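
A minimal sketch of matching along an epipolar line without rectification, followed by linear (DLT) triangulation, is shown below. It assumes a known fundamental matrix F and projection matrices P1 and P2, uses a single-pixel cost instead of a matching window, and omits the paper's occlusion detection and handling, so it only illustrates the idea rather than the proposed method.

```python
import numpy as np

def match_along_epipolar_line(left, right, x, y, F, num_samples=200):
    """Search the right image along the epipolar line of left-pixel (x, y).
    Single-pixel absolute difference is used for brevity; a window cost
    would be used in practice."""
    h, w = right.shape[:2]
    a, b, c = F @ np.array([x, y, 1.0])          # epipolar line: a*x + b*y + c = 0
    xs = np.linspace(0, w - 1, num_samples)
    ys = -(a * xs + c) / (b + 1e-12)
    best, best_cost = None, np.inf
    for xr, yr in zip(xs, ys):
        if 0 <= yr < h:
            cost = np.abs(right[int(yr), int(xr)].astype(float)
                          - left[y, x].astype(float)).sum()
            if cost < best_cost:
                best, best_cost = (xr, yr), cost
    return best

def triangulate(P1, P2, pt1, pt2):
    """Linear (DLT) triangulation of one correspondence; returns a 3-D point."""
    x1, y1 = pt1
    x2, y2 = pt2
    A = np.stack([x1 * P1[2] - P1[0],
                  y1 * P1[2] - P1[1],
                  x2 * P2[2] - P2[0],
                  y2 * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```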

View Synthesis Using OpenGL for Multi-viewpoint 3D TV (다시점 3차원 방송을 위한 OpenGL을 이용하는 중간영상 생성)

  • Lee, Hyun-Jung;Hur, Nam-Ho;Seo, Yong-Duek
    • Journal of Broadcast Engineering / v.11 no.4 s.33 / pp.507-520 / 2006
  • In this paper, we propose an application of OpenGL functions for novel view synthesis from multi-view images and depth maps. While image-based rendering is meant to generate synthetic images by processing camera views with a graphics engine, little has been reported on how to feed the given images and depth information to the graphics engine and render the scene. This paper presents an efficient way of constructing a 3D space from camera parameters, reconstructing the 3D scene from color and depth images, and synthesizing virtual views and their depth images in real time.
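
The reprojection that underlies this kind of depth-based view synthesis can be sketched in a few lines of numpy: back-project each pixel with its depth and the source intrinsics, then project the 3-D points into a virtual camera. This is only the geometric core; the paper's contribution is doing the equivalent with OpenGL (building the 3-D scene and letting the graphics engine render it in real time), and the variable names and nearest-pixel splatting below are assumptions.

```python
import numpy as np

def reproject_to_virtual_view(color, depth, K, K_virtual, R, t):
    """Back-project each pixel to 3-D using its depth and the source intrinsics K,
    then project into a virtual camera (K_virtual, R, t). Nearest-pixel splatting;
    no mesh or z-buffer as an OpenGL renderer would provide."""
    h, w = depth.shape
    out = np.zeros_like(color)
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T  # 3 x N
    rays = np.linalg.inv(K) @ pix                       # camera-space rays
    pts3d = rays * depth.reshape(1, -1)                 # scale rays by depth
    proj = K_virtual @ (R @ pts3d + t.reshape(3, 1))    # project into virtual camera
    u = (proj[0] / proj[2]).round().astype(int)
    v = (proj[1] / proj[2]).round().astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (proj[2] > 0)
    out[v[ok], u[ok]] = color.reshape(-1, 3)[ok]
    return out
```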

CAttNet: A Compound Attention Network for Depth Estimation of Light Field Images

  • Dingkang Hua;Qian Zhang;Wan Liao;Bin Wang;Tao Yan
    • Journal of Information Processing Systems / v.19 no.4 / pp.483-497 / 2023
  • Depth estimation is one of the most complicated and difficult problems in light field processing. In this paper, a compound attention convolutional neural network (CAttNet) is proposed to extract depth maps from light field images. To make more effective use of the sub-aperture images (SAIs) of the light field and reduce the redundancy among SAIs, we use a compound attention mechanism that weights the channel and spatial dimensions of the feature map after primary feature extraction, so the network can more efficiently select the required views and the important areas within each view. We modified several feature-extraction layers to extract features more efficiently without adding parameters. Exploiting the characteristics of the light field, we increased the network depth and optimized the network structure to reduce the adverse impact of this change. CAttNet can efficiently utilize the correlations and features of different SAIs to generate a high-quality light field depth map. Experimental results show that CAttNet has advantages in both accuracy and runtime.
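
For readers unfamiliar with compound (channel plus spatial) attention, the PyTorch sketch below shows a generic block of that kind: channel attention re-weights feature channels, and spatial attention re-weights locations within the feature map. It is a common pattern offered for illustration only; CAttNet's exact layer sizes, placement, and modifications are not reproduced here.

```python
import torch
import torch.nn as nn

class CompoundAttention(nn.Module):
    """Illustrative channel + spatial attention block (in the spirit of the
    compound attention described above, not the paper's exact architecture)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        # Channel attention: squeeze spatially, then re-weight feature channels.
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: squeeze channels, then re-weight spatial locations.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_mlp(x)
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        x = x * self.spatial_conv(torch.cat([avg, mx], dim=1))
        return x
```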

High-quality 3-D Video Generation using Scale Space (계위 공간을 이용한 고품질 3차원 비디오 생성 방법 -다단계 계위공간 개념을 이용해 깊이맵의 경계영역을 정제하는 고화질 복합형 카메라 시스템과 고품질 3차원 스캐너를 결합하여 고품질 깊이맵을 생성하는 방법-)

  • Lee, Eun-Kyung;Jung, Young-Ki;Ho, Yo-Sung
    • Proceedings of the Korean HCI Society Conference / 2009.02a / pp.620-624 / 2009
  • In this paper, we present a new camera system that combines a high-quality 3-D scanner with a hybrid camera system to generate multiview video-plus-depth. To obtain the 3-D video, we first acquire depth information for the background region from the 3-D scanner and then obtain the depth map for the foreground area from the hybrid camera system. Initial depths for each view image are estimated by 3-D warping with this depth information. Multiview depth estimation using the initial depths is then carried out to obtain an initial disparity map for each view, which we correct using a belief propagation algorithm to generate a high-quality multiview disparity map. Finally, we refine the depths along the foreground boundary using extracted edge information. Experimental results show that the proposed depth map generation method produces 3-D video with more accurate multiview depths and supports more natural 3-D views than previous works.
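
One common way to clean up depth values around object boundaries with guidance from the color image is a joint bilateral filter, sketched below in plain numpy. This is offered only as an illustration of edge-aware boundary refinement; the paper's actual pipeline uses belief propagation for the disparity maps and extracted edge information for the foreground boundary.

```python
import numpy as np

def joint_bilateral_refine(depth, color, radius=4, sigma_s=3.0, sigma_c=12.0):
    """Edge-aware smoothing of a depth map guided by the color image: depth is
    averaged only over neighbors with similar color, so boundaries stay sharp.
    Unoptimized reference loop, for illustration only."""
    h, w = depth.shape
    out = np.zeros_like(depth, dtype=np.float64)
    pad = radius
    d = np.pad(depth.astype(np.float64), pad, mode='edge')
    c = np.pad(color.astype(np.float64), ((pad, pad), (pad, pad), (0, 0)), mode='edge')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    for y in range(h):
        for x in range(w):
            dpatch = d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            cpatch = c[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            cdiff = np.linalg.norm(cpatch - c[y + pad, x + pad], axis=-1)
            wgt = spatial * np.exp(-(cdiff ** 2) / (2 * sigma_c ** 2))
            out[y, x] = (wgt * dpatch).sum() / wgt.sum()
    return out
```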


Region-Based Error Concealment of Depth Map in Multiview Video (영역 구분을 통한 다시점 영상의 깊이맵 손상 복구 기법)

  • Kim, Wooyeun;Shin, Jitae;Oh, Byung Tae
    • The Journal of Korean Institute of Communications and Information Sciences / v.40 no.12 / pp.2530-2538 / 2015
  • In a depth image, the pixel value is the depth itself, so different objects located at similar distances have similar pixel values. Moreover, depth-image pixels can differ sharply from adjacent pixels at object boundaries, whereas color-image pixels vary smoothly. Accordingly, when transmission errors occur, a corrupted depth image in multiview video plus depth (MVD) requires error concealment methods that take these characteristics of the depth image into account. In this paper, we propose classifying the regions of the depth image according to edge direction and then applying an adaptive error concealment method to each region. The recovered depth images are used together with the multiview video data to synthesize intermediate-viewpoint video, and the synthesized views are evaluated with objective quality metrics to demonstrate the performance of the proposed method.
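
A minimal sketch of direction-aware concealment for a single lost depth block is shown below: the band around the block is classified by its dominant gradient direction, and the block is then filled by interpolating along that direction so the edge is preserved. The block position, margin, and two-way classification are simplifications for illustration; the paper's region classification and adaptive methods are more elaborate.

```python
import numpy as np

def conceal_lost_block(depth, y0, x0, size, margin=4):
    """Conceal a lost size x size depth block at (y0, x0). The block is assumed
    not to touch the image border. Classify the surrounding band by its dominant
    gradient direction, then interpolate along that direction."""
    d = depth.astype(np.float64)
    band = d[y0 - margin:y0 + size + margin, x0 - margin:x0 + size + margin]
    gy, gx = np.gradient(band)
    vertical_edge = np.abs(gx).mean() > np.abs(gy).mean()  # edge runs top-to-bottom

    top = d[y0 - 1, x0:x0 + size]          # row just above the block
    bottom = d[y0 + size, x0:x0 + size]    # row just below
    left = d[y0:y0 + size, x0 - 1]         # column to the left
    right = d[y0:y0 + size, x0 + size]     # column to the right

    w = np.linspace(0.0, 1.0, size)
    if vertical_edge:
        # Propagate values vertically so the vertical edge is preserved.
        block = top[None, :] * (1 - w)[:, None] + bottom[None, :] * w[:, None]
    else:
        # Propagate values horizontally so the horizontal edge is preserved.
        block = left[:, None] * (1 - w)[None, :] + right[:, None] * w[None, :]

    out = d.copy()
    out[y0:y0 + size, x0:x0 + size] = block
    return out
```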

Recovering the Elevation Map by Stereo Modeling of the Aerial Image Sequence (연속 항공영상의 스테레오 모델링에 의한 지형 복원)

  • 강민석;김준식;박래홍;이쾌희
    • Journal of the Korean Institute of Telematics and Electronics B / v.30B no.9 / pp.64-75 / 1993
  • This paper proposes a technique for recovering the elevation map by stereo modeling of an aerial image sequence that is transformed according to the state of the aircraft. The area-based stereo matching method is simulated and its parameters are chosen experimentally. In the depth-extraction step, depth is determined by solving a vector equation; the equation is suitable for stereo modeling of aerial images that do not satisfy the epipolar constraint. The performance of a conventional feature-based matching scheme is also compared. Finally, techniques for analyzing the accuracy of the recovered elevation map (REM) are described. The analysis includes error estimation for both height and contour lines, where accuracy is measured by deviations from estimates obtained manually. The experimental results show the efficiency of the proposed technique.
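
When two viewing rays do not satisfy the epipolar constraint they generally do not intersect, so depth can be obtained by solving the ray equations in a least-squares sense and taking the midpoint of the closest points, as in the numpy sketch below. The paper's specific vector equation is not reproduced; this only illustrates the standard midpoint solution under assumed camera centers c1, c2 and ray directions d1, d2.

```python
import numpy as np

def ray_midpoint_depth(c1, d1, c2, d2):
    """Solve c1 + s*d1 ~= c2 + t*d2 in least squares and return the midpoint of
    the two closest points. Suitable when the two viewing rays are skew, i.e.
    the images are not rectified to the epipolar geometry."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # [d1  -d2] [s t]^T = c2 - c1  (least-squares, since the rays need not meet)
    A = np.stack([d1, -d2], axis=1)          # 3 x 2 system
    s, t = np.linalg.lstsq(A, c2 - c1, rcond=None)[0]
    p1 = c1 + s * d1
    p2 = c2 + t * d2
    return (p1 + p2) / 2.0
```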


Implementation of a 3D Recognition applying Depth map and HMM (깊이 맵과 HMM을 이용한 인식 시스템 구현)

  • Han, Chang-Ho;Oh, Choon-Suk
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.12 no.2 / pp.119-126 / 2012
  • Human motions have commonly been recognized with algorithms such as HMM, DTW, and PCA. Among the many kinds of human motion, we concentrated our research on recognizing fighting motions. In previous work, to obtain fighting motion data, we used a motion capture system built with active markers and infrared cameras, together with algorithms that recover 3-D information by stereo matching. In this paper, we describe a different method for acquiring 3-D fighting motion data and an HMM algorithm for recognizing it. To obtain the 3-D data, we use a depth map computed by a stereo method. We test the 3-D acquisition and motion recognition system and report its accuracy and performance.
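
A hedged sketch of the recognition side is given below using the hmmlearn package (an assumption; the paper does not name an implementation): one Gaussian HMM is fit per motion class on sequences of per-frame depth-map features, and a new sequence is assigned to the class whose model gives it the highest log-likelihood.

```python
import numpy as np
from hmmlearn import hmm   # assumed library choice, not named in the paper

def train_motion_models(sequences_by_label, n_states=5):
    """Fit one Gaussian HMM per motion class on sequences of per-frame
    depth-map features (e.g. region statistics); feature design is assumed."""
    models = {}
    for label, seqs in sequences_by_label.items():
        X = np.vstack(seqs)                      # (total_frames, feature_dim)
        lengths = [len(s) for s in seqs]         # frame counts per sequence
        m = hmm.GaussianHMM(n_components=n_states, covariance_type='diag', n_iter=50)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models, seq):
    """Return the motion class whose HMM scores the sequence highest."""
    return max(models, key=lambda label: models[label].score(seq))
```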

2D to 3D Conversion Using The Machine Learning-Based Segmentation And Optical Flow (학습기반의 객체분할과 Optical Flow를 활용한 2D 동영상의 3D 변환)

  • Lee, Sang-Hak
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.11 no.3 / pp.129-135 / 2011
  • In this paper, we propose an algorithm using optical flow and machine learning-based segmentation for the 3D conversion of 2D video. For segmentation that enables successful 3D conversion, we design a new energy function in which color/texture features are incorporated through a machine learning method and optical flow is introduced to focus on regions with motion. The depth map is then calculated according to the optical flow of the segmented regions, and left/right images for the 3D conversion are produced. Experiments on various videos show that the proposed method yields reliable segmentation results and depth maps for the 3D conversion of 2D video.
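
The depth-from-flow and left/right generation steps can be sketched with OpenCV and numpy as below: each segmented region receives a depth proportional to its mean optical-flow magnitude, and the stereo pair is produced by shifting pixels horizontally in proportion to that depth. The Farneback parameters, the linear flow-to-depth mapping, and the hole handling are assumptions for illustration; the paper's learned energy-based segmentation is not reproduced.

```python
import numpy as np
import cv2

def depth_from_flow(prev_gray, cur_gray, labels):
    """Assign one depth per segmented region from its mean optical-flow magnitude
    (larger motion -> assumed closer -> larger depth value)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    depth = np.zeros_like(mag)
    for lab in np.unique(labels):
        depth[labels == lab] = mag[labels == lab].mean()
    return cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def make_stereo_pair(image, depth, max_disp=16):
    """Produce left/right views by shifting pixels horizontally in proportion to
    depth (simple DIBR; disocclusion holes are left black for brevity)."""
    h, w = depth.shape
    disp = (depth.astype(np.float32) / 255.0) * max_disp
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    xs = np.arange(w)
    for y in range(h):
        xl = np.clip((xs + disp[y] / 2).astype(int), 0, w - 1)
        xr = np.clip((xs - disp[y] / 2).astype(int), 0, w - 1)
        left[y, xl] = image[y]
        right[y, xr] = image[y]
    return left, right
```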

A Study on 2D-3D Image Conversion using Depth Map Chart Analysis (깊이정보 지도 분석을 통한 2D-3D 영상 변환 연구)

  • Kim, In-Su;Kim, Hyung-Taek;Youn, Joo-Sang;Oh, Se-Woong;Seo, In-Seok;Kim, Nam-Gyu
    • Proceedings of the Korean Society of Computer Information Conference / 2015.01a / pp.205-208 / 2015
  • Producing 3D stereoscopic video requires a longer production period and a higher cost than producing 2D video. To reduce this cost, research on converting existing 2D video into 3D stereoscopic video is under way. 2D-to-3D conversion methods can be divided into automatic and manual conversion, and to obtain high-quality 2D-3D converted video, manual conversion using a depth map chart is widely used. However, because quantitative analysis data for the depth map charts used in manual 2D-3D conversion are scarce, it is difficult for users to set an accurate reference depth value for the converted image. In this paper, based on quantitative analysis of the depth values in the depth map chart, we propose a range of variation for manual 2D-3D conversion so that appropriate image changes can be induced.
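
As a small illustration of the kind of quantitative depth map chart analysis argued for above, the sketch below computes per-region statistics (mean depth and a 5th-95th percentile working range) that could serve as reference depth values during manual conversion. The helper and its outputs are hypothetical and not taken from the paper.

```python
import numpy as np

def depth_chart_statistics(depth, labels):
    """Per-region statistics of a depth map chart: mean depth and a suggested
    working range (5th-95th percentile). Hypothetical helper for illustration."""
    stats = {}
    for lab in np.unique(labels):
        vals = depth[labels == lab].astype(np.float64)
        stats[int(lab)] = {
            'mean': float(vals.mean()),
            'range': (float(np.percentile(vals, 5)), float(np.percentile(vals, 95))),
        }
    return stats
```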
