• Title/Summary/Keyword: 2D-to-3D Video Conversion

Pattern-based Depth Map Generation for Low-complexity 2D-to-3D Video Conversion (저복잡도 2D-to-3D 비디오 변환을 위한 패턴기반의 깊이 생성 알고리즘)

  • Han, Chan-Hee;Kang, Hyun-Soo;Lee, Si-Woong
    • The Journal of the Korea Contents Association
    • /
    • v.15 no.2
    • /
    • pp.31-39
    • /
    • 2015
  • 2D-to-3D video conversion imparts 3D effects to a 2D video by generating stereoscopic views from depth cues inherent in the 2D video. This technology is a promising solution to the shortage of 3D content during the transition to a mature 3D video era. In this paper, a low-complexity depth generation method for 2D-to-3D video conversion is presented. For temporal consistency of the global depth, a pattern-based depth generation method is newly introduced. A low-complexity refinement algorithm for the local depth is also provided to improve 3D perception in object regions. Experimental results show that the proposed method outperforms conventional methods in terms of complexity and subjective quality.
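
Several entries in this list synthesize stereoscopic views from a depth map via DIBR. As a rough illustration of the idea only (not any paper's actual implementation; the function and parameter names are my own), a right-eye view can be sketched by shifting each pixel horizontally in proportion to its depth, with nearer pixels shifting more:

```python
import numpy as np

def dibr_shift(image, depth, max_disparity=8):
    """Synthesize a right-eye view from a grayscale image and a
    depth map (0..255): each pixel is shifted left by a disparity
    proportional to its depth, so nearer pixels shift further."""
    h, w = depth.shape
    right = np.zeros_like(image)  # unfilled pixels remain 0 (disocclusion holes)
    disparity = (depth.astype(np.float32) / 255.0 * max_disparity).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x - disparity[y, x]
            if 0 <= nx < w:
                right[y, nx] = image[y, x]
    return right
```

A real DIBR renderer would also fill the disocclusion holes left by the shift, typically by inpainting or background extrapolation.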

A Trend Study on 2D to 3D Video Conversion Technology using Analysis of Patent Data (특허 분석을 통한 2D to 3D 영상 데이터 변환 기술 동향 연구)

  • Kang, Michael M.;Lee, Wookey;Lee, Rich. C.
    • Journal of Information Technology and Architecture
    • /
    • v.11 no.4
    • /
    • pp.495-504
    • /
    • 2014
  • This paper presents a strategy for intellectual property acquisition and a direction for core technology development based on an analysis of 2D-to-3D video conversion patent data. The analysis of patent trends shows that 2D-to-3D conversion is a very promising technology field. A strategic patent map built from this trend research can help companies stay ahead of the competition in the 2D-to-3D image data conversion market.

2D to 3D Conversion Using The Machine Learning-Based Segmentation And Optical Flow (학습기반의 객체분할과 Optical Flow를 활용한 2D 동영상의 3D 변환)

  • Lee, Sang-Hak
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.11 no.3
    • /
    • pp.129-135
    • /
    • 2011
  • In this paper, we propose an algorithm that uses optical flow and machine-learning-based segmentation for the 3D conversion of 2D video. For segmentation that enables successful 3D conversion, we design a new energy function in which color/texture features are included through a machine learning method, and optical flow is also introduced to focus on regions with motion. The depth map is then calculated according to the optical flow of the segmented regions, and left/right images for the 3D conversion are produced. Experiments on various videos show that the proposed method yields reliable segmentation results and depth maps for the 3D conversion of 2D video.
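
The depth-from-flow step described above can be sketched as assigning each segmented region a depth proportional to its mean optical-flow magnitude. This is a minimal illustration under the common assumption that faster-moving regions are nearer; the names and the normalization are my own, not the paper's:

```python
import numpy as np

def depth_from_flow(labels, flow):
    """Assign each segmented region (integer label map) a depth
    proportional to its mean optical-flow magnitude, normalized
    to the 0..255 range expected by a depth map."""
    mag = np.hypot(flow[..., 0], flow[..., 1])  # per-pixel flow magnitude
    depth = np.zeros(labels.shape, dtype=np.float32)
    for r in np.unique(labels):
        mask = labels == r
        depth[mask] = mag[mask].mean()          # one depth value per region
    m = depth.max()
    if m > 0:
        depth = depth / m * 255.0
    return depth.astype(np.uint8)
```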

Real-Time 2D-to-3D Conversion for 3DTV using Time-Coherent Depth-Map Generation Method

  • Nam, Seung-Woo;Kim, Hye-Sun;Ban, Yun-Ji;Chien, Sung-Il
    • International Journal of Contents
    • /
    • v.10 no.3
    • /
    • pp.9-16
    • /
    • 2014
  • Depth-image-based rendering is generally used in real-time 2D-to-3D conversion for 3DTV. However, inaccurate depth maps cause flickering issues between image frames in a video sequence, resulting in eye fatigue while viewing 3DTV. To resolve this flickering issue, we propose a new 2D-to-3D conversion scheme based on fast and robust depth-map generation from a 2D video sequence. The proposed depth-map generation algorithm divides an input video sequence into several cuts using a color histogram. The initial depth of each cut is assigned based on a hypothesized depth-gradient model. The initial depth map of the current frame is refined using color and motion information. Thereafter, the depth map of the next frame is updated using the difference image to reduce depth flickering. The experimental results confirm that the proposed scheme performs real-time 2D-to-3D conversions effectively and reduces human eye fatigue.
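
The color-histogram cut detection this abstract mentions can be sketched as comparing normalized gray-level histograms of consecutive frames and flagging a cut when they differ sharply. A simplified illustration (the bin count and threshold are assumptions, not values from the paper):

```python
import numpy as np

def detect_cuts(frames, bins=16, threshold=0.5):
    """Return the indices of frames that start a new cut, detected
    when the L1 distance between the normalized gray-level histograms
    of consecutive frames exceeds the threshold."""
    cuts = []
    prev = None
    for i, f in enumerate(frames):
        hist, _ = np.histogram(f, bins=bins, range=(0, 256))
        hist = hist / hist.sum()
        if prev is not None and np.abs(hist - prev).sum() > threshold:
            cuts.append(i)
        prev = hist
    return cuts
```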

Stereo Audio Matched with 3D Video (3D영상에 정합되는 스테레오 오디오)

  • Park, Sung-Wook;Chung, Tae-Yun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.21 no.2
    • /
    • pp.153-158
    • /
    • 2011
  • This paper presents subjective experimental results on how audio should change when a video clip is watched in 3D rather than 2D. We divided auditory perceptual information into two categories: distance and azimuth, to which a sound source contributes most, and spaciousness, to which the scene or environment contributes most. In the experiment on distance and azimuth, i.e. sound localization, we found that the perceived distance and azimuth of sound sources were magnified when heard with 3D rather than 2D video. This led us to conclude that 3D sound for localization should be designed with greater distance and azimuth than 2D sound. We also found that 3D sound is preferred not only with 3D video clips but also with 2D video clips. In the experiment on spaciousness, we found that people prefer sound with more reverberation when watching 3D video clips than 2D video clips, which can be understood as 3D video providing more spatial information than 2D video. These subjective experimental results can help audio engineers familiar with 2D audio create 3D audio, and serve as fundamental information for future research on 2D-to-3D audio conversion systems. Furthermore, when designing a 3D broadcasting system with limited bandwidth and 2D TV compatibility, we propose transmitting stereoscopic video, audio with enhanced localization, and metadata for TV sets to generate reverberation for spaciousness.

Adaptive Depth Fusion based on Reliability of Depth Cues for 2D-to-3D Video Conversion (2차원 동영상의 3차원 변환을 위한 깊이 단서의 신뢰성 기반 적응적 깊이 융합)

  • Han, Chan-Hee;Choi, Hae-Chul;Lee, Si-Woong
    • The Journal of the Korea Contents Association
    • /
    • v.12 no.12
    • /
    • pp.1-13
    • /
    • 2012
  • 3D video is regarded as the next-generation content for numerous applications. 2D-to-3D video conversion technologies are strongly required to resolve the shortage of 3D video during the transition to a mature 3D video era. In 2D-to-3D conversion methods, the depth image of each scene in the 2D video is estimated, and then stereoscopic video is synthesized using DIBR (Depth Image Based Rendering) technologies. This paper proposes a novel depth fusion algorithm that integrates multiple depth cues contained in 2D video to generate stereoscopic video. For proper depth fusion, the reliability of each cue in the current scene is first checked. Based on the results of these reliability tests, the current scene is classified into one of four scene types, and scene-adaptive depth fusion combines the reliable depth cues to generate the final depth information. Simulation results show that each depth cue is reasonably utilized according to scene type and that the final depth is generated from the cues that most effectively represent the current scene.
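
The paper classifies scenes into four types; setting that aside, the core fusion idea can be sketched as discarding cues whose reliability falls below a threshold and averaging the rest, weighted by reliability. All names and the threshold are my own, for illustration only:

```python
import numpy as np

def fuse_depth(cues, reliabilities, min_reliability=0.3):
    """Combine several depth-cue maps into one depth map: cues below
    the reliability threshold are discarded, and the remaining maps
    are averaged with reliability-proportional weights."""
    keep = [(c, r) for c, r in zip(cues, reliabilities) if r >= min_reliability]
    if not keep:
        raise ValueError("no reliable depth cue for this scene")
    fused = np.zeros_like(keep[0][0], dtype=np.float32)
    for c, r in keep:
        fused += r * c
    return fused / sum(r for _, r in keep)
```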

3D Conversion of 2D Video Encoded by H.264

  • Hong, Ho-Ki;Ko, Min-Soo;Seo, Young-Ho;Kim, Dong-Wook;Yoo, Ji-Sang
    • Journal of Electrical Engineering and Technology
    • /
    • v.7 no.6
    • /
    • pp.990-1000
    • /
    • 2012
  • In this paper, we propose an algorithm that creates three-dimensional (3D) stereoscopic video from two-dimensional (2D) video encoded by H.264, instead of using the conventional two-camera setup. Very accurate motion vectors are available in H.264 bit streams because a variety of block sizes is supported. The 2D/3D conversion algorithm proposed in this paper creates left and right images using the extracted motion information. The image type of a given frame is first determined from the extracted motion information, and each image type is handled by a different conversion procedure. Cut detection is also performed to prevent two entirely different scenes from overlapping in the left and right images. We demonstrate the improved performance of the proposed algorithm through experimental results.
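
One common way such compressed-domain methods use decoded motion vectors is to map per-block motion magnitude to depth, on the assumption that faster-moving blocks are nearer. A minimal sketch of that mapping (not the paper's actual image-type-dependent procedure; names are my own):

```python
import numpy as np

def depth_from_motion_vectors(mv_x, mv_y):
    """Derive a per-block depth map from decoded motion-vector
    components: larger motion magnitude maps to nearer depth
    (larger value in the 0..255 range)."""
    mag = np.hypot(mv_x, mv_y)
    m = mag.max()
    if m == 0:
        return np.zeros(mag.shape, dtype=np.uint8)  # static frame: flat depth
    return (mag / m * 255.0).astype(np.uint8)
```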

2D/3D conversion algorithm on broadcast and mobile environment and the platform (방송 및 모바일 실감형 2D/3D 컨텐츠 변환 방법 및 플랫폼)

  • Song, Hyok;Bae, Jin-Woo;Yoo, Ji-Sang;Choi, Byeoung-Ho
    • Korea Institute of Information and Communication Facilities: Conference Proceedings
    • /
    • 2007.08a
    • /
    • pp.386-389
    • /
    • 2007
  • TV technology began with black-and-white TV. After color TV was invented, users demanded still more realistic TV technology, and the next step is 3DTV. 3DTV requires 3D display technology, 3D coding technology, digital mux/demux technology for broadcast, and 3D video acquisition. Moreover, almost all existing content is 2D, which makes 2D-to-3D conversion necessary. This article describes a 2D/3D conversion algorithm and a hardware platform on an FPGA board. A time difference between views creates the 3D effect, and a convolution filter enhances it; the distorted image and the original image together produce the 3D perception. The algorithm is demonstrated on a 3D display that produces the 3D effect with a parallax barrier and incorporates the FPGA board.
Generation of Stereoscopic Image from 2D Image based on Saliency and Edge Modeling (관심맵과 에지 모델링을 이용한 2D 영상의 3D 변환)

  • Kim, Manbae
    • Journal of Broadcast Engineering
    • /
    • v.20 no.3
    • /
    • pp.368-378
    • /
    • 2015
  • 3D conversion technology has been studied over the past decades and integrated into commercial 3D displays and 3DTVs. 3D conversion plays an important role in the augmented functionality of three-dimensional television (3DTV) because it can easily provide 3D content. Generally, depth cues extracted from a static image are used to generate a depth map, followed by DIBR (Depth Image Based Rendering) to produce a stereoscopic image. However, except in some particular images, such depth cues are rare, so consistent depth-map quality cannot be guaranteed. It is therefore imperative to devise a 3D conversion method that produces satisfactory and consistent 3D for diverse video content. From this viewpoint, this paper proposes a novel method applicable to general types of images, utilizing saliency as well as edges. To generate a depth map, geometric perspective, an affinity model, and a binomial filter are used. In the experiments, the proposed method was applied to 24 video clips with a variety of contents. A subjective test of 3D perception and visual fatigue validated satisfactory and comfortable viewing of the 3D content.
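
The geometric-perspective component used here, like the "hypothesized depth-gradient model" in the real-time 3DTV entry above, typically amounts to a global depth ramp that assumes the bottom of the frame is nearer than the top, with a saliency map blended in so objects of interest pop out. A minimal sketch under those assumptions (the blend weight and names are my own):

```python
import numpy as np

def gradient_depth(height, width):
    """Global depth-gradient model: depth increases linearly from
    the top of the frame (far, 0) to the bottom (near, 255)."""
    col = np.linspace(0.0, 255.0, height)
    return np.tile(col[:, None], (1, width)).astype(np.uint8)

def blend_with_saliency(global_depth, saliency, alpha=0.5):
    """Blend the global gradient with a saliency map (0..255) so
    salient regions stand out from the background depth ramp."""
    return (alpha * global_depth + (1 - alpha) * saliency).astype(np.uint8)
```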

3D Conversion of 2D H.264 Video (2D H.264 동영상의 3D 입체 변환)

  • Hong, Ho-Ki;Baek, Yun-Ki;Lee, Seung-Hyun;Kim, Dong-Wook;Yoo, Ji-Sang
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.31 no.12C
    • /
    • pp.1208-1215
    • /
    • 2006
  • In this paper, we propose an algorithm that creates three-dimensional (3D) stereoscopic video from two-dimensional (2D) video encoded by H.264, instead of using the conventional stereo-camera process. Motion information for each frame can be obtained from the motion vectors present in most videos encoded by MPEG standards. In particular, accurate motion vectors are available in H.264 streams because a variety of block sizes is supported. The 2D/3D video conversion algorithm proposed in this paper creates the left and right images corresponding to the original image using a cut detection method, delay factors, motion types, and image types. The motion type and direction are usually consistent within a given cut because frames in the same cut are highly correlated. We show the improved performance of the proposed algorithm through experimental results.