• Title/Summary/Keyword: virtual views


A New Copyright Protection Scheme for Depth Map in 3D Video

  • Li, Zhaotian;Zhu, Yuesheng;Luo, Guibo;Guo, Biao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.7 / pp.3558-3577 / 2017
  • In the 2D-to-3D video conversion process, virtual left and right views can be generated from a 2D video and its corresponding depth map by depth-image-based rendering (DIBR). Since the depth map plays an important role in the conversion system, copyright protection for the depth map is necessary. However, the generated virtual views may be distributed illegally, while the depth map itself is never directly exposed to viewers. In previous works, copyright information embedded into the depth map could not be extracted from the virtual views after the DIBR process. In this paper, a new copyright protection scheme for the depth map is proposed in which the copyright information can be detected from the virtual views even without the depth map. Experimental results show that the proposed method is robust against JPEG attacks, filtering, and noise.
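The DIBR process the abstract refers to maps each texture pixel into the virtual view by a depth-dependent horizontal shift. A minimal sketch, assuming a simple pinhole setup; the focal length and baseline values are arbitrary placeholders, and the paper's watermark embedding is not shown:

```python
import numpy as np

def dibr_shift(texture, depth, baseline=0.05, focal=500.0):
    """Render a virtual view by shifting each pixel horizontally
    according to its depth (a minimal DIBR sketch; the parameter
    values are illustrative, not from the paper)."""
    h, w = depth.shape
    virtual = np.zeros_like(texture)
    # Disparity is inversely proportional to depth: d = f * B / Z.
    disparity = (focal * baseline / np.maximum(depth, 1e-6)).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x - disparity[y, x]
            if 0 <= nx < w:
                virtual[y, nx] = texture[y, x]
    return virtual
```

Disoccluded pixels (those no source pixel maps to) stay empty here; a real DIBR pipeline would fill those holes.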

Virtual Viewpoint Image Synthesis Algorithm using Multi-view Geometry (다시점 카메라 모델의 기하학적 특성을 이용한 가상시점 영상 생성 기법)

  • Kim, Tae-June;Chang, Eun-Young;Hur, Nam-Ho;Kim, Jin-Woong;Yoo, Ji-Sang
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.12C / pp.1154-1166 / 2009
  • In this paper, we propose algorithms for generating high-quality virtual intermediate views on or off the baseline. In the proposed algorithm, depth information and a 3D warping technique are used to generate the virtual views. Real 3D coordinates are calculated from the depth information and the geometric characteristics of the cameras, and the calculated 3D coordinates are projected onto the 2D image plane at an arbitrary camera position, yielding the 2D virtual view. Experiments show that virtual views generated on the baseline by the proposed algorithm gain at least 0.5 dB in PSNR, and that occluded regions in virtual views generated off the baseline are covered more effectively.
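The 3D-warping step described above (back-project a pixel with its depth, then re-project it at the virtual camera position) can be sketched with standard pinhole-camera algebra; the matrix names below are generic multi-view-geometry conventions, not the paper's notation:

```python
import numpy as np

def warp_3d(pixel, depth, K_src, K_dst, R, t):
    """Back-project a pixel into 3D using its depth and the source
    intrinsics K_src, then re-project it into a virtual camera with
    intrinsics K_dst and relative pose (R, t). A textbook sketch,
    not the paper's exact formulation."""
    u, v = pixel
    # 3D point in the source camera frame: X = Z * K^-1 [u, v, 1]^T.
    X = depth * np.linalg.inv(K_src) @ np.array([u, v, 1.0])
    # Transform into the virtual camera frame and project.
    Xv = R @ X + t
    p = K_dst @ Xv
    return p[:2] / p[2]
```

With an identity pose and identical intrinsics, the warp maps each pixel to itself, which is a convenient sanity check.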

Augmented Reality Using Projective Information (비유클리드공간 정보를 사용하는 증강현실)

  • 서용덕;홍기상
    • Journal of Broadcast Engineering / v.4 no.2 / pp.87-102 / 1999
  • We propose an algorithm for augmenting a real video sequence with views of graphics objects, without metric calibration of the video camera, by representing the motion of the video camera in projective space. We define a virtual camera, through which views of the graphics objects are generated, attached to the real camera by specifying the image locations of the world coordinate system of the virtual world. The virtual camera is decomposed into calibration and motion components in order to make full use of graphics tools. The projective motion of the real camera, recovered from image matches, transfers the virtual camera so that it moves according to the motion of the real camera; the virtual camera also follows changes in the internal parameters of the real camera. This paper presents theoretical and experimental results of applying non-metric vision to augmented reality.
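The decomposition of a camera into calibration and motion components that the abstract mentions is conventionally done by RQ-factorizing the left 3x3 block of the projection matrix. A sketch of that standard metric decomposition, given here for illustration since the paper itself works in projective space:

```python
import numpy as np

def decompose_projection(P):
    """Split a 3x4 projection matrix P = K [R | t] into calibration
    (K, upper triangular) and motion (R, t) components via an RQ
    factorization built from numpy's QR. Standard multi-view
    geometry, assumed for illustration."""
    M = P[:, :3]
    # RQ decomposition of M using QR of the flipped, transposed matrix.
    Q, U = np.linalg.qr(np.flipud(M).T)
    K = np.flipud(np.fliplr(U.T))   # upper triangular factor
    Rot = np.flipud(Q.T)            # orthogonal factor
    # Enforce a positive diagonal on K (sign convention).
    S = np.diag(np.sign(np.diag(K)))
    K, Rot = K @ S, S @ Rot
    t = np.linalg.solve(K, P[:, 3])
    return K, Rot, t
```

Round-tripping a known K, R, t through the decomposition recovers them exactly, since the RQ factorization with a positive-diagonal triangular factor is unique.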


Reduced Reference Quality Metric for Synthesized Virtual Views in 3DTV

  • Le, Thanh Ha;Long, Vuong Tung;Duong, Dinh Trieu;Jung, Seung-Won
    • ETRI Journal / v.38 no.6 / pp.1114-1123 / 2016
  • Multi-view video plus depth (MVD) has been widely used owing to its effectiveness in three-dimensional data representation. Using MVD, color videos with only a limited number of real viewpoints are compressed and transmitted along with captured or estimated depth videos. Because the synthesized views are generated from decoded real views, their original reference views do not exist at either the transmitter or receiver. It is therefore challenging to define an efficient metric for evaluating the quality of synthesized images. We propose a novel reduced-reference quality metric. First, the effects of depth distortion on the quality of synthesized images are analyzed. We then exploit the high correlation between local depth distortions in the decoded depth images and local color characteristics of the decoded color images to obtain an efficient depth quality metric for each real view. Finally, an objective quality metric for the synthesized views is obtained by combining the depth quality metrics of all decoded real views. Experimental results show that the proposed metric correlates very well with full-reference image and video quality metrics.

Scalable Coding of Depth Images with Synthesis-Guided Edge Detection

  • Zhao, Lijun;Wang, Anhong;Zeng, Bing;Jin, Jian
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.10 / pp.4108-4125 / 2015
  • This paper presents a scalable coding method for depth images that accounts for the quality of images synthesized at virtual views. First, we design a new edge detection algorithm based on the depth difference between two neighboring pixels within the depth map. By choosing different thresholds, this algorithm generates a scalable bit stream that places larger depth differences first, followed by smaller ones. A scalable scheme is also designed for coding depth pixels through a layered sampling structure. At the receiver side, the full-resolution depth image is reconstructed from the received bits by solving a partial differential equation (PDE). Experimental results show that the proposed method improves the rate-distortion performance of the images synthesized at virtual views and achieves better visual quality.
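The depth-difference edge detector described above can be sketched as a threshold on neighboring-pixel differences; lowering the threshold admits smaller differences, which is what makes the bit stream scalable. An illustrative sketch, not the paper's exact algorithm (the layered sampling and PDE reconstruction are omitted):

```python
import numpy as np

def depth_edges(depth, threshold):
    """Mark a pixel as an edge when the depth difference to its right
    or bottom neighbour exceeds `threshold`. Smaller thresholds add
    smaller differences, mimicking the scalable layering idea."""
    dx = np.abs(np.diff(depth, axis=1))  # horizontal neighbour diffs
    dy = np.abs(np.diff(depth, axis=0))  # vertical neighbour diffs
    edges = np.zeros(depth.shape, dtype=bool)
    edges[:, :-1] |= dx > threshold
    edges[:-1, :] |= dy > threshold
    return edges
```

Running the detector twice with decreasing thresholds yields nested edge sets, the coarse-to-fine ordering the bit stream exploits.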

Performance Analysis on View Synthesis of 360 Video for Omnidirectional 6DoF

  • Kim, Hyun-Ho;Lee, Ye-Jin;Kim, Jae-Gon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2018.11a / pp.22-24 / 2018
  • The MPEG-I Visual group is actively working on enhancing immersive experiences with up to six degrees of freedom (6DoF). In the virtual space of omnidirectional 6DoF, defined as the use case providing 6DoF within a restricted area, looking at the scene from another viewpoint (another position in space) requires rendering additional viewpoints, called virtual omnidirectional viewpoints. This paper presents a performance analysis of view synthesis, conducted as an exploration experiment (EE) in MPEG-I, from sets of 360 videos providing omnidirectional 6DoF in various configurations with different distances, directions, and numbers of input views. In addition, we compare the subjective quality of images synthesized from one input view versus two input views.


Virtual Control of Optical Axis of the 3DTV Camera for Reducing Visual Fatigue in Stereoscopic 3DTV

  • Park, Jong-Il;Um, Gi-Mun;Ahn, Chung-Hyun;Ahn, Chie-Teuk
    • ETRI Journal / v.26 no.6 / pp.597-604 / 2004
  • In stereoscopic television, there is a trade-off between visual comfort and three-dimensional (3D) impact with respect to the baseline-stretch of a 3DTV camera. To obtain subjectively optimal image quality, the baseline-stretch must be adjusted to an appropriate distance depending on the contents of the scene. However, it is very hard to obtain a small baseline-stretch using commercially available cameras of broadcasting quality, whose lens and CCD modules are large. To overcome this limitation, we freely control the baseline-stretch of a stereoscopic camera by synthesizing virtual views at desired positions between the two cameras. The proposed technique is based on stereo matching and view synthesis. We first obtain a dense disparity map using hierarchical stereo matching with edge-adaptive multiple shifted windows and then synthesize the virtual views from the disparity map. Simulation results with various stereoscopic images demonstrate the effectiveness of the proposed technique.
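Once a dense disparity map is available, synthesizing a view at an arbitrary point between the two cameras amounts to scaling the disparity by the fractional baseline position. A minimal forward-warping sketch under that assumption; the hierarchical matcher and hole filling are omitted:

```python
import numpy as np

def intermediate_view(left, disparity, alpha):
    """Synthesize a view at fraction `alpha` of the baseline
    (0 = left camera, 1 = right camera) by shifting left-image
    pixels by alpha * disparity. An illustrative sketch of the
    idea, not the paper's full pipeline."""
    h, w = left.shape[:2]
    view = np.zeros_like(left)
    shift = np.round(alpha * disparity).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x - shift[y, x]
            if 0 <= nx < w:
                view[y, nx] = left[y, x]
    return view
```

At `alpha = 0` the output reproduces the left image, and intermediate values emulate a virtual camera at the corresponding baseline position.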


Augmented System for Immersive 3D Expansion and Interaction

  • Yang, Ungyeon;Kim, Nam-Gyu;Kim, Ki-Hong
    • ETRI Journal / v.38 no.1 / pp.149-158 / 2016
  • In the field of augmented reality, commercial optical see-through wearable displays have difficulty providing immersive visual experiences, because users perceive different depths between virtual views on the display surface and see-through views of the real world. Many augmented reality applications have adopted eyeglasses-type displays (EGDs) for visualizing simple 2D information, or video see-through displays for minimizing mismatch errors between virtual and real scenes. In this paper, we introduce innovative optical see-through wearable display hardware, called an EGD. In contrast to common head-mounted displays, which are designed for a wide field of view, our EGD provides more comfortable visual feedback at close range. Users of an EGD device can accurately manipulate close-range virtual objects and expand their view to distant real environments. To verify the feasibility of the EGD technology, subject-based experiments and analysis were performed. The analysis results and EGD-related application examples show that the EGD is useful for visually expanding immersive 3D augmented environments consisting of multiple displays.

VIRTUAL VIEW RENDERING USING MULTIPLE STEREO IMAGES

  • Ham, Bum-Sub;Min, Dong-Bo;Sohn, Kwang-Hoon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.233-237 / 2009
  • This paper presents a new approach that addresses the quality degradation of a synthesized view when the virtual camera moves forward. Generally, a virtual view is synthesized by interpolating between only two neighboring views. Because object sizes increase as the virtual camera moves forward, most methods rely on interpolation, which produces degraded, blurred images. We prevent the synthesized view from being blurred by using more cameras in a multiview camera configuration; that is, we apply the super-resolution concept of reconstructing a high-resolution image from several low-resolution images. Data fusion is performed by geometric warping using the disparities of the multiple images, followed by a deblurring operation. Experimental results show that image quality is improved by reducing blur compared with the interpolation method.
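The data-fusion step can be sketched as per-pixel averaging of the geometrically warped views, counting only views that mapped a value to each pixel (zeros treated as holes). A hypothetical minimal version; the deblurring stage that follows in the paper is omitted:

```python
import numpy as np

def fuse_views(warped_views):
    """Fuse several geometrically warped images of the same virtual
    viewpoint by per-pixel averaging over the views that actually
    contributed a value there. A bare-bones stand-in for the paper's
    super-resolution-style data fusion."""
    stack = np.stack([v.astype(float) for v in warped_views])
    valid = stack > 0                      # zero = hole in that view
    counts = np.maximum(valid.sum(axis=0), 1)
    return stack.sum(axis=0) / counts
```

Pixels covered by several views are averaged, while pixels seen by only one view pass through unchanged, which is why adding cameras reduces blur relative to two-view interpolation.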


A Depth-based Disocclusion Filling Method for Virtual Viewpoint Image Synthesis (가상 시점 영상 합성을 위한 깊이 기반 가려짐 영역 메움법)

  • Ahn, Il-Koo;Kim, Chang-Ick
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.6 / pp.48-60 / 2011
  • The 3D community is actively researching 3D imaging and free-viewpoint video (FVV). Free-viewpoint rendering in multi-view video, which virtually moves through the scene to create different viewpoints, has become a popular topic in 3D research with many applications. However, multi-view transmission is costly and occupies a large bandwidth. An alternative is to generate virtual views from a single texture image and a corresponding depth image. A critical issue in generating virtual views is that regions occluded by foreground (FG) objects in the original view may become visible in the synthesized views. Filling these disocclusions (holes) in a visually plausible manner determines the quality of the synthesis results. In this paper, a new approach to handling disocclusions in synthesized views using a depth-based inpainting algorithm is presented. Patch-based non-parametric texture synthesis, which shows excellent performance, has two critical elements: determining where to fill first and determining which patch to copy. In this work, a noise-robust filling priority using the structure tensor of the Hessian matrix is proposed, along with a patch-matching algorithm that excludes foreground regions using the depth map and takes the epipolar line into account. The superiority of the proposed method over existing methods is demonstrated experimentally.
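The filling-priority idea can be illustrated with a structure-tensor data term on the hole boundary: boundary pixels surrounded by strong gradient structure are filled first, so salient edges propagate into the hole before flat regions. This is a simplified stand-in for the paper's noise-robust Hessian-based priority, not its exact formula:

```python
import numpy as np

def filling_priority(gray, hole):
    """Assign a filling priority to hole-boundary pixels using the
    larger eigenvalue of the per-pixel structure tensor (a proxy for
    local gradient structure). Non-boundary pixels get -inf. A
    simplified sketch of priority-driven inpainting ordering."""
    gy, gx = np.gradient(gray.astype(float))
    # Structure tensor J = [[gx^2, gx*gy], [gx*gy, gy^2]] per pixel.
    jxx, jxy, jyy = gx * gx, gx * gy, gy * gy
    tr, det = jxx + jyy, jxx * jyy - jxy * jxy
    lam = tr / 2 + np.sqrt(np.maximum(tr * tr / 4 - det, 0.0))
    # Hole boundary: hole pixels with at least one known 4-neighbour.
    known = ~hole
    nb = np.zeros_like(hole)
    nb[1:, :] |= known[:-1, :]
    nb[:-1, :] |= known[1:, :]
    nb[:, 1:] |= known[:, :-1]
    nb[:, :-1] |= known[:, 1:]
    boundary = hole & nb
    return np.where(boundary, lam, -np.inf)
```

An inpainting loop would repeatedly fill the highest-priority boundary patch (here it would also need a depth-aware, epipolar-constrained patch match, which this sketch does not include) and recompute the priorities.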