• Title/Summary/Keyword: Immersive video

Search results: 130

Trends in International Standardization of the Carriage of Visual Volumetric Video-based Coding (V3C)

  • Nam, Gwi-Jung; Kim, Gyu-Heon
    • Broadcasting and Media Magazine, v.26 no.2, pp.46-55, 2021
  • With the rapid advancement of devices and 5G communication, research on 3D graphics technologies for virtual/augmented reality, autonomous driving, and related fields is actively under way, and point clouds and multi-view ultra-realistic content, which can represent 3D information in fine detail, are drawing attention. Because such content requires far more data than conventional 2D video, compression is essential for its efficient use. Accordingly, the Moving Picture Experts Group (MPEG) under ISO/IEC is standardizing V-PCC (Video-based Point Cloud Compression) and MIV (MPEG Immersive Video) as compression methods for dense point clouds and multi-view immersive content, and standardization of the Carriage of Visual Volumetric Video-based Coding (V3C) is in progress as a means of efficiently storing and transmitting the compressed data. This article reviews the V3C carriage standard under development in MPEG.
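
To make the carriage concept concrete, here is a minimal sketch that demultiplexes a toy V3C stream into per-component tracks, mirroring how the carriage standard maps the atlas and the occupancy/geometry/attribute video sub-bitstreams to separate file-format tracks. The 5-bit vuh_unit_type codes are those of the V3C specification (ISO/IEC 23090-5) as published; the (unit_type, payload) framing and the Track structure are simplified assumptions, not the standard's actual syntax.

```python
# Sketch: demultiplexing a V3C stream into per-component tracks, mirroring
# how Carriage of V3C maps components to separate file-format tracks.
# The unit-type constants follow ISO/IEC 23090-5; everything else here
# (the pre-parsed (unit_type, payload) framing, Track) is a simplification.

from dataclasses import dataclass, field

V3C_VPS, V3C_AD, V3C_OVD, V3C_GVD, V3C_AVD = 0, 1, 2, 3, 4
TRACK_NAME = {V3C_VPS: "parameter_sets", V3C_AD: "atlas",
              V3C_OVD: "occupancy", V3C_GVD: "geometry", V3C_AVD: "attribute"}

@dataclass
class Track:
    samples: list = field(default_factory=list)

def demux(units):
    """Route each (unit_type, payload) pair to its component track."""
    tracks = {name: Track() for name in TRACK_NAME.values()}
    for unit_type, payload in units:
        tracks[TRACK_NAME[unit_type]].samples.append(payload)
    return tracks

# Toy stream: one parameter set, then atlas/occupancy/geometry/attribute data.
stream = [(V3C_VPS, b"vps"), (V3C_AD, b"atlas0"),
          (V3C_OVD, b"occ0"), (V3C_GVD, b"geo0"), (V3C_AVD, b"attr0")]
for name, track in demux(stream).items():
    print(name, len(track.samples))
```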

Multi-View Video Processing: IVR, Graphics Composition, and Viewer

  • Kwon, Jun-Sup; Hwang, Won-Young; Choi, Chang-Yeol; Chang, Eun-Young; Hur, Nam-Ho; Kim, Jin-Woong; Kim, Man-Bae
    • Journal of Broadcast Engineering, v.12 no.4, pp.333-341, 2007
  • Multi-view video has recently attracted much attention in academic and commercial fields because it can deliver an immersive viewing experience of natural scenes. This paper presents a multi-view video processing pipeline composed of intermediate view reconstruction (IVR), graphics composition, and a multi-view video viewer. First, we generate virtual views between multi-view cameras using the depth and texture images of the input videos. Then we composite graphic objects into the generated view images. The multi-view video viewer is developed to examine the reconstructed and composite images; it can also provide users with special effects for multi-view video. We present experimental results that validate the proposed method and show that graphic objects can become an integral part of multi-view video.
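
As context for the IVR step, the sketch below forward-warps one source view toward a virtual viewpoint under the common simplifying assumption of rectified, parallel cameras, where horizontal disparity is proportional to (focal length x baseline) / depth. The `f_times_b` constant and the omitted occlusion handling are placeholders; the paper's actual reconstruction may differ.

```python
# Sketch of intermediate view reconstruction (IVR) by forward warping,
# assuming rectified parallel cameras so that horizontal disparity is
# proportional to baseline / depth. Real IVR also handles occlusion,
# hole filling, and blending of the two nearest cameras.

import numpy as np

def warp_to_virtual_view(texture, depth, alpha, f_times_b=100.0):
    """Warp a source view toward a virtual view located at fraction
    `alpha` (0..1) of the baseline to the next camera."""
    h, w = depth.shape
    out = np.zeros_like(texture)
    disparity = alpha * f_times_b / np.maximum(depth, 1e-6)  # in pixels
    for y in range(h):
        for x in range(w):
            xv = int(round(x + disparity[y, x]))
            if 0 <= xv < w:
                out[y, xv] = texture[y, x]  # nearer pixels should win; omitted
    return out

texture = np.random.randint(0, 255, (4, 8), dtype=np.uint8)
depth = np.full((4, 8), 50.0)
print(warp_to_virtual_view(texture, depth, alpha=0.5))
```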

Augmented System for Immersive 3D Expansion and Interaction

  • Yang, Ungyeon; Kim, Nam-Gyu; Kim, Ki-Hong
    • ETRI Journal, v.38 no.1, pp.149-158, 2016
  • In the field of augmented reality, commercial optical see-through wearable displays have difficulty providing immersive visual experiences, because users perceive different depths between virtual views on the display surface and see-through views of the real world. Many augmented reality applications have adopted eyeglasses-type displays (EGDs) for visualizing simple 2D information, or video see-through displays for minimizing mismatch errors between virtual and real scenes. In this paper, we introduce innovative optical see-through wearable display hardware, called an EGD. In contrast to common head-mounted displays, which are intended for a wide field of view, our EGD provides more comfortable visual feedback at close range. Users of an EGD device can accurately manipulate close-range virtual objects and expand their view to distant real environments. To verify the feasibility of the EGD technology, subject-based experiments and analysis were performed. The analysis results and EGD-related application examples show that the EGD is useful for visually expanding immersive 3D augmented environments consisting of multiple displays.

Voxel-wise UV parameterization and view-dependent texture synthesis for immersive rendering of truncated signed distance field scene model

  • Kim, Soowoong; Kang, Jungwon
    • ETRI Journal, v.44 no.1, pp.51-61, 2022
  • In this paper, we introduce a novel voxel-wise UV parameterization and view-dependent texture synthesis method for the immersive rendering of a truncated signed distance field (TSDF) scene model. The proposed UV parameterization assigns a precomputed UV map to each voxel through a UV-map lookup table, consequently enabling efficient, high-quality texture mapping without a complex parameterization process. Leveraging this convenient UV parameterization, our view-dependent texture synthesis method extracts a set of local texture maps for each voxel from the multiview color images and separates them into a single view-independent diffuse map and a set of weight coefficients over an orthogonal specular map basis. The view-dependent specular maps for an arbitrary view are then estimated by combining the specular weights of each source view according to the locations of the target and source viewpoints, generating view-dependent textures for arbitrary views. The experimental results demonstrate that the proposed method effectively synthesizes texture for an arbitrary view, thereby enabling the visualization of view-dependent effects such as specularity and mirror reflection.
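
The diffuse-plus-specular-basis model described above can be sketched as follows: a voxel's texture for a target view is the view-independent diffuse map plus a weighted combination of orthogonal specular basis maps, with the target weights blended from the per-source-view weights by viewpoint proximity. The inverse-angle blending used here is an assumption for illustration, not the paper's exact combination scheme.

```python
# Sketch: texture(view) = diffuse + sum_i w_i(view) * specular_basis_i,
# where the target view's weights are blended from the per-source-view
# weights by angular closeness (inverse-angle weighting is an assumption).

import numpy as np

def target_specular_weights(src_dirs, src_weights, tgt_dir, eps=1e-6):
    """Blend per-source specular coefficients by angular closeness."""
    src_dirs = src_dirs / np.linalg.norm(src_dirs, axis=1, keepdims=True)
    tgt_dir = tgt_dir / np.linalg.norm(tgt_dir)
    ang = np.arccos(np.clip(src_dirs @ tgt_dir, -1.0, 1.0))
    blend = 1.0 / (ang + eps)
    blend /= blend.sum()
    return blend @ src_weights            # (n_src,) x (n_src, n_basis)

def synthesize_texture(diffuse, specular_basis, weights):
    """texture = diffuse + sum_i weights[i] * basis[i]."""
    return diffuse + np.tensordot(weights, specular_basis, axes=1)

diffuse = np.zeros((8, 8, 3))
basis = np.random.rand(4, 8, 8, 3)                 # 4 orthogonal specular maps
src_dirs = np.random.rand(5, 3)                    # 5 source viewpoints
src_w = np.random.rand(5, 4)                       # their basis coefficients
w = target_specular_weights(src_dirs, src_w, np.array([0.0, 0.0, 1.0]))
print(synthesize_texture(diffuse, basis, w).shape)  # (8, 8, 3)
```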

MPEG-DASH based 3D Point Cloud Content Configuration Method

  • Kim, Doohwan; Im, Jiheon; Kim, Kyuheon
    • Journal of Broadcast Engineering, v.24 no.4, pp.660-669, 2019
  • Recently, with the development of three-dimensional scanning devices and multi-dimensional array cameras, research continues on techniques for handling three-dimensional data in application fields such as AR (Augmented Reality)/VR (Virtual Reality) and autonomous driving. In the AR/VR field in particular, content that represents 3D video as point data has appeared, but it requires a much larger amount of data than conventional 2D images. Therefore, serving 3D point cloud content to users requires various technological developments, such as highly efficient encoding/decoding, storage, and transmission. In this paper, a V-PCC bitstream created with the V-PCC encoder proposed by the MPEG-I (MPEG Immersive) V-PCC (Video-based Point Cloud Compression) group is organized into segments as defined by the MPEG-DASH (Dynamic Adaptive Streaming over HTTP) standard. In addition, a depth information parameter is defined in the signaling message to provide the user with information about the 3D coordinate system. Finally, we design a verification platform to verify the proposed technology and confirm it at the algorithm level.
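
The pipeline the paper describes can be sketched as below: cut a pre-encoded V-PCC bitstream into fixed-duration DASH segments and describe them in a minimal MPD. The `depthRange` attribute stands in for the depth signaling parameter the paper adds to the signaling message; its actual name and syntax in the proposal are not shown here, and the segment framing is a simplification.

```python
# Sketch: group per-frame V-PCC access units into DASH segments and emit a
# minimal MPD. `depthRange` is a hypothetical stand-in for the paper's
# depth information parameter.

SEGMENT_DURATION = 2.0  # seconds, a typical DASH segment length

def make_segments(frames, fps=30, seg_dur=SEGMENT_DURATION):
    """Group per-frame V-PCC access units into segment-sized chunks."""
    per_seg = int(fps * seg_dur)
    return [frames[i:i + per_seg] for i in range(0, len(frames), per_seg)]

def make_mpd(n_segments, depth_range=(0.0, 10.0)):
    urls = "\n".join(f'      <SegmentURL media="vpcc_seg{i}.bin"/>'
                     for i in range(n_segments))
    return f"""<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static">
  <Period>
    <AdaptationSet mimeType="application/octet-stream"
                   depthRange="{depth_range[0]} {depth_range[1]}">
      <SegmentList duration="{int(SEGMENT_DURATION)}">
{urls}
      </SegmentList>
    </AdaptationSet>
  </Period>
</MPD>"""

frames = [b"au%d" % i for i in range(120)]   # 4 s of 30 fps access units
print(make_mpd(len(make_segments(frames))))
```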

Real-Time Copyright Security Scheme of Immersive Content based on HEVC

  • Yun, Chang Seob; Jun, Jae Hyun; Kim, Sung Ho; Kim, Dae Soo
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.21 no.1, pp.27-34, 2021
  • In this paper, we propose a copyright protection scheme for real-time streaming of HEVC (High Efficiency Video Coding) based immersive content. Previous research uses encryption and modular operations for copyright pre-protection and post-protection, which causes delays for ultra-high-resolution video. The proposed scheme maximizes parallelism by using thread-pool-based DRM (Digital Rights Management) packaging that targets only HEVC's CABAC (Context-Adaptive Binary Arithmetic Coding) entropy-coded data, together with GPU-based high-speed bit operations (XOR), thus enabling real-time copyright protection. Comparing this scheme with previous research at three resolutions, PSNR was on average 8 times higher and processing speed differed by an average factor of 18. In addition, in a comparison of forensic-mark robustness, recompression attacks showed the largest difference (27-fold) and filter and noise attacks the smallest (8-fold).
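
To illustrate the parallel-packaging idea, here is a minimal sketch that XOR-scrambles only the entropy-coded payload chunks of a stream with a keystream, fanning the work out over a thread pool. The paper offloads the XOR to the GPU; NumPy stands in here, and the keystream derivation is a placeholder assumption, not the paper's DRM key scheme.

```python
# Sketch of thread-pool DRM packaging: XOR-scramble CABAC payload chunks
# in parallel. XOR is its own inverse, so the same call unpackages.
# The SHA-256 keystream is a placeholder, not the paper's key derivation.

from concurrent.futures import ThreadPoolExecutor
import hashlib
import numpy as np

def keystream(key: bytes, index: int, length: int) -> np.ndarray:
    """Deterministic per-chunk keystream (placeholder)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + index.to_bytes(4, "big")
                              + counter.to_bytes(4, "big")).digest()
        counter += 1
    return np.frombuffer(out[:length], dtype=np.uint8)

def scramble_chunk(args):
    index, chunk, key = args
    data = np.frombuffer(chunk, dtype=np.uint8)
    return (data ^ keystream(key, index, len(data))).tobytes()

def package(cabac_payloads, key=b"content-key"):
    jobs = [(i, c, key) for i, c in enumerate(cabac_payloads)]
    with ThreadPoolExecutor() as pool:
        return list(pool.map(scramble_chunk, jobs))

payloads = [b"slice-data-0", b"slice-data-1"]
protected = package(payloads)
assert package(protected) == payloads  # round-trips
```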

A Proposal for Zoom-in/out View Streaming based on Object Information of Free Viewpoint Video

  • Seo, Minjae; Paik, Jong-Ho; Park, Gooman
    • KSII Transactions on Internet and Information Systems (TIIS), v.16 no.3, pp.929-946, 2022
  • Free viewpoint video (FVV) service is an immersive media service that allows a user to watch content from a desired location or viewpoint. The service takes various forms depending on the viewpoint direction of the provided video and includes zoom in/out. As consumers' demand for active viewing increases, the importance of FVV services is expected to grow. However, additional considerations are needed to stream FVV seamlessly. Because FVV includes multiple videos, video switches may occur frequently as the viewpoint moves, and frequent switching, or re-requesting another video, can cause service delay and lower the user's quality of service (QoS). We assume that if a video showing the object the user wants to watch is selected and provided, the viewer's needs are highly likely to be met. In particular, it is important to provide an object-oriented FVV service when zooming in: zooming in the usual way cannot guarantee that the zoom is centered on the object, because the zoom function does not consider what is being watched, only the screen size, and crops the view at a fixed screen location. To solve this problem, we propose an object-centered zoom in/out method for dynamic adaptive streaming of FVV. With the proposed method, users receive video centered on the desired object and thus enjoy optimal video service.
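
The contrast with fixed-position cropping can be sketched in a few lines: for a given zoom factor, the crop window is re-centered on the tracked object's bounding box and clamped to the frame. The sketch assumes object bounding-box metadata is available to the client, as the paper's object-information signaling implies.

```python
# Sketch of object-centered zoom: re-center the crop window on the object's
# bounding box instead of cropping at a fixed screen location.

def object_centered_crop(frame_w, frame_h, bbox, zoom):
    """Return (x, y, w, h) of a crop centered on bbox, clamped to the frame.

    bbox: (x, y, w, h) of the object; zoom > 1 shrinks the visible window.
    """
    crop_w, crop_h = frame_w / zoom, frame_h / zoom
    cx = bbox[0] + bbox[2] / 2
    cy = bbox[1] + bbox[3] / 2
    x = min(max(cx - crop_w / 2, 0), frame_w - crop_w)
    y = min(max(cy - crop_h / 2, 0), frame_h - crop_h)
    return int(x), int(y), int(crop_w), int(crop_h)

# Object near the right edge: the window re-centers as far as the frame allows.
print(object_centered_crop(1920, 1080, bbox=(1700, 400, 100, 200), zoom=2.0))
```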

Geometry Padding for Segmented Sphere Projection (SSP) in 360 Video

  • Kim, Hyun-Ho; Myeong, Sang-Jin; Yoon, Yong-Uk; Kim, Jae-Gon
    • Journal of Broadcast Engineering, v.24 no.1, pp.25-31, 2019
  • 360-degree video is attracting attention as an immersive medium and is also considered in VVC (Versatile Video Coding), the new post-HEVC video coding standard being developed by JVET (Joint Video Experts Team). A 2D image projected from 360-degree video for compression may have discontinuities between the projected faces as well as inactive regions, which can cause visual artifacts in the reconstructed video and decrease coding efficiency. In this paper, we propose an efficient geometry padding method to reduce these discontinuities and inactive regions in the SSP (Segmented Sphere Projection) format. Experimental results show that the proposed method improves subjective quality compared with the existing copy-based padding of SSP, with a minor loss of coding gain.
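
In SSP, each pole of the sphere maps to a circular face inside a square region, leaving inactive corner samples. True geometry padding maps each inactive pixel back onto the sphere and resamples; as a simplified stand-in, the sketch below fills each outside pixel from the boundary sample at the same polar angle (a radial clamp), which already removes the face/inactive-region discontinuity that plain copy padding leaves behind. This is an illustrative approximation, not the paper's exact resampling.

```python
# Sketch: pad the inactive corners around a circular SSP pole face by
# clamping each outside pixel to the circle boundary along its radius.

import numpy as np

def pad_circular_face(face):
    """face: square (N, N) pole face; pixels outside the inscribed circle
    are inactive and get filled along their radial direction."""
    n = face.shape[0]
    c = r = (n - 1) / 2.0
    padded = face.copy()
    for y in range(n):
        for x in range(n):
            d = np.hypot(x - c, y - c)
            if d > r:  # inactive corner sample
                bx = int(round(c + (x - c) * r / d))
                by = int(round(c + (y - c) * r / d))
                padded[y, x] = face[by, bx]
    return padded

face = np.zeros((8, 8), dtype=np.uint8)
yy, xx = np.mgrid[0:8, 0:8]
face[np.hypot(xx - 3.5, yy - 3.5) <= 3.5] = 200   # active circular face
print(pad_circular_face(face))
```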

Fast Extraction of Objects of Interest from Images with Low Depth of Field

  • Kim, Chang-Ick; Park, Jung-Woo; Lee, Jae-Ho; Hwang, Jenq-Neng
    • ETRI Journal, v.29 no.3, pp.353-362, 2007
  • In this paper, we propose a novel unsupervised video object extraction algorithm for individual images or image sequences with low depth of field (DOF). Low DOF is a popular photographic technique that conveys the photographer's intention by placing a clear focus only on an object of interest (OOI). We first describe a fast and efficient scheme for extracting OOIs from individual low-DOF images and then extend it to image sequences with low DOF. The basic algorithm unfolds into three modules. In the first module, a higher-order statistics map, which represents the spatial distribution of the high-frequency components, is obtained from an input low-DOF image. The second module locates the block-based OOI for further processing; using the block-based OOI, the final OOI is then obtained with pixel-level accuracy. We also present an algorithm that extends the extraction scheme to image sequences with low DOF. The proposed system does not require any user assistance to determine the initial OOI; this is possible because low-DOF images are used. The experimental results indicate that the proposed algorithm can serve as an effective tool for applications such as 2D-to-3D conversion and photo-realistic video scene generation.
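
The first two modules can be sketched as follows: a higher-order-statistics (HOS) measure of the high-frequency component, evaluated per block, marks in-focus blocks as candidate OOI regions. Using a Laplacian residual and its 4th-order central moment is an assumption consistent with the description, not the paper's exact definition, and the pixel-accurate refinement stage is omitted.

```python
# Sketch: block-level OOI mask from a higher-order-statistics map of the
# high-frequency component (in-focus blocks carry high HOS energy).
# np.roll wraps at borders; acceptable at sketch level.

import numpy as np

def block_ooi_mask(gray, block=8, thresh_ratio=0.2):
    """Mark blocks whose 4th-order moment of the Laplacian residual
    exceeds a fraction of the maximum as OOI blocks."""
    g = gray.astype(float)
    lap = (4 * g
           - np.roll(g, 1, 0) - np.roll(g, -1, 0)
           - np.roll(g, 1, 1) - np.roll(g, -1, 1))   # high-frequency residual
    h, w = g.shape
    bh, bw = h // block, w // block
    hos = lap[:bh * block, :bw * block].reshape(bh, block, bw, block)
    hos = ((hos - hos.mean(axis=(1, 3), keepdims=True)) ** 4).mean(axis=(1, 3))
    return hos > thresh_ratio * hos.max()

# Sharp (in-focus) texture on the left half, smooth background on the right.
img = np.zeros((64, 64))
img[:, :32] = np.random.rand(64, 32) * 255
print(block_ooi_mask(img).astype(int))
```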


Luminance Compensation using Feature Points and Histogram for VR Video Sequence

  • Lee, Geon-Won; Han, Jong-Ki
    • Journal of Broadcast Engineering, v.22 no.6, pp.808-816, 2017
  • 360 VR video systems have become important for providing an immersive effect for viewers. Such a system consists of stitching, projection, compression, inverse projection, and viewport extraction. In this paper, we propose an efficient luminance compensation technique for 360 VR video sequences that utilizes feature extraction and histogram equalization algorithms. The proposed luminance compensation algorithm enhances the performance of stitching in the 360 VR system. The simulation results show that the proposed technique is useful for increasing the quality of the displayed image.
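
The compensation step can be sketched as classic histogram matching: once feature matching has located the overlap between two adjacent views, the overlap of one view is remapped through a monotone lookup table so its luminance histogram matches the reference view's. The feature-detection step is assumed done and not shown, and the paper's exact compensation function may differ.

```python
# Sketch: histogram matching of the overlap region of one view to the
# reference view's overlap (uint8 luminance). Feature matching, which
# locates the overlap, is assumed already done.

import numpy as np

def match_histogram(src, ref):
    """Return src remapped so its histogram matches ref."""
    src_hist = np.bincount(src.ravel(), minlength=256).astype(float)
    ref_hist = np.bincount(ref.ravel(), minlength=256).astype(float)
    src_cdf = np.cumsum(src_hist) / src_hist.sum()
    ref_cdf = np.cumsum(ref_hist) / ref_hist.sum()
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lut[src]

# Overlap regions found by feature matching (here: synthetic dark vs bright).
ref_overlap = np.random.randint(100, 200, (32, 32), dtype=np.uint8)
src_overlap = (ref_overlap * 0.6).astype(np.uint8)     # darker copy
corrected = match_histogram(src_overlap, ref_overlap)
print(src_overlap.mean(), corrected.mean(), ref_overlap.mean())
```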