• Title/Summary/Keyword: 360 degree video


A Study on the Benefits and Issues of 360-degree VR Performance Videos (360도 VR공연영상의 효과와 문제점 연구)

Compression Efficiency Evaluation for Virtual Reality Videos by Projection Scheme

  • Kim, Byeong Chul; Rhee, Chae Eun
    • IEIE Transactions on Smart Processing and Computing / v.6 no.2 / pp.102-108 / 2017
  • Videos for 360-degree virtual reality (VR) systems have a large amount of data because they are composed of several videos from multiple cameras. To store VR data in limited space or to transmit it over a channel with limited bandwidth, the data need to be compressed at a high ratio. This paper focuses on the compression efficiency of VR videos for good visual quality. In general, 360-degree VR videos must be projected onto a planar format so that they can be handled by modern video coding standards. Among various projection schemes, three typical ones (equirectangular, line-cubic, and cross-cubic) are selected and compared in terms of compression efficiency and quality using various videos.
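
As a rough illustration of the equirectangular scheme compared in the paper above, the sketch below maps a unit viewing direction onto the equirectangular image plane. It is a minimal example, not the paper's evaluation code; the coordinate convention and resolution are assumptions.

```python
import numpy as np

def direction_to_equirect(d, width, height):
    """Map a viewing direction (x, y, z) onto pixel coordinates of an
    equirectangular image of size width x height.
    Assumed convention: +z forward, +x right, +y up."""
    x, y, z = d / np.linalg.norm(d)
    lon = np.arctan2(x, z)              # longitude in [-pi, pi]
    lat = np.arcsin(y)                  # latitude in [-pi/2, pi/2]
    u = (lon / (2 * np.pi) + 0.5) * width
    v = (0.5 - lat / np.pi) * height
    return u, v

# Example: a direction slightly above the forward axis on a 4K ERP frame
print(direction_to_equirect(np.array([0.0, 0.1, 1.0]), 3840, 1920))
```

The equirectangular layout oversamples regions near the poles, while cubic layouts introduce discontinuities at face boundaries, which is one reason the projection choice affects compression efficiency.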

A Patch Packing Method Using Guardband for Efficient 3DoF+ Video Coding (3DoF+ 비디오의 효율적인 부호화를 위한 보호대역을 사용한 패치 패킹 기법)

  • Kim, Hyun-Ho; Kim, Yong-Ju; Kim, Jae-Gon
    • Journal of Broadcast Engineering / v.25 no.2 / pp.185-191 / 2020
  • MPEG-I is actively working on the standardization of immersive video coding, which provides up to six degrees of freedom (6DoF) in viewpoint. In a virtual space of 3DoF+, defined as an extension of 360-degree video with motion parallax, looking at the scene from another viewpoint (another position in space) requires rendering an additional viewpoint using the multiple videos included in the 3DoF+ video. In the MPEG-I Visual workgroup, efficient coding methods for 3DoF+ video are being studied, and the Test Model for Immersive Video (TMIV) was recently released. This paper presents a patch packing method that packs patches into atlases efficiently to improve the coding efficiency of 3DoF+ video in TMIV. The proposed method improves the quality of reconstructed views and reduces coding artifacts by introducing guardbands between the patches in an atlas.
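
The guardband idea can be illustrated with a toy packer: each patch is placed into the atlas with a small margin around it so that block-based coding artifacts from one patch do not bleed into its neighbours when views are reconstructed. This is a minimal shelf-packing sketch under assumed patch and guardband parameters, not the packing algorithm implemented in TMIV.

```python
import numpy as np

def pack_patches(patches, atlas_w, atlas_h, guard=4):
    """Place rectangular patches into an atlas, leaving a `guard`-pixel
    margin around each one. Simple shelf packing; returns the atlas and
    the (x, y) offset of each patch inside it."""
    atlas = np.zeros((atlas_h, atlas_w, 3), dtype=np.uint8)
    offsets = []
    x = y = shelf_h = 0
    for p in patches:                       # p: H x W x 3 array
        h, w = p.shape[:2]
        gh, gw = h + 2 * guard, w + 2 * guard
        if x + gw > atlas_w:                # start a new shelf
            x, y = 0, y + shelf_h
            shelf_h = 0
        if y + gh > atlas_h:
            raise ValueError("atlas is full")
        atlas[y + guard:y + guard + h, x + guard:x + guard + w] = p
        offsets.append((x + guard, y + guard))
        x += gw
        shelf_h = max(shelf_h, gh)
    return atlas, offsets
```

In practice the guardband region could also be filled by replicating patch borders rather than left empty, which further limits the blending of neighbouring content during decoding.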

Stitching Method of Videos Recorded by Multiple Handheld Cameras (다중 사용자 촬영 영상의 영상 스티칭)

  • Billah, Meer Sadeq; Ahn, Heejune
    • Journal of Korea Society of Industrial Information Systems / v.22 no.3 / pp.27-38 / 2017
  • This paper presents a method for stitching videos recorded at a venue by a large number of individual users with their cellular phone cameras. In contrast to 360-degree camera solutions that use fixed rigs, this setting raises new challenges such as time synchronization, repeated transformation-matrix computation, and correction of camera sensor mismatch. In this paper, we address these problems by updating the transformation matrix with audio-based time synchronization, removing sensor mismatch with a color transfer method, and applying a global motion stabilization algorithm. Experimental results show that the proposed algorithm performs better than screen stitching in terms of computation speed and subjective image quality.
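
The per-frame alignment step common to such stitching pipelines can be sketched with OpenCV feature matching and homography estimation. The audio-based time synchronization, color transfer, and stabilization steps described in the paper are omitted; the function below is only an illustrative two-image example, not the paper's implementation.

```python
import cv2
import numpy as np

def stitch_pair(img_a, img_b):
    """Estimate a homography from img_b to img_a with ORB features and
    warp img_b onto img_a's image plane."""
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)[:200]
    src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = img_a.shape[:2]
    canvas = cv2.warpPerspective(img_b, H, (w * 2, h))
    canvas[0:h, 0:w] = img_a      # naive overlay; real stitching blends seams
    return canvas
```

For handheld, unsynchronized cameras the homography must be re-estimated over time, which is why the paper couples this step with time synchronization and stabilization.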

A Study on Immersion and Presence of VR Karaoke Room Implementations in Mobile HMD Environments (HMD 모바일 환경에서 가상현실 기반 노래방 구현물의 실재감과 몰입감 연구)

  • Kim, Ki-Hong; Seo, Beomjoo
    • Journal of Korea Game Society / v.17 no.6 / pp.19-28 / 2017
  • A variety of VR (virtual reality) contents have been developed using the latest VR technologies. Unlike the rapid advances in recent VR devices, however, the development of VR-based game contents that fully utilize such cutting-edge devices has been lackluster. Using the more accessible form of smartphone-based HMDs (head-mounted displays), we compare two popular VR presentation methods (a realistic 3D VR karaoke room and a 360-degree video karaoke room) and analyze users' immersion and sense of presence. We expect that our study can serve as a supporting guideline for future smartphone-based VR content development.

Decoder-adaptive Single-layer Tile Binding for Viewport-dependent 360-degree Video Tiled Streaming (사용자 시점 기반 360 도 영상 타일 스트리밍을 위한 복호기 적응적인 단일 계층 타일 바인딩)

  • Jang, Mi; Jeong, Jong-Beom; Lee, Soonbin; Ryu, Eun-Seok
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.06a / pp.839-842 / 2022
  • Immersive virtual reality requires high-quality 360-degree video streaming. However, because this demands high bandwidth and heavy computation, it is hard for ordinary virtual reality devices to handle. To mitigate this, 360-degree video coding and transmission techniques are being actively studied, with viewport-dependent tile streaming as a representative example. This paper compares the encoding and decoding performance of conventional CTU-based streaming, tile-based streaming, and tile-based streaming that uses decoder-adaptive single-layer tile binding. Experimental results confirm that the tile streaming method using single-layer tile binding achieves a large gain in decoding time at a similar bitrate compared with conventional tile streaming.

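The core of viewport-dependent tile streaming is deciding which tiles of the 360-degree frame overlap the user's current viewport so that only those are fetched or decoded at high quality. The sketch below illustrates that selection step only; the tile grid, field of view, and selection margin are assumptions, and it does not model the paper's decoder-adaptive single-layer tile binding.

```python
import numpy as np

def visible_tiles(yaw_deg, pitch_deg, fov_deg=(110, 90), grid=(6, 4)):
    """Return (col, row) indices of equirectangular tiles that overlap a
    rectangular viewport centred at (yaw, pitch). Non-selected tiles can be
    requested at low quality or skipped."""
    cols, rows = grid
    tile_w, tile_h = 360.0 / cols, 180.0 / rows
    half_h, half_v = fov_deg[0] / 2.0, fov_deg[1] / 2.0
    selected = []
    for r in range(rows):
        for c in range(cols):
            # tile centre in degrees (yaw in [-180, 180], pitch in [-90, 90])
            t_yaw = -180.0 + (c + 0.5) * tile_w
            t_pitch = 90.0 - (r + 0.5) * tile_h
            d_yaw = (t_yaw - yaw_deg + 180.0) % 360.0 - 180.0   # wrap-around
            if abs(d_yaw) <= half_h + tile_w / 2 and \
               abs(t_pitch - pitch_deg) <= half_v + tile_h / 2:
                selected.append((c, r))
    return selected

print(visible_tiles(yaw_deg=30, pitch_deg=0))
```

Because the selected tile set changes as the user turns, the decoder must handle a varying number of bound tiles per frame, which is the problem the single-layer tile binding approach targets.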

Performance Analysis on View Synthesis of 360 Video for Omnidirectional 6DoF

  • Kim, Hyun-Ho; Lee, Ye-Jin; Kim, Jae-Gon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2018.11a / pp.22-24 / 2018
  • The MPEG-I Visual group is actively working on enhancing immersive experiences with up to six degrees of freedom (6DoF). In a virtual space of omnidirectional 6DoF, defined as the case in which 6DoF is provided within a restricted area, looking at the scene from another viewpoint (another position in space) requires rendering additional viewpoints called virtual omnidirectional viewpoints. This paper presents a performance analysis of view synthesis, carried out as an exploration experiment (EE) in MPEG-I, from sets of 360-degree videos providing omnidirectional 6DoF in various ways with different distances, directions, and numbers of input views. In addition, we compare the subjective quality of views synthesized from one input view and from two input views.

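View synthesis of this kind reprojects a reference view into the virtual viewpoint using per-pixel depth. The sketch below shows the principle with a pinhole camera model and a naive forward splat for brevity; the MPEG-I experiments use omnidirectional views and reference synthesis software, so this is only an illustration of the underlying idea.

```python
import numpy as np

def warp_to_virtual_view(color, depth, K, R, t):
    """Forward-warp a reference view (color + per-pixel depth) to a virtual
    camera displaced by rotation R and translation t. No z-test or hole
    filling; real synthesizers blend several reference views."""
    h, w = depth.shape
    out = np.zeros_like(color)
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T  # 3 x N
    rays = np.linalg.inv(K) @ pix                     # back-project pixels
    pts = rays * depth.reshape(1, -1)                 # 3D points in reference frame
    pts_v = R @ pts + t.reshape(3, 1)                 # move into virtual frame
    proj = K @ pts_v
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)
    valid = (proj[2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    out[v[valid], u[valid]] = color.reshape(-1, 3)[valid]   # naive splat
    return out
```

Synthesis quality degrades as the virtual viewpoint moves farther from the references, which is why the paper varies the distance, direction, and number of input views.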

Timeline Synchronization of Multiple Videos Based on Waveform (소리 파형을 이용한 다수 동영상간 시간축 동기화 기법)

  • Kim, Shin; Yoon, Kyoungro
    • Journal of Broadcast Engineering / v.23 no.2 / pp.197-205 / 2018
  • Panoramic imaging is one of the most widely used technologies today. However, technical difficulties still exist in panoramic video production, and without a special device such as a 360-degree camera, making panoramic video is even harder. To produce a panoramic video, the timelines of multiple videos shot at multiple locations must be synchronized. Synchronizing with the cameras' internal clocks, however, can introduce errors due to differences in the internal hardware. To solve this problem, timeline synchronization between multiple videos using visual or auditory information has been studied. Methods based on visual information suffer in accuracy and processing time, and methods based on auditory information can fail to synchronize when they are sensitive to noise or when there is no melody. Therefore, in this paper, we propose a timeline synchronization method for multiple videos based on the audio waveform. It shows higher synchronization accuracy and better time efficiency than video-information-based synchronization methods.
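
The waveform-based approach amounts to finding the lag that maximizes the cross-correlation between the two audio tracks. A minimal SciPy sketch of that step is shown below; the paper's exact matching procedure may differ, and the function name is illustrative.

```python
import numpy as np
from scipy import signal

def estimate_delay(ref, other, sample_rate):
    """Return how many seconds `other` lags behind `ref`, estimated from the
    peak of their cross-correlation (computed with an FFT-based method)."""
    corr = signal.correlate(other, ref, mode="full", method="fft")
    lags = signal.correlation_lags(len(other), len(ref), mode="full")
    return lags[np.argmax(corr)] / sample_rate
```

The estimated delay can then be applied as a trim offset to place each video on a common timeline before stitching.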

Research on Utilizing Volumetric Studio for XR Content Production (XR 콘텐츠 제작을 위한 볼류메트릭 스튜디오 활용 연구)

  • Sukchang Lee; Won Ho Choi
    • The Journal of the Convergence on Culture Technology / v.9 no.5 / pp.849-857 / 2023
  • Volumetric studios are catalyzing the expansion of the XR content market, and there is a rising demand for in-depth research on volumetric capture technology. This research examines the methodology and outcomes of capturing dancers' movements as 3D video images. Furthermore, it assesses the infrastructure and operational workflow of a studio specializing in this domain to derive findings on the practical application of volumetric capture technology. Notably, the research highlights constraints associated with video image distortion and extended rendering times within the volumetric studio system.

Development of Cloud Photo-stitching Application for 360 degree video converting based HTML5 (360도 영상 변환을 위한 HTML5기반 클라우드 포토스티칭 어플리케이션 구현)

  • Yoo, Sunggeun; Jung, Seo-kyung; Park, Sangil
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2017.11a / pp.1-2 / 2017
  • Recently, applications such as word processors and spreadsheets, which used to be delivered as installed desktop software, have been rebuilt as cloud-based web applications running in the web browser, such as Google Docs, thanks to the development of technologies like RIA (Rich Internet Application) and Ajax (Asynchronous JavaScript and XML), and are now widely used. Cloud-based web applications capable of image and video processing that was difficult to implement with earlier web technologies, such as video format conversion or applying filters to photos and videos, are also emerging. Accordingly, this paper implements a front end and back end that enable photo-stitching, a technique essential to the computation-intensive and time-consuming process of converting 360-degree video, to run in the cloud, and uses WebSocket technology so that conversion results are delivered to the user in real time.

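As a sketch of the real-time feedback path described above, the server below pushes stitching progress to the browser over a WebSocket instead of having the page poll. It assumes Python's `websockets` package (version 10+ handler signature) purely for illustration; the paper's actual back-end stack, endpoint, and message format are not specified here.

```python
import asyncio
import json
import websockets

async def fake_stitch_job():
    """Stand-in for the real photo-stitching pipeline: yields progress 0..100."""
    for pct in range(0, 101, 10):
        await asyncio.sleep(1)          # pretend to stitch a chunk
        yield pct

async def handler(websocket):
    # Stream progress messages, then a final "done" message with a result URL.
    async for pct in fake_stitch_job():
        await websocket.send(json.dumps({"progress": pct}))
    await websocket.send(json.dumps({"status": "done", "url": "/result/360.mp4"}))

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()          # run until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```

On the browser side, an HTML5 page would open a WebSocket to this endpoint and update a progress bar from each message, which avoids repeated HTTP polling during the long 360-degree conversion.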