• Title/Summary/Keyword: 360video

Improve Compression Efficiency of 360 Degree VR Video by Correcting Perspective in Cubemap Projection (Cubemap Projection 360도 VR 비디오에서 시점 보정을 통한 압축 효율 향상 방법)

  • Yoon, Sung Jea;Park, Gwang Hoon
    • Journal of Broadcast Engineering / v.22 no.1 / pp.136-139 / 2017
  • Recently, many companies and consumers have shown great interest in VR (virtual reality), and many VR devices such as HMDs (head-mounted displays) and 360-degree VR cameras have been released on the market. Current 360-degree VR video is encoded with codecs originally made for conventional 2D video. Therefore, the compression efficiency is not optimized, because the encoder and decoder do not consider the characteristics of 360-degree VR video. In this paper, we propose a method to improve compression efficiency by using a reference frame that compensates for the distortions caused by the characteristics of 360-degree VR video. Applying the proposed method, we were able to increase compression efficiency by providing better prediction.
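
For orientation, the geometry that cubemap-based correction builds on relates a cube-face pixel to its viewing direction and its equirectangular (ERP) position. A minimal sketch, not the authors' method; the face layout, axis convention, and image sizes are assumptions made for this example:

```python
# Sketch: map a pixel on the front cube face to its viewing direction
# and then to equirectangular (ERP) pixel coordinates.
import numpy as np

def front_face_to_erp(u, v, erp_w, erp_h):
    """Map normalized front-face coordinates u, v in [-1, 1] to ERP pixels."""
    # Ray through the face pixel: the front face sits on the plane z = 1.
    d = np.array([u, v, 1.0])
    d /= np.linalg.norm(d)
    # Spherical angles: longitude in [-pi, pi], latitude in [-pi/2, pi/2].
    lon = np.arctan2(d[0], d[2])
    lat = np.arcsin(d[1])
    # ERP spreads longitude over the width and latitude over the height.
    px = (lon / (2 * np.pi) + 0.5) * (erp_w - 1)
    py = (lat / np.pi + 0.5) * (erp_h - 1)
    return px, py

# The face center looks straight ahead and lands at the ERP frame center.
print(front_face_to_erp(0.0, 0.0, 3840, 1920))   # -> (1919.5, 959.5)
```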

Study on Compositing Editing of 360˚ VR Actual Video and 3D Computer Graphic Video (360˚ VR 실사 영상과 3D Computer Graphic 영상 합성 편집에 관한 연구)

  • Lee, Lang-Goo;Chung, Jean-Hun
    • Journal of Digital Convergence / v.17 no.4 / pp.255-260 / 2019
  • This study concerns efficient compositing of 360° video and 3D graphics. First, the video filmed with a binocular integral-type 360° camera was stitched, and the location values of the camera and objects were extracted. The extracted location data were then moved into a 3D program to create 3D objects, and methods for natural compositing were researched. As a result, rendering factors and a rendering method were derived for natural compositing of 360° video and 3D graphics. First, the rendering factors were the 3D objects' location, material quality, lighting, and shadow. Second, as for the rendering method, the necessity of a live-action-video-based rendering method was found. Providing a method for natural compositing of 360° video and 3D graphics through this study's process and results is expected to be helpful for the research and production of 360° video and VR video content.

Implementing 3DoF+ 360 Video Compression System for Immersive Media (실감형 미디어를 위한 3DoF+ 360 비디오 압축 시스템 구현)

  • Jeong, Jong-Beom;Lee, Soonbin;Jang, Dongmin;Lee, Sangsoon;Ryu, Eun-Seok
    • Journal of Broadcast Engineering / v.24 no.5 / pp.743-754 / 2019
  • Systems for three degrees of freedom plus (3DoF+) and 6DoF require multi-view, high-resolution 360 video transmission to provide user-viewport-adaptive 360 video streaming. In this paper, we implement a 3DoF+ 360 video compression system that removes the redundancy between multi-view videos and merges the residuals into one video, to efficiently provide high-quality 360 video corresponding to a user's head movement. Implementations of the 3D-warping-based redundancy removal method between 3DoF+ 360 videos and of residual extraction and merging are explained in this paper. With the proposed system, a BD-rate reduction of up to 20.14% is shown compared to a traditional high-efficiency video coding (HEVC) based system.
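
The 3D warping the abstract refers to is, in its generic form, depth-based reprojection between views: pixels that warp onto a valid position in another view are redundant, so only residuals need to be merged and coded. A minimal sketch under assumed pinhole intrinsics, not the paper's implementation:

```python
# Sketch: warp one pixel from a source view to a target view using depth.
import numpy as np

def warp_point(px, py, depth, K, R, t):
    """Warp pixel (px, py) with metric depth from source to target view.

    K: shared 3x3 intrinsics (an assumption); R, t: source-to-target pose.
    """
    # Back-project to a 3D point in the source camera frame.
    ray = np.linalg.inv(K) @ np.array([px, py, 1.0])
    X = depth * ray
    # Transform into the target camera frame and project.
    Xt = R @ X + t
    uvw = K @ Xt
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

K = np.array([[1000.0, 0, 960], [0, 1000.0, 540], [0, 0, 1]])
R = np.eye(3)                      # identical orientation
t = np.array([0.1, 0.0, 0.0])      # 10 cm baseline (made-up value)
# Center pixel at 2 m depth shifts by K[0,0] * 0.1 / 2.0 = 50 px.
print(warp_point(960, 540, 2.0, K, R, t))   # -> (1010.0, 540.0)
```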

Real-Time Panoramic Video Streaming Technique with Multiple Virtual Cameras (다중 가상 카메라의 실시간 파노라마 비디오 스트리밍 기법)

  • Ok, Sooyol;Lee, Suk-Hwan
    • Journal of Korea Multimedia Society / v.24 no.4 / pp.538-549 / 2021
  • In this paper, we introduce a technique for real-time 360-degree panoramic video streaming with multiple virtual cameras. The proposed technique consists of generating 360-degree panoramic video data via ORB feature point detection, texture transformation, panoramic video data compression, and RTSP-based video streaming transmission. In particular, the generation of 360-degree panoramic video data and the texture transformation are accelerated with CUDA for complex processing such as camera calibration, stitching, blending, and encoding. Our experiment evaluated the frames per second (fps) of the transmitted 360-degree panoramic video. Experimental results verified that our technique achieves at least 30 fps at 4K output resolution, which indicates that it can both generate and transmit 360-degree panoramic video data in real time.
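
A minimal sketch of the ORB matching step such a stitching pipeline starts from, using OpenCV on the CPU (the paper offloads this kind of work to CUDA; the file names are placeholders, not the authors' data):

```python
# Sketch: detect ORB keypoints in two overlapping camera frames, match
# descriptors, and estimate the homography that stitching would use.
import cv2
import numpy as np

# Placeholder inputs; replace with real overlapping frames.
left = cv2.imread("cam_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("cam_right.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(left, None)
kp2, des2 = orb.detectAndCompute(right, None)

# Hamming distance suits ORB's binary descriptors; cross-check for robustness.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

src = np.float32([kp1[m.queryIdx].pt for m in matches[:100]]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches[:100]]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print("estimated homography:\n", H)
```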

Study of Capturing Real-Time 360 VR 3D Game Video for 360 VR E-Sports Broadcast (360 VR E-Sports 중계를 위한 실시간 360 VR 3D Stereo 게임 영상 획득에 관한 연구)

  • Kim, Hyun Wook;Lee, Jun Suk;Yang, Sung Hyun
    • Journal of Broadcast Engineering / v.23 no.6 / pp.876-885 / 2018
  • Although the e-sports broadcasting market based on VR (virtual reality) is growing these days, technology development for securing market competitiveness is quite inadequate in Korea. Global companies such as SLIVER and Facebook have already developed, and are trying to commercialize, 360 VR broadcasting technology that can broadcast e-sports as 4K 30 FPS VR video. However, 2D video is a poor fit for 360 VR video in that it is less immersive, can induce dizziness, and has low in-scene resolution. In this paper, we not only proposed and implemented a virtual camera technology that can capture in-game space as 4K 3D 360 video at 60 FPS for e-sports VR broadcasting, but also verified the feasibility of obtaining stereo 360 video up to 4K/60 FPS by conducting experiments after setting up the virtual camera in sample game-engine scenes and commercial games.
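
A common way to capture in-game space omnidirectionally, and a plausible reading of such a virtual-camera setup (an assumption, not the authors' engine code), is to render six 90-degree-FOV views along the cube axes each frame; for stereo, the rig is duplicated with an eye offset. A sketch of the six orientations:

```python
# Sketch: the six axis-aligned view rotations of a cubemap capture rig.
import numpy as np

# (face name, forward vector, up vector); each face uses a 90-degree FOV
# so the six renders tile the full sphere.
CUBE_FACES = [
    ("+x", ( 1, 0, 0), (0, 1, 0)),
    ("-x", (-1, 0, 0), (0, 1, 0)),
    ("+y", ( 0, 1, 0), (0, 0, -1)),
    ("-y", ( 0, -1, 0), (0, 0, 1)),
    ("+z", ( 0, 0, 1), (0, 1, 0)),
    ("-z", ( 0, 0, -1), (0, 1, 0)),
]

def look_at(forward, up):
    """Build a 3x3 view rotation whose rows are right, up, and forward."""
    f = np.array(forward, dtype=float)
    u = np.array(up, dtype=float)
    r = np.cross(u, f)
    r /= np.linalg.norm(r)
    u = np.cross(f, r)
    return np.stack([r, u, f])

for name, fwd, up in CUBE_FACES:
    print(name, "\n", look_at(fwd, up))
```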

Activated Viewport based Surveillance Event Detection in 360-degree Video (360도 영상 공간에서 활성 뷰포트 기반 이벤트 검출)

  • Shim, Yoo-jeong;Lee, Myeong-jin
    • Journal of Broadcast Engineering / v.25 no.5 / pp.770-775 / 2020
  • Since the 360-degree ERP (equirectangular projection) frame structure has location-dependent distortion, existing video surveillance algorithms cannot be applied to 360-degree video directly. In this paper, an activated-viewport-based event detection method is proposed for 360-degree video. After extracting activated viewports enclosing object candidates, objects are detected within those viewports. The objects are then tracked in 360-degree video space for region-based event detection. The proposed method is shown to improve the recall and the false negative rate by more than 30% compared to the conventional method without activated viewports.
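
A minimal sketch of extracting such a viewport (an illustrative assumption, not the paper's detector): render a rectilinear view around a candidate direction so that a conventional detector operates on undistorted pixels. The axis convention and nearest-neighbour sampling are simplifications:

```python
# Sketch: sample a rectilinear viewport out of an ERP frame.
import numpy as np

def viewport_from_erp(erp, lon0, lat0, fov, size):
    """Sample a size x size viewport centered at longitude/latitude (rad)."""
    h, w = erp.shape[:2]
    f = (size / 2) / np.tan(fov / 2)              # pinhole focal length (px)
    xs, ys = np.meshgrid(np.arange(size) - size / 2,
                         np.arange(size) - size / 2)
    dirs = np.stack([xs, ys, np.full(xs.shape, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    # Rotate the viewport rays toward the requested center direction.
    cp, sp = np.cos(lat0), np.sin(lat0)
    cl, sl = np.cos(lon0), np.sin(lon0)
    rot_x = np.array([[1, 0, 0], [0, cp, sp], [0, -sp, cp]])    # pitch
    rot_y = np.array([[cl, 0, sl], [0, 1, 0], [-sl, 0, cl]])    # yaw
    dirs = dirs @ (rot_y @ rot_x).T
    # Back to ERP pixel coordinates (same convention as the ERP frame).
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])
    lat = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0))
    px = ((lon / (2 * np.pi) + 0.5) * (w - 1)).astype(int)
    py = ((lat / np.pi + 0.5) * (h - 1)).astype(int)
    return erp[py, px]

erp = np.zeros((960, 1920, 3), dtype=np.uint8)    # stand-in ERP frame
vp = viewport_from_erp(erp, np.radians(30), np.radians(10),
                       np.radians(90), 256)
print(vp.shape)                                    # -> (256, 256, 3)
```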

Tile-Based 360 Degree Video Streaming System with User's Gaze Prediction (사용자 시선 예측을 통한 360 영상 타일 기반 스트리밍 시스템)

  • Lee, Soonbin;Jang, Dongmin;Jeong, Jong-Beom;Lee, Sangsoon;Ryu, Eun-Seok
    • Journal of Broadcast Engineering / v.24 no.6 / pp.1053-1063 / 2019
  • Recently, tile-based streaming, which transmits one 360 video as several tiles, has been actively studied in order to transmit 360 video more efficiently. In this paper, for the transmission of high-definition 360 video corresponding to the user's viewport in tile-based streaming scenarios, a system is proposed that assigns a quality to each tile by applying a saliency map generated by existing network models. Each tile is encoded independently using the motion-constrained tile set (MCTS) technique, and the user's viewport is rendered and tested on the Salient360! dataset; streaming 360 video with the proposed system results in a gain of up to 23% in the user's viewport compared to using conventional high-efficiency video coding (HEVC).
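
A minimal sketch of the quality-assignment idea (the grid size and QP range are made-up values, not the paper's configuration): average the saliency map over each tile and map salient tiles to a lower HEVC QP, after which each tile would be encoded independently as an MCTS:

```python
# Sketch: derive one HEVC QP per tile from a saliency map.
import numpy as np

def tile_qps(saliency, grid=(4, 8), qp_min=22, qp_max=37):
    """Return a grid of QPs, one per tile; low QP where saliency is high."""
    h, w = saliency.shape
    th, tw = h // grid[0], w // grid[1]
    qps = np.empty(grid, dtype=int)
    for i in range(grid[0]):
        for j in range(grid[1]):
            # Mean saliency of this tile, assumed normalized to [0, 1].
            s = saliency[i * th:(i + 1) * th, j * tw:(j + 1) * tw].mean()
            qps[i, j] = int(round(qp_max - s * (qp_max - qp_min)))
    return qps

sal = np.random.rand(960, 1920)   # stand-in for a saliency-network output
print(tile_qps(sal))
```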

Arrangement of narrative events and background in the contents of VR 360 video (VR 360 영상 콘텐츠에서의 서사적 사건 및 배경의 배치)

  • Lee, You-Na;Park, Jin-Wan
    • Journal of Digital Contents Society / v.19 no.9 / pp.1631-1639 / 2018
  • VR 360 video content requires new visual-language research in that, unlike traditional video content, the viewer inevitably sees only part of the frame at a time. In this study, we paid attention to the fact that the arrangement of events and background elements in the 360-degree extended background of VR 360 video content plays a major role in guiding the audience. Therefore, this study focuses on the arrangement of events and background elements from a narrative point of view and analyzes them across VR 360 video content cases.

Performance Analysis of Viewport-dependent Tiled Streaming on 16K Ultra High-quality 360-degree Video (16K 초고화질 360도 영상에서의 사용자 시점 기반 타일 스트리밍 성능 검증)

  • Jeong, Jong-Beom;Lee, Soonbin;Kim, Inae;Ryu, Eun-Seok
    • Journal of Internet Computing and Services / v.22 no.3 / pp.1-8 / 2021
  • Ultra-high-quality, ultra-high-resolution omnidirectional 360-degree video streaming is needed to provide immersive media through a head-mounted display (HMD) in virtual reality environments, which requires high bandwidth and computational complexity. One approach that avoids these problems is to apply viewport-dependent selective streaming using a tile-based segmentation method. This paper presents a performance analysis of viewport-dependent tiled streaming on 16K ultra-high-quality 360-degree videos and on the widely used 4K 360-degree videos. Experimental results showed a 42.47% Bjontegaard delta rate (BD-rate) saving for 16K ultra-high-quality 360-degree video tiled streaming compared to viewport-independent streaming, while 4K 360-degree video showed a 26.41% BD-rate saving. Therefore, this paper verifies that tiled streaming is more efficient for ultra-high-quality video.
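
For reference, BD-rate figures like those quoted above come from the standard Bjontegaard metric: fit PSNR against log-bitrate for both systems and integrate the difference over the overlapping PSNR range. A minimal sketch with made-up sample points:

```python
# Sketch: Bjontegaard delta rate (BD-rate) from four rate/PSNR points.
import numpy as np

def bd_rate(rates_a, psnr_a, rates_b, psnr_b):
    """Average bitrate change of B vs. A in percent (negative = B saves)."""
    la, lb = np.log(rates_a), np.log(rates_b)
    pa = np.polyfit(psnr_a, la, 3)   # log-rate as a cubic in PSNR
    pb = np.polyfit(psnr_b, lb, 3)
    # Integrate both fits over the overlapping PSNR interval.
    lo = max(min(psnr_a), min(psnr_b))
    hi = min(max(psnr_a), max(psnr_b))
    ia = np.polyval(np.polyint(pa), hi) - np.polyval(np.polyint(pa), lo)
    ib = np.polyval(np.polyint(pb), hi) - np.polyval(np.polyint(pb), lo)
    return (np.exp((ib - ia) / (hi - lo)) - 1) * 100

# Made-up rate (kbps) / PSNR (dB) points for four QPs per system.
print(bd_rate([800, 1600, 3200, 6400], [34.0, 36.5, 38.8, 40.6],
              [600, 1200, 2500, 5200], [34.1, 36.6, 38.9, 40.7]))
```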

MPEG Omnidirectional Media Format (OMAF) for 360 Media (360 미디어를 위한 MPEG Omnidirectional Media Format (OMAF) 표준 기술)

  • Oh, Sejin
    • Journal of Broadcast Engineering / v.22 no.5 / pp.600-607 / 2017
  • Virtual reality (VR) has lately gained significant attention, primarily driven by the recent market availability of consumer devices such as mobile-phone-based head-mounted displays (HMDs). Apart from classic gaming use cases, the delivery of 360° video is considered another major application and is expected to be ubiquitous in the near future. However, the delivery and decoding of high-resolution 360° video in desirable quality is a challenging task due to network limitations and constraints on available end-device decoding and processing. In this paper, we focus on aspects of 360° video streaming and provide an overview and discussion of possible solutions as well as considerations for future VR video streaming applications. The paper mainly focuses on the status of the standardization activities for the Omnidirectional MediA Format (OMAF), which supports interoperable 360° video streaming services. More concretely, MPEG's ongoing work on OMAF aims at the harmonization of VR video platforms and applications. The paper also discusses the integration of OMAF content with MPEG Dynamic Adaptive Streaming over HTTP (MPEG-DASH), which is considered for 360° video streaming services, in the context of the general OMAF service architecture.
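
To make the DASH integration concrete, a minimal, illustrative MPD for OMAF content is sketched below. The OMAF-specific scheme URN and attribute are recalled from the first OMAF edition and should be verified against ISO/IEC 23090-2; the codec string and bitrate are placeholder values:

```python
# Sketch: OMAF signals the projection format on a DASH adaptation set via
# an EssentialProperty descriptor (URN and attribute cited from memory).
MPD = """\
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011"
     xmlns:omaf="urn:mpeg:mpegI:omaf:2017" type="static">
  <Period>
    <AdaptationSet mimeType="video/mp4" codecs="hvc1.2.4.L153">
      <!-- projection_type 0 = equirectangular projection -->
      <EssentialProperty schemeIdUri="urn:mpeg:mpegI:omaf:2017:pf"
                         omaf:projection_type="0"/>
      <Representation id="erp-8k" bandwidth="40000000"
                      width="7680" height="3840"/>
    </AdaptationSet>
  </Period>
</MPD>
"""
print(MPD)
```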