• Title/Summary/Keyword: 360 Video Transmission


Implementing Renderer for Viewport Dependent 360 Video (사용자 시점 기반 360 영상을 위한 렌더러 구현)

  • Jang, Dongmin;Son, Jang-Woo;Jeong, JongBeom;Ryu, Eun-Seok
    • Journal of Broadcast Engineering / v.23 no.6 / pp.747-759 / 2018
  • In this paper, we implement viewport-dependent tile partitioning for high-quality 360 video transmission, together with a rendering method that presents the result on an HMD (Head Mounted Display) screen for 360 video quality evaluation. As a method for high-quality video transmission based on the user's viewport, this paper introduces the MCTS (Motion Constrained Tile Sets) technique for solving the motion reference problem, the EIS (Extraction Information Sets) SEI message carrying pre-configured tile information, and an extractor that extracts tiles. In addition, it explains the viewport-based tile extraction method and the implementation details of rendering on an HMD. With the proposed implementation, which transmits only the video in the user's viewport area, higher-quality video can be displayed with lower bandwidth while avoiding unnecessary image transmission.
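
As an editorial illustration of the viewport-dependent idea described above, the sketch below selects which tiles of an equirectangular frame overlap a given viewport. The tile grid size, field of view, and function name are assumptions for illustration only, not the authors' implementation.

```python
import math

def visible_tiles(yaw_deg, pitch_deg, h_fov_deg=110.0, v_fov_deg=90.0,
                  cols=6, rows=4):
    """Return (row, col) indices of ERP tiles overlapping the viewport.

    Assumes an equirectangular frame split into a cols x rows tile grid.
    Yaw in [-180, 180), pitch in [-90, 90]; the overlap test is a coarse
    bounding-box check in longitude/latitude, not an exact spherical one.
    """
    tiles = set()
    half_h, half_v = h_fov_deg / 2.0, v_fov_deg / 2.0
    for r in range(rows):
        # Latitude range covered by this tile row (top row = +90 deg).
        lat_top = 90.0 - r * (180.0 / rows)
        lat_bot = lat_top - (180.0 / rows)
        if lat_bot > pitch_deg + half_v or lat_top < pitch_deg - half_v:
            continue
        for c in range(cols):
            lon_left = -180.0 + c * (360.0 / cols)
            lon_right = lon_left + (360.0 / cols)
            # Angular distance of the tile center from the viewport center,
            # wrapped to [-180, 180).
            center = (lon_left + lon_right) / 2.0
            diff = (center - yaw_deg + 180.0) % 360.0 - 180.0
            if abs(diff) <= half_h + (360.0 / cols) / 2.0:
                tiles.add((r, c))
    return sorted(tiles)

# Example: user looking slightly right and up.
print(visible_tiles(yaw_deg=30.0, pitch_deg=15.0))
```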

Real-Time Panoramic Video Streaming Technique with Multiple Virtual Cameras (다중 가상 카메라의 실시간 파노라마 비디오 스트리밍 기법)

  • Ok, Sooyol;Lee, Suk-Hwan
    • Journal of Korea Multimedia Society / v.24 no.4 / pp.538-549 / 2021
  • In this paper, we introduce a technique for real-time 360-degree panoramic video streaming with multiple virtual cameras. The proposed technique consists of generating 360-degree panoramic video data using ORB feature point detection, texture transformation, panoramic video compression, and RTSP-based streaming transmission. In particular, the generation of the 360-degree panoramic video data and the texture transformation are accelerated with CUDA for complex processing such as camera calibration, stitching, blending, and encoding. Our experiments evaluated the frame rate (fps) of the transmitted 360-degree panoramic video. The results verify that the technique sustains at least 30 fps at 4K output resolution, which indicates that it can both generate and transmit 360-degree panoramic video data in real time.
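
A minimal CPU-only sketch of the ORB-based alignment step mentioned in the abstract, using OpenCV. The CUDA acceleration, blending, encoding, and RTSP transmission of the paper are omitted, and the file names are placeholders.

```python
import cv2
import numpy as np

def stitch_pair(left_path, right_path):
    """Align two overlapping frames with ORB features and a homography.

    A simplified stand-in for the paper's pipeline: no CUDA, no blending,
    no RTSP streaming; it only demonstrates feature-based registration.
    """
    left = cv2.imread(left_path)
    right = cv2.imread(right_path)

    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(left, None)
    kp2, des2 = orb.detectAndCompute(right, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:200]

    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp the right frame into the left frame's coordinates and paste.
    h, w = left.shape[:2]
    canvas = cv2.warpPerspective(right, H, (w * 2, h))
    canvas[0:h, 0:w] = left
    return canvas

# Example (placeholder file names):
# cv2.imwrite("pano.png", stitch_pair("cam0.png", "cam1.png"))
```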

Real-Time Compressed Video Acquisition System for Stereo 360 VR (Stereo 360 VR을 위한 실시간 압축 영상 획득 시스템)

  • Choi, Minsu;Paik, Joonki
    • Journal of Broadcast Engineering / v.24 no.6 / pp.965-973 / 2019
  • In this paper, a real-time stereo 4K@60fps 360 VR video capture system is designed, consisting of video stream capture, video encoding, and stitching modules. The system captures stereo 4K@60fps 360 VR video by stitching six 2K@60fps streams captured in real time from six cameras over an HDMI interface. In the video capture phase, video is captured from each camera in real time using multiple threads. In the video encoding phase, raw-frame memory transfer and parallel encoding are used to reduce the resource usage of data transmission between the video capture and video stitching modules. In the video stitching phase, real-time stitching is secured by a stitching calibration preprocessing step.
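
A simplified sketch of the per-camera threaded capture described above, assuming generic OpenCV video capture devices; the HDMI capture hardware, encoding, and stitching modules of the actual system are not reproduced here.

```python
import threading
import cv2

def capture_loop(device_index, frames, lock, stop_event):
    """Continuously grab frames from one camera into a shared slot."""
    cap = cv2.VideoCapture(device_index)
    while not stop_event.is_set():
        ok, frame = cap.read()
        if ok:
            with lock:
                frames[device_index] = frame  # keep only the latest frame
    cap.release()

def start_capture(num_cameras=6):
    """Spawn one capture thread per camera, as in the capture phase."""
    frames = {}
    lock = threading.Lock()
    stop_event = threading.Event()
    threads = [
        threading.Thread(target=capture_loop,
                         args=(i, frames, lock, stop_event), daemon=True)
        for i in range(num_cameras)
    ]
    for t in threads:
        t.start()
    return frames, lock, stop_event
```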

A Study on Projection Conversion for Efficient 3DoF+ 360-Degree Video Streaming

  • Jeong, Jong-Beom;Lee, Soonbin;Jang, Dongmin;Kim, Sungbin;Lee, Sangsoon;Ryu, Eun-Seok
    • Journal of Broadcast Engineering / v.24 no.7 / pp.1209-1220 / 2019
  • The demand for virtual reality (VR) is rapidly increasing. Providing an immersive experience requires heavy computation and a large amount of data to transmit. For example, a 360-degree video (360 video) with at least 4K resolution is needed to offer an immersive experience to users. Moreover, the MPEG-I group defined three degrees of freedom plus (3DoF+), which requires the simultaneous transmission of multiview 360 videos. This can be a burden for a VR streaming system. Accordingly, this work introduces a bitrate-saving method using projection conversion, along with experimental results for streaming 3DoF+ 360 video. The results show that projection conversion of 360 video with 360Lib yields a Bjontegaard delta bitrate gain of as much as 11.4%.
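
For readers unfamiliar with the metric reported above, the sketch below computes a Bjontegaard delta bitrate from four (bitrate, PSNR) rate points per configuration. It is a generic textbook-style implementation with made-up example numbers, not the authors' measurement code.

```python
import numpy as np

def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test):
    """Bjontegaard delta bitrate (%) of the test curve vs. the reference.

    Fits cubic polynomials to (PSNR, log bitrate) and integrates the
    horizontal gap over the overlapping PSNR range. Negative = bitrate saving.
    """
    log_ref, log_test = np.log(rates_ref), np.log(rates_test)
    p_ref = np.polyfit(psnr_ref, log_ref, 3)
    p_test = np.polyfit(psnr_test, log_test, 3)

    lo = max(min(psnr_ref), min(psnr_test))
    hi = min(max(psnr_ref), max(psnr_test))

    int_ref = np.polyint(p_ref)
    int_test = np.polyint(p_test)
    avg_ref = (np.polyval(int_ref, hi) - np.polyval(int_ref, lo)) / (hi - lo)
    avg_test = (np.polyval(int_test, hi) - np.polyval(int_test, lo)) / (hi - lo)

    return (np.exp(avg_test - avg_ref) - 1.0) * 100.0

# Example with made-up rate points (kbps, dB):
print(bd_rate([1000, 2000, 4000, 8000], [34.0, 36.5, 38.8, 40.6],
              [900, 1800, 3600, 7200], [34.1, 36.6, 38.9, 40.7]))
```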

Implementing 3DoF+ 360 Video Compression System for Immersive Media (실감형 미디어를 위한 3DoF+ 360 비디오 압축 시스템 구현)

  • Jeong, Jong-Beom;Lee, Soonbin;Jang, Dongmin;Lee, Sangsoon;Ryu, Eun-Seok
    • Journal of Broadcast Engineering / v.24 no.5 / pp.743-754 / 2019
  • Systems for three degrees of freedom plus (3DoF+) and 6DoF require multi-view, high-resolution 360 video transmission to provide user-viewport-adaptive 360 video streaming. In this paper, we implement a 3DoF+ 360 video compression system that removes the redundancy between multi-view videos and merges the residuals into one video, in order to efficiently provide high-quality 360 video corresponding to the user's head movement. The implementation of a 3D-warping-based redundancy removal method between 3DoF+ 360 videos, as well as the residual extraction and merging, is explained in this paper. With the proposed system, a BD-rate reduction of up to 20.14% is shown compared to a traditional high-efficiency video coding (HEVC) based system.
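
A schematic sketch of the residual-extraction idea described above: given a reference view already warped to an additional view's position (the 3D warping itself is not shown), pixels explained by the reference are dropped and only the remainder is kept for merging. Thresholds, shapes, and function names are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def extract_residual(warped_ref, additional_view, threshold=8):
    """Keep only pixels of the additional view not explained by the warped reference.

    warped_ref and additional_view are HxWx3 uint8 frames in the same view
    position; pixels whose absolute difference stays below `threshold` are
    treated as redundant and zeroed out, leaving a sparse residual frame.
    """
    diff = np.abs(additional_view.astype(np.int16) - warped_ref.astype(np.int16))
    redundant = diff.max(axis=2) < threshold
    residual = additional_view.copy()
    residual[redundant] = 0
    return residual, ~redundant  # residual frame and occupancy mask

def merge_residuals(residuals):
    """Tile several residual frames side by side into one frame to encode."""
    return np.concatenate(residuals, axis=1)
```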

Tile-Based 360 Degree Video Streaming System with User's Gaze Prediction (사용자 시선 예측을 통한 360 영상 타일 기반 스트리밍 시스템)

  • Lee, Soonbin;Jang, Dongmin;Jeong, Jong-Beom;Lee, Sangsoon;Ryu, Eun-Seok
    • Journal of Broadcast Engineering / v.24 no.6 / pp.1053-1063 / 2019
  • Recently, tile-based streaming, which transmits one 360 video as several tiles, has been actively studied in order to transmit 360 video more efficiently. In this paper, for the transmission of high-definition 360 video corresponding to the user's viewport in tile-based streaming scenarios, a system is proposed that assigns a quality level to each tile by applying a saliency map generated by existing network models. Using the Motion-Constrained Tile Set (MCTS) technique to encode each tile independently, the user's viewport was rendered and tested on the Salient360! dataset; streaming 360 video with the proposed system yields a gain of up to 23% in the user's viewport compared to using conventional high-efficiency video coding (HEVC).
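
A minimal sketch of the tile-quality assignment idea above: a saliency map is averaged per tile and mapped to an HEVC-style QP, so salient tiles get a lower QP (higher quality). The grid size, QP range, and linear mapping are assumptions; the actual MCTS encoding is performed by an external encoder.

```python
import numpy as np

def tile_qp_offsets(saliency, rows=4, cols=6, qp_base=32, qp_range=10):
    """Map a per-pixel saliency map (HxW, values in [0, 1]) to per-tile QPs.

    Each tile's mean saliency is scaled linearly: the most salient tile gets
    qp_base - qp_range/2, the least salient gets qp_base + qp_range/2.
    """
    h, w = saliency.shape
    means = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            tile = saliency[r * h // rows:(r + 1) * h // rows,
                            c * w // cols:(c + 1) * w // cols]
            means[r, c] = tile.mean()
    norm = (means - means.min()) / (np.ptp(means) + 1e-9)
    return np.round(qp_base + qp_range * (0.5 - norm)).astype(int)

# Example with a random saliency map for a 1920x960 ERP frame:
print(tile_qp_offsets(np.random.rand(960, 1920)))
```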

Efficient Transmission Scheme with Viewport Prediction of 360VR Content using Sound Location Information (360VR 콘텐츠의 음원위치정보를 활용한 시점예측 전송기법)

  • Jeong, Eunyoung;Kim, Dong Ho
    • Journal of Broadcast Engineering / v.24 no.6 / pp.1002-1012 / 2019
  • 360VR content requires low latency, such as an immediate response to viewers' viewport changes, together with high-quality video delivery. It is therefore necessary to consider efficient transmission that guarantees the QoE (Quality of Experience) of 360VR content within limited bandwidth. Several studies have been introduced to reduce overall bandwidth consumption by predicting a user's viewport and allocating different bitrates to the area corresponding to the viewport. In this paper, we propose a novel viewport prediction scheme that uses the sound source location information of 360VR content as auditory cues along with visual cues. We also propose an efficient transmission algorithm that allocates bitrates appropriately based on the improved viewport prediction. The proposed scheme improves the accuracy of viewport prediction and delivers high-quality video to the tiles corresponding to the user's viewpoint within the limited bandwidth.
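
An illustrative sketch of combining head-motion extrapolation with a sound-source direction, in the spirit of the prediction scheme above; the constant-velocity model, blending weight, and function names are assumptions, not the paper's algorithm.

```python
def wrap180(deg):
    """Wrap an angle to [-180, 180)."""
    return (deg + 180.0) % 360.0 - 180.0

def predict_viewport(yaw_now, yaw_prev, sound_yaw, dt=0.1, horizon=1.0, alpha=0.3):
    """Predict the yaw of the user's viewport `horizon` seconds ahead.

    Motion term: constant-velocity extrapolation from the last two samples.
    Audio term: pull the prediction toward the active sound source direction
    with weight `alpha` (0 = ignore audio, 1 = audio only).
    """
    velocity = wrap180(yaw_now - yaw_prev) / dt          # deg/s
    motion_pred = yaw_now + velocity * horizon
    audio_pull = wrap180(sound_yaw - motion_pred)
    return wrap180(motion_pred + alpha * audio_pull)

# Example: user turning right at ~50 deg/s, loud source at yaw 90 deg.
print(predict_viewport(yaw_now=20.0, yaw_prev=15.0, sound_yaw=90.0))
```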

Fast Algorithm for 360-degree Videos Based on the Prediction of CU Depth Range and Fast Mode Decision

  • Zhang, Mengmeng;Zhang, Jing;Liu, Zhi;Mao, Fuqi;Yue, Wen
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.6 / pp.3165-3181 / 2019
  • Spherical videos, also called 360-degree videos, have become increasingly popular due to the rapid development of virtual reality technology. However, the large amount of data in such videos is a huge challenge for existing transmission systems. To use the existing encoding framework, the video must be converted into a 2D image plane using a specific projection format, e.g. the equirectangular projection (ERP) format. The existing high-efficiency video coding standard (HEVC) can effectively compress video content, but its enormous computational complexity makes the time spent compressing high-frame-rate, high-resolution 360-degree videos disproportionate to the benefits of compression. Focusing on the ERP format characteristics of 360-degree videos, this work develops a fast decision algorithm for predicting the coding unit depth interval and an adaptive mode decision for intra prediction. The algorithm makes full use of the characteristics of the ERP format by handling polar and equatorial areas separately. It sets different reference blocks and determination conditions according to the degree of stretching, which reduces the coding time while preserving quality. Compared with the original reference software HM-16.16, the proposed algorithm reduces time consumption by 39.3% in the all-intra configuration, while the BD-rate increases by only 0.84%.
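
A toy sketch of the latitude-dependent handling described above: ERP rows near the poles are heavily stretched and tend to be smooth, so a shallower CU depth range can be searched there than near the equator. The band thresholds and depth ranges are illustrative assumptions, not the paper's exact conditions.

```python
def cu_depth_range(ctu_row, num_ctu_rows):
    """Pick a CU depth search interval for a CTU row of an ERP frame.

    Rows are indexed from the top (north pole). Near the poles larger CUs
    (small depths) usually suffice; near the equator the full HEVC depth
    range is searched.
    """
    # Normalized latitude in [0, 1]: 0 at a pole, 1 at the equator.
    lat = 1.0 - abs((ctu_row + 0.5) / num_ctu_rows - 0.5) * 2.0
    if lat < 0.25:        # polar band
        return (0, 1)     # only 64x64 and 32x32 CUs
    elif lat < 0.5:       # mid-latitude band
        return (0, 2)
    else:                 # equatorial band
        return (0, 3)     # full depth range

# Example for a 4K ERP frame with 64x64 CTUs (2160 / 64 ~ 34 rows):
print([cu_depth_range(r, 34) for r in (0, 8, 17)])
```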

3DoF+ 360 Video Projection Conversion for Saving Transmission Bitrates (3DoF+ 360 비디오 전송 비트레이트 절감을 위한 프로젝션 변경)

  • Jeong, JongBeom;Jang, Dongmin;Kim, Ju-Hyeong;Lee, Soonbin;Ryu, Eun-Seok
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2018.11a / pp.170-173 / 2018
  • Recently, as demand for virtual reality systems supporting 360 video has grown, various methods have been proposed for the user's convenience. The Moving Picture Experts Group (MPEG) is standardizing 3DoF+ and 6DoF, which go beyond 3DoF and its limited user experience, and much related research is actively in progress. A 3DoF+ system, which provides limited degrees of freedom following the head movement of a seated user, requires the transmission of multiple high-resolution 360 videos and thus places a considerable burden on network bandwidth. This paper proposes a bitrate reduction method for efficient use of bandwidth when transmitting 3DoF+ 360 video. To this end, it presents a method that converts the projection of the 360 video to reduce its resolution while minimizing information loss, and reports the results. The 360Lib library was used for the projection conversion, and the HEVC Test Model (HM) was used to measure encoding and decoding efficiency. The final implemented system converts a 360 video to the optimal projection, encodes and decodes it, and then converts it back to a 360 video.
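
As a rough illustration of what a projection conversion involves (the paper uses the 360Lib reference software, not this code), the sketch below samples one cubemap face from an equirectangular frame by mapping each face pixel to a direction on the sphere and back to ERP coordinates. The face size and nearest-neighbor sampling are simplifying assumptions.

```python
import numpy as np

def erp_to_cubeface_front(erp, face_size=512):
    """Sample the front (+Z) cubemap face from an equirectangular image.

    erp is an HxWx3 array. Each face pixel is mapped to a unit direction,
    converted to (longitude, latitude), and looked up in the ERP image with
    nearest-neighbor sampling. Other faces only differ by an axis permutation.
    """
    h, w = erp.shape[:2]
    # Face pixel grid in [-1, 1] x [-1, 1], with the +Z face at z = 1.
    u = (np.arange(face_size) + 0.5) / face_size * 2.0 - 1.0
    xx, yy = np.meshgrid(u, -u)                 # image y grows downward
    zz = np.ones_like(xx)
    norm = np.sqrt(xx**2 + yy**2 + zz**2)
    lon = np.arctan2(xx / norm, zz / norm)      # [-pi, pi]
    lat = np.arcsin(yy / norm)                  # [-pi/2, pi/2]
    # ERP lookup: longitude -> column, latitude -> row.
    col = ((lon / (2 * np.pi) + 0.5) * w).astype(int) % w
    row = np.clip(((0.5 - lat / np.pi) * h).astype(int), 0, h - 1)
    return erp[row, col]
```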

A Development of QUIC-DASH-based 360VR Video Transmission System for Bandwidth Reduction (대역폭 감소를 위한 QUIC-DASH 기반 360VR 영상 전송 시스템 개발)

  • Song, Minjeong;Yoo, Sung-geun;Park, Sang-il
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2018.11a / pp.18-21 / 2018
  • Recently, immersive media that gives viewers a realistic sense of presence has been advancing. Among immersive media, the most accessible form is VR, which is currently being applied to a variety of services. However, because 360VR requires a high bitrate and high resolution, it faces various problems such as bandwidth cost and uncertainty in guaranteeing the viewer's QoE. To mitigate these problems, HTTP-based adaptive streaming techniques have been developed, and MPEG-DASH was adopted as the standard for this technology. MPEG-DASH uses a TCP-based transport protocol, but TCP causes HTTP bottlenecks on the web, where large amounts of data are transmitted, and has emerged as one cause of reduced bandwidth efficiency. To solve this problem, this paper devises a system that applies the UDP-based QUIC protocol to MPEG-DASH, which we call QUIC-DASH. In an experiment transmitting high-resolution 360VR video, a comparative analysis of the QUIC-DASH system and the existing MPEG-DASH system confirmed a reduction in bandwidth.
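
The bandwidth measurement discussed above ultimately feeds a DASH-style rate-adaptation decision. The sketch below shows a transport-agnostic version of that decision; the paper's actual contribution, replacing TCP with QUIC underneath HTTP, is not reproduced here, and the representation ladder and safety margin are placeholder values.

```python
def pick_representation(throughput_kbps, representations, safety=0.8):
    """Pick the highest DASH representation that fits the measured throughput.

    representations is a list of available bitrates in kbps (one per quality
    level in the MPD); `safety` leaves headroom for throughput fluctuation.
    The underlying transport (TCP or, as in the paper, QUIC) is handled by
    the HTTP client and does not change this decision logic.
    """
    budget = throughput_kbps * safety
    candidates = [r for r in sorted(representations) if r <= budget]
    return candidates[-1] if candidates else min(representations)

# Example: measured 18 Mbps, ladder of 360VR bitrates in kbps.
ladder = [2000, 5000, 10000, 20000, 40000]
print(pick_representation(18000, ladder))   # -> 10000
```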
