• Title/Summary/Keyword: 360 video (360 영상)


A Matching Method of Recommendations Advertisements by Extracting Immersive 360-degree Video Object (실감형 360도 영상저작물 객체 추출을 통한 추천광고 매칭방법)

  • Jang, Seyoung;Park, Byeongchan;Kim, Youngmo;Yoo, Injae;Lee, Jeacheng;Kim, Seok-Yoon
    • Proceedings of the Korean Society of Computer Information Conference / 2020.01a / pp.231-233 / 2020
  • As video is increasingly shot and delivered in 360-degree form, a method is needed for inserting and exposing advertisements in 360-degree video works in a way that is appropriate and effective, unlike the methods used for conventional video. This paper therefore proposes a method for matching recommended advertisements by extracting objects from immersive 360-degree video works. The method matches advertisements to a 360-degree video work, retrieves advertisements related to the extracted objects, and automatically inserts and exposes them in the corresponding frames. Using this method, playback can either shift the advertisement's insertion position so that it appears within the user's current viewport, or move the user's current viewpoint to the coordinates where the advertisement was inserted. (A brief illustrative sketch of this viewport/coordinate handling follows this entry.)

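The viewpoint handling described in the abstract above amounts to converting between a viewing direction (yaw/pitch) and equirectangular pixel coordinates, so that an ad can either be moved into the current viewport or the viewport recentred on the ad. The sketch below is a minimal illustration of that conversion; the frame size, field-of-view values, and function names are assumptions for illustration, not the authors' implementation.

```python
def yaw_pitch_to_pixel(yaw_deg, pitch_deg, width, height):
    """Map a viewing direction (yaw in [-180, 180], pitch in [-90, 90])
    to pixel coordinates in an equirectangular frame."""
    x = (yaw_deg + 180.0) / 360.0 * width
    y = (90.0 - pitch_deg) / 180.0 * height
    return int(x) % width, min(int(y), height - 1)

def ad_visible_in_viewport(ad_yaw, ad_pitch, view_yaw, view_pitch,
                           h_fov=90.0, v_fov=60.0):
    """Return True if the ad insertion point lies inside the user's
    current viewport (assumed rectangular FoV, illustrative values)."""
    d_yaw = (ad_yaw - view_yaw + 180.0) % 360.0 - 180.0   # wrap-around yaw difference
    d_pitch = ad_pitch - view_pitch
    return abs(d_yaw) <= h_fov / 2 and abs(d_pitch) <= v_fov / 2

# Example: either move the ad into the viewport or recentre playback on the ad.
frame_w, frame_h = 3840, 1920
view_yaw, view_pitch = 30.0, 0.0          # user's current viewing direction
ad_yaw, ad_pitch = 150.0, 10.0            # coordinates where the ad was inserted

if not ad_visible_in_viewport(ad_yaw, ad_pitch, view_yaw, view_pitch):
    # Option 1: shift the ad insertion point into the current viewport.
    ad_yaw, ad_pitch = view_yaw, view_pitch
    # Option 2 (alternative): recentre the user's viewpoint on the ad instead:
    # view_yaw, view_pitch = ad_yaw, ad_pitch

print(yaw_pitch_to_pixel(ad_yaw, ad_pitch, frame_w, frame_h))
```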

Tile-Based 360 Degree Video Streaming System with User's gaze Prediction (사용자 시선 예측을 통한 360 영상 타일 기반 스트리밍 시스템)

  • Lee, Soonbin;Jang, Dongmin;Jeong, Jong-Beom;Lee, Sangsoon;Ryu, Eun-Seok
    • Journal of Broadcast Engineering / v.24 no.6 / pp.1053-1063 / 2019
  • Tile-based streaming, which transmits a single 360 video as several tiles, is being actively studied as a way to deliver 360 video more efficiently. For tile-based streaming of high-definition 360 video matched to the user's viewport, this paper proposes a system that assigns a quality level to each tile according to a saliency map generated by existing network models. Each tile is encoded independently using the Motion-Constrained Tile Set (MCTS) technique, and the user's viewport is rendered and evaluated on the Salient360! dataset; streaming 360 video with the proposed system yields a gain of up to 23% in the user's viewport compared with conventional high-efficiency video coding (HEVC). (A brief illustrative sketch of the quality-assignment step follows this entry.)
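As a rough illustration of the quality-assignment step described above, the sketch below averages a saliency map over each tile and maps high-saliency tiles to a low quantization parameter (QP), i.e. higher quality. The tile grid size, the QP range, and the random saliency map standing in for a network prediction are assumptions for illustration, not the authors' configuration.

```python
import numpy as np

def assign_tile_qp(saliency, rows=4, cols=6, qp_min=22, qp_max=37):
    """Average a per-pixel saliency map over a rows x cols tile grid and
    map high-saliency tiles to low QP (high quality)."""
    h, w = saliency.shape
    qp_map = np.zeros((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            tile = saliency[r * h // rows:(r + 1) * h // rows,
                            c * w // cols:(c + 1) * w // cols]
            score = float(tile.mean())          # 0 (ignored) .. 1 (highly salient)
            qp_map[r, c] = round(qp_max - score * (qp_max - qp_min))
    return qp_map

# Example with a random saliency map standing in for a saliency-network output.
saliency = np.random.rand(960, 1920)
print(assign_tile_qp(saliency))
```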

Arrangement of narrative events and background in the contents of VR 360 video (VR 360 영상 콘텐츠에서의 서사적 사건 및 배경의 배치)

  • Lee, You-Na;Park, Jin-Wan
    • Journal of Digital Contents Society / v.19 no.9 / pp.1631-1639 / 2018
  • VR 360 video content calls for new research on visual language because, unlike traditional video content, the viewer inevitably sees only part of the scene at any moment. This study focuses on the observation that the arrangement of events and background elements across the 360-degree extended background plays a major role in guiding the audience. From a narrative point of view, it therefore examines how events and background elements are arranged and analyzes these aspects in cases of VR 360 video content.

A study on lighting angle for improvement of 360 degree video quality in metaverse (메타버스에서 360° 영상 품질향상을 위한 조명기 투사각연구)

  • Kim, Joon Ho;An, Kyong Sok;Choi, Seong Jhin
    • The Journal of the Convergence on Culture Technology / v.8 no.1 / pp.499-505 / 2022
  • Recently, the metaverse has been receiving a great deal of attention. The metaverse is a virtual space in which various events can be held, and 360-degree video, a format well suited to metaverse spaces, is attracting particular attention. A 360-degree video is created by stitching together images captured in all directions with multiple cameras or lenses. When shooting 360-degree video, everything around the camera appears in the frame, including the crew and equipment that would normally stand behind it, so everything except the subject must be hidden. This shooting method raises several problems, the biggest of which is lighting: unlike conventional shooting, it is very difficult to install a fixture behind the camera that focuses on the subject. This study experimentally searches for the optimal projection angle of indoor lighting for 360-degree video and proposes a method for recording 360-degree video without installing additional lighting. Based on these results, we expect future experiments with a wider range of lighting angles, and we expect the findings to be useful when 360-degree video is employed in metaverse spaces.

An Efficient Algorithm for Mapping 360° Circular Images to Planar Images (360° 원형영상을 평면영상에 매핑하기 위한 효율적인 알고리즘)

  • Lee, Young-Ji;Lee, Seung-Ho
    • Journal of IKEEE / v.22 no.1 / pp.68-73 / 2018
  • In this paper, we propose an efficient algorithm for mapping a 360° circular image to a planar image. The algorithm consists of obtaining the size of the planar image, calculating the distance between the camera and the planar image, calculating the horizontal and vertical angles between the camera and the planar image, and calculating, for each pixel of the planar image, the position of the matching pixel in the 360° circular image. Experiments were performed to evaluate the proposed mapping algorithm: the reconstruction rate of the mapped planar image was 99% and its image-quality score was 72%. Since these results exceed the reference values of commercial software, the effectiveness of the algorithm was confirmed. (A brief illustrative sketch of the pixel-matching step follows this entry.)
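The pixel-matching step summarized above can be illustrated with a standard equidistant fisheye back-projection: for each pixel of the target planar (perspective) image, compute the ray direction and look up the corresponding pixel in the circular image. The projection model, field-of-view values, and image sizes below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def fisheye_to_planar(circ, out_w, out_h, fov_deg=90.0, fisheye_fov_deg=360.0):
    """Map a square equidistant fisheye (circular) image to a perspective
    (planar) image looking along the fisheye's optical axis."""
    ch, cw = circ.shape[:2]
    cx, cy, radius = cw / 2.0, ch / 2.0, min(cw, ch) / 2.0
    f = (out_w / 2.0) / np.tan(np.radians(fov_deg) / 2.0)   # focal length of the planar view

    u, v = np.meshgrid(np.arange(out_w) - out_w / 2.0,
                       np.arange(out_h) - out_h / 2.0)
    theta = np.arctan2(np.sqrt(u**2 + v**2), f)             # angle from the optical axis
    phi = np.arctan2(v, u)                                  # azimuth around the axis

    # Equidistant model: radial distance in the circular image is proportional to theta.
    r = theta / np.radians(fisheye_fov_deg / 2.0) * radius
    src_x = np.clip(cx + r * np.cos(phi), 0, cw - 1).astype(int)
    src_y = np.clip(cy + r * np.sin(phi), 0, ch - 1).astype(int)
    return circ[src_y, src_x]

# Example with a synthetic circular image.
circular = np.random.randint(0, 255, (1024, 1024, 3), dtype=np.uint8)
planar = fisheye_to_planar(circular, 640, 480)
print(planar.shape)
```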

360 RGBD Image Synthesis from a Sparse Set of Images with Narrow Field-of-View (소수의 협소화각 RGBD 영상으로부터 360 RGBD 영상 합성)

  • Kim, Soojie;Park, In Kyu
    • Journal of Broadcast Engineering / v.27 no.4 / pp.487-498 / 2022
  • A depth map is an image that represents the distance information of 3D space on a 2D plane and is used in various 3D vision tasks. Many existing depth estimation studies rely mainly on narrow field-of-view (FoV) images, in which a significant portion of the scene is lost. In this paper, we propose a technique for generating a 360° omnidirectional RGBD image from a sparse set of narrow-FoV images. The proposed generative adversarial network (GAN) based model estimates, from a small number of non-overlapping images, their relative FoV with respect to the full panorama, and produces a 360° RGB image and depth map simultaneously. It further improves performance by configuring the network to reflect the spherical characteristics of 360° images. (A brief illustrative sketch of the underlying projection geometry follows this entry.)
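The geometric relationship the network has to learn, namely where a narrow-FoV input sits inside the full panorama once its relative FoV and direction are known, can be sketched with a simple perspective-to-equirectangular warp. The canvas size, FoV, and yaw below are illustrative assumptions, and the sketch says nothing about the GAN itself.

```python
import numpy as np

def paste_into_panorama(img, pano, yaw_deg=0.0, h_fov_deg=60.0):
    """Warp one narrow-FoV perspective image (RGB or depth) onto an
    equirectangular canvas at the given yaw, assuming zero pitch and roll."""
    ih, iw = img.shape[:2]
    ph, pw = pano.shape[:2]
    f = (iw / 2.0) / np.tan(np.radians(h_fov_deg) / 2.0)    # focal length of the input view

    # Spherical direction (longitude, latitude) of every panorama pixel,
    # rotated so the input camera looks along longitude 0.
    lon = (np.arange(pw) + 0.5) / pw * 2 * np.pi - np.pi - np.radians(yaw_deg)
    lat = np.pi / 2 - (np.arange(ph) + 0.5) / ph * np.pi
    lon, lat = np.meshgrid(lon, lat)

    # Ray in camera coordinates; the camera looks along +z.
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)

    safe_z = np.where(z > 1e-6, z, 1e-6)                    # avoid division by zero behind the camera
    u = (f * x / safe_z + iw / 2.0).astype(int)
    v = (-f * y / safe_z + ih / 2.0).astype(int)
    valid = (z > 1e-6) & (u >= 0) & (u < iw) & (v >= 0) & (v < ih)
    pano[valid] = img[v[valid], u[valid]]
    return pano

# Example: place one 60-degree view into an empty equirectangular canvas at yaw 45.
pano = np.zeros((512, 1024, 3), dtype=np.uint8)
view = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
paste_into_panorama(view, pano, yaw_deg=45.0, h_fov_deg=60.0)
```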

Luminance Compensation using Feature Points and Histogram for VR Video Sequence (특징점과 히스토그램을 이용한 360 VR 영상용 밝기 보상 기법)

  • Lee, Geon-Won;Han, Jong-Ki
    • Journal of Broadcast Engineering / v.22 no.6 / pp.808-816 / 2017
  • 360 VR video systems have become important for providing an immersive experience to viewers. Such a system consists of stitching, projection, compression, inverse projection, and viewport extraction. In this paper, an efficient luminance compensation technique for 360 VR video sequences is proposed, in which feature extraction and histogram equalization algorithms are utilized. The proposed luminance compensation algorithm enhances the performance of stitching in a 360 VR system. Simulation results showed that the proposed technique is useful for increasing the quality of the displayed image. (A brief illustrative sketch of histogram-based compensation follows this entry.)
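As a rough sketch of compensating luminance between two views before stitching, the code below matches the luminance histogram of one image's overlap region to the other's using cumulative distribution functions. The overlap width and the use of a plain CDF match, rather than the paper's feature-point-guided variant, are illustrative assumptions.

```python
import numpy as np

def build_luminance_lut(src_overlap, ref_overlap):
    """Build a 256-entry lookup table that maps the luminance histogram of
    src_overlap onto that of ref_overlap (classic CDF-based matching)."""
    src_hist, _ = np.histogram(src_overlap.ravel(), bins=256, range=(0, 256))
    ref_hist, _ = np.histogram(ref_overlap.ravel(), bins=256, range=(0, 256))
    src_cdf = np.cumsum(src_hist) / src_overlap.size
    ref_cdf = np.cumsum(ref_hist) / ref_overlap.size
    # For each source level, pick the reference level with the nearest CDF value.
    return np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)

# Example: compensate the left camera image using the region it shares with the right one.
left = np.random.randint(0, 200, (480, 640), dtype=np.uint8)
right = np.random.randint(55, 255, (480, 640), dtype=np.uint8)
overlap = 64                                        # assumed overlap width in pixels
lut = build_luminance_lut(left[:, -overlap:], right[:, :overlap])
compensated_left = lut[left]                        # apply the mapping to the whole image
```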

A Study on the High Quality 360 VR Tiled Video Edge Streaming (방송 케이블 망 기반 고품질 360 VR 분할 영상 엣지 스트리밍에 관한 연구)

  • Kim, Hyun-Wook;Yang, Jin-Wook;Yoon, Sang-Pil;Jang, Jun-Hwan;Park, Woo-Chool
    • Journal of the Korea Convergence Society / v.10 no.12 / pp.43-52 / 2019
  • 360 Virtual Reality (VR) services are gaining attention in the domestic streaming market as the 5G era approaches. However, existing IPTV-based 360 VR services offer at most 4K 360 VR video, which is not enough to satisfy customers; resolutions above 8K are generally required to meet users' expectations. The bit rate of 8K video exceeds the bandwidth of a single QAM channel (38.817 Mbps), which makes it impossible to deliver 8K video over the existing IPTV broadcast network. We therefore propose and implement an edge streaming system for low-latency streaming to display devices in the local network. Experiments confirmed that 360 VR streaming with a viewport switching delay of less than 500 ms can be achieved while using less than 100 Mbps of network bandwidth. (The bandwidth arithmetic is restated in the short sketch after this entry.)
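The bandwidth argument above is simple arithmetic: an 8K 360 VR stream does not fit into one 38.817 Mbps QAM channel, while the local edge link only needs to stay under 100 Mbps. The snippet below merely restates that calculation; the assumed 8K bit rate of 80 Mbps is an illustrative figure, not a number from the paper.

```python
import math

QAM_CHANNEL_MBPS = 38.817          # single QAM channel bandwidth cited in the paper
ASSUMED_8K_MBPS = 80.0             # illustrative 8K 360 VR bit rate (assumption)
EDGE_BUDGET_MBPS = 100.0           # local-network budget reported in the paper

channels_needed = math.ceil(ASSUMED_8K_MBPS / QAM_CHANNEL_MBPS)
print(f"8K stream at {ASSUMED_8K_MBPS} Mbps would need {channels_needed} QAM channels")
print(f"fits the local edge budget: {ASSUMED_8K_MBPS <= EDGE_BUDGET_MBPS}")
```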

Implementing Multiple-tile Extractor for Viewport-dependent 360 Video Streaming (사용자 시점 기반 360 도 영상 스트리밍을 위한 다중 타일 추출기 구현)

  • Jeong, Jong-Beom;Lee, Soonbin;Kim, Inae;Ryu, Eun-Seok
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.11a / pp.53-56 / 2020
  • 360-degree video coding and transmission techniques for delivering immersive virtual reality video are being actively studied, but the computing power and bandwidth available to current VR devices limit the transmission and playback of immersive video. This paper therefore implements a motion-constrained tile set (MCTS) based tile extractor that extracts the tiles covering the user's viewport in order to provide high-quality viewport-dependent 360-degree video. Unlike tile extractors previously implemented for high-efficiency video coding (HEVC), the proposed extractor extracts multiple tiles from the bitstream of a 360-degree video. The extracted tiles are then transmitted together with a low-quality bitstream of the entire 360-degree video to cope with unexpected changes in the user's viewport. (A brief illustrative sketch of the tile-selection step follows this entry.)

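A minimal sketch of the tile-selection side of such an extractor is shown below: given the user's viewing direction and a tile grid over the equirectangular frame, it returns the indices of the tiles covering the viewport, which would then be extracted from the high-quality bitstream and sent alongside the low-quality full-frame stream. The grid size and FoV are assumptions; the HEVC/MCTS bitstream handling itself is not shown.

```python
def viewport_tiles(view_yaw, view_pitch, rows=4, cols=8,
                   h_fov=110.0, v_fov=90.0):
    """Return (row, col) indices of the equirectangular tiles a viewport covers."""
    tiles = set()
    yaw_lo, yaw_hi = view_yaw - h_fov / 2, view_yaw + h_fov / 2
    pitch_lo = max(view_pitch - v_fov / 2, -90.0)
    pitch_hi = min(view_pitch + v_fov / 2, 90.0)
    for r in range(rows):
        # Tile latitude range (row 0 is the top of the frame, +90 degrees).
        t_pitch_hi = 90.0 - r * 180.0 / rows
        t_pitch_lo = t_pitch_hi - 180.0 / rows
        if t_pitch_hi < pitch_lo or t_pitch_lo > pitch_hi:
            continue
        for c in range(cols):
            t_yaw_lo = -180.0 + c * 360.0 / cols
            t_yaw_hi = t_yaw_lo + 360.0 / cols
            # Compare yaw ranges with wrap-around at +/-180 degrees.
            for shift in (-360.0, 0.0, 360.0):
                if t_yaw_lo + shift < yaw_hi and t_yaw_hi + shift > yaw_lo:
                    tiles.add((r, c))
    return sorted(tiles)

# Tiles to extract at high quality; everything else falls back to the low-quality stream.
print(viewport_tiles(view_yaw=30.0, view_pitch=0.0))
```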

A Reference Frame Extraction Method for 360-degree Video Identification by Measuring RGB Displacement Values (RGB 변위값 측정을 통한 360도 영상 식별 기준 프레임 추출 방법)

  • Yoo, Injae;Lee, Jeacheng;Jang, Seyoung;Park, Byeongchan;Kim, Youngmo;Kim, Seok-Yoon
    • Proceedings of the Korean Society of Computer Information Conference / 2020.07a / pp.419-420 / 2020
  • This paper proposes a method for selecting reference key frames for 360-degree video identification by measuring RGB displacement values, aimed at detecting illegally copied video. Content such as broadcast programs and movies is illegally distributed in large volumes over the Internet, both domestically and abroad, causing substantial losses at the national level. To determine piracy quickly, the proposed method measures RGB displacement values in each frame extracted from a 360-degree video, groups frames recognized as belonging to the same scene, and selects a key frame for that scene. The proposed method shortens the time required to judge whether a video is pirated and improves the accuracy of that judgment. (A brief illustrative sketch of the grouping step follows this entry.)

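The grouping step described above can be sketched as follows: measure a per-frame RGB displacement (here, the mean absolute difference of consecutive frames' RGB values), start a new scene whenever the displacement exceeds a threshold, and keep the first frame of each scene as its key frame. The threshold and the exact displacement measure are illustrative assumptions, not the authors' definition.

```python
import numpy as np

def select_key_frames(frames, threshold=12.0):
    """Group consecutive frames whose mean absolute RGB difference stays below
    `threshold` into one scene and return the index of each scene's first frame."""
    key_frames = [0]                              # the first frame always starts a scene
    for i in range(1, len(frames)):
        displacement = np.mean(np.abs(frames[i].astype(np.int16) -
                                      frames[i - 1].astype(np.int16)))
        if displacement > threshold:              # large RGB displacement: scene change
            key_frames.append(i)
    return key_frames

# Example with synthetic frames: two stable scenes with a cut in the middle.
scene_a = [np.full((360, 640, 3), 60, dtype=np.uint8) for _ in range(5)]
scene_b = [np.full((360, 640, 3), 180, dtype=np.uint8) for _ in range(5)]
print(select_key_frames(scene_a + scene_b))       # -> [0, 5]
```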