• Title/Summary/Keyword: Video representation

Search Results: 195

Video Quality Representation Classification of Encrypted HTTP Adaptive Video Streaming

  • Dubin, Ran;Hadar, Ofer;Dvir, Amit;Pele, Ofir
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.8
    • /
    • pp.3804-3819
    • /
    • 2018
  • The increasing popularity of HTTP adaptive video streaming services has dramatically increased bandwidth requirements on operator networks, which attempt to shape their traffic through Deep Packet Inspection (DPI). However, Google and certain other content providers have started to encrypt their video services. As a result, operators often encounter difficulties in shaping their encrypted video traffic via DPI. This highlights the need for new traffic classification methods for encrypted HTTP adaptive video streaming to enable smart traffic shaping. These new methods will have to effectively estimate the quality representation layer and playout buffer. We present a new machine learning method and show for the first time that video quality representation classification for (YouTube) encrypted HTTP adaptive streaming is possible. The crawler code and the datasets are provided in [43,44,51]. An extensive empirical evaluation shows that our method is able to independently classify every video segment into one of the quality representation layers with 97% accuracy if the browser is Safari with a Flash player and 77% accuracy if the browser is Chrome, Explorer, Firefox, or Safari with an HTML5 player.
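As a rough illustration of this kind of per-segment classification (the features, layer sizes, and model below are illustrative stand-ins, not the paper's actual feature set or learner), a nearest-centroid classifier over traffic features of each segment might look like:

```python
import numpy as np

# Hypothetical per-segment features: total downloaded bytes and mean
# packet size for one video segment. The real method extracts a richer
# feature set from the encrypted traffic.
rng = np.random.default_rng(0)

def make_segments(mean_bytes, n=50):
    # Synthetic stand-in for traffic captures of one quality layer.
    return rng.normal(loc=[mean_bytes, mean_bytes / 100.0],
                      scale=[mean_bytes * 0.05, mean_bytes / 1000.0],
                      size=(n, 2))

# Three illustrative quality representation layers (low / mid / high).
X = np.vstack([make_segments(2e5), make_segments(8e5), make_segments(3e6)])
y = np.repeat([0, 1, 2], 50)

# Nearest-centroid classification: assign each segment to the layer
# whose mean feature vector is closest.
centroids = np.array([X[y == c].mean(axis=0) for c in range(3)])

def classify(segment):
    return int(np.argmin(np.linalg.norm(centroids - segment, axis=1)))
```

With well-separated layer bitrates, even this toy model separates segments cleanly; the paper's contribution is doing so reliably on real encrypted traffic.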

Graphical Video Representation for Scalability

  • Jinzenji, Kumi;Kasahara, Hisashi
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1996.06b
    • /
    • pp.29-34
    • /
    • 1996
  • This paper proposes a new concept in video called Graphical Video. Graphical Video is a content-based and scalable video representation. A video consists of several elements, such as moving images, still images, graphics, characters, and charts. All of these elements except moving images can be represented graphically. It is therefore desirable to transform moving images into graphical elements so that they can be treated in the same way as the other graphical elements. To achieve this, we propose a new graphical representation of moving images using spatio-temporal clusters, which consist of texture and contours. The texture is described by three-dimensional fractal coefficients, while the contours are described by polygons. We propose a method that gives the domain pool location and size as a means to describe cluster texture within or near a region of clusters. Results of an experiment on texture quality confirm that the method provides a sufficiently high SNR compared to that of the original three-dimensional fractal approximation.

Compressed Representation of Neural Networks for Use Cases of Video/Image Compression in MPEG-NNR

  • Moon, Hyeoncheol;Kim, Jae-Gon
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2018.11a
    • /
    • pp.133-134
    • /
    • 2018
  • MPEG-NNR (Compressed Representation of Neural Networks) aims to define a compressed and interoperable representation of trained neural networks. In this paper, a compressed representation of a neural network and its evaluated performance are presented, along with use cases of image/video compression in MPEG-NNR. In the compression experiment, a CNN that replaces the in-loop filter in VVC (Versatile Video Coding) intra coding is compressed by applying uniform quantization to the trained weights, and the compressed CNN is evaluated in terms of compression ratio and coding efficiency compared to the original CNN. Evaluation results show that the CNN could be compressed to about a quarter of its original size with negligible coding loss by applying simple quantization to the trained weights.
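A minimal sketch of the uniform quantization step, assuming float32 weights mapped to 8-bit indices (which alone gives roughly the four-fold size reduction; the actual MPEG-NNR pipeline involves more tools than this):

```python
import numpy as np

def quantize_uniform(w, bits=8):
    # Map float weights linearly onto 2**bits integer levels.
    lo, hi = float(w.min()), float(w.max())
    step = (hi - lo) / (2 ** bits - 1)
    q = np.round((w - lo) / step).astype(np.uint8)  # quantized indices
    return q, lo, step

def dequantize(q, lo, step):
    # Reconstruct approximate weights for inference.
    return (lo + q.astype(np.float32) * step).astype(np.float32)

w = np.random.default_rng(1).normal(size=1000).astype(np.float32)
q, lo, step = quantize_uniform(w)
w_hat = dequantize(q, lo, step)
# float32 -> uint8 indices: about a quarter of the original size,
# with reconstruction error bounded by half a quantization step.
```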

Design And Implementation of Video Retrieval System for Using Semantic-based Annotation (의미 기반 주석을 이용한 비디오 검색 시스템의 설계 및 구현)

  • 홍수열
    • Journal of the Korea Society of Computer and Information
    • /
    • v.5 no.3
    • /
    • pp.99-105
    • /
    • 2000
  • Video has become an important element of multimedia computing and communication environments, with applications as varied as broadcasting, education, publishing, and military intelligence. The need for efficient multimedia data retrieval methods keeps increasing on account of various large-scale multimedia applications. Accordingly, the retrieval and representation of video data has become one of the main research issues in video databases. For the representation of video data there have been mainly two approaches: (1) content-based video retrieval, and (2) annotation-based video retrieval. This paper designs and implements a video retrieval system using semantic-based annotation.

Representation of Video Data using Dublin core Model (더블린 코아 모델을 이용한 비디오 데이터의 표현)

  • Lee, Sun-Hui;Kim, Sang-Ho;Sin, Jeong-Hun;Kim, Gil-Jun;Ryu, Geun-Ho
    • The KIPS Transactions:PartD
    • /
    • v.9D no.4
    • /
    • pp.531-542
    • /
    • 2002
  • Because most metadata has been designed for restricted applications, the same video data may end up being described by different, incompatible metadata, even though the same video data should be supported by the same metadata. Therefore, in this paper, we extend the Dublin Core elements to support metadata that solves this problem. The proposed video data representation is managed by the extended metadata of the Dublin Core model, using information about the structure, content, and manipulation of the video data. The thirteen temporal relationship operators are reduced to six by using a dummy-shot temporal transformation relationship. The six reduced operators, obtained by excluding reverse temporal relationships, not only maintain consistency of representation between the metadata and the video data, but also transform n-ary temporal relationships into binary relationships on shots. We show that the proposed metadata model can be applied to representation and retrieval in various applications with the same structure.
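The reduction by excluding inverse relations can be sketched generically as follows. This is the standard collapse of Allen's thirteen interval relations (it yields the canonical relations plus "equals"; the paper's exact six operators and its dummy-shot construction are not reproduced here):

```python
def canonical_relation(a, b):
    """Classify the temporal relation between two shot intervals,
    collapsing each inverse pair by putting the earlier-starting
    (then shorter) interval first."""
    if (b[0], b[1]) < (a[0], a[1]):
        a, b = b, a                      # drop the inverse relations
    if a == b:
        return "equals"
    if a[0] == b[0]:
        return "starts"                  # same start, a ends first
    if a[1] < b[0]:
        return "before"
    if a[1] == b[0]:
        return "meets"
    if a[1] == b[1]:
        return "finishes"                # same end, a starts first
    if a[1] > b[1]:
        return "contains"
    return "overlaps"
```

Because an interval and its swapped counterpart map to the same label, a pair of shots and the reversed pair always produce one canonical relation, which is what keeps the metadata consistent under the reduction.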

Representing Navigation Information on Real-time Video in Visual Car Navigation System

  • Joo, In-Hak;Lee, Seung-Yong;Cho, Seong-Ik
    • Korean Journal of Remote Sensing
    • /
    • v.23 no.5
    • /
    • pp.365-373
    • /
    • 2007
  • Car navigation is a key application in geographic information systems and telematics. A recent trend in car navigation systems is to use real video captured by a camera mounted on the vehicle, because video has more representational power about the real world than a conventional map. In this paper, we suggest a visual car navigation system that visually represents route guidance. It can improve drivers' understanding of the real world by capturing real-time video and displaying navigation information overlaid directly on the video. The system integrates real-time data acquisition, conventional route finding and guidance, computer vision, and augmented reality display. We also designed the visual navigation controller, which controls the other modules and dynamically determines the visual representation methods of navigation information according to the current location and driving circumstances. We briefly describe the implementation of the system.

H.264 Encoding Technique of Multi-view Video expressed by Layered Depth Image (계층적 깊이 영상으로 표현된 다시점 비디오에 대한 H.264 부호화 기술)

  • Shin, Jong-Hong;Jee, Inn-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.14 no.2
    • /
    • pp.43-51
    • /
    • 2014
  • Because multi-view video including depth images involves a huge amount of data, a new compression encoding technique is necessary for its storage and transmission. Layered depth image is an efficient representation method for multi-view video data: it builds a data structure that synthesizes the multi-view color and depth images. To compress such content efficiently, we suggest using the layered depth image representation and applying video compression encoding with 3D warping. This paper proposes an enhanced compression method based on the layered depth image representation and H.264/AVC video coding technology. In the experimental results, we confirmed high compression performance and good quality of the reconstructed images.
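A layered depth image can be sketched as a per-pixel list of depth-sorted color samples merged from several viewpoints (the class and field names here are illustrative, not the paper's data structure):

```python
from dataclasses import dataclass, field

@dataclass
class LDIPixel:
    # Each pixel keeps every surface sample that projects onto it, not
    # just the nearest one, so occluded viewpoints can be synthesized.
    layers: list = field(default_factory=list)   # [(color, depth), ...]

    def insert(self, color, depth):
        self.layers.append((color, depth))
        self.layers.sort(key=lambda sample: sample[1])  # nearest first

    def front(self):
        # The sample an ordinary single-layer image would keep.
        return self.layers[0] if self.layers else None

# Merge samples from two hypothetical camera views into one pixel.
pixel = LDIPixel()
pixel.insert(color=(200, 30, 30), depth=2.0)   # surface seen from view A
pixel.insert(color=(30, 200, 30), depth=1.0)   # occluder seen from view B
```

Keeping the occluded samples is what lets 3D warping re-project the structure to a new viewpoint without holes, at the cost of the larger data volume that motivates the proposed compression.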

Layered Depth Image Representation And H.264 Encoding of Multi-view video For Free viewpoint TV (자유시점 TV를 위한 다시점 비디오의 계층적 깊이 영상 표현과 H.264 부호화)

  • Shin, Jong Hong
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.7 no.2
    • /
    • pp.91-100
    • /
    • 2011
  • Free viewpoint TV can provide viewpoint images from multiple angles according to viewer needs. In the real world, however, not every viewpoint can be captured by a camera; only a few viewpoints are captured, one per camera, and the group of captured images is called a multi-view image. Free viewpoint TV therefore needs to produce virtual intermediate viewpoint images from the captured ones, and interpolation methods are the general solution to this problem. Producing an interpolated image at the correct angle requires the depth images of the multi-view image. Unfortunately, because multi-view video including depth images involves a huge amount of data, a new compression encoding technique is necessary for its storage and transmission. Layered depth image is an efficient representation method for multi-view video data: it builds a data structure that synthesizes the multi-view color and depth images. This paper proposes an enhanced compression method using the layered depth image representation and H.264/AVC video coding technology. The experimental results confirm high compression performance and good quality of the reconstructed images.

Robust Online Object Tracking with a Structured Sparse Representation Model

  • Bo, Chunjuan;Wang, Dong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.5
    • /
    • pp.2346-2362
    • /
    • 2016
  • As one of the most important issues in computer vision and image processing, online object tracking plays a key role in numerous research areas and many real applications. In this study, we present a novel tracking method based on the proposed structured sparse representation model, in which the tracked object is assumed to be sparsely represented by a set of object and background templates. The contributions of this work are threefold. First, the structure information of all the candidate samples is utilized by a joint sparse representation model, where the representation coefficients of these candidates are promoted to share the same sparse patterns. This representation model can be effectively solved by the simultaneous orthogonal matching pursuit method. Second, we develop a tracking algorithm based on the proposed representation model, a discriminative candidate selection scheme, and a simple model updating method. Finally, both qualitative and quantitative evaluations on a number of challenging video clips show that our tracker achieves better performance than various state-of-the-art tracking algorithms.
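A minimal sketch of simultaneous orthogonal matching pursuit, the solver named above, under the simplifying assumption of an orthonormal toy dictionary (the tracker's real dictionary is built from object and background templates):

```python
import numpy as np

def somp(D, Y, n_atoms):
    """Simultaneous OMP: all signals (columns of Y) are forced to share
    the same few dictionary atoms (columns of D)."""
    residual, support = Y.copy(), []
    for _ in range(n_atoms):
        # Score each atom by its total correlation with all residuals.
        scores = np.abs(D.T @ residual).sum(axis=1)
        scores[support] = -np.inf            # never reselect an atom
        support.append(int(np.argmax(scores)))
        # Jointly re-fit every signal on the selected atoms.
        coeffs, *_ = np.linalg.lstsq(D[:, support], Y, rcond=None)
        residual = Y - D[:, support] @ coeffs
    return support, coeffs

# Toy check: two signals built from atoms 0 and 2 of an orthonormal
# dictionary, so the shared support is recovered exactly.
rng = np.random.default_rng(0)
D, _ = np.linalg.qr(rng.normal(size=(20, 8)))    # 8 orthonormal atoms
Y = D[:, [0, 2]] @ np.array([[5.0, 4.0], [3.0, -3.0]])
support, coeffs = somp(D, Y, n_atoms=2)
```

The shared-support constraint is exactly the "same sparse patterns" promotion in the abstract: every candidate sample is explained by the same small set of templates, which is what makes the joint representation structured.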

REPRESENTATION OF NAVIGATION INFORMATION FOR VISUAL CAR NAVIGATION SYSTEM

  • Joo, In-Hak;Lee, Seung-Yong;Cho, Seong-Ik
    • Proceedings of the KSRS Conference
    • /
    • 2007.10a
    • /
    • pp.508-511
    • /
    • 2007
  • Car navigation is one of the most important applications in telematics. The newest trend in car navigation systems is to use real video captured by a camera mounted on the vehicle, because video can overcome the semantic gap between a map and the real world. In this paper, we suggest a visual car navigation system that visually represents navigation information and route guidance. It can improve drivers' understanding of the real world by capturing real-time video and displaying navigation information overlaid on it. The main services of the visual car navigation system are graphical turn guidance and lane change guidance. We suggest a system architecture that implements these services by integrating conventional route finding and guidance, computer vision functions, and augmented reality display functions. The core part of the system is the visual navigation controller, which controls the other modules and dynamically determines the visual representation methods of navigation information according to a determination rule based on the current location and driving circumstances. We briefly show the implementation of the system.
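The determination rule inside such a visual navigation controller might be sketched like this (the conditions, thresholds, and representation names below are hypothetical illustrations, not the paper's actual rule):

```python
def choose_representation(dist_to_turn_m, lane_change_needed, speed_kmh):
    # Pick how guidance is drawn over the live video, given the current
    # location and driving circumstances.
    if lane_change_needed:
        return "lane-change arrow overlay"
    if dist_to_turn_m < 50:
        return "turn arrow registered on the road surface"
    if speed_kmh > 80:
        return "simplified icon guidance"   # less visual clutter at speed
    return "route line overlay"
```

The point of centralizing this decision in one controller, as the paper does, is that the vision, routing, and display modules stay independent of the circumstances under which each representation is shown.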
