• Title/Abstract/Keywords: Video representation

Search results: 195 (processing time: 0.026 s)

Video Quality Representation Classification of Encrypted HTTP Adaptive Video Streaming

  • Dubin, Ran;Hadar, Ofer;Dvir, Amit;Pele, Ofir
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 12, No. 8
    • /
    • pp.3804-3819
    • /
    • 2018
  • The increasing popularity of HTTP adaptive video streaming services has dramatically increased bandwidth requirements on operator networks, which attempt to shape their traffic through deep packet inspection (DPI). However, Google and certain content providers have started to encrypt their video services. As a result, operators often encounter difficulties in shaping their encrypted video traffic via DPI. This highlights the need for new traffic classification methods for encrypted HTTP adaptive video streaming to enable smart traffic shaping. These new methods will have to effectively estimate the quality representation layer and playout buffer. We present a new machine learning method and show, for the first time, that video quality representation classification for (YouTube) encrypted HTTP adaptive streaming is possible. The crawler code and the datasets are provided in [43,44,51]. An extensive empirical evaluation shows that our method is able to independently classify every video segment into one of the quality representation layers with 97% accuracy if the browser is Safari with a Flash player and 77% accuracy if the browser is Chrome, Explorer, Firefox, or Safari with an HTML5 player.
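The observation behind this line of work, that per-segment payload sizes remain visible under TLS and correlate strongly with the quality representation layer, can be illustrated with a toy nearest-centroid classifier. This is a hedged sketch, not the paper's feature set or model; the centroids and labels below are invented for illustration.

```python
# Invented average bytes per segment for each quality layer; a real system
# would learn these (or a richer model) from labeled traffic traces.
LAYER_CENTROIDS = {
    "240p": 300_000,
    "480p": 900_000,
    "720p": 1_800_000,
    "1080p": 3_600_000,
}

def classify_segment(segment_bytes):
    """Assign an encrypted segment to the quality layer whose typical
    segment size (centroid) is nearest to the observed byte count."""
    return min(LAYER_CENTROIDS,
               key=lambda layer: abs(LAYER_CENTROIDS[layer] - segment_bytes))
```

Because the byte count of each downloaded segment is observable even without decrypting the payload, such per-segment classification needs no access to the video content itself.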

Graphical Video Representation for Scalability

  • Jinzenji, Kumi;Kasahara, Hisashi
    • Korean Institute of Broadcast and Media Engineers: Conference Proceedings
    • /
    • Korean Society of Broadcast Engineers, 1996 Proceedings of the International Workshop on New Video Media Technology
    • /
    • pp.29-34
    • /
    • 1996
  • This paper proposes a new concept in video called Graphical Video, a content-based and scalable video representation. A video consists of several elements such as moving images, still images, graphics, characters, and charts. All of these elements except moving images can be represented graphically, so it is desirable to transform moving images into graphical elements that can be treated in the same way as the others. To achieve this, we propose a new graphical representation of moving images using spatio-temporal clusters, which consist of texture and contours. The texture is described by three-dimensional fractal coefficients, while the contours are described by polygons. We propose a method that determines the domain-pool location and size so as to describe cluster texture within or near a region of clusters. Results of an experiment on texture quality confirm that the method provides sufficiently high SNR compared with the original three-dimensional fractal approximation.


Compressed Representation of Neural Networks for Use Cases of Video/Image Compression in MPEG-NNR

  • Moon, Hyeoncheol;Kim, Jae-Gon
    • Korean Institute of Broadcast and Media Engineers: Conference Proceedings
    • /
    • Korean Institute of Broadcast and Media Engineers, 2018 Fall Conference
    • /
    • pp.133-134
    • /
    • 2018
  • MPEG-NNR (Compressed Representation of Neural Networks) aims to define a compressed and interoperable representation of trained neural networks. In this paper, a compressed representation of a neural network and its evaluated performance are presented, along with use cases of image/video compression in MPEG-NNR. For the compression, a CNN that replaces the in-loop filter in VVC (Versatile Video Coding) intra coding is compressed by applying uniform quantization to the trained weights, and the compressed CNN is evaluated in terms of compression ratio and coding efficiency against the original CNN. Evaluation results show that the CNN could be compressed to about a quarter of its original size with negligible coding loss by applying simple quantization to the trained weights.
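The uniform quantization step can be sketched as follows. The 8-bit symmetric quantizer and the plain-Python representation are assumptions for illustration, not MPEG-NNR's normative scheme; storing 32-bit float weights as 8-bit integers is what yields roughly the 4x ("about a quarter") size reduction reported.

```python
def quantize(weights, bits=8):
    """Symmetric uniform quantization: map float weights to signed integers
    in [-(2**(bits-1)-1), 2**(bits-1)-1] plus one shared scale factor."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax or 1.0  # guard all-zero case
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    """Recover approximate float weights from the integer codes."""
    return [q * scale for q in quantized]
```

The reconstruction error per weight is bounded by the step size, which is why a well-chosen bit width costs little coding efficiency.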


의미 기반 주석을 이용한 비디오 검색 시스템의 설계 및 구현 (Design And Implementation of Video Retrieval System for Using Semantic-based Annotation)

  • 홍수열
    • Journal of the Korea Society of Computer and Information
    • /
    • Vol. 5, No. 3
    • /
    • pp.99-105
    • /
    • 2000
  • Video has become an important element of multimedia computing and communication environments, with diverse applications in broadcasting, education, publishing, the military, and elsewhere. The need for effective methods of retrieving multimedia data is growing daily with large-scale multimedia applications. Accordingly, the retrieval and representation of video data has become one of the main research issues in video databases. There are two main approaches to representing video data: (1) content-based video retrieval and (2) annotation-based video retrieval. This paper designs and implements a video retrieval system using semantic-based annotation.


더블린 코아 모델을 이용한 비디오 데이터의 표현 (Representation of Video Data using Dublin Core Model)

  • 이순희;김상호;신정훈;김길준;류근호
    • The KIPS Transactions: Part D
    • /
    • Vol. 9D, No. 4
    • /
    • pp.531-542
    • /
    • 2002
  • Until now, most metadata schemes have dealt only with parts limited to their application domain. However, representing the same video data requires metadata of the same form, and a video database then faces the problem of having to support several different metadata descriptions for the same video data. To solve this problem, this paper extends the Dublin Core model. In the proposed representation, metadata extending the Dublin Core model manages information about the structure, content, and manipulation of video data. By separating the system-managed part from the user-defined part, the proposed metadata enables a model that is independent of the application domain. The 13 temporal relation operators are reduced to 6 by using time-conversion relations on dummy shots; the 6 remaining operators exclude inverse relations to keep the representation consistent, and convert shots in n-ary temporal relations into binary relations. Experiments on real application domains demonstrate that the extended Dublin Core model can represent metadata with the same structure across application domains and retrieve it in the same way.
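The 13 temporal relation operators mentioned above correspond to Allen-style interval relations. As a hedged sketch (the relation names and this particular canonicalization are generic, not necessarily the paper's exact six operators), inverse relations can be excluded by swapping operands so that only a canonical relation is ever stored:

```python
# Canonical (non-inverse) relations; the inverses (after, met-by, contains,
# started-by, finished-by, overlapped-by) are never stored directly.
CANONICAL = {"before", "meets", "overlaps", "starts", "during", "finishes", "equals"}

def allen_relation(a, b):
    """Classify the Allen interval relation between a=(a1,a2) and b=(b1,b2)."""
    (a1, a2), (b1, b2) = a, b
    if a2 < b1:  return "before"
    if b2 < a1:  return "after"
    if a2 == b1: return "meets"
    if b2 == a1: return "met-by"
    if (a1, a2) == (b1, b2): return "equals"
    if a1 == b1: return "starts" if a2 < b2 else "started-by"
    if a2 == b2: return "finishes" if a1 > b1 else "finished-by"
    if b1 < a1 and a2 < b2: return "during"
    if a1 < b1 and b2 < a2: return "contains"
    return "overlaps" if a1 < b1 else "overlapped-by"

def canonical_relation(a, b):
    """Exclude inverse relations: when the raw relation is an inverse,
    swap the operands and store the canonical relation instead."""
    r = allen_relation(a, b)
    return (r, (a, b)) if r in CANONICAL else (allen_relation(b, a), (b, a))
```

Storing only canonical relations keeps the representation consistent, since a pair of shots is never described by both a relation and its inverse.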

Representing Navigation Information on Real-time Video in Visual Car Navigation System

  • Joo, In-Hak;Lee, Seung-Yong;Cho, Seong-Ik
    • Korean Journal of Remote Sensing
    • /
    • Vol. 23, No. 5
    • /
    • pp.365-373
    • /
    • 2007
  • The car navigation system is a key application in geographic information systems and telematics. A recent trend in car navigation systems is to use real video captured by a camera mounted on the vehicle, because video has more power to represent the real world than a conventional map. In this paper, we suggest a visual car navigation system that visually represents route guidance. It improves drivers' understanding of the real world by capturing real-time video and displaying navigation information overlaid directly on the video. The system integrates real-time data acquisition, conventional route finding and guidance, computer vision, and augmented-reality display. We also designed a visual navigation controller, which controls the other modules and dynamically determines how navigation information is visually represented according to the current location and driving circumstances. We briefly show an implementation of the system.

계층적 깊이 영상으로 표현된 다시점 비디오에 대한 H.264 부호화 기술 (H.264 Encoding Technique of Multi-view Video expressed by Layered Depth Image)

  • 신종홍;지인호
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • Vol. 14, No. 2
    • /
    • pp.43-51
    • /
    • 2014
  • Multi-view video with depth images requires a new coding/compression technique for storage and transmission because of its huge amount of data. The layered depth image is an effective representation of multi-view video: it builds a data structure that merges multi-view color and depth images. As an effective way to compress this new content, we propose applying a layered depth image representation based on 3D warping together with video compression coding. This paper presents an improved compression method based on H.264/AVC video coding using the layered image representation. Computer simulations show that a good compression ratio and good-quality reconstructed images can be obtained.

자유시점 TV를 위한 다시점 비디오의 계층적 깊이 영상 표현과 H.264 부호화 (Layered Depth Image Representation And H.264 Encoding of Multi-view video For Free viewpoint TV)

  • 신종홍
    • Journal of the Korea Society of Digital Industry and Information Management
    • /
    • Vol. 7, No. 2
    • /
    • pp.91-100
    • /
    • 2011
  • Free-viewpoint TV can provide images from whatever viewing angle a viewer wants. In the real world, however, not every viewpoint can be captured: only a few viewpoints are captured, one per camera, and the group of captured images is called a multi-view image. Free-viewpoint TV therefore needs to synthesize virtual intermediate viewpoints from the captured ones. Interpolation methods are the general solution to this problem, and interpolating an image at the correct angle requires the depth images of the multi-view image. Unfortunately, multi-view video including depth images requires a new compression encoding technique for storage and transmission because of its huge amount of data. The layered depth image is an efficient representation of multi-view video data: it builds a data structure that merges multi-view color and depth images. This paper proposes an enhanced compression method using the layered depth image representation and H.264/AVC video coding. Experimental results confirm high compression performance and good-quality reconstructed images.
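The layered depth image idea can be sketched in miniature. This toy replaces full 3D warping with 1D horizontal disparity (disparity proportional to baseline/depth), and the camera positions and samples below are invented; the point is only that several (depth, color) surfaces can coexist at one pixel of the reference view.

```python
def build_ldi(views, baseline=1.0, width=8):
    """Build a toy 1D layered depth image for a reference camera at x=0.

    views: list of (camera_x, samples), where each sample is
    (pixel_x, depth, color) in that camera's image.
    Returns, per reference-view pixel, a front-to-back list of
    (depth, color) layers gathered from all warped input views."""
    ldi = [[] for _ in range(width)]
    for cam_x, samples in views:
        for x, depth, color in samples:
            # warp the sample into the reference camera: nearer points
            # shift more (disparity is inversely proportional to depth)
            disparity = baseline * cam_x / depth
            xr = round(x + disparity)
            if 0 <= xr < width:
                ldi[xr].append((depth, color))
    for pixel in ldi:
        pixel.sort()  # order layers front (small depth) to back
    return ldi
```

Keeping the occluded back layers is what lets a decoder re-render nearby virtual viewpoints without holes, at the cost of the extra data the paper's coding scheme then compresses.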

Robust Online Object Tracking with a Structured Sparse Representation Model

  • Bo, Chunjuan;Wang, Dong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 10, No. 5
    • /
    • pp.2346-2362
    • /
    • 2016
  • As one of the most important issues in computer vision and image processing, online object tracking plays a key role in numerous research areas and many real applications. In this study, we present a novel tracking method based on a structured sparse representation model, in which the tracked object is assumed to be sparsely represented by a set of object and background templates. The contributions of this work are threefold. First, the structure information of all the candidate samples is exploited by a joint sparse representation model in which the representation coefficients of the candidates are encouraged to share the same sparsity pattern; this model can be effectively solved by the simultaneous orthogonal matching pursuit method. Second, we develop a tracking algorithm based on the proposed representation model, a discriminative candidate selection scheme, and a simple model-updating method. Finally, we evaluate the proposed tracker against various state-of-the-art tracking algorithms on several challenging video clips; both qualitative and quantitative evaluations show that our tracker achieves better performance than the other state-of-the-art methods.

REPRESENTATION OF NAVIGATION INFORMATION FOR VISUAL CAR NAVIGATION SYSTEM

  • Joo, In-Hak;Lee, Seung-Yong;Cho, Seong-Ik
    • Korean Society of Remote Sensing: Conference Proceedings
    • /
    • Korean Society of Remote Sensing, Proceedings of ISRS 2007
    • /
    • pp.508-511
    • /
    • 2007
  • The car navigation system is one of the most important applications in telematics. The newest trend in car navigation systems is to use real video captured by a camera mounted on the vehicle, because video can close the semantic gap between a map and the real world. In this paper, we suggest a visual car navigation system that visually represents navigation information and route guidance. It improves drivers' understanding of the real world by capturing real-time video and displaying navigation information overlaid on it. The main services of the visual car navigation system are graphical turn guidance and lane-change guidance. We suggest a system architecture that implements these services by integrating conventional route finding and guidance, computer vision functions, and augmented-reality display functions. The core part of the system is the visual navigation controller, which controls the other modules and dynamically determines how navigation information is visually represented, according to a determination rule based on the current location and driving circumstances. We briefly show the implementation of the system.
