• Title/Summary/Keyword: quadtree plus binary tree


Performance Analysis of Future Video Coding (FVC) Standard Technology

  • Choi, Young-Ju; Kim, Ji-Hae; Lee, Jong-Hyeok; Kim, Byung-Gyu
    • Journal of Multimedia Information System / v.4 no.2 / pp.73-78 / 2017
  • Future Video Coding (FVC) is a new state-of-the-art video compression standard that is to be standardized as the successor to the High Efficiency Video Coding (HEVC) standard. The FVC standard applies a newly designed block structure, called quadtree plus binary tree (QTBT), to improve coding efficiency. The intra and inter prediction parts were also changed to improve coding performance compared with previous coding standards such as HEVC and H.264/AVC. Experimental results show that FVC achieves an average BD-rate reduction of 25.46%, 38.00%, and 35.78% for Y, U, and V, respectively. In terms of complexity, FVC encoding takes about 14 times longer than the HEVC encoder.
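
The QTBT scheme referenced in this abstract first splits a coding tree unit by a quadtree and then allows the resulting leaves to be split further by a binary tree, so final blocks need not be square. Below is a minimal Python sketch of that recursive partitioning; the size limits, depth limit, and the toy split decision are illustrative assumptions, not the FVC/JEM implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Block:
    x: int
    y: int
    w: int
    h: int

def should_split(block: Block, mode: str) -> bool:
    # Placeholder for the encoder's rate-distortion decision; a real encoder
    # compares the RD cost of coding the block whole against its children.
    return block.w * block.h > 1024

def qtbt_partition(block: Block, min_qt: int = 16, min_bt: int = 8,
                   in_bt: bool = False, depth: int = 0,
                   max_depth: int = 6) -> List[Block]:
    """Recursively partition a block and return its leaf blocks."""
    if depth >= max_depth:
        return [block]
    # Quadtree stage: square blocks above the minimum QT size may be split
    # into four quadrants, but only before any binary split has been made.
    if (not in_bt and block.w == block.h and block.w > min_qt
            and should_split(block, "quad")):
        hw, hh = block.w // 2, block.h // 2
        children = [Block(block.x + dx, block.y + dy, hw, hh)
                    for dy in (0, hh) for dx in (0, hw)]
        return [leaf for c in children for leaf in
                qtbt_partition(c, min_qt, min_bt, False, depth + 1, max_depth)]
    # Binary tree stage: vertical or horizontal halving down to the BT minimum.
    if block.w > min_bt and should_split(block, "bt_ver"):
        hw = block.w // 2
        children = [Block(block.x, block.y, hw, block.h),
                    Block(block.x + hw, block.y, hw, block.h)]
    elif block.h > min_bt and should_split(block, "bt_hor"):
        hh = block.h // 2
        children = [Block(block.x, block.y, block.w, hh),
                    Block(block.x, block.y + hh, block.w, hh)]
    else:
        return [block]
    return [leaf for c in children for leaf in
            qtbt_partition(c, min_qt, min_bt, True, depth + 1, max_depth)]

# Example: partition a 128x128 CTU with the toy decision rule above.
leaves = qtbt_partition(Block(0, 0, 128, 128))
print(len(leaves), "leaf blocks")
```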

A Fast Decision Method of Quadtree plus Binary Tree (QTBT) Depth in JEM (차세대 비디오 코덱(JEM)의 고속 QTBT 분할 깊이 결정 기법)

  • Yoon, Yong-Uk; Park, Do-Hyun; Kim, Jae-Gon
    • Journal of Broadcast Engineering / v.22 no.5 / pp.541-547 / 2017
  • The Joint Exploration Model (JEM), the reference software codec of the Joint Video Exploration Team (JVET) exploring future video coding standard technology, provides a recursive quadtree plus binary tree (QTBT) block structure. QTBT achieves enhanced coding efficiency by adding new block structures at the expense of greatly increased computational complexity. In this paper, we propose a fast decision algorithm for QTBT block-partitioning depth that uses the rate-distortion (RD) costs of the upper and current depths to reduce the complexity of the JEM encoder. Experimental results show that the computational complexity of JEM 5.0 can be reduced by up to 21.6% and 11.0% with BD-rate increases of 0.7% and 1.2% in the All Intra (AI) and Random Access (RA) configurations, respectively.
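
The fast decision described in this abstract compares RD costs across partition depths to stop the recursive QTBT search early. The Python sketch below only illustrates that kind of early termination; the helper names, the toy cost model, and the 0.95 threshold are assumptions for illustration, not the criterion tuned in the paper.

```python
def should_try_deeper(rd_cost_current: float,
                      rd_cost_upper: float,
                      threshold: float = 0.95) -> bool:
    """Decide whether deeper QTBT partitions are worth evaluating."""
    if rd_cost_upper <= 0.0:
        return True  # no parent information yet (e.g., at the CTU root)
    # If the current depth already codes the block much more cheaply than the
    # upper depth did, the block is treated as settled and descent stops.
    return rd_cost_current > threshold * rd_cost_upper


def rd_search(rd_cost_fn, block_size: int, depth: int = 0,
              rd_cost_upper: float = 0.0, max_depth: int = 4) -> float:
    """Toy recursive RD search over symmetric splits of a square block."""
    rd_cost_current = rd_cost_fn(block_size, depth)
    if (depth < max_depth and block_size > 8
            and should_try_deeper(rd_cost_current, rd_cost_upper)):
        # Evaluate one symmetric split (four children of half size) and keep
        # the cheaper of "no split" and "split".
        split_cost = 4 * rd_search(rd_cost_fn, block_size // 2, depth + 1,
                                   rd_cost_current, max_depth)
        return min(rd_cost_current, split_cost)
    return rd_cost_current


def toy_cost(size: int, depth: int) -> float:
    # Synthetic RD cost: splitting pays off less and less with depth.
    return size * size * (1.0 - 0.05 * depth)

# Example: search a 64x64 block; descent stops once children stop helping.
print(rd_search(toy_cost, 64))
```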

Adaptive block tree structure for video coding

  • Baek, Aram; Gwon, Daehyeok; Son, Sohee; Lee, Jinho; Kang, Jung-Won; Kim, Hui Yong; Choi, Haechul
    • ETRI Journal / v.43 no.2 / pp.313-323 / 2021
  • The Joint Video Exploration Team (JVET) has studied future video coding (FVC) technologies with a potential compression capacity that significantly exceeds that of the high-efficiency video coding (HEVC) standard. The joint exploration test model (JEM), a common platform for the exploration of FVC technologies in the JVET, employs quadtree plus binary tree block partitioning, which enhances the flexibility of coding unit partitioning. Although separating the luminance and chrominance tree structures in I slices significantly improves coding efficiency for chrominance, this approach has the intrinsic drawback of redundant block-partitioning data. In this paper, an adaptive tree structure correlating the luminance and chrominance of single and dual trees is presented. The proposed method achieved an average Y Bjontegaard delta rate change of -0.24% relative to the intra coding of JEM 6.0 under common test conditions.
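
The adaptive structure described in this abstract amounts to choosing, per intra CTU, between one shared luma/chroma tree and separate trees. The sketch below only illustrates that kind of RD-based selection with synthetic costs; the names, the dataclass, and the one-bit flag cost are assumptions, not the authors' exact scheme.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TreeDecision:
    use_dual_tree: bool   # True: separate chroma tree; False: shared tree
    rd_cost: float        # best RD cost, including the signaled choice

def choose_tree_structure(single_tree_cost: Callable[[], float],
                          dual_tree_cost: Callable[[], float],
                          flag_bit_cost: float = 1.0) -> TreeDecision:
    """Pick the single- or dual-tree structure for an intra CTU by RD cost.

    Both options pay for the flag that signals the choice, so the flag only
    adds overhead and does not bias the comparison itself.
    """
    cost_single = single_tree_cost() + flag_bit_cost
    cost_dual = dual_tree_cost() + flag_bit_cost
    if cost_dual < cost_single:
        return TreeDecision(use_dual_tree=True, rd_cost=cost_dual)
    return TreeDecision(use_dual_tree=False, rd_cost=cost_single)

# Example with synthetic costs: here the separate chroma tree wins.
decision = choose_tree_structure(lambda: 120.0, lambda: 112.5)
print(decision)
```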

CNN-based In-loop Filter on TU Block (TU 블록 크기에 따른 CNN기반 인루프필터)

  • Kim, Yang-Woo; Jeong, Seyoon; Cho, Seunghyun; Lee, Yung-Lyul
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2018.11a / pp.15-17 / 2018
  • VVC (Versatile Video Coding) codes an input picture by dividing it into CTUs (Coding Tree Units), which are further partitioned by QTBTT (quadtree plus binary tree and triple tree); TUs (Transform Units) are partitioned in the same way. As a result, there are 17 possible TU sizes: 4×4, 4×8, 4×16, 4×32, 8×4, 16×4, 32×4, 8×8, 8×16, 8×32, 16×8, 32×8, 16×16, 16×32, 32×16, 32×32, and 64×64. The existing VVC reference software, VTM, restores coding errors with an in-loop filter consisting of a deblocking filter and SAO (Sample Adaptive Offset). Exploiting the observation that the difference (error) between the original and reconstructed blocks is statistically different for each TU size, this paper builds a separate CNN (Convolutional Neural Network) per TU size to restore the error and uses it to replace the VTM in-loop filter. To reduce the error of the reconstructed picture, Dense Block-based CNNs following DenseNet are constructed according to TU block size, and to reduce the number of hyperparameters and the complexity, the networks share some of their weights.
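
The abstract above describes one dense-block CNN per TU-size group, with some weights shared across the networks, used as a residual in-loop filter. The PyTorch sketch below only illustrates that layout with a shared stem and per-group branches; the layer counts, channel widths, and TU-size grouping are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """DenseNet-style block: each layer sees all previous feature maps."""
    def __init__(self, in_ch: int, growth: int = 16, layers: int = 4):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(in_ch + i * growth, growth, kernel_size=3, padding=1)
            for i in range(layers))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(self.act(conv(torch.cat(feats, dim=1))))
        return torch.cat(feats, dim=1)

class TUAwareInLoopFilter(nn.Module):
    """Shared stem + per-TU-size dense-block branches as a residual filter."""
    def __init__(self, tu_groups=("4xN", "8xN", "16xN", "32xN_64x64")):
        super().__init__()
        self.stem = nn.Conv2d(1, 32, kernel_size=3, padding=1)  # shared weights
        self.branches = nn.ModuleDict({
            g: nn.Sequential(DenseBlock(32),
                             nn.Conv2d(32 + 4 * 16, 1, kernel_size=3, padding=1))
            for g in tu_groups})

    def forward(self, recon_block: torch.Tensor, tu_group: str) -> torch.Tensor:
        feat = torch.relu(self.stem(recon_block))
        residual = self.branches[tu_group](feat)
        return recon_block + residual  # restored block

# Example: filter a reconstructed 32x32 luma block with the matching branch.
model = TUAwareInLoopFilter()
block = torch.rand(1, 1, 32, 32)
restored = model(block, "32xN_64x64")
print(restored.shape)  # torch.Size([1, 1, 32, 32])
```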
