• Title/Summary/Keyword: 3D Shearlet Transform

Search results: 2

No-Reference Sports Video-Quality Assessment Using 3D Shearlet Transform and Deep Residual Neural Network (3차원 쉐어렛 변환과 심층 잔류 신경망을 이용한 무참조 스포츠 비디오 화질 평가)

  • Lee, Gi Yong; Shin, Seung-Su; Kim, Hyoung-Gook / Journal of Korea Multimedia Society / v.23 no.12 / pp.1447-1453 / 2020
  • In this paper, we propose a method for no-reference quality assessment of sports videos using 3D shearlet transform and deep residual neural networks. In the proposed method, 3D shearlet transform-based spatiotemporal features are extracted from the overlapped video blocks and applied to logistic regression concatenated with a deep residual neural network based on a conditional video block-wise constraint to learn the spatiotemporal correlation and predict the quality score. Our evaluation reveals that the proposed method predicts the video quality with higher accuracy than the conventional no-reference video quality assessment methods.
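The abstract describes extracting spatiotemporal features from overlapped video blocks before the shearlet/network stages. As a rough illustration of that block-wise slicing step only (the 3D shearlet transform and residual network are not implemented here, and the block and stride sizes below are illustrative assumptions, not values from the paper), one might sketch:

```python
import numpy as np

def extract_overlapped_blocks(video, block=(8, 32, 32), stride=(4, 16, 16)):
    """Split a grayscale video array (T, H, W) into overlapping 3D blocks.

    Only the overlapped block-partitioning step is shown; in the paper's
    pipeline each block would then be passed to a 3D shearlet transform.
    Block/stride sizes here are hypothetical defaults for illustration.
    """
    T, H, W = video.shape
    bt, bh, bw = block
    st, sh, sw = stride
    blocks = []
    for t in range(0, T - bt + 1, st):        # temporal positions
        for y in range(0, H - bh + 1, sh):    # vertical positions
            for x in range(0, W - bw + 1, sw):  # horizontal positions
                blocks.append(video[t:t + bt, y:y + bh, x:x + bw])
    return np.stack(blocks)  # shape: (num_blocks, bt, bh, bw)
```

With a 16-frame 64x64 clip and the defaults above, this yields 3 x 3 x 3 = 27 overlapping blocks, each 8 x 32 x 32.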

No-reference quality assessment of dynamic sports videos based on a spatiotemporal motion model

  • Kim, Hyoung-Gook; Shin, Seung-Su; Kim, Sang-Wook; Lee, Gi Yong / ETRI Journal / v.43 no.3 / pp.538-548 / 2021
  • This paper proposes an approach to improve the performance of no-reference video quality assessment for sports videos with dynamic motion scenes using an efficient spatiotemporal model. In the proposed method, we divide the video sequences into video blocks and apply a 3D shearlet transform that can efficiently extract primary spatiotemporal features to capture dynamic natural motion scene statistics from the incoming video blocks. The concatenation of a deep residual bidirectional gated recurrent neural network and logistic regression is used to learn the spatiotemporal correlation more robustly and predict the perceptual quality score. In addition, conditional video block-wise constraints are incorporated into the objective function to improve quality estimation performance for the entire video. The experimental results show that the proposed method extracts spatiotemporal motion information more effectively and predicts the video quality with higher accuracy than the conventional no-reference video quality assessment methods.
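This second abstract pairs a bidirectional gated recurrent network with logistic regression to map a sequence of per-block features to a perceptual quality score. The following numpy sketch shows only that structural idea: a minimal GRU cell run forward and backward over the block-feature sequence, with a sigmoid (logistic) readout. The weights are random and untrained, and all dimensions and names are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell with randomly initialized weights (illustrative only)."""
    def __init__(self, input_dim, hidden_dim, rng):
        s = 1.0 / np.sqrt(hidden_dim)
        self.Wz = rng.uniform(-s, s, (hidden_dim, input_dim + hidden_dim))  # update gate
        self.Wr = rng.uniform(-s, s, (hidden_dim, input_dim + hidden_dim))  # reset gate
        self.Wh = rng.uniform(-s, s, (hidden_dim, input_dim + hidden_dim))  # candidate state

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)
        r = sigmoid(self.Wr @ xh)
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1 - z) * h + z * h_tilde

def bigru_quality_score(features, hidden_dim=16, seed=0):
    """Map block features (num_blocks, feat_dim) to a quality score in [0, 1].

    One GRU reads the block sequence forward, another backward; the final
    hidden states are concatenated and passed through a logistic-regression
    readout. Random weights: a structural sketch, not a trained predictor.
    """
    rng = np.random.default_rng(seed)
    fwd = GRUCell(features.shape[1], hidden_dim, rng)
    bwd = GRUCell(features.shape[1], hidden_dim, rng)
    hf = np.zeros(hidden_dim)
    hb = np.zeros(hidden_dim)
    for x in features:          # forward pass over blocks
        hf = fwd.step(x, hf)
    for x in features[::-1]:    # backward pass over blocks
        hb = bwd.step(x, hb)
    w = rng.uniform(-0.1, 0.1, 2 * hidden_dim)  # logistic-regression weights
    return float(sigmoid(w @ np.concatenate([hf, hb])))
```

In the paper's method the recurrent network is additionally deep, residual, and trained with conditional block-wise constraints in the objective; none of that is reflected in this minimal forward-pass sketch.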