• Title/Summary/Keyword: Picture Partitioning


Deep Learning based HEVC Double Compression Detection (딥러닝 기술 기반 HEVC로 압축된 영상의 이중 압축 검출 기술)

  • Uddin, Kutub;Yang, Yoonmo;Oh, Byung Tae
    • Journal of Broadcast Engineering
    • /
    • v.24 no.6
    • /
    • pp.1134-1142
    • /
    • 2019
  • Detection of double compression is one of the most effective ways of verifying the validity of videos. Many methods have been introduced to detect HEVC double compression with different coding parameters. However, HEVC double compression detection under the same coding environment is still a challenging task in video forensics. In this paper, we introduce a novel method for detecting double compression with the same coding environment, based on the frame partitioning information in intra prediction mode. We propose to extract a statistical feature and a Deep Convolutional Neural Network (DCNN) feature from the difference of the picture partitioning, including the Coding Unit (CU) and Transform Unit (TU) information. Finally, a softmax layer is integrated to classify videos into single and double compression by combining the statistical and DCNN features. Experimental results show the effectiveness of the statistical and DCNN features, with an average accuracy of 87.5% on the WVGA dataset and 84.1% on the HD dataset.
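The final stage the abstract describes, combining two feature vectors through a softmax classifier, can be sketched as a linear layer over the concatenated features. The function names, feature dimensions, and weights below are placeholders, not the paper's trained model:

```python
import math

def softmax(z):
    # numerically stable softmax over class scores
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def classify(stat_feat, dcnn_feat, w, b):
    # Concatenate the statistical and DCNN feature vectors and score
    # two classes (single vs. double compression) with a linear layer.
    x = stat_feat + dcnn_feat
    z = [sum(wi * xi for wi, xi in zip(row, x)) + bi
         for row, bi in zip(w, b)]
    return softmax(z)
```

With all-zero weights both classes come out equally likely, which makes the combination step easy to sanity-check before plugging in trained parameters.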

A New Data Partitioning of DCT Coefficients for Error-resilient Transmission of Video (비디오의 에러내성 전송을 위한 DCT 계수의 새로운 분할 기법)

  • Roh, Kyu-Chan;Kim, Jae-Kyoon
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.39 no.6
    • /
    • pp.585-590
    • /
    • 2002
  • In typical data partitioning for error-resilient video coding, motion and macroblock header information is separated from the texture information. This can be an effective tool for the transmission of video over error-prone environments. For intra-coded frames, however, the loss of DCT (discrete cosine transform) coefficients is fatal because there is no other information with which to reconstruct the macroblocks corrupted by errors. For inter-coded frames, when an error occurs in the DCT coefficients, the picture quality is degraded because all DCT coefficients in those packets are discarded. In this paper, we propose an efficient data partitioning and coding method for DCT-based error-resilient video. The quantized DCT coefficients are partitioned into an even-value approximation and a remainder part. It is shown that the proposed algorithm provides better quality in the high-priority part than conventional methods.
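One plausible reading of the even-value-plus-remainder split is truncating each quantized coefficient's magnitude to the nearest even value toward zero, leaving a remainder in {-1, 0, +1} for the low-priority partition. The function name and the rounding convention are assumptions for illustration:

```python
def split_coefficient(c):
    # High-priority part: even-valued approximation, truncating the
    # magnitude toward zero. Low-priority part: the small remainder.
    sign = -1 if c < 0 else 1
    even = sign * (abs(c) // 2) * 2
    rem = c - even          # always -1, 0, or +1
    return even, rem
```

Summing the two parts recovers the coefficient exactly, so losing only the remainder partition degrades each coefficient by at most one quantization step.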

A Study on Hybrid Image Coder Using a Reconfigurable Multiprocessor System (Study II : Parallel Algorithm Implementation (재구성 가능한 다중 프로세서 시스템을 이용한 혼합 영상 부호화기 구현에 관한 연구(연구 II : 병렬 알고리즘 구현))

  • Choi, Sang-Hoon;Lee, Kwang-Kee;Kim, In;Lee, Yong-Kyun;Park, Kyu-Tae
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.30B no.10
    • /
    • pp.13-26
    • /
    • 1993
  • Moving picture coding algorithms are realized on the multiprocessor system presented in Study I. For the most efficient processing of the algorithms, pipelining and geometrical parallel processing methods are employed, and the processing time, communication load, and efficiency of each algorithm are compared. The performance of the implemented system is compared and analysed with reference to the MPEG coding algorithm. Theoretical calculations and experimental results both show that geometrical partitioning is the more suitable parallel processing approach for moving picture coding: it has the advantage of easy algorithm modification and expansion, and its overall efficiency is higher than that of pipelining.
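Geometrical partitioning, as contrasted with pipelining above, assigns each processor a contiguous region of the frame. A minimal sketch, assuming row-strip partitioning (the abstract does not state the exact geometry):

```python
def geometric_partition(height, n_procs):
    # Divide frame rows into near-equal contiguous strips, one per
    # processor; each strip is a (first_row, last_row_exclusive) pair.
    base, extra = divmod(height, n_procs)
    strips, row = [], 0
    for p in range(n_procs):
        rows = base + (1 if p < extra else 0)
        strips.append((row, row + rows))
        row += rows
    return strips
```

Because every processor runs the full coding algorithm on its own strip, changing or extending the algorithm touches one code path, which matches the modification-and-expansion advantage claimed in the abstract.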


Multi-stream Delivery Method of the Video Data Based on SPIHT Wavelet (SPIHT 웨이브릿 기반의 비디오 데이터의 멀티스트림 전송 기법)

  • 강경원;류권열;김기룡;문광석;김문수
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.3 no.3
    • /
    • pp.14-20
    • /
    • 2002
  • In this paper, we propose a compression technique for video data using wavelet-based SPIHT (set partitioning in hierarchical trees) and a multi-stream delivery method for best-effort service that fully utilizes the client's bandwidth over the current Internet. Experiments show that, because it uses a wavelet video coder, the proposed method provides about 1.5 dB better picture quality without block effects than DCT (discrete cosine transform)-based coding schemes at the same bit rates. In addition, the technique implements multi-stream transmission based on TCP (transmission control protocol). It thus provides a best-effort service that is robust to network jitter and maximally utilizes the client's bandwidth.
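The multi-stream delivery idea can be sketched as distributing an embedded bitstream's packets round-robin over several TCP connections and reassembling them in order at the client. The packetization below is illustrative; the paper's exact scheme is not given in the abstract:

```python
def split_streams(packets, n):
    # round-robin the packet sequence into n sub-streams
    streams = [[] for _ in range(n)]
    for i, pkt in enumerate(packets):
        streams[i % n].append(pkt)
    return streams

def merge_streams(streams):
    # interleave the sub-streams back into the original packet order
    merged, i = [], 0
    while any(i < len(s) for s in streams):
        for s in streams:
            if i < len(s):
                merged.append(s[i])
        i += 1
    return merged
```

Since each sub-stream rides its own TCP connection, a stall on one connection delays only every n-th packet, which is one way such a scheme can tolerate network jitter.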


Multiple Description Coding using Whitening Transform (Whitening Transform을 이용한 Multiple Description Coding)

  • 최광표;이근영
    • Proceedings of the IEEK Conference
    • /
    • 2002.06d
    • /
    • pp.41-44
    • /
    • 2002
  • In communication systems with diversity, we commonly face the need for a new source coding technique: error-resilient coding. Error-resilient coding refers to coding algorithms that are robust to the unreliability of the communication channel. In recent years, many error-resilient coding techniques have been proposed, such as data partitioning, resynchronization, error detection, concealment, reference picture selection, and multiple description coding (MDC). In this paper, we propose an MDC using a whitening transform. Conventional MDC using a correlating transform needs additional information to decode the image. However, if an image is transformed using the whitening transform, no additional information is necessary because the coefficients of the whitening transform have unit-variance statistics.
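The unit-variance property the abstract relies on can be illustrated in one dimension: scaling a coefficient band by its standard deviation makes its variance exactly one, so the decoder needs no per-image statistics beyond the scale itself. This is a simplified illustration, not the paper's full matrix whitening transform:

```python
def whiten(coeffs):
    # normalize a coefficient band to zero mean and unit variance
    n = len(coeffs)
    mean = sum(coeffs) / n
    var = sum((c - mean) ** 2 for c in coeffs) / n
    std = var ** 0.5
    return [(c - mean) / std for c in coeffs], mean, std

def unwhiten(white, mean, std):
    # invert the normalization
    return [w * std + mean for w in white]
```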


Multi-Layer Perceptron Based Ternary Tree Partitioning Decision Method for Versatile Video Coding (다목적 비디오 부/복호화를 위한 다층 퍼셉트론 기반 삼항 트리 분할 결정 방법)

  • Lee, Taesik;Jun, Dongsan
    • Journal of Korea Multimedia Society
    • /
    • v.25 no.6
    • /
    • pp.783-792
    • /
    • 2022
  • Versatile Video Coding (VVC) is the latest video coding standard, developed in 2020 by the Joint Video Experts Team (JVET) of the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG). Although VVC provides powerful coding performance, it requires tremendous computational complexity to determine the optimal block structures during the encoding process. In this paper, we propose a fast ternary tree decision method using two neural networks based on the multi-layer perceptron structure, named STH-NN and STV-NN, each taking a 7-node input vector. After training, the STH-NN and STV-NN achieved accuracies of 85% and 91%, respectively. Experimental results show that the proposed method reduces encoding complexity by up to 25% with unnoticeable coding loss compared to the VVC test model (VTM).
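A multi-layer perceptron with a 7-feature input, as the abstract describes, reduces to a small forward pass: a hidden layer and a sigmoid output that can gate whether the encoder evaluates a ternary-tree split. The layer sizes, activations, and weights below are assumptions, not the trained STH-NN/STV-NN parameters:

```python
import math

def mlp_forward(x, w1, b1, w2, b2):
    # hidden layer: ReLU units over the 7 input features
    h = [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    # output: sigmoid probability that the TT split is worth testing
    z = sum(wi * hi for wi, hi in zip(w2, h)) + b2
    return 1.0 / (1.0 + math.exp(-z))
```

An encoder would skip the rate-distortion evaluation of the ternary split whenever this probability falls below a tuned threshold, which is where the complexity saving comes from.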

Direction-Oriented Fast Full Search Algorithm at the Divided Search Range (세분화된 탐색 범위에서의 방향 지향적 전영역 고속 탐색 알고리즘)

  • Lim, Dong-Young;Park, Sang-Jun;Jeong, Je-Chang
    • Journal of Broadcast Engineering
    • /
    • v.12 no.3
    • /
    • pp.278-288
    • /
    • 2007
  • We propose a fast full search algorithm that reduces the computational load of the block matching algorithm used for motion estimation in video coding. Since the conventional spiral search method starts at the center of the search window and then moves the search point pixel by pixel to estimate the motion vector, it is suited to slow-motion pictures. We propose an efficient motion estimation method that works well for both fast- and slow-motion pictures. First, when finding the initial threshold value, we use an expanded predictor that can approximate the minimum threshold value. The proposed algorithm estimates motion in a new search order after partitioning the search window, and adapts the directional search order in the re-divided search window. As a result, the proposed algorithm reduces the computational load by 94% on average compared to the conventional spiral full search algorithm, without any loss of image quality.
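The baseline the paper improves on, spiral full search with a sum-of-absolute-differences (SAD) cost, can be sketched as below. This is the conventional reference method only; the paper's expanded predictor and directional window re-division are not reproduced here:

```python
def sad(cur, ref, bx, by, dx, dy, bs):
    # sum of absolute differences between the current block at (bx, by)
    # and the reference block displaced by the candidate vector (dx, dy)
    return sum(abs(cur[by + i][bx + j] - ref[by + dy + i][bx + dx + j])
               for i in range(bs) for j in range(bs))

def spiral_offsets(r):
    # center-outward scan order used by spiral full search
    yield (0, 0)
    for ring in range(1, r + 1):
        for dy in range(-ring, ring + 1):
            for dx in range(-ring, ring + 1):
                if max(abs(dx), abs(dy)) == ring:
                    yield (dx, dy)

def motion_search(cur, ref, bx, by, bs, r):
    best_cost, best_mv = float("inf"), (0, 0)
    for dx, dy in spiral_offsets(r):
        cost = sad(cur, ref, bx, by, dx, dy, bs)
        if cost < best_cost:
            best_cost, best_mv = cost, (dx, dy)
            if best_cost == 0:   # early termination on a perfect match
                break
    return best_mv, best_cost
```

Starting at the center pays off for slow motion because the true vector is usually near (0, 0); for fast motion many candidates are visited before the minimum, which is the inefficiency the directional, partitioned search order targets.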

Multiple Description Coding of H.264/AVC Motion Vector under Data Partitioning Structure and Decoding Using Multiple Description Matching (데이터 분할구조에서의 H.264/AVC 움직임 벡터의 다중표현 부호화와 다중표현 정합을 이용한 복호화)

  • Yang, Jung-Youp;Jeon, Byeung-Woo
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.6
    • /
    • pp.100-110
    • /
    • 2007
  • When compressed video data is transmitted over an error-prone network such as a wireless channel, data is likely to be lost, so the quality of the reconstructed picture is severely degraded. This is especially so when important information such as motion vectors or macroblock modes is lost. The H.264/AVC standard includes data partitioning (DP) as an error-resilience technique for protecting important information from errors, in which data is labeled according to its relative importance. However, the DP technique requires a network that supports different reliabilities for transmitted data. In general, the benefit of unequal error protection (UEP) is sought by sending multiple copies of the packets corresponding to important information. In this paper, we propose an MDC technique based on data partitioning. The proposed method encodes the motion vectors of the H.264/AVC standard into multiple parts using MDC and transmits each part as an independent packet. Even if some packets are lost, the proposed scheme can decode the compressed bitstream by estimating motion vectors from the partial packets that were correctly transmitted, thereby achieving improved error concealment with minimal effect from channel errors. In the decoding process, the proposed multiple description matching also increases the accuracy of the estimated lost motion vectors and the quality of the reconstructed video.
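The general MDC idea of the abstract, splitting motion vectors into independently decodable packets and estimating whatever is lost from what arrived, can be illustrated with a simple odd/even temporal split. This concealment-by-averaging scheme is an assumption for illustration, not the paper's multiple description matching:

```python
def mdc_split(mvs):
    # odd/even split of the motion-vector sequence into two descriptions
    return mvs[0::2], mvs[1::2]

def mdc_merge(d0, d1):
    # both descriptions received: interleave them back in order
    out = []
    for a, b in zip(d0, d1):
        out += [a, b]
    out += d0[len(d1):]   # description 0 may carry one extra MV
    return out

def conceal(d0, n):
    # description 1 lost: rebuild n MVs from description 0 alone,
    # estimating each missing MV as the average of its neighbours
    out = []
    for i in range(n):
        if i % 2 == 0:
            out.append(d0[i // 2])
        else:
            prev = d0[i // 2]
            nxt = d0[i // 2 + 1] if i // 2 + 1 < len(d0) else prev
            out.append(((prev[0] + nxt[0]) // 2, (prev[1] + nxt[1]) // 2))
    return out
```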

Multiple Description Coding using Whitening Transform

  • Park, Kwang-Pyo;Lee, Keun-Young
    • Proceedings of the IEEK Conference
    • /
    • 2002.07b
    • /
    • pp.1003-1006
    • /
    • 2002
  • In communication systems with diversity, we commonly face the need for a new source coding technique: error-resilient coding. Error-resilient coding refers to coding algorithms that are robust to the unreliability of the communication channel. In recent years, many error-resilient coding techniques have been proposed, such as data partitioning, resynchronization, error detection, concealment, reference picture selection, and multiple description coding (MDC). In particular, MDC using a correlating transform explicitly adds correlation between two descriptions to enable the estimation of one set from the other. However, the conventional correlating transform method has a critical problem: the decoder must know the statistics of the original image. In this paper, we propose an enhanced method, MDC using a whitening transform, which requires no additional statistical information to decode the image because the DCT coefficients after applying the whitening transform have unit-variance statistics. Our experimental results show that the proposed method achieves a good trade-off between coding efficiency and reconstruction quality. With the proposed method, the PSNR of images reconstructed from two descriptions is about 0.7 dB higher than the conventional method at 1.0 bpp, and from only one description about 1.8 dB higher at the same rate.


Image Coding Using DCT Map and Binary Tree-structured Vector Quantizer (DCT 맵과 이진 트리 구조 벡터 양자화기를 이용한 영상 부호화)

  • Jo, Seong-Hwan;Kim, Eung-Seong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.1 no.1
    • /
    • pp.81-91
    • /
    • 1994
  • A DCT map and a new codebook design algorithm based on the two-dimensional discrete cosine transform (2D-DCT) are presented for an image vector quantizer coder. We divide the image into smaller subblocks and then, using the 2D-DCT, separate them into blocks that are hard to code but bear most of the visual information and blocks that are easy to code but carry little visual information, from which a DCT map is made. According to this map, the significant features of the training image are extracted using the 2D-DCT. A codebook is generated by partitioning the training set into a binary tree. Each training vector at a nonterminal node of the binary tree is directed to one of its two descendants by comparing a single feature associated with that node to a threshold. Compared with the pairwise nearest neighbor (PNN) and classified VQ (CVQ) algorithms on the 'Lenna' and 'Boat' images, the new algorithm reduces computation time and shows better picture quality, with gains of 0.45 dB and 0.33 dB over PNN, and 0.05 dB and 0.1 dB over CVQ, respectively.
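The encoding side of such a tree-structured quantizer is a simple walk: each internal node compares one feature of the input vector to its threshold, so a vector reaches its codeword in depth-many comparisons instead of a full codebook search. The `Node` layout below is a minimal sketch of that structure, with placeholder values:

```python
class Node:
    def __init__(self, feature=None, threshold=None,
                 left=None, right=None, codeword=None):
        self.feature, self.threshold = feature, threshold
        self.left, self.right, self.codeword = left, right, codeword

def quantize(vec, node):
    # walk the tree: at each internal node compare a single feature
    # of the input vector against that node's threshold
    while node.codeword is None:
        node = node.left if vec[node.feature] <= node.threshold else node.right
    return node.codeword
```

For a balanced tree over K codewords this costs log2(K) scalar comparisons per vector, which is the computation-time reduction the abstract reports over exhaustive designs.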
