• Title/Summary/Keyword: Block-based entropy

Search results: 51

Complexity Balancing for Distributed Video Coding Based on Entropy Coding (엔트로피 코딩 기반의 분산 비디오 코딩을 위한 블록 기반 복잡도 분배)

  • Yoo, Sung-Eun;Min, Kyung-Yeon;Sim, Dong-Gyu
    • Journal of Broadcast Engineering / v.16 no.1 / pp.133-143 / 2011
  • In this paper, a complexity-balancing algorithm is proposed for distributed video coding (DVC) based on entropy coding. To reduce the complexity of DVC decoders, the proposed method employs an entropy coder instead of channel coders, and the complexity balancing is designed to improve rate-distortion (RD) performance with minimal computational cost. The proposed method performs motion estimation at the decoder side and transmits the estimated motion vectors to the encoder, which can then perform more accurate motion refinement. During refinement, the optimal predicted motion vector is determined from the received and predicted motion vectors, and the complexity load of each block is allocated by adjusting the search range according to the difference between them. The computational complexity of the proposed encoder is reduced by 11.9% compared to the H.264/AVC encoder, and that of the proposed decoder by 99% compared to a conventional DVC decoder.
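
A minimal Python sketch of the per-block search-range allocation described above; the linear mapping from motion-vector disagreement to search range and the clip bounds are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

def adaptive_search_range(mv_received, mv_predicted,
                          base_range=4, max_range=16):
    """Allocate a per-block refinement search range from the disagreement
    between the motion vector received from the decoder and the encoder's
    own prediction (hypothetical linear rule for illustration)."""
    dx = abs(mv_received[0] - mv_predicted[0])
    dy = abs(mv_received[1] - mv_predicted[1])
    # Larger disagreement -> wider search, so encoder effort is spent
    # only on blocks where the decoder-side estimate is poor.
    return int(np.clip(base_range + max(dx, dy), base_range, max_range))

# Example: the decoder sent (5, -3); the encoder predicts (4, -2).
print(adaptive_search_range((5, -3), (4, -2)))  # -> 5
```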

Design of video encoder using Multi-dimensional DCT (다차원 DCT를 이용한 비디오 부호화기 설계)

  • Jeon, S.Y.;Choi, W.J.;Oh, S.J.;Jeong, S.Y.;Choi, J.S.;Moon, K.A.;Hong, J.W.;Ahn, C.B.
    • Journal of Broadcast Engineering / v.13 no.5 / pp.732-743 / 2008
  • In H.264/AVC, a 4×4 block transform is used for intra and inter prediction instead of an 8×8 block transform. With small-block-size coding, H.264/AVC obtains high temporal prediction efficiency; however, it is limited in exploiting spatial redundancy. Motivated by these points, we propose a multi-dimensional transform that achieves both accurate temporal prediction and effective use of spatial redundancy. In preliminary experiments, the proposed multi-dimensional transform achieves higher energy compaction than the 2-D DCT used in H.264. We designed an integer-based transform and quantization coder for the multi-dimensional coder, and we propose several additional methods for it: cube forming, scan order, mode decision, and parameter updating. The context-based adaptive variable-length coding (CAVLC) of H.264 was employed as the entropy coder. Simulation results show that the performance of the multi-dimensional codec is similar to that of H.264 at lower bit rates, although the rate-distortion curves of the multi-dimensional DCT, measured by entropy and by the number of non-zero coefficients, show remarkably higher performance than those of H.264/AVC. This implies that a more efficient entropy coder, optimized to the statistics of multi-dimensional DCT coefficients, and rate-distortion optimization are needed to take full advantage of the multi-dimensional DCT. Many issues remain as future work to improve the coding efficiency of the multi-dimensional coder over H.264/AVC.
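
A sketch of the cube-forming and multi-dimensional DCT idea: co-located residual blocks from consecutive frames are stacked into a cube and transformed along all three axes. The 4×4×4 cube size and the use of SciPy's floating-point DCT are assumptions for illustration; the paper uses an integer-based transform.

```python
import numpy as np
from scipy.fft import dctn

def cube_transform(blocks):
    """Stack co-located 4x4 residual blocks from consecutive frames into
    a cube and apply a separable 3-D DCT along time, height and width."""
    cube = np.stack(blocks)          # shape (T, 4, 4)
    return dctn(cube, norm='ortho')

# Four temporally adjacent, nearly identical 4x4 residual blocks:
blocks = [np.full((4, 4), v, dtype=float) for v in (10, 11, 10, 12)]
coeffs = cube_transform(blocks)
# When the blocks are correlated, energy compacts into few coefficients.
print(f"non-zero coefficients: {np.sum(np.abs(coeffs) > 1e-6)} of {coeffs.size}")
```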

Design of a Lossless Audio Coding Using Cholesky Decomposition and Golomb-Rice Coding (콜레스키 분해와 골롬-라이스 부호화를 이용한 무손실 오디오 부호화기 설계)

  • Cheong, Cheon-Dae;Shin, Jae-Ho
    • Journal of Korea Multimedia Society / v.11 no.11 / pp.1480-1490 / 2008
  • Designing a linear predictor and matching it with an entropy coder are at the heart of lossless audio coding. In this paper, we use the covariance method and the Cholesky decomposition to calculate the linear prediction coefficients, instead of the autocorrelation method and the Levinson-Durbin recursion, and compare the results to a polynomial predictor; of the two, the predictor with the smaller prediction error is selected. For entropy coding, we use a Golomb-Rice coder with a block-based parameter estimation method and with the sequential adaptation methods of LOCO-I and RLGR. The proposed predictor with block-based parameter estimation improves compression ratios by 0.3413% to 2.2879% over the FLAC lossless audio coder, which uses the autocorrelation method and the Levinson-Durbin recursion. The proposed predictor with the LOCO-I adaptation method achieves improvements in the same range, while the combination with the RLGR adaptation method obtains better results on specific signals.
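
A small sketch of Golomb-Rice coding with block-based parameter estimation as described above; the zigzag mapping of signed residuals and the log2-of-mean parameter rule are common choices, assumed here since the abstract does not give the exact formulas.

```python
import math

def zigzag(n):
    """Map a signed prediction residual to a non-negative integer."""
    return 2 * n if n >= 0 else -2 * n - 1

def block_rice_parameter(residuals):
    """Estimate one Rice parameter per block from the mean mapped
    residual magnitude (assumed rule; the paper's may differ)."""
    mean = sum(zigzag(r) for r in residuals) / len(residuals)
    return max(0, math.ceil(math.log2(mean + 1)))

def rice_encode(value, k):
    """Golomb-Rice codeword: unary-coded quotient, then k-bit remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    return '1' * q + '0' + (format(r, f'0{k}b') if k else '')

residuals = [3, -2, 0, 5, -1, 2, 4, -3]   # one block of prediction error
k = block_rice_parameter(residuals)
bitstream = ''.join(rice_encode(zigzag(r), k) for r in residuals)
print(k, bitstream)
```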

Motion Estimation Using the Relation Between Rate and Distortion (부호화율과 일그러짐의 관계를 이용하는 움직임 추정)

  • 양경호;김태정;이충웅
    • Journal of the Korean Institute of Telematics and Electronics B / v.29B no.8 / pp.66-73 / 1992
  • This paper proposes a new motion estimation algorithm that takes the rate-distortion relation into account when encoding motion-compensated error images. The proposed algorithm is based on a new block-matching criterion that is a function not only of the mean squared block-matching error but also of the code length of the entropy-coded motion vector. The algorithm thus optimizes the trade-off between the bit rate for motion-compensated error images and the bit rate for the motion vectors. Simulation results show that, in motion-compensated image coding, the proposed motion estimator improves overall performance by 0.5 dB compared to a motion estimator that uses MSE only.
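
A sketch of the rate-distortion block-matching criterion described above, J(mv) = MSE(mv) + lambda * R(mv); the Lagrange multiplier value and the simple motion-vector bit model are illustrative assumptions.

```python
import numpy as np

def rd_motion_search(cur, ref, bx, by, bs=8, sr=4, lam=0.85):
    """Full search minimizing J = MSE + lambda * R(mv), where R(mv)
    approximates the entropy-coded motion-vector length."""
    def mv_bits(dx, dy):
        # Hypothetical bit model: longer vectors cost more bits.
        return 2 * (1 + abs(dx) + abs(dy))
    blk = cur[by:by + bs, bx:bx + bs].astype(float)
    best_cost, best_mv = float('inf'), (0, 0)
    for dy in range(-sr, sr + 1):
        for dx in range(-sr, sr + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bs > ref.shape[0] or x + bs > ref.shape[1]:
                continue
            mse = np.mean((blk - ref[y:y + bs, x:x + bs]) ** 2)
            cost = mse + lam * mv_bits(dx, dy)
            if cost < best_cost:
                best_cost, best_mv = cost, (dx, dy)
    return best_mv, best_cost
```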

An Efficient VLC Table Prediction Scheme for H.264 Using Weighting Multiple Reference Blocks (H.264 표준에서 가중된 다중 참조 블록을 이용한 효율적인 VLC 표 예측 방법)

  • Heo, Jin;Oh, Kwan-Jung;Ho, Yo-Sung
    • Proceedings of the IEEK Conference / 2005.11a / pp.39-42 / 2005
  • H.264, a recently standardized international video coding standard, adopted context-based adaptive variable-length coding (CAVLC) as the entropy coding tool in the baseline profile. By combining an adaptive variable-length coding technique with context modeling, a high degree of redundancy reduction is achieved. However, CAVLC in H.264 has a weakness: the correct prediction rate of the variable-length coding (VLC) table is low in complex areas such as object boundaries. In this paper, we propose a VLC table prediction scheme that considers multiple reference blocks: the co-located block of the previous frame and the neighboring blocks of the current frame. The proposed algorithm derives weighting values from how correctly each reference block predicts the VLC table. Using this method, we can enhance the prediction rate of the VLC table and reduce the bit rate.
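
A sketch of predicting the CAVLC table index from weighted multiple reference blocks, as described above; standard CAVLC averages only the left and upper neighbors, and the weighting and rounding details here are assumptions.

```python
def predict_nc(counts, weights):
    """Predict nC (the coefficient-count context that selects the VLC
    table) as a weighted average over several reference blocks, e.g.
    left and upper neighbors plus the co-located block of the previous
    frame. Weights would reflect each block's past prediction accuracy."""
    total = sum(weights)
    if total == 0:               # no reliable reference: fall back to mean
        weights, total = [1] * len(counts), len(counts)
    return round(sum(n * w for n, w in zip(counts, weights)) / total)

# nA (left) = 3, nB (above) = 5, nT (co-located, previous frame) = 4
print(predict_nc([3, 5, 4], [0.25, 0.25, 0.5]))  # -> 4
```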

Hiding Secret Data in an Image Using Codeword Imitation

  • Wang, Zhi-Hui;Chang, Chin-Chen;Tsai, Pei-Yu
    • Journal of Information Processing Systems / v.6 no.4 / pp.435-452 / 2010
  • This paper proposes a novel reversible data hiding scheme based on a vector quantization (VQ) codebook. The proposed scheme uses the principal component analysis (PCA) algorithm to sort the codebook and to find two similar codewords for an image block. According to the secret bits to be embedded and the difference between those two similar codewords, the original image block is transformed into a difference-number table. Finally, this table is compressed by entropy coding and sent to the receiver. The experimental results demonstrate that the proposed scheme achieves greater hiding capacity, about five bits per index, with an acceptable bit rate. At the receiver end, after the compressed code has been decoded, the image can be recovered to a VQ-compressed image.
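
A sketch of the PCA-based codebook sorting step described above: codewords are ordered by their projection onto the first principal component so that similar codewords end up adjacent. This covers only the sorting step, not the embedding itself, and the random codebook is a placeholder.

```python
import numpy as np

def pca_sort_codebook(codebook):
    """Sort VQ codewords along the first principal component so that
    neighboring indices hold similar codewords, which makes finding
    'two similar codewords' for a block a matter of adjacent indices."""
    centered = codebook - codebook.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov(centered.T))  # ascending eigenvalues
    scores = centered @ vecs[:, -1]               # project on top axis
    order = np.argsort(scores)
    return codebook[order], order

codebook = np.random.default_rng(0).integers(0, 256, (64, 16)).astype(float)
sorted_cb, order = pca_sort_codebook(codebook)
```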

Image Deblocking Scheme for JPEG Compressed Images Using an Adaptive-Weighted Bilateral Filter

  • Wang, Liping;Wang, Chengyou;Huang, Wei;Zhou, Xiao
    • Journal of Information Processing Systems / v.12 no.4 / pp.631-643 / 2016
  • Due to the block-based discrete cosine transform (BDCT), JPEG-compressed images usually exhibit blocking artifacts, which seriously degrade visual quality at very low bit rates. A bilateral filter preserves edges while smoothing, so we propose an adaptive-weighted bilateral filter based on this property and build an image-deblocking scheme around it to remove and reduce blocking artifacts. Two parameters of the proposed filter are adaptively weighted so that it avoids over-blurring unsmooth regions while eliminating blocking artifacts in smooth regions. This is achieved in two ways: local entropy controls the filtering level of each pixel within an image, and an improved blind image quality assessment (BIQA) controls the filtering strength across images whose blocking artifacts differ. Our experimental results show that the proposed image-deblocking scheme performs well at eliminating blocking artifacts while avoiding over-blurring of unsmooth regions.
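
A sketch of the local-entropy control described above: low-entropy (smooth) patches receive stronger filtering. The histogram bin count and the linear entropy-to-strength mapping are illustrative assumptions.

```python
import numpy as np

def local_entropy(patch, bins=16):
    """Shannon entropy of an 8-bit gray-level patch; low entropy
    indicates a smooth region where stronger deblocking is safe."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 255))
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def range_sigma(patch, sigma_max=40.0, h_max=4.0):
    """Map local entropy to the bilateral filter's range parameter:
    smoother patches get a larger sigma, hence stronger filtering."""
    h = min(local_entropy(patch), h_max)
    return sigma_max * (1.0 - h / h_max)
```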

Lossless Frame Memory Compression with Low Complexity based on Block-Buffer Structure for Efficient High Resolution Video Processing (고해상도 영상의 효과적인 처리를 위한 블록 버퍼 기반의 저 복잡도 무손실 프레임 메모리 압축 방법)

  • Kim, Jongho
    • Journal of the Korea Academia-Industrial cooperation Society / v.17 no.11 / pp.20-25 / 2016
  • This study presents a low-complexity lossless frame-memory compression algorithm based on a block-buffer structure for efficient high-resolution video processing. It uses a block-based modified Hadamard transform (MHT) for spatial decorrelation and adaptive Golomb-Rice (AGR) coding as the entropy encoding stage, achieving lossless image compression with low complexity and efficient hardware implementation. The MHT contains only adders and 1-bit shift operators, and since AGR requires no additional memory space or memory-access operations, it suits a low-complexity design. Comprehensive experiments and a computational-complexity analysis demonstrate that the proposed algorithm achieves superior compression performance relative to existing methods and can be applied to hardware devices without image-quality degradation and with negligible modification of the existing codec structure. Moreover, because the proposed method requires no extra memory-access operations, it can reduce hardware implementation cost and is useful for processing high-resolution video beyond Full HD.
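
A sketch of an adder/shift-only Hadamard butterfly in the spirit of the MHT described above; the abstract does not give the exact modified transform, so this shows the plain unnormalized 4-point case with its exact integer inverse.

```python
def hadamard4(x):
    """Unnormalized 4-point Hadamard transform from two butterfly
    stages of additions and subtractions only (no multipliers)."""
    a, b = x[0] + x[3], x[1] + x[2]
    c, d = x[0] - x[3], x[1] - x[2]
    return [a + b, c + d, a - b, c - d]

def ihadamard4(y):
    """Inverse: the same butterflies followed by a 2-bit right shift,
    exact for integers since the forward transform scales by 4."""
    a, b = y[0] + y[3], y[1] + y[2]
    c, d = y[0] - y[3], y[1] - y[2]
    return [(a + b) >> 2, (c + d) >> 2, (a - b) >> 2, (c - d) >> 2]

assert ihadamard4(hadamard4([10, 12, 11, 9])) == [10, 12, 11, 9]
```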

Randomized Block Size (RBS) Model for Secure Data Storage in Distributed Server

  • Sinha, Keshav;Paul, Partha;Amritanjali, Amritanjali
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.12 / pp.4508-4530 / 2021
  • Distributed data storage services are widely used today; however, the lack of proper security measures leaves user data vulnerable. In this work, we propose a Randomized Block Size (RBS) model for secure data storage in distributed environments. The model works with multifold block sizes encrypted with a Chinese Remainder Theorem-based RSA (C-RSA) technique for end-to-end security of multimedia data. The proposed RBS model has a key generation phase (KGP) for constructing asymmetric keys and a rand generation phase (RGP) for applying optimal asymmetric encryption padding (OAEP) to the original message. Experimental results with text and image files show that the post-encryption file size is not much affected and that data is efficiently encrypted while being stored at the distributed storage server (DSS). Ciphertext size, encryption time, and throughput are considered for performance evaluation, while statistical analyses such as similarity measurement, correlation coefficient, histogram, and entropy analysis are used to check image-pixel deviation. The number of pixels change rate (NPCR) and unified averaged changed intensity (UACI) are used to check the strength of the proposed encryption technique. The proposed model is robust, with high resilience against eavesdropping, insider attacks, and chosen-plaintext attacks.
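
A sketch of the NPCR and UACI measures cited above, computed between two 8-bit cipher-images; these formulas are the standard definitions.

```python
import numpy as np

def npcr_uaci(c1, c2):
    """NPCR: percentage of differing pixels. UACI: mean absolute
    intensity change relative to the 8-bit maximum."""
    c1, c2 = c1.astype(np.int32), c2.astype(np.int32)
    npcr = 100.0 * np.mean(c1 != c2)
    uaci = 100.0 * np.mean(np.abs(c1 - c2)) / 255.0
    return npcr, uaci

rng = np.random.default_rng(1)
a = rng.integers(0, 256, (64, 64))
b = rng.integers(0, 256, (64, 64))
print(npcr_uaci(a, b))  # ideal values approach ~99.61% and ~33.46%
```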

The Research of Efficient Context Coding Method for compression of High-resolution image in JPEG 2000 (고해상도 정지영상 압축을 위한 효율적인 JPEG2000용 Context 추출부의 연산 방법 연구)

  • Lee, Sung-Mok;Song, Jin-Gun;Ha, Joo-Young;Lee, Min-Woo;Kang, Bong-Soon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2007.10a / pp.97-100 / 2007
  • To overcome many defects of the earlier JPEG still-image compression standard, the JPEG2000 standard was developed, based on the principles of the DWT and EBCOT entropy coding. EBCOT (embedded block coding with optimized truncation) is the most important technology in JPEG2000, but it consumes the most computation time because it operates at the bit level, and many studies have therefore sought to reduce its computation time. This paper proposes an efficient context-coding method, a context-extraction scheme with an improved computational architecture, intended for a hard-wired JPEG2000 encoder used for the compression of high-resolution images.
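
A sketch of the neighbor gathering behind EBCOT context extraction: in the significance-propagation pass, the context index is derived from the counts of significant horizontal, vertical, and diagonal neighbors. The context lookup table itself is omitted here.

```python
def significance_context(sig, y, x):
    """Counts of significant horizontal, vertical and diagonal neighbors
    of coefficient (y, x) in a significance-state map; these three counts
    index the JPEG2000 context table for the coded bit."""
    def s(j, i):
        return sig[j][i] if 0 <= j < len(sig) and 0 <= i < len(sig[0]) else 0
    h = s(y, x - 1) + s(y, x + 1)
    v = s(y - 1, x) + s(y + 1, x)
    d = (s(y - 1, x - 1) + s(y - 1, x + 1) +
         s(y + 1, x - 1) + s(y + 1, x + 1))
    return h, v, d

sig = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]
print(significance_context(sig, 1, 1))  # -> (1, 1, 1)
```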
