• Title/Summary/Keyword: compression coding


Suboptimal video coding for machines method based on selective activation of in-loop filter

  • Ayoung Kim;Eun-Vin An;Soon-heung Jung;Hyon-Gon Choo;Jeongil Seo;Kwang-deok Seo
    • ETRI Journal / v.46 no.3 / pp.538-549 / 2024
  • A conventional codec aims to increase the compression efficiency for transmission and storage while maintaining video quality. However, as the number of platforms using machine vision rapidly increases, a codec that increases the compression efficiency and maintains the accuracy of machine vision tasks must be devised. Hence, the Moving Picture Experts Group created a standardization process for video coding for machines (VCM) to reduce bitrates while maintaining the accuracy of machine vision tasks. In particular, in-loop filters have been developed for improving the subjective quality and machine vision task accuracy. However, the high computational complexity of in-loop filters limits the development of a high-performance VCM architecture. We analyze the effect of an in-loop filter on the VCM performance and propose a suboptimal VCM method based on the selective activation of in-loop filters. The proposed method reduces the computation time for video coding by approximately 5% when using the enhanced compression model and 2% when employing a Versatile Video Coding test model while maintaining the machine vision accuracy and compression efficiency of the VCM architecture.

Multi-Symbol Binary Arithmetic Coding Algorithm for Improving Throughput in Hardware Implementation

  • Kim, Jin-Sung;Kim, Eung Sup;Lee, Kyujoong
    • Journal of Multimedia Information System / v.5 no.4 / pp.273-276 / 2018
  • In video compression standards, entropy coding is essential to high-performance compression because it removes the redundancy of data symbols. Binary arithmetic coding is one of the highest-performing entropy coding methods. However, the dependency between consecutive binary symbols prevents throughput improvement. To enhance throughput, a new probability model is proposed for encoding multiple symbols at a time. In the proposed method, the multi-symbol encoder is implemented with only adders and shifters, and the multiplication table for the interval subdivision of binary arithmetic coding is removed. Compared with the compression ratio of the H.264/AVC CABAC, the average performance degradation is only 1.4%, which is negligible.
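
The interval subdivision that this abstract refers to can be illustrated with a toy binary arithmetic coder. This is a minimal sketch using exact rational arithmetic and no renormalization, so it shows only the principle (and why consecutive symbols are dependent: each symbol narrows the interval the next one uses); it is not the paper's multi-symbol, multiplication-free design:

```python
from fractions import Fraction

def encode(bits, p0):
    # Maintain the interval [low, low + width); each symbol subdivides it
    # in proportion to p0 = probability of a 0-bit.
    low, width = Fraction(0), Fraction(1)
    for b in bits:
        split = width * p0            # size of the 0-bit sub-interval
        if b == 0:
            width = split
        else:
            low += split
            width -= split
    return low + width / 2            # any value inside the final interval

def decode(value, p0, n):
    # Mirror the encoder: test which sub-interval the value falls into.
    low, width = Fraction(0), Fraction(1)
    out = []
    for _ in range(n):
        split = width * p0
        if value < low + split:
            out.append(0)
            width = split
        else:
            out.append(1)
            low += split
            width -= split
    return out
```

Real coders such as CABAC replace the exact multiplication `width * p0` with table look-ups or, as proposed here, adder/shifter approximations, and renormalize the interval to keep it in fixed-point range.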

MPEG-4 ALS - The Standard for Lossless Audio Coding

  • Liebchen, Tilman
    • The Journal of the Acoustical Society of Korea / v.28 no.7 / pp.618-629 / 2009
  • The MPEG-4 Audio Lossless Coding (ALS) standard belongs to the family of MPEG-4 audio coding standards. In contrast to lossy codecs such as AAC, which merely strive to preserve the subjective audio quality, lossless coding preserves every single bit of the original audio data. The ALS core codec is based on forward-adaptive linear prediction, which combines remarkable compression with low complexity. Additional features include long-term prediction, multichannel coding, and compression of floating-point audio material. This paper describes the basic elements of the ALS codec with a focus on prediction, entropy coding, and related tools, and points out the most important applications of this standardized lossless audio format.
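
The core idea of lossless predictive coding can be sketched as follows. This is a simplified fixed-coefficient predictor (ALS itself adapts the coefficients per frame, which is not shown here); the point is that the decoder can invert the prediction exactly, so the residuals plus the warm-up samples reconstruct the input bit-exactly:

```python
def predict_residual(samples, coeffs):
    # Forward prediction: e[n] = x[n] - round(sum_k a_k * x[n-1-k]).
    # The first `order` samples are sent verbatim as warm-up.
    order = len(coeffs)
    res = list(samples[:order])
    for n in range(order, len(samples)):
        pred = round(sum(c * samples[n - 1 - k] for k, c in enumerate(coeffs)))
        res.append(samples[n] - pred)
    return res

def reconstruct(res, coeffs):
    # The decoder runs the same predictor on already-decoded samples,
    # so reconstruction is exact (lossless).
    order = len(coeffs)
    x = list(res[:order])
    for n in range(order, len(res)):
        pred = round(sum(c * x[n - 1 - k] for k, c in enumerate(coeffs)))
        x.append(res[n] + pred)
    return x
```

On smooth audio the residuals are much smaller than the samples, which is what makes the subsequent entropy coding effective.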

Design of A Multimedia Bitstream ASIP for Multiple CABAC Standards

  • Choi, Seung-Hyun;Lee, Seong-Won
    • IEIE Transactions on Smart Processing and Computing / v.6 no.4 / pp.292-298 / 2017
  • The complexity of image compression algorithms has increased in order to improve compression efficiency. One way to resolve high computational complexity is parallel processing. However, entropy coding, which is lossless compression, does not lend itself to parallel processing because of the correlation between consecutive symbols. This paper proposes a new application-specific instruction set processor (ASIP) platform that adds new context-adaptive binary arithmetic coding (CABAC) instructions to an existing platform to quickly process a variety of entropy coding schemes. The newly added instructions work without conflicts with all other existing instructions of the platform, providing the flexibility to handle many coding standards at fast processing speeds. CABAC software was implemented for High Efficiency Video Coding (HEVC), and the performance of the proposed ASIP platform was verified with a field-programmable gate array simulation.

High-Compression Synthetic High Coding Using Edge Sharpening (에지 선명화에 의한 고압축 Synthetic High 부호화)

  • 정성환;김남철
    • Journal of the Korean Institute of Telematics and Electronics / v.26 no.9 / pp.1410-1419 / 1989
  • In this paper, we present a new synthetic high coding method that gives a high image compression ratio. Given an image, only its low-pass component is transmitted by DCT coding; the high-pass component is not transmitted but is synthesized using edge sharpening on the reconstructed low-pass image at the receiver. For the DCT coding used to encode the low-pass image, we used an improved version of Cox's variance estimator. We also introduce new image quality measures, called GSNR and EPR, which emphasize perceptual aspects of image quality. Experimental results show that the proposed synthetic high coding performs better in various quality measures than Cox's adaptive transform coding. It also yields acceptable image quality, with neither apparent block effect nor visible granular noise, even at a high compression ratio of about 30:1.
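
Edge sharpening of the kind used to re-synthesize the high-pass band is commonly done by unsharp masking. The following 1-D sketch (a generic technique, not the paper's specific filter) shows the characteristic effect: the sharpened signal overshoots on both sides of an edge, restoring high-frequency content that the low-pass stage removed:

```python
def sharpen_1d(x, lam=1.0):
    # Unsharp masking: y = x + lam * (x - blur(x)),
    # with a 3-tap moving-average blur and edge-replicated borders.
    n = len(x)
    blur = [(x[max(i - 1, 0)] + x[i] + x[min(i + 1, n - 1)]) / 3
            for i in range(n)]
    return [xi + lam * (xi - bi) for xi, bi in zip(x, blur)]
```

Applied to a step edge, the output dips below the low side and overshoots above the high side, which perceptually sharpens the transition.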


Embedded Video Compression Scheme using Wavelet Transform and 3-D Block Partition (Wavelet 변환과 3-D 블록분할을 이용하는 Embedded 비디오 부호화기)

  • Yang, Change-Mo;Lim, Tae-Beom;Lee, Seok-Pil
    • Proceedings of the KIEE Conference / 2004.11c / pp.190-192 / 2004
  • In this paper, we propose a low bit-rate embedded video compression scheme with 3-D block partition coding in the wavelet domain. The proposed video compression scheme includes multi-level 3-dimensional dyadic wavelet decomposition, raster scanning within each subband, formation of blocks, 3-D partitioning of blocks, and adaptive arithmetic entropy coding. Although the proposed video compression scheme is quite simple, it produces a bit-stream with desirable features, including SNR scalability arising from its embedded nature. Experimental results demonstrate that the proposed video compression scheme is quite competitive with other good wavelet-based video coders in the literature.
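
One level of the dyadic wavelet decomposition mentioned above can be sketched in 1-D with the Haar wavelet (the simplest case; the paper's 3-D transform applies such filtering along the horizontal, vertical, and temporal axes). Each level splits the signal into a half-length low-pass band, which is decomposed further, and a half-length high-pass band:

```python
def haar_1d(x):
    # One dyadic level: pairwise averages (low band) and
    # pairwise half-differences (high band).
    lo = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    hi = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return lo, hi

def inv_haar_1d(lo, hi):
    # Perfect reconstruction: x[2i] = a + d, x[2i+1] = a - d.
    x = []
    for a, d in zip(lo, hi):
        x += [a + d, a - d]
    return x
```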


A DATA COMPRESSION METHOD USING ADAPTIVE BINARY ARITHMETIC CODING AND FUZZY LOGIC

  • Jou, Jer-Min;Chen, Pei-Yin
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1998.06a / pp.756-761 / 1998
  • This paper describes an in-line lossless data compression method using adaptive binary arithmetic coding. To achieve better compression efficiency, we employ an adaptive fuzzy-tuning modeler, which uses fuzzy inference to deal with the problem of conditional probability estimation. The design is simple, fast, and suitable for VLSI implementation because we adopt a table-look-up approach. Compared with the outcomes of other lossless coding schemes, our results are good and satisfactory for various types of source data.
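
For context, the conventional baseline that the paper's fuzzy-tuning modeler improves upon is a count-based adaptive probability estimator: after each symbol, the model updates its estimate of P(0) from the observed frequencies. A minimal sketch (with Laplace smoothing; this is the standard textbook estimator, not the paper's fuzzy method):

```python
def adaptive_p0(bits):
    # Track the estimate of P(bit == 0) before each symbol is coded.
    # Counts start at 1 (Laplace smoothing) so no probability is ever 0.
    c0 = c1 = 1
    estimates = []
    for b in bits:
        estimates.append(c0 / (c0 + c1))
        if b == 0:
            c0 += 1
        else:
            c1 += 1
    return estimates
```

An arithmetic coder driven by these estimates adapts to the source statistics on the fly; the closer the estimates track the true conditional probabilities, the better the compression.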


An Implementation of Efficient JPEG2000 Image Compression Based on DSPs (DSP를 이용한 JPEG2000 의 고효율 이미지 압축 구현)

  • 김흥선;조준기;황민철;남주훈;고성제
    • Proceedings of the IEEK Conference / 2003.07e / pp.2363-2366 / 2003
  • With the increasing use of multimedia technologies, image compression requires higher performance as well as new features such as embedded lossy-to-lossless coding, various progression orders, error resilience, and region-of-interest coding. In the specific area of still-image encoding, a new standard, JPEG2000, has been developed. This paper presents a new compression scheme based on JPEG2000. In the proposed scheme, gray coding is applied to the wavelet coefficients. Since gray coding produces an image whose bit planes are well clustered, the proposed method improves the compression efficiency of JPEG2000.
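
Gray coding itself is a simple bit-level remapping: adjacent integer values differ in exactly one bit, which tends to make the bit planes of smoothly varying (here, quantized non-negative) coefficients more coherent. A minimal sketch of the forward and inverse maps:

```python
def to_gray(v):
    # Binary-reflected Gray code of a non-negative integer.
    return v ^ (v >> 1)

def from_gray(g):
    # Invert by XOR-folding all right shifts of g.
    v = 0
    while g:
        v ^= g
        g >>= 1
    return v
```

Because consecutive values differ in a single bit plane, runs of similar coefficient magnitudes produce long runs of identical bits within each plane, which the bit-plane entropy coder of JPEG2000 can exploit.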


Compressed Representation of Neural Networks for Use Cases of Video/Image Compression in MPEG-NNR

  • Moon, Hyeoncheol;Kim, Jae-Gon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2018.11a / pp.133-134 / 2018
  • MPEG-NNR (Compressed Representation of Neural Networks) aims to define a compressed and interoperable representation of trained neural networks. In this paper, a compressed representation of a neural network and its evaluated performance are presented, along with use cases of image/video compression in MPEG-NNR. In the compression of the network, a CNN that replaces the in-loop filter in VVC (Versatile Video Coding) intra coding is compressed by applying uniform quantization to the trained weights, and the compressed CNN is evaluated in terms of compression ratio and coding efficiency compared with the original CNN. Evaluation results show that the CNN could be compressed to about a quarter of its original size with negligible coding loss by applying simple quantization to the trained weights.
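
Uniform quantization of trained weights can be sketched as follows. This is a generic min-max scheme, not necessarily the paper's exact parameterization; the "about a quarter" figure follows from storing 8-bit indices instead of 32-bit floats, with reconstruction error bounded by half a quantization step:

```python
def quantize_uniform(weights, bits=8):
    # Map each float weight to an integer index in [0, 2**bits - 1]
    # over the observed [min, max] range.
    lo, hi = min(weights), max(weights)
    levels = (1 << bits) - 1
    step = (hi - lo) / levels or 1.0   # guard against a constant tensor
    q = [round((w - lo) / step) for w in weights]
    return q, lo, step

def dequantize(q, lo, step):
    # Reconstruct approximate weights from indices.
    return [lo + i * step for i in q]
```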


An Image Coding Technique Using the Image Segmentation (영상 영역화를 이용한 영상 부호화 기법)

  • 정철호;이상욱;박래홍
    • Journal of the Korean Institute of Telematics and Electronics / v.24 no.5 / pp.914-922 / 1987
  • An image coding technique based on segmentation, which utilizes a simplified description of the regions composing an image, is investigated in this paper. The proposed coding technique consists of three stages: segmentation, contour coding, and texture coding. In this paper, emphasis is given to texture coding in order to improve image quality. A split-and-merge method was employed for segmentation. In the texture coding, linear predictive coding (LPC), along with an approximation technique based on a two-dimensional polynomial function, was used to encode texture components. Depending on the size of a region and the mean square error between the original and the reconstructed image, appropriate texture coding techniques were determined. A computer simulation on natural images indicates that acceptable image quality could be obtained at compression ratios as high as 15-25. In comparison with discrete cosine transform coding, the most typical technique in first-generation coding, the proposed scheme yields better quality at compression ratios higher than 15-20.
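
The two-dimensional polynomial approximation of a region's texture can be sketched, in its simplest first-order form, as a least-squares plane fit z ≈ a + bx + cy over the region's pixels: the coder then transmits only the three coefficients instead of the pixel values. A minimal sketch (a generic first-order fit; the paper may use higher-order polynomials):

```python
def fit_plane(pts):
    # pts: list of (x, y, z) samples from one segmented region.
    # Solve the 3x3 normal equations M @ [a, b, c] = v by Cramer's rule.
    n = len(pts)
    sx = sum(p[0] for p in pts); sy = sum(p[1] for p in pts)
    sz = sum(p[2] for p in pts)
    sxx = sum(p[0] * p[0] for p in pts); syy = sum(p[1] * p[1] for p in pts)
    sxy = sum(p[0] * p[1] for p in pts)
    sxz = sum(p[0] * p[2] for p in pts); syz = sum(p[1] * p[2] for p in pts)
    M = [[n, sx, sy], [sx, sxx, sxy], [sy, sxy, syy]]
    v = [sz, sxz, syz]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(M)
    coeffs = []
    for i in range(3):
        Mi = [row[:] for row in M]
        for r in range(3):
            Mi[r][i] = v[r]
        coeffs.append(det3(Mi) / d)
    return coeffs  # [a, b, c]
```

The residual between the region and the fitted surface is what the mean-square-error test in the abstract evaluates when deciding whether the polynomial approximation suffices or LPC is needed.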
