• Title/Summary/Keyword: compression coding

Search Results: 828

CPU Parallel Processing and GPU-accelerated Processing of UHD Video Sequence using HEVC

  • Hong, Sung-Wook; Lee, Yung-Lyul
    • Journal of Broadcast Engineering / v.18 no.6 / pp.816-822 / 2013
  • The latest video coding standard, HEVC, was developed jointly by the JCT-VC (Joint Collaborative Team on Video Coding) of ITU-T VCEG and ISO/IEC MPEG. The HEVC standard reduces the BD-Bitrate by about 50% compared with the H.264/AVC standard. However, the various methods used to obtain these coding gains have increased computational complexity. The proposed method reduces the complexity of HEVC by using both CPU parallel processing and GPU-accelerated processing. Experiments on UHD (3840×2160) video sequences achieve 15 fps encoding/decoding performance with the proposed method. We expect that hardware improvements in CPU-GPU data transfer rates will further reduce encoding/decoding times.
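The tile-level CPU parallelism described above can be sketched as follows. This is a minimal illustration, not the paper's encoder: `encode_tile` is a hypothetical stand-in (a toy run-length coder) for a real HEVC tile encoder, and the frame is just a list of sample rows.

```python
# Split a frame into horizontal tiles and encode each tile on its own
# worker thread, the way tile/slice-level CPU parallelism distributes
# independent picture regions across cores.
from concurrent.futures import ThreadPoolExecutor

def encode_tile(tile):
    """Toy stand-in for per-tile encoding: run-length coding."""
    out, run, prev = [], 0, None
    for s in tile:
        if s == prev:
            run += 1
        else:
            if prev is not None:
                out.append((prev, run))
            prev, run = s, 1
    if prev is not None:
        out.append((prev, run))
    return out

def encode_frame_parallel(frame_rows, n_tiles=4):
    """Partition rows into n_tiles tiles; encode tiles concurrently."""
    h = len(frame_rows)
    step = (h + n_tiles - 1) // n_tiles
    tiles = [sum(frame_rows[i:i + step], []) for i in range(0, h, step)]
    with ThreadPoolExecutor(max_workers=n_tiles) as pool:
        return list(pool.map(encode_tile, tiles))   # one bitstream per tile

frame = [[0] * 8 for _ in range(4)] + [[1] * 8 for _ in range(4)]
bitstreams = encode_frame_parallel(frame, n_tiles=2)
```

Because the tiles share no state, the per-tile work can run on any mix of CPU threads or GPU kernels; only the final bitstream assembly is serial.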

A new transform coding for contours in object-based image compression

  • 민병석; 정제창; 최병욱
    • The Journal of Korean Institute of Communications and Information Sciences / v.23 no.4 / pp.1087-1099 / 1998
  • In content-based image coding, where each object in the scene is encoded independently, the shape, texture, and motion information are very important factors. The contours representing the shape of an object occupy a large amount of data relative to the whole information and strongly affect the subjective image quality; therefore, the distortion of contour coding has to be minimized as much as possible. In this paper, we propose a new method for contour coding in which the contours are approximated by polygons and the error signals arising from the polygonal approximation are transformed with new basis functions. Considering that the contour segments produced by polygonal approximation are smooth curves and that the error signals are zero at both endpoints, we design new basis functions based on the Legendre polynomials and transform the error signals with them. When applied to synthetic images such as circles and ellipses, the proposed method provides outstanding overall results in terms of transform coding gain compared with the DCT and DST. When applied to natural images, the proposed method gives better image quality than the DCT and results comparable to the DST.

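The projection step can be sketched as below. This uses the plain Legendre polynomials as the basis, which is an assumption for illustration; the paper derives modified basis functions from them to match the zero-endpoint property of the error signal.

```python
# Project a sampled contour-approximation error signal onto the first
# few Legendre polynomials using a discrete inner product on [-1, 1].
def legendre(n, t):
    """P_n(t) via Bonnet's recurrence."""
    p0, p1 = 1.0, t
    if n == 0:
        return p0
    for k in range(2, n + 1):
        p0, p1 = p1, ((2 * k - 1) * t * p1 - (k - 1) * p0) / k
    return p1

def transform(error, n_coeffs=4):
    """One transform coefficient per basis function."""
    n = len(error)
    ts = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]
    coeffs = []
    for k in range(n_coeffs):
        basis = [legendre(k, t) for t in ts]
        norm = sum(b * b for b in basis)
        coeffs.append(sum(e * b for e, b in zip(error, basis)) / norm)
    return coeffs

# A smooth error bump that vanishes at both segment endpoints,
# as the abstract says polygonal-approximation errors do.
ts = [-1.0 + 2.0 * i / 20 for i in range(21)]
signal = [1.0 - t * t for t in ts]
coeffs = transform(signal, n_coeffs=4)
```

For this even, endpoint-zero signal the energy compacts into the even low-order coefficients, while the odd ones vanish by symmetry, which is the compaction behavior the coding-gain comparison against DCT/DST measures.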

Tree structured wavelet transform coding scheme for digital HD-VCR

  • 김용규; 정현민; 이병래; 강현철
    • The Journal of Korean Institute of Communications and Information Sciences / v.22 no.8 / pp.1790-1802 / 1997
  • A wavelet transform coding method that fulfills the requirements of an HD-VCR (high-definition video cassette recorder) for studio applications is proposed. A constant bit rate is achieved by a forward rate-control technique which determines the quantizer step size based on the coding results of the previous frame. We also propose a two-level coder that consists of both the IDC (independently decodable code) and the DDC (dependently decodable code). To minimize error propagation, the transformed coefficients are restructured into transform blocks which are represented by a tree structure. The results show that the proposed coding scheme produces better picture quality, without the block effects of DCT (discrete cosine transform)-based coding schemes, at the same compression ratio. The proposed method meets most of the requirements of an HD-VCR.

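The forward rate-control idea can be sketched as follows. The proportional update rule and the clamping limits are illustrative assumptions, not the paper's exact formula; the point is that the step size for the current frame is computed only from the previous frame's bit count, so no re-encoding pass is needed.

```python
# Forward rate control: choose the quantizer step for the current frame
# from the bits the previous frame produced relative to the target.
def next_step_size(prev_step, prev_bits, target_bits,
                   min_step=1.0, max_step=64.0):
    """Scale the step up after an overshoot, down after an undershoot,
    then clamp to a legal range."""
    step = prev_step * (prev_bits / target_bits)
    return max(min_step, min(max_step, step))

def fake_encode(step):
    """Stand-in encoder: bits fall roughly inversely with step size."""
    return int(120000 / step)

target = 40000
step, history = 8.0, []
for _ in range(6):
    bits = fake_encode(step)
    history.append(bits)
    step = next_step_size(step, bits, target)
```

Under this toy rate model the bit count locks onto the 40000-bit budget after one frame, which is the constant-bit-rate behavior a studio VCR format requires.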

Image Coding Using LOT and FSVQ with Two-Channel Conjugate Codebooks

  • 채종길; 황찬식
    • The Journal of Korean Institute of Communications and Information Sciences / v.19 no.4 / pp.772-780 / 1994
  • Vector quantization with two-channel conjugate codebooks has been researched as an efficient coding technique that can reduce computational complexity and codebook storage. This paper proposes an FSVQ using two-channel conjugate codebooks in order to reduce the number of state codebooks. An input vector in the two-channel conjugate FSVQ is coded with the state codebook of a separate state for each codebook. In addition, the LOT is adopted to obtain a high coding gain and to reduce the blocking effect which appears in block coding. Although FSVQ can achieve a higher data compression ratio than general vector quantization, it has the disadvantage of a very large number of state codebooks. FSVQ with two-channel conjugate codebooks, however, can employ a significantly reduced number of state codebooks, at a small loss in PSNR compared with the conventional FSVQ using one codebook. Moreover, FSVQ in the LOT domain reduces the blocking effect and achieves a higher coding gain than FSVQ in the spatial domain.

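The two-channel conjugate idea can be sketched as below: a vector is represented by one index into each of two small codebooks and reconstructed as the average of the two codewords, so two books of size N stand in for one book of size N². The codebook values here are toy numbers, not trained codebooks, and the exhaustive pair search is the simplest possible encoder.

```python
# Two-channel conjugate VQ: pick one codeword from each codebook so
# that their average best matches the input vector.
def quantize(x, book_a, book_b):
    """Exhaustively pick the (i, j) pair minimizing squared error."""
    best = None
    for i, a in enumerate(book_a):
        for j, b in enumerate(book_b):
            recon = [(u + v) / 2 for u, v in zip(a, b)]
            err = sum((s - r) ** 2 for s, r in zip(x, recon))
            if best is None or err < best[0]:
                best = (err, i, j)
    return best[1], best[2]

def dequantize(i, j, book_a, book_b):
    return [(u + v) / 2 for u, v in zip(book_a[i], book_b[j])]

book_a = [[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]]
book_b = [[0.0, 0.0], [2.0, 2.0]]
i, j = quantize([3.0, 1.0], book_a, book_b)
recon = dequantize(i, j, book_a, book_b)
```

The storage saving is the abstract's point: the same pair-average construction applied per state lets an FSVQ keep two small conjugate books per state instead of one large one.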

Low-Complexity H.264/AVC Deblocking Filter based on Variable Block Sizes

  • Shin, Seung-Ho; Doh, Nam-Keum; Kim, Tae-Yong
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.4 / pp.41-49 / 2008
  • Compared with existing compression technologies, H.264/AVC supports variable-block motion compensation, multiple reference images, 1/4-pixel motion vector accuracy, and an in-loop deblocking filter. While these coding tools are major contributors to the improvement in compression rate, they also lead to high complexity. For the H.264 video coding technology to be applied more extensively on low-end, low-bit-rate terminals, it is essential to improve the coding speed. Currently the deblocking filter, which can considerably improve a moving picture's subjective image quality, is used on low-end terminals only to a limited extent because of its computational complexity. In this paper, a performance improvement method for the deblocking filter is suggested that efficiently reduces the blocking artifacts occurring during the compression of low-bit-rate digital motion pictures. In the proposed method, the image's spatial correlation characteristics are extracted using the variable-block information of motion compensation; the filtering is divided into 4 modes according to these characteristics, and adaptive filtering is executed in the divided regions. The proposed deblocking method reduces blocking artifacts, prevents excessive blurring, and improves performance by about 30~40% compared with the existing method.
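The mode idea can be sketched as follows. The mapping from partition size to filtering strength and the blend-weight filter are illustrative assumptions, not H.264's actual boundary-strength rules: the point is only that variable-block-size information selects one of four modes, and the chosen mode controls how strongly samples next to a block edge are smoothed.

```python
# Use (hypothetical) variable-block-size information to pick one of
# four filtering modes for a 1D block edge, then smooth the two
# samples adjacent to the boundary.
def pick_mode(block_w, block_h):
    """Larger partitions suggest smooth areas -> stronger filtering."""
    size = min(block_w, block_h)
    if size >= 16: return 3   # strong
    if size >= 8:  return 2
    if size >= 4:  return 1
    return 0                  # skip filtering entirely

def filter_edge(samples, mode):
    """samples: [p1, p0 | q0, q1] across the block boundary."""
    if mode == 0:
        return list(samples)
    p1, p0, q0, q1 = samples
    w = mode / 4.0                    # blend weight grows with mode
    delta = w * ((q0 - p0) / 2)
    return [p1, p0 + delta, q0 - delta, q1]

edge = [10, 10, 30, 30]
strong = filter_edge(edge, pick_mode(16, 16))   # 16x16 partition edge
skip = filter_edge(edge, pick_mode(2, 2))       # tiny partition: skip
```

Skipping small partitions is where the complexity saving comes from: detailed regions both need less deblocking and cost the most cycles to filter.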

A binary adaptive arithmetic coding algorithm based on adaptive symbol changes for lossless medical image compression

  • 지창우; 박성한
    • The Journal of Korean Institute of Communications and Information Sciences / v.22 no.12 / pp.2714-2726 / 1997
  • In this paper, a medical image compression method based on adaptive symbol changes is presented. First, the differential image domain is obtained by applying differentiation rules or adaptive predictors to the original medical image. The algorithm then determines the context associated with the differential image from this domain. Prediction symbols, which are considered the most probable differential image values, are maintained at a high probability through an adaptive symbol-change procedure based on estimates of polarity coincidence between the differential image values to be coded under the context and the differential image values in the model template. At the coding step, the differential image values are encoded as "predicted" or "non-predicted" by a binary adaptive arithmetic encoder employing a binary decision tree. The simulation results indicate that the prediction hit ratios of differential image values using the proposed algorithm improve the coding gain by 25% and 23% over an arithmetic coder with the ISO JPEG lossless predictor and an arithmetic coder with differentiation rules or adaptive predictors, respectively. The proposed method can be used in the compression part of a medical PACS, because the encoder can be applied directly to the full bit-plane medical image without decomposing it into a series of binary bit-planes, and because the encoder has low complexity, using only additions when recursively subdividing unit intervals.

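The adaptive symbol-change step can be sketched as below. The arithmetic coder itself is omitted, and forming the context from the previous differential value is an illustrative assumption; the point is that per context the currently most frequent differential value becomes the prediction, so the resulting "predicted"/"non-predicted" bit stream is highly skewed and therefore cheap to arithmetic-code.

```python
# Per context, track counts of differential values and predict the
# current most probable one; emit a binary "predicted?" decision.
from collections import defaultdict

def predict_stream(diffs):
    counts = defaultdict(lambda: defaultdict(int))
    context, decisions = 0, []
    for d in diffs:
        table = counts[context]
        guess = max(table, key=table.get) if table else 0
        decisions.append(d == guess)      # True -> "predicted" bit
        table[d] += 1                     # adapt: update counts
        context = d                       # next-sample context
    return decisions

diffs = [0, 0, 0, 1, 0, 0, 0]
decisions = predict_stream(diffs)
hit_ratio = sum(decisions) / len(decisions)
```

A high hit ratio is exactly what the abstract's 25%/23% coding-gain figures rest on: the more skewed the binary decision, the fewer bits the arithmetic coder spends per symbol.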

Multiple Region of Interest Coding using Maxshift Method

  • Lee Han Jeong; You Kang Soo; Jang Yoon Up; Seo Duck Won; Yoo Gi Hyoung; Kwak Hoon Sung
    • Proceedings of the IEEK Conference / 2004.08c / pp.853-856 / 2004
  • Image data processing focused on a region of interest (ROI), which carries the primary information, is needed to save search time and bandwidth in image communications related to web browsing, image databases, telemedicine, etc. Hence, extracting the region of interest is drawing plenty of attention for communication environments with relatively low bandwidth, such as the mobile internet. In this paper, we propose an improvement of the standard Maxshift method. The proposed algorithm compresses images that include multiple ROIs using the Maxshift method of JPEG2000 Part 1. Simulation results show that the proposed method improves PSNR versus compression ratio performance over the standard Maxshift method.

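The Maxshift principle the paper builds on can be sketched as below: every wavelet coefficient inside the ROI mask is scaled up by 2^s, with s chosen so the smallest shifted ROI magnitude exceeds the largest background magnitude, which lets the decoder separate ROI from background without transmitting the mask. The coefficient values here are toy numbers.

```python
# Maxshift ROI scaling: lift ROI coefficients above every background
# coefficient so their bitplanes decode first.
def maxshift(coeffs, mask):
    """coeffs: integer wavelet coefficients; mask: 1 inside the ROI."""
    bg_max = max((abs(c) for c, m in zip(coeffs, mask) if not m),
                 default=0)
    s = 0
    while (1 << s) <= bg_max:     # smallest s with 2**s > bg_max
        s += 1
    shifted = [c << s if m else c for c, m in zip(coeffs, mask)]
    return shifted, s

coeffs = [3, 12, 7, 2, 9]
mask =   [0, 1,  1, 0, 0]         # coefficients 1 and 2 are in the ROI
shifted, s = maxshift(coeffs, mask)
```

After the shift every ROI magnitude strictly exceeds every background magnitude, so the decoder can classify each coefficient purely by its value; handling multiple ROIs within this scheme is what the proposed method addresses.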

On Speech Digitization and Bandwidth Compression Techniques [II] - Vocoding

  • 은종관
    • Journal of the Korean Institute of Telematics and Electronics / v.15 no.6 / pp.1-7 / 1978
  • This paper deals with speech digitization and bandwidth compression techniques, particularly two predictive coding methods: adaptive differential pulse code modulation (ADPCM) and adaptive delta modulation (ADM). The principle of a typical adaptive quantizer used in ADPCM is explained and discussed. Three companding methods used in ADM (instantaneous, syllabic, and hybrid companding) are explained in detail, and their performances are compared. In addition, the performances of ADPCM and ADM as speech coders are compared, and the merits of each coder are discussed.

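A minimal ADM sketch of the step-size adaptation discussed above follows. The double/halve rule and the clamping limits are illustrative assumptions (a simple instantaneous companding rule), not the paper's parameters: the step grows when successive output bits repeat (slope overload) and shrinks when they alternate (granular noise), and the decoder reproduces the same adaptation from the bits alone.

```python
# Adaptive delta modulation with a one-bit instantaneous companding
# rule: each sample becomes a single up/down bit.
def adm_encode(samples, step=1.0, min_step=0.5, max_step=16.0):
    bits, est, prev_bit = [], 0.0, None
    for x in samples:
        bit = 1 if x >= est else 0
        est += step if bit else -step
        bits.append(bit)
        if prev_bit is not None:
            step = step * 2 if bit == prev_bit else step / 2
            step = max(min_step, min(max_step, step))
        prev_bit = bit
    return bits

def adm_decode(bits, step=1.0, min_step=0.5, max_step=16.0):
    out, est, prev_bit = [], 0.0, None
    for bit in bits:                    # mirror the encoder's tracker
        est += step if bit else -step
        out.append(est)
        if prev_bit is not None:
            step = step * 2 if bit == prev_bit else step / 2
            step = max(min_step, min(max_step, step))
        prev_bit = bit
    return out

samples = [0.0, 2.0, 4.0, 6.0, 4.0, 2.0, 0.0]
bits = adm_encode(samples)
recon = adm_decode(bits)
```

Since the adaptation depends only on the bit history, encoder and decoder stay in lockstep with no side information, which is what makes one-bit-per-sample ADM attractive for bandwidth compression.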

A Study on the Perceptual Model for the MPEG-II AAC Encoder

  • 구대성; 김정태; 이강현
    • Proceedings of the IEEK Conference / 2000.06c / pp.93-96 / 2000
  • Currently, compression methods are among the most important technologies in the multimedia society, and audio files are rapidly propagated through the internet. MP3 offers CD sound quality at 128 kbps, but below 64 kbps the sound quality drops sharply while the bit rate remains high. On the other hand, MPEG-II AAC (Advanced Audio Coding) is not compatible with MPEG-I, but AAC has a compression ratio about 1.4 times better than MP3. In particular, AAC supports up to 7.1 channels and a 96 kHz sampling rate. In this paper, the perceptual model is treated at a 44.1 kHz sampling rate for the SMR (Signal-to-Masking Ratio).

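The SMR the perceptual model hands to the quantizer can be sketched as below: per band, SMR in dB is the ratio of signal energy to the masking threshold. The band energies and thresholds here are made-up numbers, not values from a real 44.1 kHz filter bank or spreading-function computation.

```python
# Per-band Signal-to-Masking Ratio: bands with SMR <= 0 dB are fully
# masked and need few or no bits; high-SMR bands need fine quantization.
import math

def smr_db(signal_energy, mask_threshold):
    return 10.0 * math.log10(signal_energy / mask_threshold)

band_energy    = [1e6, 4e4, 9e2]   # toy per-band signal energies
mask_threshold = [1e3, 4e3, 9e2]   # toy masking thresholds
smrs = [smr_db(e, m) for e, m in zip(band_energy, mask_threshold)]
```

Bit allocation then follows the SMR ordering: the first band above needs the most bits, while the last is exactly at the masking threshold.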

Efficient Multi-way Tree Search Algorithm for Huffman Decoder

  • Cha, Hyungtai; Woo, Kwanghee
    • International Journal of Fuzzy Logic and Intelligent Systems / v.4 no.1 / pp.34-39 / 2004
  • Huffman coding, used in many data compression algorithms, is a popular technique for reducing the statistical redundancy of a signal. It has been shown that a Huffman decoder can operate efficiently by exploiting characteristics of the Huffman tables and patterns of the Huffman codewords. We propose a new Huffman decoding algorithm which uses a multi-way tree search, and we present an efficient hardware implementation method. This algorithm requires a small logic area and little memory and is optimized for high-speed decoding. The proposed Huffman decoding algorithm can be applied to many multimedia systems such as MPEG audio decoders.