• Title/Summary/Keyword: compression coding

Design of Visual Quantizer for Very Low Bit-rate Coding on JPEG2000 (JPEG2000에서 저 전송 부호화를 위한 비주얼 양자화기 설계)

  • Kim, Dong-Hyeok;Jeon, Joon-Hyeon
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.47 no.4
    • /
    • pp.69-78
    • /
    • 2010
  • The irreversible 9/7 JPEG2000 codec, one of the sub-band coding techniques, suffers severe picture-quality distortion at edges and in the background below 0.15 bpp, caused by quantization error. In this paper, to solve this problem, we propose a VQ (Visual Quantizer) based on the Laplace probability density function (L-pdf) statistics of the high-frequency sub-bands. The proposed VQ is designed around a visual parameter for improving subjective quality and a weighting parameter for increasing the compression ratio. The proposed method, built on the 9/7 JPEG2000 scheme, yields high subjective quality in images reconstructed below 0.15 bpp and provides minimum MSE (Mean-Squared Error) regardless of the compression ratio.
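
As an illustration only (the paper's actual design is not reproduced here), a minimal sketch of a dead-zone quantizer whose step is driven by the Laplace statistics of a sub-band; the step/scale relation and the single `visual_weight` knob are assumptions standing in for the paper's visual and weighting parameters:

```python
import numpy as np

def visual_quantize(subband, visual_weight=1.0):
    """Dead-zone quantization of a high-frequency sub-band with the step
    size derived from a zero-mean Laplace model of its coefficients.

    `visual_weight` is a hypothetical stand-in for the paper's visual/
    weighting parameters: > 1 coarsens the step (more compression),
    < 1 refines it (better subjective quality)."""
    b = np.mean(np.abs(subband))                   # MLE of the Laplace scale b
    step = max(2.0 * b * visual_weight, 1e-12)     # assumed step/scale relation
    q = np.sign(subband) * np.floor(np.abs(subband) / step)
    return q.astype(np.int32), step

def visual_dequantize(q, step, delta=0.5):
    # Reconstruct at an offset inside each dead-zone quantizer bin.
    return np.sign(q) * (np.abs(q) + delta) * step
```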

Huffman Code Design and PSIP Structure of Hangul Data for Digital Broadcasting (디지털 방송용 한글 허프만 부호 설계 및 PSIP 구조)

  • Hwang, Jae-Jeong;Jin, Kyung-Sik;Han, Hak-Soo;Choi, Jun-Young;Lee, Jin-Hwan
    • Journal of Broadcast Engineering
    • /
    • v.6 no.1
    • /
    • pp.98-107
    • /
    • 2001
  • In this paper we derive an optimal Huffman code set with escape coding that maximizes coding efficiency for Hangul text data. Hangul can be represented in the standard Wansung or Unicode format, and a set of Huffman codes can be generated for both. The current Korean digital broadcasting standard does not define a Hangul compression algorithm, so a digital data broadcasting system may face a serious data-rate problem; generating the optimal Huffman code set solves this data transmission problem. A relevant PSIP structure for the digital broadcasting standard is also proposed. As a result, characters with probability less than 0.0043 are escape coded, yielding an optimum compression efficiency of 46%.
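
A hedged sketch of Huffman coding with an escape symbol, the general technique the paper applies to Hangul: symbols below a probability threshold (0.0043 above) are merged into a single `ESC` codeword and would be transmitted as the escape code followed by a fixed-length raw code. Names are illustrative:

```python
import heapq

def huffman_with_escape(probs, threshold=0.0043):
    """Build a Huffman code over {symbol: probability}; symbols rarer
    than `threshold` share one 'ESC' codeword."""
    rare = {s for s, p in probs.items() if p < threshold}
    items = {s: p for s, p in probs.items() if s not in rare}
    if rare:
        items["ESC"] = sum(probs[s] for s in rare)
    # Standard Huffman construction; the integer field breaks ties so
    # heapq never has to compare the symbol lists.
    heap = [(p, i, [s]) for i, (s, p) in enumerate(items.items())]
    heapq.heapify(heap)
    codes = {s: "" for s in items}
    counter = len(heap)
    while len(heap) > 1:
        p1, _, g1 = heapq.heappop(heap)
        p2, _, g2 = heapq.heappop(heap)
        for s in g1:
            codes[s] = "0" + codes[s]
        for s in g2:
            codes[s] = "1" + codes[s]
        heapq.heappush(heap, (p1 + p2, counter, g1 + g2))
        counter += 1
    return codes, rare
```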

A Robust Sequential Preprocessing Scheme for Efficient Lossless Image Compression (영상의 효율적인 무손실 압축을 위한 강인한 순차적 전처리 기법)

  • Kim, Nam-Yee;You, Kang-Soo;Kwak, Hoon-Sung
    • Journal of Internet Computing and Services
    • /
    • v.10 no.1
    • /
    • pp.75-82
    • /
    • 2009
  • In this paper, we propose a robust preprocessing scheme for entropy coding of gray-level images. The aim is to reduce the side information that must be transmitted with the bit stream. The proposed scheme preprocesses the image using co-occurrence counts of gray levels in neighboring pixels: each gray level is substituted by its rank number, so no side information is required. Computer simulations verify that the proposed scheme reduces the compressed bit rate by up to 44.1% and 37.5% compared with plain entropy coding and a conventional preprocessing scheme, respectively. Our scheme can therefore be applied in areas that require lossless compression and data compaction.
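
A minimal sketch of the rank-substitution idea under stated assumptions (left-neighbor context, raster scan, causal count updates); the paper's exact context model may differ:

```python
import numpy as np

def rank_transform(img):
    """Causally replace each gray level with its rank in a co-occurrence
    count table conditioned on the left neighbor. A decoder can maintain
    the same counts, so no side information is needed. (Tie-breaking
    among equal counts is ignored here for brevity; a real codec would
    fix a deterministic ordering.)"""
    h, w = img.shape
    counts = np.zeros((256, 256), dtype=np.int64)   # counts[left, level]
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            left = int(img[y, x - 1]) if x > 0 else 0
            cur = int(img[y, x])
            row = counts[left]
            # Frequent (left, cur) pairs get small ranks, so the output
            # histogram is skewed and entropy-codes more compactly.
            out[y, x] = np.sum(row > row[cur])
            row[cur] += 1
    return out
```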

Implementation of CAVLC Encoder for the Image Compression in H.264/AVC (H.264/AVC용 영상압축을 위한 CAVLC 인코더 구현)

  • Jung Duck Young;Choi Dug Young;Jo Chang-Seok;Sonh Seung Il
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.9 no.7
    • /
    • pp.1485-1490
    • /
    • 2005
  • Variable-length coding is an integral component of many current international standards for image and video compression. Context-based Adaptive Variable Length Coding (CAVLC) is adopted by the emerging JVT standard (also called H.264, and AVC in MPEG-4). In this paper, we design an architecture for a CAVLC encoder, including a coeff_token encoder, level encoder, total_zeros encoder and run_before encoder. The designed CAVLC encoder can encode one syntax element per clock cycle. Implemented on a Xilinx Virtex-1000E, it operates at 68 MHz, making it well suited to video applications that require high throughput.
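
For orientation, a sketch of the first CAVLC step: extracting the syntax elements named above from a zig-zag-scanned 4x4 block. The table lookups that map these elements to bits, and the rule that skips the final run_before when no zeros remain, are omitted:

```python
def cavlc_syntax_elements(zigzag):
    """Derive CAVLC syntax elements from a zig-zag-ordered 4x4 block
    (a list of 16 ints, lowest frequency first)."""
    nz = [i for i, c in enumerate(zigzag) if c != 0]
    total_coeffs = len(nz)
    # TrailingOnes: up to three +/-1 coefficients at the high-frequency end.
    trailing_ones = 0
    for i in reversed(nz):
        if abs(zigzag[i]) == 1 and trailing_ones < 3:
            trailing_ones += 1
        else:
            break
    # total_zeros: zeros located before the last nonzero coefficient.
    total_zeros = (nz[-1] + 1 - total_coeffs) if nz else 0
    # run_before: zero-run preceding each nonzero coefficient, reported
    # in reverse scan order, as CAVLC transmits them.
    runs, prev = [], -1
    for i in nz:
        runs.append(i - prev - 1)
        prev = i
    return total_coeffs, trailing_ones, total_zeros, runs[::-1]
```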

Complexity Analysis of Internet Video Coding (IVC) Decoding

  • Park, Sang-hyo;Dong, Tianyu;Jang, Euee S.
    • Journal of Multimedia Information System
    • /
    • v.4 no.4
    • /
    • pp.179-188
    • /
    • 2017
  • The Internet Video Coding (IVC) standard is due to be published by the Moving Picture Experts Group (MPEG) for various Internet applications such as Internet broadcast streaming. IVC has three fundamental aims: 1) keeping IVC patents under a free-of-charge license, 2) reaching compression performance comparable to the AVC/H.264 Constrained Baseline Profile (cBP), and 3) keeping computational complexity low enough for feasible real-time encoding and decoding. MPEG experts have worked diligently on the intellectual property rights issues for IVC, and they report that IVC has already achieved the second goal, even showing performance comparable to the AVC/H.264 High Profile (HP). On the complexity issue, however, there has been no thorough analysis of the IVC decoder. In this paper, we analyze the time complexity of the IVC decoder by evaluating its running time. Experiments show that IVC is 3.6 times and 3.1 times more complex than AVC/H.264 cBP under constrained set (CS) 1 and CS2, respectively. Compared with AVC/H.264 HP, IVC is 2.8 times and 2.9 times slower in decoding time under CS1 and CS2, respectively. The most critical tool to improve for a lightweight IVC decoder is the motion compensation process, which contains a resolution-adaptive interpolation filtering process.
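
A hedged sketch of how such decoding-time ratios can be measured: decode each bitstream several times with each reference decoder and compare median wall-clock times. The command lines below are placeholders, not the actual IVC or JM decoder invocations:

```python
import statistics
import subprocess
import time

def median_decode_time(cmd, runs=5):
    """Median wall-clock time of a decoder command over several runs."""
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        times.append(time.perf_counter() - t0)
    return statistics.median(times)

# Placeholder binaries and streams for illustration only.
t_ivc = median_decode_time(["./ivc_decoder", "stream.ivc", "out.yuv"])
t_avc = median_decode_time(["./jm_decoder", "stream.264", "out.yuv"])
print(f"IVC/AVC decoding-time ratio: {t_ivc / t_avc:.1f}x")
```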

Very Low Rate Coding of Motion Video Using 3-D Segmentation with Two Change Detection Masks (두 변화검출 마스크를 이용한 3차원 영상분할 초저속 동영상 부호화)

  • Lee, Sang-Mi;Kim, Nam-Chul;Son, Hyon
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.27 no.10
    • /
    • pp.146-153
    • /
    • 1990
  • A new 3-D segmentation-based coding technique is proposed for transmitting motion video with reasonably acceptable quality even at a very low bit rate. Only meaningful motion areas are extracted using two change detection masks, and the current frame itself is segmented directly rather than a difference frame, so that good image quality can be obtained at high compression ratios. In experiments, the Miss America sequence is reconstructed with visually acceptable quality at the very high compression ratio of 360:1.
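
A minimal sketch of extracting motion areas with two change detection masks, here assumed to be thresholded backward and forward frame differences whose intersection marks genuine motion; the paper's exact mask construction may differ:

```python
import numpy as np

def motion_mask(prev_f, cur_f, next_f, thresh=15):
    """Binary mask of meaningful motion from two change detection masks.

    Using backward/forward frame differences and intersecting the two
    masks is an assumption for illustration, not the paper's exact rule."""
    cdm1 = np.abs(cur_f.astype(int) - prev_f.astype(int)) > thresh
    cdm2 = np.abs(next_f.astype(int) - cur_f.astype(int)) > thresh
    return cdm1 & cdm2   # changed relative to both neighbors: moving now
```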

A study on improvement of SPIHT algorithm using redundancy bit removing for medical image (의료영상을 위한 중복비트 제거를 이용한 SPIHT 알고리즘의 개선에 관한 연구)

  • Park, Jae-Hong;Yang, Won-Seok;Park, Chul-Woo
    • Journal of the Korean Society of Radiology
    • /
    • v.5 no.6
    • /
    • pp.329-334
    • /
    • 2011
  • This paper presents a compression-rate improvement for the wavelet-based SPIHT algorithm through redundant-bit removal. The proposed SPIHT algorithm selects an optimized threshold from the characteristics of the wavelet transform coefficients and removes the sign bit only when a coefficient lies in the LL area. Finally, Huffman coding is applied to the result. Experimental results show that the proposed algorithm achieves an improved bit rate and faster progressive transmission at low bit rates.
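
A toy sketch of the redundant-bit idea: if LL-area coefficients are nonnegative (an assumption, e.g. after DC level shifting), their sign bits carry no information and can be dropped:

```python
def coefficient_bits(value, in_ll_band):
    """Bits for one quantized coefficient: magnitude in binary plus,
    outside the LL band, an explicit sign bit. Inside the LL band the
    sign bit is skipped, assuming LL coefficients are nonnegative
    (the redundant bit)."""
    bits = format(abs(value), "b")
    if not in_ll_band:
        bits += "0" if value >= 0 else "1"
    return bits

# One sign bit saved per LL coefficient:
assert len(coefficient_bits(5, True)) + 1 == len(coefficient_bits(5, False))
```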

Reduction of Test Data and Power in Scan Testing for Digital Circuits using the Code-based Technique (코드 기반 기법을 이용한 디지털 회로의 스캔 테스트 데이터와 전력단축)

  • Hur, Yong-Min;Shin, Jae-Heung
    • Journal of the Institute of Electronics Engineers of Korea IE
    • /
    • v.45 no.3
    • /
    • pp.5-12
    • /
    • 2008
  • We propose an efficient scan testing method that reduces both test data volume and power dissipation for digital logic circuits. The method is based on a hybrid run-length encoding that reduces test data storage on the tester. We also introduce a modified bus-invert coding method and a scan cell design with scan cell reordering, providing increased power savings during scan-in operation. Experimental results for the ISCAS'89 benchmark circuits show that, on average, average power is reduced by 96.7% and peak power by 84% without degrading fault coverage, and test data volume is reduced by 78.2% compared with existing compression methods.
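
For reference, a sketch of classic bus-invert coding, the technique the paper modifies: a word is sent complemented, with a one-bit flag, whenever more than half the bus lines would otherwise toggle relative to the previously transmitted word:

```python
def bus_invert(words, width=8):
    """Bus-invert coding: per word, transmit either the word or its
    complement, whichever toggles fewer lines, plus a 1-bit invert flag."""
    mask = (1 << width) - 1
    out, prev = [], 0
    for w in words:
        toggles = bin((w ^ prev) & mask).count("1")
        if toggles > width // 2:
            w = ~w & mask          # send the complement instead
            out.append((w, 1))     # invert flag set
        else:
            out.append((w, 0))
        prev = w                   # track the word actually on the bus
    return out
```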

Experiment on Intermediate Feature Coding for Object Detection and Segmentation

  • Jeong, Min Hyuk;Jin, Hoe-Yong;Kim, Sang-Kyun;Lee, Heekyung;Choo, Hyon-Gon;Lim, Hanshin;Seo, Jeongil
    • Journal of Broadcast Engineering
    • /
    • v.25 no.7
    • /
    • pp.1081-1094
    • /
    • 2020
  • With the recent development of deep learning, most computer vision tasks are solved with deep-learning-based network technologies such as CNNs and RNNs. Tasks such as object detection and object segmentation use intermediate features extracted from a shared backbone, such as ResNet or FPN, for both training and inference. In this paper, we conduct experiments to determine the compression efficiency, and the effect of encoding on inference performance, when the features extracted at an intermediate stage of a CNN are encoded. A feature map combining the 256 channels of features into one image, as well as the original image, was encoded with HEVC to compare and analyze inference performance for object detection and segmentation. Since the intermediate feature map packs five levels of feature maps (P2 to P6), its size and resolution are larger than those of the original image. However, when the degree of compression is weakened, using the feature maps yields inference results similar to or better than those obtained from the original image.
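
A minimal sketch of packing a 256-channel feature map into one single-channel image for a standard codec; the 16x16 grid layout and min-max 8-bit quantization are assumptions, not the paper's exact packing:

```python
import numpy as np

def pack_features(feat):
    """Tile a (256, H, W) float feature map into one (16H, 16W) uint8
    image that an HEVC encoder can consume. Min-max scaling to 8 bits
    is an assumed quantization; (lo, hi) must travel as metadata so a
    decoder can invert it."""
    c, h, w = feat.shape
    assert c == 256
    lo, hi = float(feat.min()), float(feat.max())
    q = np.round((feat - lo) / (hi - lo + 1e-9) * 255).astype(np.uint8)
    # Channel 16*i + j lands at grid cell (row i, column j).
    mosaic = q.reshape(16, 16, h, w).transpose(0, 2, 1, 3).reshape(16 * h, 16 * w)
    return mosaic, (lo, hi)
```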

Manchester coding of compressed binary clusters for reducing IoT healthcare device's digital data transfer time (IoT기반 헬스케어 의료기기의 디지털 데이터 전송시간 감소를 위한 압축 바이너리 클러스터의 맨체스터 코딩 전송)

  • Kim, Jung-Hoon
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.8 no.6
    • /
    • pp.460-469
    • /
    • 2015
  • This study aims to reduce the big-data transfer time of IoT healthcare devices. Binary data are compressed in two steps into primary and secondary compressed binary clusters, with each cluster yielding a compression gain of 1 or 2 bits, and the resulting bits are modulated into Manchester code in which a zero-voltage idle signal serves as the boundary information for a secondary compressed binary cluster. Inserting the idle signal into the Manchester code reduces transfer time whenever a cluster is compressed by 2 bits: despite the cost of one idle clock, the remaining 1-bit gain still saves one clock of transfer time. The idle signal is never consecutive, because it acts only as boundary information between two adjacent secondary compressed binary clusters. The voltage transitions required by the basic Manchester coding rule are preserved while the idle signal is inserted, so DC balance is guaranteed. Simulation results show that even binary data compressed by other compression algorithms can be transferred about 12.6% faster with this method.
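
A sketch of the signalling idea as three-level symbols, under assumed conventions (IEEE-style Manchester mapping, one zero-voltage idle clock appended after each 2-bit-gain cluster as its boundary marker):

```python
def manchester_with_idle(clusters):
    """Encode compressed binary clusters as Manchester symbol pairs.

    Each bit maps to a (first-half, second-half) voltage pair, with the
    IEEE convention assumed: 0 -> (+1, -1), 1 -> (-1, +1). A (0, 0)
    idle clock follows clusters flagged as 2-bit-gain secondary
    clusters. `clusters` is a list of (bits, is_2bit_gain) tuples."""
    symbols = []
    for bits, two_bit_gain in clusters:
        for b in bits:
            symbols.append((+1, -1) if b == 0 else (-1, +1))
        if two_bit_gain:
            symbols.append((0, 0))   # idle clock: cluster delimiter
    return symbols

# Example: a 2-bit-gain cluster followed by an ordinary one.
print(manchester_with_idle([([1, 0, 1], True), ([0, 0], False)]))
```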