• Title/Summary/Keyword: codeword length

Search results: 40

$L_{1:1}^{\beta}(t)$ IN TERMS OF A GENERALIZED MEASURE OF ENTROPY

  • Hooda, D.S.; Ram, Anant
    • Journal of applied mathematics & informatics, v.5 no.1, pp.201-212, 1998
  • In the present paper we define codes which assign a one-to-one codeword over a D-symbol alphabet to each outcome of a random variable, and the functions which represent possible transformations from one-to-one codes of size D to suitable codes. Using these functions we obtain lower bounds on the exponentiated mean codeword length for one-to-one codes in terms of the generalized entropy of order $\alpha$ and type $\beta$, and also study particular cases.
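
For orientation, the classical single-parameter setting that this paper generalizes can be sketched as follows: Campbell's exponentiated mean codeword length $L(t) = \frac{1}{t}\log_D \sum_i p_i D^{t\,l_i}$, which for uniquely decodable codes is bounded below by the Rényi entropy of order $\alpha = 1/(1+t)$. This is a textbook illustration, not the paper's two-parameter $(\alpha,\beta)$ bound for one-to-one codes.

```python
import math

def exp_mean_length(probs, lengths, t, D=2):
    """Campbell's exponentiated mean codeword length L(t)."""
    return math.log(sum(p * D ** (t * l) for p, l in zip(probs, lengths)), D) / t

def renyi_entropy(probs, alpha, D=2):
    """Renyi entropy of order alpha (alpha != 1)."""
    return math.log(sum(p ** alpha for p in probs), D) / (1 - alpha)

# For a uniquely decodable code, L(t) >= Renyi entropy of order 1/(1+t):
probs, lengths = [0.5, 0.25, 0.25], [1, 2, 2]   # an optimal binary prefix code
L = exp_mean_length(probs, lengths, t=1.0)       # = log2(3) ~ 1.585
H = renyi_entropy(probs, alpha=0.5)              # ~ 1.543
```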

A CODING THEOREM ON GENERALIZED R-NORM ENTROPY

  • Hooda, D.S.
    • Journal of applied mathematics & informatics, v.8 no.3, pp.881-888, 2001
  • Recently, Hooda and Ram [7] have proposed and characterized a new generalized measure of R-norm entropy. In the present communication we study its application in coding theory. Various mean codeword lengths and their bounds are defined, and a coding theorem on lower and upper bounds of a generalized mean codeword length in terms of the generalized R-norm entropy is proved.
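
For reference, the base (non-generalized) R-norm entropy of Boekee and Van der Lubbe, which work along these lines builds on, is commonly written as (stated from the standard literature, not taken from this paper):

```latex
H_R(P) \;=\; \frac{R}{R-1}\left[\,1-\Bigl(\sum_{i=1}^{n} p_i^{\,R}\Bigr)^{1/R}\right],
\qquad R>0,\; R\neq 1.
```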

Data compression algorithm with two-byte codeword representation

  • Yang, Young-Il; Kim, Do-Hyun
    • Journal of the Korean Institute of Telematics and Electronics C, v.34C no.3, pp.23-36, 1997
  • In this paper, a new data model for the hardware implementation of the Lempel-Ziv compression algorithm is proposed. The traditional model generates a codeword consisting of 3 bytes: the last symbol, the position, and the matched length. The MSB (most significant bit) of the last symbol is the compression flag, and the remaining seven bits represent the character. We confine the value of the matched length to 128 instead of 256, so that it can be coded with seven bits only. In the proposed model, the codeword consists of 2 bytes: the merged symbol and the position. The MSB of the merged symbol is the compression flag, and the remaining seven bits represent either the character or the matched length, according to the value of the compression flag. The proposed model reduces the compression ratio by 5% compared with the traditional model and can be adopted in existing hardware architectures. The factors that increase the compression ratio are also analyzed in this paper.
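
A hypothetical sketch of the 2-byte codeword layout described above (the paper targets a hardware implementation; the exact field encodings here, such as storing length minus one and a one-byte position, are assumptions):

```python
# Byte 0 is the "merged symbol": MSB = compression flag; the low 7 bits hold
# either a literal character (flag 0) or the matched length minus one
# (flag 1, so lengths 1..128 fit in 7 bits). Byte 1 is the match position.

def pack_literal(ch):
    assert 0 <= ch < 128
    return bytes([ch, 0])                           # flag = 0: literal

def pack_match(length, position):
    assert 1 <= length <= 128 and 0 <= position < 256
    return bytes([0x80 | (length - 1), position])   # flag = 1: match

def unpack(cw):
    flag, low7 = cw[0] >> 7, cw[0] & 0x7F
    return ("match", low7 + 1, cw[1]) if flag else ("literal", low7)
```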


Multi-symbol Accessing Huffman Decoding Method for MPEG-2 AAC

  • Lee, Eun-Seo; Lee, Kyoung-Cheol; Son, Kyou-Jung; Moon, Seong-Pil; Chang, Tae-Gyu
    • Journal of Electrical Engineering and Technology, v.9 no.4, pp.1411-1417, 2014
  • An MPEG-2 AAC Huffman decoding method based on fixed-length compacted codeword tables, where each compacted codeword can contain multiple Huffman codes, is proposed. The proposed method enhances searching efficiency by finding multiple symbols in a single search, i.e., a direct memory read of the compacted codeword table. Memory usage is significantly reduced by separately handling the Huffman codes that exceed the length of the compacted codewords. The trade-off between computational complexity and memory usage is analytically derived to find the proper length of the compacted codewords for the design of an MPEG-2 AAC decoder. To validate the proposed algorithm, its performance was experimentally evaluated with an implemented MPEG-2 AAC decoder. The results show that the computational complexity of the proposed method is reduced to 54% of that of the most up-to-date method.
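
The table-based idea can be sketched as follows, with a toy prefix code standing in for the AAC codebooks (the table layout and the handling of the trailing bits are assumptions, not the paper's design): every k-bit window is precomputed to yield all complete symbols it contains plus the number of bits consumed, so one memory read can emit several symbols.

```python
CODE = {"a": "0", "b": "10", "c": "11"}   # toy prefix code, not an AAC codebook
K = 3                                     # compacted codeword length in bits

def build_table(code, k):
    """Map every k-bit window to (decoded symbols, bits consumed)."""
    table = {}
    for idx in range(2 ** k):
        bits = format(idx, f"0{k}b")
        syms, pos = [], 0
        while True:
            for s, cw in code.items():
                if pos + len(cw) <= k and bits.startswith(cw, pos):
                    syms.append(s)
                    pos += len(cw)
                    break
            else:
                break                     # no further complete codeword fits
        table[bits] = (syms, pos)
    return table

def decode(bitstring, table, k):
    """Emit possibly several symbols per table lookup."""
    out, pos = [], 0
    while pos + k <= len(bitstring):
        syms, used = table[bitstring[pos:pos + k]]
        out.extend(syms)
        pos += used
    return "".join(out)   # any trailing < k bits are handled separately
```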

Automatic Music Summarization Using Similarity Measure Based on Multi-Level Vector Quantization

  • Kim, Sung-Tak; Kim, Sang-Ho; Kim, Hoi-Rin
    • The Journal of the Acoustical Society of Korea, v.26 no.2E, pp.39-43, 2007
  • Music summarization refers to a technique which automatically extracts the most important and representative segments of music content. In this paper, we propose and evaluate a technique which provides the repeated part of music content as the music summary. To extract a repeated segment, the proposed algorithm uses a weighted sum of similarity measures based on multi-level vector quantization, for either fixed-length or optimal-length summaries. Two similarity measures are proposed: a count-based measure, which uses the number of identical codewords at the same positions in two segments, and a distance-based measure, which uses the Mahalanobis distance between features that share a codeword at the same position. Fixed-length summaries are evaluated by measuring the overlap between hand-marked repeated parts and automatically generated ones; optimal-length summaries are evaluated by how much of the repeated parts of the music content they include. Experiments show that, in terms of summary length, optimal-length summaries capture the repeated parts of music content more effectively than fixed-length summaries.
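
The count-based similarity idea reduces to a simple position-wise comparison of codeword index sequences; a minimal sketch (function name and normalization are assumptions, not the paper's formulation):

```python
def count_similarity(codes_a, codes_b):
    """Fraction of frame positions where two segments were quantized
    to the same VQ codeword index."""
    n = min(len(codes_a), len(codes_b))
    if n == 0:
        return 0.0
    matches = sum(1 for i in range(n) if codes_a[i] == codes_b[i])
    return matches / n
```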

Design of Reversible Variable-Length Codes Using Properties of the Huffman Code and the Average Length Function

  • Jung, Wook-Hyun; Yoon, Young-Suk; Ho, Yo-Sung
    • Proceedings of the IEEK Conference, 2003.11a, pp.137-140, 2003
  • In this paper, we propose a new construction algorithm for reversible variable-length codes (RVLCs) using a simplified average length function of the optimal Huffman code. RVLCs are included among the error resilience tools in H.263+ and MPEG-4 owing to their error-correcting capability. The proposed algorithm demonstrates an improved performance in terms of average codeword length over existing RVLC algorithms.
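
A variable-length code is reversible (decodable both forwards and backwards) exactly when it is both prefix-free and suffix-free. A quick check of that property, with illustrative codewords (not taken from the paper):

```python
def is_prefix_free(codes):
    """No codeword is a proper prefix of another."""
    return not any(a != b and b.startswith(a) for a in codes for b in codes)

def is_reversible(codes):
    """An RVLC must be prefix-free and suffix-free (prefix-free when
    every codeword is reversed)."""
    return is_prefix_free(codes) and is_prefix_free([c[::-1] for c in codes])
```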


Data Compression Capable of Error Control Using Block-sorting and VF Arithmetic Code (블럭정렬과 VF형 산술부호에 의한 오류제어 기능을 갖는 데이터 압축)

  • Lee, Jin-Ho; Cho, Suk-Hee; Park, Ji-Hwan; Kang, Byong-Uk
    • The Transactions of the Korea Information Processing Society, v.2 no.5, pp.677-690, 1995
  • In this paper, we propose a high-efficiency data compression method capable of error control, using block-sorting, move-to-front (MTF) coding, and variable-to-fixed arithmetic coding. First, the input string is parsed into substrings of length N, each of which is cyclically shifted one symbol at a time, and the resulting rows are sorted in lexicographical order. Second, the MTF technique is applied to exploit the locality of reference in the sorted substrings. The preprocessed sequence is then coded using a VF (variable-to-fixed) arithmetic code, which limits error propagation to one codeword. The key point is how to split the fixed-length codeword set in proportion to the symbol probabilities in the VF arithmetic code. We develop a new VF arithmetic coding scheme that completely splits the codeword set for an arbitrary source alphabet. In addition, an extended representation for symbol probabilities is designed using a recursive Gray conversion. The performance of the proposed method is compared with other well-known source coding methods with respect to entropy, compression ratio, and coding time.
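
The block-sorting and MTF preprocessing stages can be sketched in their textbook Burrows-Wheeler form (not the authors' formulation; the VF arithmetic coding stage is omitted):

```python
def bwt(s):
    """Block-sorting transform: sort all cyclic rotations of s and take
    the last column. (A real coder adds an end-of-block sentinel so the
    transform is invertible.)"""
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def mtf(s, alphabet):
    """Move-to-front: emit each symbol's current index, then move it to
    the front, so runs of recently seen symbols become small integers."""
    table = list(alphabet)
    out = []
    for ch in s:
        i = table.index(ch)
        out.append(i)
        table.insert(0, table.pop(i))
    return out
```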


Lossy Source Compression of Non-Uniform Binary Source via Reinforced Belief Propagation over GQ-LDGM Codes

  • Zheng, Jianping; Bai, Baoming; Li, Ying
    • ETRI Journal, v.32 no.6, pp.972-975, 2010
  • In this letter, we consider the lossy coding of a non-uniform binary source based on GF(q)-quantized low-density generator matrix (LDGM) codes with check degree $d_c$=2. By quantizing the GF(q) LDGM codeword, a non-uniform binary codeword can be obtained, which is suitable for direct quantization of the non-uniform binary source. Encoding is performed by reinforced belief propagation, a variant of belief propagation. Simulation results show that the performance of our method is quite close to the theoretical rate-distortion bounds. For example, when the GF(16) LDGM code with a rate of 0.4 and block length of 1,500 is used to compress the non-uniform binary source with the probability of a 1 being 0.23, the distortion is 0.091, which is very close to the optimal theoretical value of 0.074.
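
The quoted optimum can be checked against the rate-distortion function of a Bernoulli(p) source under Hamming distortion, R(D) = H(p) - H(D): solving H(D) = H(0.23) - 0.4 by bisection indeed gives D close to 0.074. A small sketch:

```python
import math

def h2(p):
    """Binary entropy function in bits."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def optimal_distortion(p, rate, lo=1e-9, hi=0.5, iters=100):
    """Solve h2(D) = h2(p) - rate for D by bisection; h2 is increasing
    on (0, 0.5], so the root is unique there."""
    target = h2(p) - rate
    for _ in range(iters):
        mid = (lo + hi) / 2
        if h2(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```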

Recovering from Bit Errors in Scalar-Quantized Discrete Wavelet Transform Images

  • Choi, Seung-Kyu; Lee, Deuk-Jae; Jang, Eun-Young; Bae, Cheol-Soo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2002.05a, pp.594-597, 2002
  • In this paper we study the effects of transmission noise on fixed-length coded wavelet coefficients. We use a posteriori detectors which incorporate inter-bitplane information and determine which transmitted codeword was most likely corrupted into a received erroneous codeword. We present a simple method of recovering from these errors once they are detected, and demonstrate our restoration methodology on scalar-quantized wavelet coefficients transmitted across a binary symmetric channel.
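
A hedged illustration of the underlying detection principle: over a binary symmetric channel with crossover probability below 0.5, the most likely transmitted codeword is the valid one at minimum Hamming distance from the received word. This is a generic maximum-likelihood stand-in, not the paper's inter-bitplane a posteriori detector.

```python
def hamming(a, b):
    """Number of differing bit positions between two equal-length words."""
    return sum(x != y for x, y in zip(a, b))

def ml_detect(received, codebook):
    """Maximum-likelihood detection over a BSC (p < 0.5): pick the valid
    codeword nearest the received word in Hamming distance."""
    return min(codebook, key=lambda cw: hamming(cw, received))
```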
