• Title/Summary/Keyword: Huffman code

Search results: 37

An Image Data Compression Algorithm by Means of Separating Edge Image and Non-Edge Image (윤곽선화상과 배경화상을 분리 처리하는 화상데이타 압축기법)

  • 최중한;김해수;조승환;이근영
    • The Journal of Korean Institute of Communications and Information Sciences / v.16 no.2 / pp.162-171 / 1991
  • This paper presents an algorithm for compressing image data by separating the image into two parts: an edge image containing the high-frequency components and a non-edge image containing the low-frequency components. The edge image is extracted using eight compass gradient masks, and the non-edge image is obtained by removing the edge image from the original image. The edge image is coded with a Huffman run-length code, while the non-edge image is first transformed by the DCT and the transformed coefficients are then coded according to a quantized bit-allocation table. For the test image GIRL, the proposed algorithm achieves a bit rate of 0.52 bpp at a PSNR of 36 dB. (A short illustrative sketch of the edge-extraction and run-length step follows this entry.)

  • PDF
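
The edge/non-edge split described above can be illustrated with a short, hypothetical Python sketch (not the authors' code): it applies eight Kirsch-style compass gradient masks to extract an edge map, removes the edges to leave a non-edge image, and run-length encodes the binary edge map. In the paper those run lengths would then be Huffman coded and the non-edge part would go through a DCT stage, both omitted here; the mask set and threshold are assumptions.

```python
import numpy as np

def conv3x3(img, kernel):
    """Correlate a 3x3 kernel with an edge-padded image; for max-over-rotations
    compass filtering the correlation/convolution distinction does not matter."""
    p = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * p[dy:dy + h, dx:dx + w]
    return out

def compass_masks():
    """Eight Kirsch compass masks, generated by rotating the border of the
    base mask in 45-degree steps."""
    base = np.array([[5.0, 5.0, 5.0],
                     [-3.0, 0.0, -3.0],
                     [-3.0, -3.0, -3.0]])
    border = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    masks = [base]
    for _ in range(7):
        prev = masks[-1]
        nxt = prev.copy()
        vals = [prev[i] for i in border]
        vals = vals[-1:] + vals[:-1]          # rotate the border one step
        for idx, v in zip(border, vals):
            nxt[idx] = v
        masks.append(nxt)
    return masks

def split_edge_non_edge(img, threshold=300.0):
    """Edge map = maximum compass response above a threshold; the non-edge
    image is the original with edge pixels removed (zeroed here for simplicity)."""
    response = np.max([np.abs(conv3x3(img, m)) for m in compass_masks()], axis=0)
    edge_map = response > threshold
    non_edge = np.where(edge_map, 0, img)
    return edge_map, non_edge

def run_length_encode(bitplane):
    """Row-major run-length coding of the binary edge map; these run lengths
    are the symbols a Huffman coder would operate on."""
    flat = bitplane.astype(np.uint8).ravel()
    runs, current, count = [], flat[0], 0
    for bit in flat:
        if bit == current:
            count += 1
        else:
            runs.append((int(current), count))
            current, count = bit, 1
    runs.append((int(current), count))
    return runs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64)).astype(float)
    edges, non_edge = split_edge_non_edge(img)
    print(len(run_length_encode(edges)), "runs in the edge bit plane")
```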

Design of the ICMEP Algorithm for the Highly Efficient Entropy Encoding (고효율 엔트로피 부호화를 위한 ICMEP 알고리즘 설계)

  • 이선근;임순자;김환용
    • Journal of the Institute of Electronics Engineers of Korea SD / v.41 no.4 / pp.75-82 / 2004
  • The channel transmission rate is increased by combining the Huffman algorithm, a lossy-transform model scheme that offers the minimum average code length for image information and good instantaneous decoding capability, with the Lempel-Ziv algorithm, which offers fast processing during compression. To further increase the processing speed of the compression stage, the ICMEP algorithm is proposed and an entropy encoder for HDTV is designed and verified. The ICMEP entropy encoder was designed with a top-down method and consists of source code and test benches written in behavioral VHDL. Simulation results confirm that the implemented ICMEP entropy encoder improves overall system efficiency by preventing memory saturation and increasing the compression ratio.
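
The ICMEP design itself is an HDTV hardware pipeline, but the two ingredients the abstract names, Lempel-Ziv dictionary parsing and Huffman coding, can be sketched in software. The snippet below is a hypothetical illustration only (the actual ICMEP algorithm and its VHDL implementation are not reproduced): a tiny LZ78-style parser emits (index, literal) pairs, and a textbook heap-based Huffman coder assigns prefix-free codewords to the literal symbols.

```python
import heapq
from collections import Counter

def lz78_parse(text):
    """LZ78-style parsing: emit (dictionary index, next literal) pairs."""
    dictionary, phrase, out = {}, "", []
    for ch in text:
        if phrase + ch in dictionary:
            phrase += ch
        else:
            out.append((dictionary.get(phrase, 0), ch))
            dictionary[phrase + ch] = len(dictionary) + 1
            phrase = ""
    if phrase:
        out.append((dictionary[phrase], ""))
    return out

def huffman_code(frequencies):
    """Textbook Huffman construction; returns {symbol: bitstring}."""
    heap = [[freq, i, {sym: ""}] for i, (sym, freq) in enumerate(frequencies.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, [f1 + f2, counter, merged])
        counter += 1
    return heap[0][2] if heap else {}

if __name__ == "__main__":
    text = "abracadabra abracadabra"
    pairs = lz78_parse(text)
    literal_freqs = Counter(literal for _, literal in pairs if literal)
    codes = huffman_code(literal_freqs)
    encoded = "".join(codes[literal] for _, literal in pairs if literal)
    print(pairs)
    print(codes, len(encoded), "bits for the literal stream")
```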

Characterization of New Two Parametric Generalized Useful Information Measure

  • Bhat, Ashiq Hussain;Baig, M. A. K.
    • Journal of Information Science Theory and Practice / v.4 no.4 / pp.64-74 / 2016
  • In this paper we define a new two-parametric generalized useful average code-word length $L_{\alpha}^{\beta}(P;U)$ and discuss its relationship with the new two-parametric generalized useful information measure $H_{\alpha}^{\beta}(P;U)$. The lower and upper bounds of $L_{\alpha}^{\beta}(P;U)$ in terms of $H_{\alpha}^{\beta}(P;U)$ are derived for a discrete noiseless channel. The measures defined in this communication are not only new; several well-known measures in the literature of useful information theory arise as particular cases of the proposed measures. The noiseless coding theorems for a discrete channel proved in this paper are verified by applying Huffman and Shannon-Fano coding schemes to empirical data. We also study the monotonic behavior of $H_{\alpha}^{\beta}(P;U)$ with respect to the parameters ${\alpha}$ and ${\beta}$, and its important properties.
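
The generalized measures $H_{\alpha}^{\beta}(P;U)$ and $L_{\alpha}^{\beta}(P;U)$ are specific to this paper and are not reproduced here, but the classical special case they generalize, the noiseless coding theorem $H(P) \le L < H(P) + 1$, can be checked numerically in the same spirit as the paper's empirical verification. The sketch below is a hypothetical illustration: it computes Huffman code lengths and Shannon code lengths $\lceil -\log_2 p_i \rceil$ (used here as a simple stand-in for the Shannon-Fano construction) for an arbitrary example distribution and compares the average lengths with the Shannon entropy.

```python
import heapq
import math

def huffman_lengths(probs):
    """Code lengths from Huffman merging; each merge adds one bit to every
    symbol in the two merged groups."""
    lengths = [0] * len(probs)
    heap = [(p, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, group1 = heapq.heappop(heap)
        p2, group2 = heapq.heappop(heap)
        for i in group1 + group2:
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, group1 + group2))
    return lengths

def shannon_lengths(probs):
    """Shannon code lengths l_i = ceil(-log2 p_i)."""
    return [math.ceil(-math.log2(p)) for p in probs]

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs)

if __name__ == "__main__":
    P = [0.4, 0.2, 0.15, 0.15, 0.05, 0.05]   # arbitrary example distribution
    H = entropy(P)
    for name, lengths in [("Huffman", huffman_lengths(P)),
                          ("Shannon", shannon_lengths(P))]:
        L = sum(p * l for p, l in zip(P, lengths))
        print(f"{name}: L = {L:.3f}, H = {H:.3f}, H <= L < H + 1: {H <= L < H + 1}")
```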

An image sequence coding using motion-compensated transform technique based on the sub-band decomposition (움직임 보상 기법과 분할 대역 기법을 사용한 동영상 부호화 기법)

  • Paek, Hoon;Kim, Rin-Chul;Lee, Sang-Uk
    • The Journal of Korean Institute of Communications and Information Sciences / v.21 no.1 / pp.1-16 / 1996
  • In this paper, by combining motion-compensated transform coding with the sub-band decomposition technique, we present a motion-compensated sub-band coding (MCSBC) technique for image sequence coding. Several problems related to MCSBC are discussed, such as a scheme for motion compensation in each sub-band and the efficient VWL coding of the DCT coefficients in each sub-band. For efficient coding, motion estimation and compensation are performed only on the LL sub-band, but the discrete cosine transform (DCT) is employed to encode all sub-bands in our approach. The transform coefficients in each sub-band are then scanned in a different manner depending on the energy distribution in the DCT domain, and coded using separate 2-D Huffman code tables that are optimized to the probability distribution of each sub-band. The performance of the proposed MCSBC technique is examined intensively through computer simulations on HDTV image sequences. The simulation results reveal that the proposed MCSBC technique outperforms other coding techniques, in particular the well-known motion-compensated transform coding technique by about 1.5 dB in average peak signal-to-noise ratio. (A sketch of the sub-band split and per-band scanning follows this entry.)

  • PDF
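
As a rough illustration of the sub-band side of this scheme (the motion-compensation and Huffman-table details are omitted), the hypothetical sketch below performs a one-level separable Haar-style split into LL/LH/HL/HH bands, applies an 8x8 DCT to a band, and chooses a different coefficient scan order per band: zig-zag for LL, and row- or column-major scans for the other bands. The orientation convention and scan choices are assumptions; the paper's actual MCSBC scan patterns and 2-D Huffman tables are not reproduced.

```python
import numpy as np
from scipy.fft import dctn

def haar_split(img):
    """One-level separable averaging/differencing split into LL, LH, HL, HH."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0      # horizontal average
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0      # horizontal difference
    LL = (a[0::2, :] + a[1::2, :]) / 2.0
    LH = (a[0::2, :] - a[1::2, :]) / 2.0
    HL = (d[0::2, :] + d[1::2, :]) / 2.0
    HH = (d[0::2, :] - d[1::2, :]) / 2.0
    return {"LL": LL, "LH": LH, "HL": HL, "HH": HH}

def zigzag_order(n=8):
    """A zig-zag index order for an n x n block (anti-diagonal traversal)."""
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda ij: (ij[0] + ij[1],
                                  ij[1] if (ij[0] + ij[1]) % 2 else ij[0]))

def scan_block(block, band):
    """Scan a DCT block with a band-dependent order: zig-zag for LL, row-major
    or column-major for the high-frequency bands (convention chosen only for
    illustration)."""
    coeffs = dctn(block, norm="ortho")
    if band == "LL":
        order = zigzag_order(block.shape[0])
        return np.array([coeffs[i, j] for i, j in order])
    if band == "LH":
        return coeffs.ravel(order="C")    # row-major scan
    return coeffs.ravel(order="F")        # column-major scan for HL/HH

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frame = rng.integers(0, 256, size=(16, 16)).astype(float)
    bands = haar_split(frame)
    for name, band in bands.items():
        print(name, scan_block(band[:8, :8], name)[:4])
```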

A Feedback Diffusion Algorithm for Compression of Sensor Data in Sensor Networks (센서 네트워크에서 데이터 압축을 위한 피드백 배포 기법)

  • Yeo, Myung-Ho;Seong, Dong-Ook;Cho, Yong-Jun;Yoo, Jae-Soo
    • Journal of KIISE:Databases / v.37 no.2 / pp.82-91 / 2010
  • Data compression is a traditional and effective technique for reducing network traffic. In general, sensor data exhibit strong correlation in both space and time, and many algorithms have been proposed to exploit these characteristics. However, each sensor can use only neighboring information, because its communication range is limited. Information about the distribution and characteristics of the whole set of sensor data provides further opportunities to enhance compression. In this paper, we propose an orthogonal approach: a compression algorithm based on a novel feedback diffusion scheme for sensor networks. The base station or a super node generates the Huffman code for compressing sensor data and broadcasts it into the sensor network. Every sensor that receives this information compresses its sensor data and transmits them to the base station. We call this approach feedback diffusion. To show the superiority of our approach, we compare it with existing aggregation algorithms in terms of sensor network lifetime. Our experimental results show that the overall network lifetime is prolonged by about 30%.
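
The feedback-diffusion idea can be sketched independently of the network stack: a base station derives a shared Huffman codebook from representative (quantized) readings, "broadcasts" it, and each sensor then encodes its own readings with that table. The snippet below is a hypothetical sketch with an in-memory broadcast; a hard-coded toy prefix code stands in for the base station's Huffman construction, and the quantization range is an assumption. The paper's actual protocol, radio model, and lifetime evaluation are not reproduced.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Toy prefix code standing in for the Huffman codebook the base station would
# build from observed symbol frequencies (bin 0 assumed most frequent).
TOY_CODEBOOK: Dict[int, str] = {0: "0", 1: "10", 2: "110", 3: "111"}

def quantize(reading: float, lo: float = 0.0, hi: float = 40.0, bins: int = 4) -> int:
    """Map a raw reading (e.g. a temperature) to one of `bins` symbols."""
    step = (hi - lo) / bins
    return min(bins - 1, max(0, int((reading - lo) // step)))

@dataclass
class Sensor:
    codebook: Dict[int, str] = field(default_factory=dict)

    def receive_feedback(self, codebook: Dict[int, str]) -> None:
        """Step 2 of feedback diffusion: store the broadcast codebook."""
        self.codebook = dict(codebook)

    def report(self, readings: List[float]) -> str:
        """Step 3: compress local readings with the shared code and 'transmit'."""
        return "".join(self.codebook[quantize(r)] for r in readings)

def base_station_decode(bitstream: str, codebook: Dict[int, str]) -> List[int]:
    """Prefix-free decoding at the base station."""
    inverse = {code: sym for sym, code in codebook.items()}
    symbols, buf = [], ""
    for bit in bitstream:
        buf += bit
        if buf in inverse:
            symbols.append(inverse[buf])
            buf = ""
    return symbols

if __name__ == "__main__":
    sensors = [Sensor() for _ in range(3)]
    for s in sensors:                     # step 1: base station broadcasts the code
        s.receive_feedback(TOY_CODEBOOK)
    packets = [s.report([12.1, 13.0, 25.4, 38.9]) for s in sensors]
    print(packets[0], base_station_decode(packets[0], TOY_CODEBOOK))
```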

BTC employing a Quad Tree Technique for Image Data Compression (QUAD TREE를 이용한 BTC에서의 영상데이타 압축)

  • 백인기;김해수;조성환;이근영
    • The Journal of Korean Institute of Communications and Information Sciences / v.13 no.5 / pp.390-399 / 1988
  • Conventional BTC has the merits of real-time processing and simple computation, but suffers from a low compression rate. In this paper, a modified BTC using the quadtree, which is frequently used for binary images, is proposed. The method lowers the bit rate by decreasing the total number of subblocks: subblocks are made large in areas of small gray-level variation and small in areas of large gray-level variation. For effective transmission of the bit plane, a Huffman run-length code is used for large subblocks and a lookup table for small subblocks. The proposed BTC method codes a 256-level image at an average data rate of about 0.8 bit/pixel. (A sketch of the quadtree split and the BTC step follows this entry.)

  • PDF
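
The quadtree-driven BTC can be sketched as follows: blocks are split recursively while the gray-level variation inside them is large, and each leaf is coded with standard BTC parameters (block mean, standard deviation, and a bit plane). This is a hypothetical illustration with assumed thresholds; the paper's Huffman run-length coding of large bit planes and its lookup table for small blocks are not reproduced.

```python
import numpy as np

def btc_leaf(block):
    """Standard BTC leaf coding: mean, standard deviation, and a bit plane."""
    mean, std = block.mean(), block.std()
    return {"mean": mean, "std": std, "bitplane": block >= mean}

def quadtree_btc(img, y=0, x=0, size=None, var_threshold=100.0, min_size=2):
    """Split a square block into four quadrants while its gray-level variance
    exceeds the threshold; code each final block with BTC."""
    if size is None:
        size = img.shape[0]              # assumes a square, power-of-two image
    block = img[y:y + size, x:x + size]
    if size <= min_size or block.var() <= var_threshold:
        return [{"y": y, "x": x, "size": size, **btc_leaf(block)}]
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += quadtree_btc(img, y + dy, x + dx, half, var_threshold, min_size)
    return leaves

def btc_reconstruct(img_shape, leaves):
    """Two-level BTC reconstruction preserving each block's mean and variance."""
    out = np.zeros(img_shape)
    for leaf in leaves:
        b, n = leaf["bitplane"], leaf["bitplane"].size
        q = int(b.sum())
        if q in (0, n):
            lo = hi = leaf["mean"]
        else:
            lo = leaf["mean"] - leaf["std"] * np.sqrt(q / (n - q))
            hi = leaf["mean"] + leaf["std"] * np.sqrt((n - q) / q)
        out[leaf["y"]:leaf["y"] + leaf["size"],
            leaf["x"]:leaf["x"] + leaf["size"]] = np.where(b, hi, lo)
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    img = rng.integers(0, 256, size=(64, 64)).astype(float)
    leaves = quadtree_btc(img)
    print(len(leaves), "leaf blocks; RMSE =",
          np.sqrt(np.mean((btc_reconstruct(img.shape, leaves) - img) ** 2)))
```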

A Study on the Data Compression of the Voice Signal using Multi Wavelet (다중 웨이브렛을 이용한 음성신호 데이터 압축에 관한 연구)

  • Kim, Tae-Hyung;Park, Jae-Woo;Yoon, Dong-Han;Noh, Seok-Ho;Cho, Ig-Hyun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / v.9 no.1 / pp.625-629 / 2005
  • With the rapid development of information and communication technology, the demand for efficient compression of multimedia data has increased significantly. In this paper, we design a new compression algorithm structure using wavelet bases for compressing ECG and audio signal data. We examine the compression efficiency of the 2-band structure and the wavelet packet structure, and for each structure we investigate the efficiency and reconstruction error of each wavelet basis function, using Daubechies and Coiflet wavelet coefficients. Finally, the data are compressed further with a Huffman code, and the resulting compression ratio (CR) and percent root-mean-square difference (PRD) are compared with those of the conventional DCT. (A sketch of the wavelet-plus-Huffman pipeline follows this entry.)

  • PDF
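
As a rough sketch of the pipeline described here (wavelet decomposition, coefficient quantization, then entropy coding), the snippet below assumes the PyWavelets package and a Daubechies basis; the quantized coefficients are reduced to a symbol histogram whose Huffman coding stage is only indicated. The test signal, quantization step, and decomposition level are assumptions, and the ECG/audio data, the wavelet-packet variant, and the CR/PRD comparison with the DCT are not reproduced.

```python
import numpy as np
import pywt                     # assumes the PyWavelets package is installed
from collections import Counter

def wavelet_quantize(signal, wavelet="db4", level=4, step=0.05):
    """Multi-level DWT followed by uniform quantization of all coefficients."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    quantized = [np.round(c / step).astype(int) for c in coeffs]
    return coeffs, quantized

def reconstruct(quantized, wavelet="db4", step=0.05):
    """Dequantize and invert the DWT."""
    dequant = [q.astype(float) * step for q in quantized]
    return pywt.waverec(dequant, wavelet)

def prd(original, reconstructed):
    """Percent root-mean-square difference, as used in the abstract."""
    reconstructed = reconstructed[: len(original)]
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2) /
                           np.sum(original ** 2))

if __name__ == "__main__":
    t = np.linspace(0, 1, 1024)
    signal = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)
    _, quantized = wavelet_quantize(signal)
    histogram = Counter(int(v) for q in quantized for v in q)
    print("distinct symbols for the Huffman stage:", len(histogram))
    print("PRD = %.3f %%" % prd(signal, reconstruct(quantized)))
```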

VLSI Design of DWT-based Image Processor for Real-Time Image Compression and Reconstruction System (실시간 영상압축과 복원시스템을 위한 DWT기반의 영상처리 프로세서의 VLSI 설계)

  • Seo, Young-Ho;Kim, Dong-Wook
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.1C / pp.102-110 / 2004
  • In this paper, we propose a VLSI structure for a real-time image compression and reconstruction processor using the 2-D discrete wavelet transform, and implement it in hardware with minimal resources using an ASIC library. In the implemented hardware, the data path consists of the DWT kernel for the forward and inverse wavelet transforms, the quantizer/dequantizer, the Huffman encoder/decoder, the adder/buffer for the inverse wavelet transform, and the input/output interface modules. The control part consists of the programming registers, the controller that decodes instructions and generates control signals, and the status register that exposes the internal state to the outside of the circuit. Depending on how it is programmed, the designed circuit can output wavelet coefficients, quantization coefficients or indices, and Huffman codes in image compression mode, and the Huffman decoding result, reconstructed quantization coefficients, and reconstructed wavelet coefficients in image reconstruction mode. The programming register has 16 stages, and one instruction performs one horizontal (or vertical) filtering pass at one level; since the registers operate automatically in the proper order, a 4-level discrete wavelet transform can be executed by a single program. We synthesized the designed circuit with the synthesis library of a Hynix 0.35 um CMOS process using the Synopsys synthesis tool and extracted the gate-level netlist. Timing information was extracted from the netlist with the Vela tool, timing simulation was performed with the NC-Verilog tool, and place-and-route and layout were carried out with the Apollo tool. The implemented hardware contains about 50,000 gates and operates stably at an 80 MHz clock frequency.
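
A software model of the selectable output formats described above can help read the datapath description. The sketch below is a hypothetical behavioural model, not the VHDL design: a one-level 2-D Haar transform stands in for the DWT kernel, a uniform quantizer for the quantizer block, and a run-length pair stream is used only as a placeholder for the Huffman encoder output, with a mode flag choosing which stage's result is exposed.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar analysis standing in for the DWT kernel."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0
    rows = np.hstack([a, d])
    a2 = (rows[0::2, :] + rows[1::2, :]) / 2.0
    d2 = (rows[0::2, :] - rows[1::2, :]) / 2.0
    return np.vstack([a2, d2])

def quantize(coeffs, step=4.0):
    """Uniform quantizer standing in for the quantizer block."""
    return np.round(coeffs / step).astype(int)

def run_length_pairs(indices):
    """Placeholder for the Huffman encoder: (value, run) pairs of the indices."""
    flat = indices.ravel()
    pairs, value, run = [], int(flat[0]), 0
    for v in flat:
        if int(v) == value:
            run += 1
        else:
            pairs.append((value, run))
            value, run = int(v), 1
    pairs.append((value, run))
    return pairs

def compression_pipeline(img, mode):
    """Mode-register model: 'dwt', 'quant', or 'entropy' selects which stage's
    output the circuit would expose."""
    coeffs = haar2d(img.astype(float))
    if mode == "dwt":
        return coeffs
    indices = quantize(coeffs)
    if mode == "quant":
        return indices
    return run_length_pairs(indices)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    img = rng.integers(0, 256, size=(8, 8)).astype(float)
    for mode in ("dwt", "quant", "entropy"):
        print(mode, type(compression_pipeline(img, mode)).__name__)
```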

Image Data Compression via Directional Zonal Filters in the DCT Domain (DCT 변환상에서 방향성 Zonal 필터를 이용한 화상 데이터 압축)

  • 정동범;김해수;조승환;이근영
    • The Journal of Korean Institute of Communications and Information Sciences / v.16 no.2 / pp.172-179 / 1991
  • In this paper we propose an efficient coding algorithm using directional filtering. An image is first transformed by the DCT, which has good energy compaction, and the transformed image is divided into a low-frequency component and several high-frequency components. The transformed coefficients of each part are transmitted separately using a Huffman code and inverse-transformed at the receiver. For the directional components, whole edge images are reconstructed at the zero-crossing points. The amount of data is reduced by removing the complex components and restricting the directional angles to 90 degrees. As a result, the proposed method is better than Kunt's method in terms of processing time and memory. It achieves acceptable image quality of 38 dB PSNR at a bit rate of 0.26 bpp. (A sketch of DCT zonal filtering follows this entry.)

  • PDF
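
The zonal-filtering step can be illustrated in isolation: after a block DCT, a low-frequency zone and a few directional zones are kept and everything else is discarded before coding. The following sketch is hypothetical, with arbitrarily chosen zone shapes and an 8x8 block size; the paper's zero-crossing edge reconstruction and Huffman coding stages are omitted.

```python
import numpy as np
from scipy.fft import dctn, idctn

def zonal_masks(n=8, low=3, band=2):
    """Boolean masks for a low-frequency zone and three directional zones on
    an n x n DCT block (zone shapes chosen for illustration only)."""
    i, j = np.indices((n, n))
    return {
        "low": (i + j) < low,                           # upper-left corner
        "row": (i < band) & ((i + j) >= low),           # zone along the first rows
        "column": (j < band) & ((i + j) >= low),        # zone along the first columns
        "diagonal": (np.abs(i - j) < band) & ((i + j) >= low),
    }

def zonal_filter(block, zones):
    """Keep only DCT coefficients inside the selected zones and invert."""
    coeffs = dctn(block, norm="ortho")
    keep = np.zeros_like(coeffs, dtype=bool)
    for name, mask in zonal_masks(block.shape[0]).items():
        if name in zones:
            keep |= mask
    return idctn(np.where(keep, coeffs, 0.0), norm="ortho"), int(keep.sum())

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    block = rng.integers(0, 256, size=(8, 8)).astype(float)
    filtered, kept = zonal_filter(block, zones={"low", "row"})
    print(kept, "of", block.size, "coefficients kept;",
          "RMSE = %.2f" % np.sqrt(np.mean((filtered - block) ** 2)))
```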

A study of Image Compression Algorithm using DCT (DCT를 이용한 영상압축 알고리즘에 관한 연구)

  • 한동호;이준노
    • Journal of Biomedical Engineering Research / v.13 no.4 / pp.323-330 / 1992
  • This paper describes a system that implements the JPEG (Joint Photographic Experts Group) algorithm, based on the DCT (Discrete Cosine Transform), using a CCD camera, an image grabber, and an IBM PC. After the acquired image is cosine-transformed, the algorithm quantizes and entropy-encodes the coefficients with the JPEG code tables. The coefficients are reconstructed by Huffman decoding, dequantization, and the inverse cosine transform. The results obtained from the implemented system are as follows. (1) For efficient storage and easy implementation, the system saves images in the PCX format. (2) The system achieves a 7:1 compression ratio (3.8 RMSE) without large distortion. (3) With low-pass filtering, the system eliminates high-frequency components and obtains a 20% better compression ratio. (4) The system enhances the reconstructed image using histogram modeling. (A sketch of the block-DCT and quantization step follows this entry.)

  • PDF
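
A compact sketch of the DCT/quantization core of such a JPEG-style system is given below, assuming the standard JPEG luminance quantization table and an 8x8 block size; the Huffman entropy coding, the PCX file handling, and the camera/grabber hardware described in the abstract are outside its scope.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Standard JPEG luminance quantization table (Annex K of the JPEG standard).
Q_LUMA = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
], dtype=float)

def encode_block(block):
    """Level-shift, 2-D DCT, and uniform quantization of one 8x8 block."""
    coeffs = dctn(block - 128.0, norm="ortho")
    return np.round(coeffs / Q_LUMA).astype(int)   # these indices feed the Huffman coder

def decode_block(indices):
    """Dequantize, inverse DCT, and undo the level shift."""
    return idctn(indices * Q_LUMA, norm="ortho") + 128.0

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    # Smooth toy image block: a gradient plus a little noise.
    block = np.add.outer(np.arange(8) * 10.0, np.arange(8) * 5.0) + rng.normal(0, 2, (8, 8))
    reconstructed = decode_block(encode_block(block))
    print("RMSE of one reconstructed block: %.2f" % rmse(block, reconstructed))
```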