Title/Summary/Keyword: Compression Algorithm

The Cooperative Parallel X-Match Data Compression Algorithm

  • Yoon, Sang-Kyun
    • Journal of KIISE: Computer Systems and Theory, v.30 no.10, pp.586-594, 2003
  • The X-Match algorithm is a lossless compression algorithm suitable for hardware implementation owing to its simplicity. It can compress 32 bits per clock cycle, which makes it suitable for real-time compression. However, as bus widths grow to 64 bits, the compression unit needs to widen accordingly. This paper proposes the cooperative parallel X-Match (X-MatchCP) algorithm, which improves compression speed by running two X-Match operations in parallel. Whereas the previous parallel X-Match algorithm searches an individual dictionary per engine, X-MatchCP searches the whole dictionary for both words, combines the compression codes the two parallel X-Match engines generate, and outputs the combined code. The compression ratio of X-MatchCP is almost the same as that of X-Match. The algorithm is described and simulated in the Verilog hardware description language.
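
As orientation, a minimal software model of the dictionary step X-Match builds on, assuming full matches only and a move-to-front dictionary; the real algorithm also codes partial byte-level matches, and the comparison against all entries happens in parallel in hardware:

```python
# Minimal software model of an X-Match-style dictionary compressor.
# Hits are coded as a dictionary index, misses as a literal word, and the
# dictionary is maintained move-to-front. Token format is illustrative,
# not the paper's bit-level layout.

def xmatch_sketch(data: bytes, dict_size: int = 16):
    dictionary = []                 # most-recently-used word first
    tokens = []
    for i in range(0, len(data), 4):
        word = data[i:i + 4]        # one 4-byte word per clock cycle
        if word in dictionary:
            idx = dictionary.index(word)
            tokens.append(("MATCH", idx))              # hit: emit index
            dictionary.insert(0, dictionary.pop(idx))  # move to front
        else:
            tokens.append(("LITERAL", word))           # miss: emit word
            dictionary.insert(0, word)
            if len(dictionary) > dict_size:
                dictionary.pop()                       # evict oldest entry
    return tokens

print(xmatch_sketch(b"ABCDEFGHABCDEFGH"))
```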

BTC Algorithm Utilizing Compression Method of Bitmap and Quantization Data for Image Compression

  • Cho, Moonki; Yoon, Yungsup
    • Journal of the Institute of Electronics and Information Engineers, v.49 no.10, pp.135-141, 2012
  • To reduce frame memory usage in LCD overdrive, block truncation coding (BTC) image compression is commonly used. To maximize the compression ratio, BTC image compression needs to compress the bitmap or the quantization data as well. In this paper, we propose the CMBQ-BTC algorithm (CMBQ: compression method for bitmap and quantization data) to achieve a high compression ratio. Experimental results show that the proposed algorithm is efficient in terms of PSNR and compression ratio compared with the conventional BTC method.
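
For reference, a minimal sketch of the baseline two-level BTC coding that CMBQ-BTC compresses further (the AMBTC variant, which keeps the mean of each pixel group); the CMBQ coding step itself is not detailed in the abstract:

```python
import numpy as np

def btc_encode(block: np.ndarray):
    """AMBTC-style coding of one block: a 1-bit-per-pixel bitmap plus
    the mean of each of the two pixel groups."""
    mean = block.mean()
    bitmap = block >= mean                               # threshold at the mean
    hi = block[bitmap].mean() if bitmap.any() else mean  # level for 1-bits
    lo = block[~bitmap].mean() if (~bitmap).any() else mean  # level for 0-bits
    return bitmap, lo, hi

def btc_decode(bitmap, lo, hi):
    return np.where(bitmap, hi, lo)                      # two-level reconstruction

block = np.array([[2., 9., 9., 2.],
                  [2., 9., 9., 2.]])
print(btc_decode(*btc_encode(block)))
```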

A Pattern Matching Extended Compression Algorithm for DNA Sequences

  • Murugan, A.; Punitha, K.
    • International Journal of Computer Science & Network Security, v.21 no.8, pp.196-202, 2021
  • DNA sequencing provides fundamental data in genomics, bioinformatics, biology, and many other research areas. With the rapid evolution of DNA sequencing technology, a massive amount of genomic data, mainly DNA sequences, is produced every day, demanding ever more storage and bandwidth. Managing, analyzing, and especially storing these large amounts of data has become a major scientific challenge for bioinformatics. Such large volumes of data also require fast transmission, effective storage, and quick access to any record. Storage costs make up a considerable proportion of the total cost of producing and analyzing DNA sequences, so the disk space consumed by DNA sequences must be tightly controlled, yet standard compression techniques fail to compress these sequences well, and several specialized lossless techniques have been introduced for this purpose. This paper describes a new DNA compression mechanism, the pattern matching extended compression algorithm, which reads the input sequence as segments, finds matching patterns, and stores them in a permanent or temporary table according to the number of bases. The remaining unmatched sequence is converted into binary form, grouped into seven-bit units, and those units are then converted into ASCII characters. Finally, the proposed algorithm dynamically calculates the compression ratio. The results show that the pattern matching extended compression algorithm outperforms cutting-edge compressors in compression ratio regardless of file size.
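
The "group into seven bits, convert to ASCII" step can be illustrated directly. A hedged sketch, assuming a 2-bit-per-base encoding that the abstract does not actually specify:

```python
# Sketch of the abstract's "group into seven bits, convert to ASCII" step.
# The 2-bit-per-base mapping is an assumption for illustration; the paper's
# actual binary encoding of unmatched bases is not given in the abstract.

BASE_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}

def pack_unmatched(seq: str) -> str:
    bits = "".join(BASE_BITS[b] for b in seq)   # bases -> bit string
    out = []
    for i in range(0, len(bits), 7):            # split into 7-bit groups
        group = bits[i:i + 7].ljust(7, "0")     # zero-pad the last group
        out.append(chr(int(group, 2)))          # each group -> one ASCII char
    return "".join(out)

packed = pack_unmatched("ACGTACGTAC")
print([ord(c) for c in packed])   # 10 bases (20 bits) -> 3 ASCII characters
```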

Lossy Image Compression Based on Quad Tree Algorithm and Geometrical Wavelets

  • Chu, Hyung-Suk; An, Chong-Koo
    • The Transactions of The Korean Institute of Electrical Engineers, v.58 no.11, pp.2292-2298, 2009
  • In this paper, a lossy image compression algorithm using the quad tree and bandelets is proposed. The proposed algorithm transforms input images with the discrete wavelet transform (DWT) and represents the geometrical structures of the high-frequency bands using bandelets with a block size of 8. In addition, it locates the positions of the significant coefficients with the quad tree algorithm and computes their magnitude and sign information with the embedded zerotree wavelet (EZW) algorithm. The quad tree algorithm improves the PSNR on high-frequency images by up to 1 dB compared to the JPEG-2000 and S+P algorithms, and the combination of DWT and bandelets improves the PSNR by up to 7.5 dB compared to DWT alone.
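
The quad-tree position search can be pictured as a recursive significance test: a block is split into four quadrants only where some coefficient still exceeds the threshold, so large insignificant regions cost a single symbol. A minimal sketch with a simplified symbol stream:

```python
import numpy as np

def quadtree_significance(coeffs, threshold, x=0, y=0, size=None, out=None):
    """Recursively locate coefficients with |c| >= threshold.
    Emits 0 for an all-insignificant block, 1 plus recursion otherwise."""
    if out is None:
        out = []
        size = coeffs.shape[0]
    block = coeffs[y:y + size, x:x + size]
    if np.max(np.abs(block)) < threshold:
        out.append(0)                          # whole block insignificant
    elif size == 1:
        out.append(1)                          # significant coefficient found
    else:
        out.append(1)
        h = size // 2                          # split into four quadrants
        for dy in (0, h):
            for dx in (0, h):
                quadtree_significance(coeffs, threshold, x + dx, y + dy, h, out)
    return out

c = np.zeros((4, 4)); c[3, 3] = 9.0
print(quadtree_significance(c, threshold=4.0))   # [1, 0, 0, 0, 1, 0, 0, 0, 1]
```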

Color Image Vector Quantization Using Enhanced SOM Algorithm

  • Kim, Kwang-Baek
    • Journal of Korea Multimedia Society, v.7 no.12, pp.1737-1744, 2004
  • Among the compression methods widely used today, image compression by vector quantization (VQ) is the most popular and shows a good data compression ratio. Almost all VQ methods use the LBG algorithm, which reads the entire image several times and moves the code vectors into optimal positions at each step. This complexity requires a considerable amount of execution time. To overcome this time-consuming constraint, we propose an enhanced self-organizing neural network for color images. In this study, we improved the competitive learning method by employing three methods for codebook generation. The results demonstrated that the proposed method improved the compression ratio to a greater degree than the conventional SOM.
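
A minimal sketch of the competitive-learning update underlying an SOM codebook for VQ: each training vector pulls its best-matching code vector toward itself. The paper's three specific enhancements are not reproduced here:

```python
import numpy as np

def som_vq_codebook(pixels, k=8, epochs=5, lr=0.2, seed=0):
    """Basic SOM-style competitive learning for a VQ codebook."""
    rng = np.random.default_rng(seed)
    codebook = pixels[rng.choice(len(pixels), size=k, replace=False)].astype(float)
    for _ in range(epochs):
        for v in pixels:
            # find the best-matching (winning) code vector ...
            winner = np.argmin(np.sum((codebook - v) ** 2, axis=1))
            # ... and move it toward the input vector
            codebook[winner] += lr * (v - codebook[winner])
    return codebook

# toy "image": 100 RGB pixels clustered around two colors
rng = np.random.default_rng(1)
pix = np.vstack([rng.normal(50, 5, (50, 3)), rng.normal(200, 5, (50, 3))])
print(som_vq_codebook(pix, k=2).round(1))
```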


A Study on the Improvement of the EZW Algorithm for Lossy Image Compression

  • Chu, Hyung-Suk; An, Chong-Koo
    • The Transactions of The Korean Institute of Electrical Engineers, v.56 no.2, pp.415-419, 2007
  • Data compression is very important for the storage and transmission of information. The EZW image compression algorithm has been widely used in real applications due to its high compression performance. In the EZW algorithm, when a new significant coefficient is generated, all of its children are encoded even though all of its descendants may be insignificant, which degrades performance. In this paper, we propose an improved EZW algorithm using an IS (isolated significant) symbol, which checks all descendants of a significant coefficient and avoids encoding the children of each newly generated significant coefficient when it has no significant descendant.
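
The test the IS symbol relies on can be sketched as a descendant-significance check over the wavelet quad-tree. The sketch assumes the usual EZW parent-child layout, where a coefficient at (r, c) has children at (2r, 2c) through (2r+1, 2c+1), and excludes the coarsest LL band, whose root has a different child layout:

```python
import numpy as np

def has_significant_descendant(coeffs, r, c, threshold):
    """True if any descendant of coefficient (r, c) in the wavelet
    quad-tree is significant. Assumes (r, c) lies outside the coarsest
    LL band."""
    n = coeffs.shape[0]
    children = [(2 * r, 2 * c), (2 * r, 2 * c + 1),
                (2 * r + 1, 2 * c), (2 * r + 1, 2 * c + 1)]
    for cr, cc in children:
        if cr >= n or cc >= n:
            return False                        # finest scale reached
        if abs(coeffs[cr, cc]) >= threshold:
            return True                         # significant child
        if has_significant_descendant(coeffs, cr, cc, threshold):
            return True                         # significant deeper descendant
    return False

c = np.zeros((8, 8)); c[6, 6] = 10.0
print(has_significant_descendant(c, 1, 1, threshold=8))  # True via (3,3) -> (6,6)
```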

BTC Algorithm Utilizing Multi-Level Quantization Method for Image Compression

  • Cho, Moonki; Yoon, Yungsup
    • Journal of the Institute of Electronics and Information Engineers, v.50 no.6, pp.114-121, 2013
  • BTC image compression is simple, is easy to implement in hardware, and is widely used in the video compression required for LCD overdrive. In this paper, a multi-level quantization BTC (MLQ-BTC) algorithm is proposed as a method for reducing compression loss. In the MLQ-BTC algorithm, an input image is compressed and decompressed with both a quasi 8-level method and an advanced 2-level BTC method, and the variant with the smallest compression loss is selected. Simulation results show that the proposed algorithm is efficient in terms of PSNR and compression ratio compared with existing BTC methods.
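
The selection step is straightforward to sketch: encode and decode a block with each candidate quantizer and keep the variant with the smaller reconstruction error. The two quantizers below are illustrative stand-ins, not the paper's quasi 8-level and advanced 2-level methods:

```python
import numpy as np

def two_level(block):
    """Baseline 2-level BTC: threshold at the mean, keep group means."""
    m = block.mean()
    bmp = block >= m
    hi = block[bmp].mean() if bmp.any() else m
    lo = block[~bmp].mean() if (~bmp).any() else m
    return np.where(bmp, hi, lo)

def four_level(block):
    """Stand-in multi-level quantizer: 4 uniform levels (illustrative only)."""
    lo, hi = block.min(), block.max()
    if hi == lo:
        return block.copy()
    q = np.round((block - lo) / (hi - lo) * 3)      # quantize to levels 0..3
    return lo + q * (hi - lo) / 3

def pick_best(block, variants):
    """Reconstruct with each candidate, keep the smallest-MSE variant."""
    errs = {name: float(np.mean((block - f(block)) ** 2)) for name, f in variants}
    return min(errs, key=errs.get), errs

block = np.array([[0, 3, 6, 9]], dtype=float)
print(pick_best(block, [("2-level", two_level), ("4-level", four_level)]))
```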

An Efficient Data Compression Algorithm for Binary Images

  • Kang, Ho-Gab; Lee, Keun-Young
    • Proceedings of the KIEE Conference, 1987.07b, pp.1375-1378, 1987
  • In this paper, an efficient data compression algorithm for binary images is proposed. The algorithm makes use of the fact that the boundaries contain all the information about such images. Compression efficiency is then further increased by efficient coding of the Boundary Information Matrix. The performance was compared with modified Huffman coding by computer simulation on several images. The simulation results showed that the proposed algorithm was more efficient than the modified Huffman code.
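
The premise that boundaries carry all the information of a binary image can be illustrated with a simple boundary-pixel extraction; the abstract does not define the Boundary Information Matrix itself, so this sketch stops at locating the boundary:

```python
import numpy as np

def boundary_pixels(img: np.ndarray) -> np.ndarray:
    """Mark foreground pixels that touch the background (4-neighborhood).
    For a binary image, these boundary pixels determine the whole shape."""
    padded = np.pad(img, 1, constant_values=0)
    core = padded[1:-1, 1:-1].astype(bool)
    neighbors_all_fg = (padded[:-2, 1:-1].astype(bool) & padded[2:, 1:-1].astype(bool)
                        & padded[1:-1, :-2].astype(bool) & padded[1:-1, 2:].astype(bool))
    return core & ~neighbors_all_fg   # foreground with a background neighbor

img = np.zeros((5, 5), dtype=int); img[1:4, 1:4] = 1
print(boundary_pixels(img).astype(int))   # ring of 1s, hollow interior
```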


A New Method of Lossless Universal Data Compression

  • Kim, Sung-Soo; Lee, Hae-Kee
    • The Transactions of the Korean Institute of Electrical Engineers P, v.58 no.3, pp.285-290, 2009
  • In this paper, we propose a new algorithm that improves the lossless data compression rate. The proposed algorithm lessens redundancy and improves the compression rate by roughly 40 to 80 percent, depending on the characteristics of the binary images used for compression. To demonstrate the superiority of the proposed method, it is compared with LZ78 (and LZ77) through experimental results and theoretical analysis.
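
Since the entry benchmarks against LZ78, a textbook LZ78 coder is a useful reference point: it grows a phrase dictionary and emits (index of longest known prefix, next symbol) pairs:

```python
def lz78_encode(data: str):
    """Textbook LZ78: emit (dictionary index of the longest known prefix,
    next symbol) and add the extended phrase to the dictionary."""
    dictionary = {"": 0}
    tokens, phrase = [], ""
    for ch in data:
        if phrase + ch in dictionary:
            phrase += ch                    # extend the current match
        else:
            tokens.append((dictionary[phrase], ch))
            dictionary[phrase + ch] = len(dictionary)
            phrase = ""
    if phrase:                              # flush a trailing known phrase
        tokens.append((dictionary[phrase[:-1]], phrase[-1]))
    return tokens

print(lz78_encode("ababababa"))   # [(0,'a'), (0,'b'), (1,'b'), (3,'a'), (2,'a')]
```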

ECG Data Compression Using Wavelet Transform and Adaptive Fractal Interpolation

  • Lee, W.H.; Yoon, Y.R.; Park, S.J.
    • Proceedings of the KOSOMBE Conference, v.1996 no.11, pp.221-224, 1996
  • This paper presents ECG data compression using the wavelet transform (WT) and adaptive fractal interpolation (AFI). The WT follows a subband coding scheme, and the fractal compression method represents each range of the ECG signal by fractal interpolation parameters. In particular, the AFI uses adaptive range sizes, which gives good performance for ECG data compression. In this algorithm, the AFI is applied to the low-frequency part of the WT. The MIT/BIH arrhythmia data were used for evaluation. The compression rate of the combined WT and AFI algorithm is better than that of AFI alone, yielding a compression ratio as high as 21.0 without any entropy coding.
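
The first stage of the method, a wavelet subband split, can be sketched with a one-level Haar transform: the signal separates into a low-frequency approximation (where the AFI is then applied, per the abstract) and a high-frequency detail band:

```python
import numpy as np

def haar_dwt(signal: np.ndarray):
    """One-level Haar DWT: pairwise averages (low band) and differences
    (high band), scaled to preserve energy."""
    s = signal[: len(signal) // 2 * 2]           # truncate to even length
    low = (s[0::2] + s[1::2]) / np.sqrt(2)       # approximation band
    high = (s[0::2] - s[1::2]) / np.sqrt(2)      # detail band
    return low, high

ecg = np.array([1.0, 1.2, 1.1, 3.5, 0.9, 1.0, 1.1, 1.0])  # toy samples
low, high = haar_dwt(ecg)
print(low.round(2), high.round(2))
```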
