• Title/Summary/Keyword: Compression Algorithm

An Error-Resilient Image Compression Based on the Zerotree Wavelet Algorithm (오류에 강인한 제로트리 웨이블릿 영상 압축)

  • 장우영;송환종;손광훈
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.25 no.7A
    • /
    • pp.1028-1036
    • /
    • 2000
  • In this paper, an error-resilient image compression technique using the wavelet transform is proposed. The zerotree technique, which exploits the statistics, energy, and directional properties of wavelet coefficients in the space-frequency domain, yields effective compression results. However, it is highly sensitive to the propagation of channel errors: even a single bit error can severely degrade the quality of the whole image. In the proposed algorithm, the image is encoded by the SPIHT (Set Partitioning in Hierarchical Trees) algorithm, which uses zerotree coding. The encoded bitstream is partitioned into blocks using subband correlations, and fixed-length blocks are then formed by an effective bit-reorganization algorithm. Finally, an effective bit-allocation technique limits error propagation within each block. As a result, at low BER the proposed algorithm shows compression performance similar to that of the zerotree technique, and at high BER it achieves better PSNR than conventional methods.
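
A minimal sketch of the fixed-length block idea from this abstract: the encoded bitstream is cut into equal-length blocks so that a channel error is confined to a single block. The block length, zero padding, and zero-fill concealment policy here are illustrative assumptions, not the paper's exact bit-reorganization scheme.

```python
def partition_bitstream(bits, block_len):
    """Split an encoded bitstream into fixed-length blocks so that a
    channel error corrupts at most one block instead of propagating."""
    blocks = [bits[i:i + block_len] for i in range(0, len(bits), block_len)]
    # Pad the last block to the fixed length (stuffing bits).
    if blocks and len(blocks[-1]) < block_len:
        blocks[-1] = blocks[-1] + [0] * (block_len - len(blocks[-1]))
    return blocks

def decode_with_error_confinement(blocks, corrupted):
    """Decode each block independently; a corrupted block is replaced
    by zeros (graceful degradation) instead of aborting the image."""
    out = []
    for k, b in enumerate(blocks):
        out.extend([0] * len(b) if k in corrupted else b)
    return out
```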

Optimum Design of Packaging Container for Bulk Materials(I)-Algorithm Development (벌크화물용 포장용기의 최적 설계(I)-알고리즘 개발)

  • Park, Jong-Min;Kwon, Soon-Goo
    • KOREAN JOURNAL OF PACKAGING SCIENCE & TECHNOLOGY
    • /
    • v.6 no.1
    • /
    • pp.1-11
    • /
    • 2000
  • In the optimum design of packaging containers for bulk materials, minimum board area, compression performance, and distribution efficiency must be considered. In this study, mathematical models for the minimum board area (RMA), compression strength (CS), and maximum compression strength per unit board area (MCSA) of a container were developed as an algorithm for the optimum design of packaging containers for bulk materials: RMA=f(V,D), ${\alpha}_{RMA}=f(V,D)$, MCSA=f(V,D), and ${\alpha}_{MCSA}=f(V,D)$. To develop these models, compression tests for various container dimensions and response surface analyses for the minimum board area, compression strength, and maximum compression strength per unit board area were carried out. In the developed models, the volume and depth of the container were the principal independent variables. Verification showed that optimum design of a packaging container under given design and limit conditions is possible with these models, and they may be used in developing optimum-design software for bulk-material packaging containers.
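
The optimization this abstract describes can be illustrated with a small grid search over container volume V and depth D, minimizing board area subject to a compression-strength constraint. The response-surface forms, coefficients, and the strength threshold below are hypothetical stand-ins, since the abstract states only RMA=f(V,D) and CS=f(V,D) without the fitted models.

```python
def optimum_container(rma, cs, volumes, depths):
    """Grid-search the (V, D) pair that minimises board area subject
    to a compression-strength constraint; rma and cs are fitted
    response surfaces RMA=f(V,D) and CS=f(V,D)."""
    best = None
    for v in volumes:
        for d in depths:
            if cs(v, d) < 4000.0:   # strength constraint (illustrative)
                continue
            area = rma(v, d)
            if best is None or area < best[0]:
                best = (area, v, d)
    return best

# Hypothetical quadratic-style response surfaces (coefficients are
# illustrative, not the paper's fitted models).
rma = lambda v, d: 50.0 + 0.8 * v + 120.0 / d + 0.001 * v * d
cs  = lambda v, d: 3000.0 + 15.0 * d + 0.5 * v
```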

Energy Efficient and Low-Cost Server Architecture for Hadoop Storage Appliance

  • Choi, Do Young;Oh, Jung Hwan;Kim, Ji Kwang;Lee, Seung Eun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.12
    • /
    • pp.4648-4663
    • /
    • 2020
  • This paper proposes a Lempel-Ziv 4 (LZ4) compression accelerator optimized for scale-out servers in data centers. To reduce the CPU load caused by compression, we propose an accelerator solution and implement it on a Field Programmable Gate Array (FPGA) as heterogeneous computing. The LZ4 compression hardware accelerator has a fully pipelined architecture and applies 16 dictionaries to enhance parallelism for a high-throughput compressor. Our hardware accelerator is based on a 20-stage pipeline and a dictionary architecture highly customized to the LZ4 compression algorithm and to parallel hardware implementation. The proposed dictionary architecture achieves high throughput by comparing input sequences in multiple dictionaries simultaneously, rather than in a single dictionary. The experimental results show high throughput from the intensively optimized FPGA implementation. Additionally, we compare our implementation with CPU implementations of LZ4 to provide insights for FPGA-based data centers. The proposed accelerator achieves a compression throughput of 639 MB/s with fine-grained parallelism, suitable for deployment in scale-out servers. This approach enables a low-power Intel Atom processor to realize Hadoop storage along with the compression accelerator.
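
A software sketch of the 16-dictionary match search described in the abstract: the 4-byte sequence at the current position is looked up in all dictionary banks at once (in the FPGA these lookups happen in parallel). The hash function and the round-robin bank-assignment policy are assumptions; the abstract does not specify them.

```python
NUM_DICTS = 16  # the paper's accelerator uses 16 parallel dictionaries

def find_match(data, pos, dicts):
    """Look up the 4-byte sequence at `pos` in every dictionary bank
    (done in parallel in hardware), verify candidates byte-for-byte,
    and record `pos` in one bank for future matches."""
    if pos + 4 > len(data):
        return None
    seq = data[pos:pos + 4]
    h = hash(seq)
    best = None
    for bank in dicts:
        cand = bank.get(h)
        if cand is not None and data[cand:cand + 4] == seq:
            best = cand if best is None else max(best, cand)
    # Round-robin bank assignment by position (an assumption; the
    # abstract does not describe the bank-update policy).
    dicts[pos % len(dicts)][h] = pos
    return best
```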

Depth Compression for Multi-View Sequences Using 3-D Mesh Representation (3-D 메쉬 모델을 이용한 다시점 영상의 깊이 정보 압축)

  • Jung, Il-Lyong;Kim, Chang-Su
    • Proceedings of the IEEK Conference
    • /
    • 2007.07a
    • /
    • pp.203-204
    • /
    • 2007
  • In this work, we propose a compression algorithm for depth images obtained from multi-view sequences. The proposed algorithm represents a depth image using a 3-D regular triangular mesh and predictively encodes the mesh vertices using a linear prediction scheme. The prediction errors are encoded with an arithmetic coder. Simulation results demonstrate that the proposed algorithm provides better performance than the JPEG2000 lossless coder.
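
The predictive vertex coding can be sketched as follows; a first-order (previous-vertex) linear predictor over a scan-ordered vertex list is an assumption, since the abstract does not give the exact prediction neighborhood. The residuals would then be passed to the arithmetic coder.

```python
def encode_vertices(depths):
    """Predictively encode a scan-ordered list of mesh-vertex depths:
    each vertex is predicted from its predecessor and only the small
    residual is kept for entropy coding."""
    residuals, prev = [], 0
    for z in depths:
        residuals.append(z - prev)
        prev = z
    return residuals

def decode_vertices(residuals):
    """Invert the prediction by accumulating the residuals."""
    out, prev = [], 0
    for r in residuals:
        prev += r
        out.append(prev)
    return out
```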

Evaluation for Applications of the Levenberg-Marquardt Algorithm in Geotechnical Engineering (Levenberg-Marquardt 알고리즘의 지반공학 적용성 평가)

  • Kim, Youngsu;Kim, Daeman
    • Journal of the Korean GEO-environmental Society
    • /
    • v.10 no.5
    • /
    • pp.49-57
    • /
    • 2009
  • In this study, a complicated geotechnical quantity, the compression index, was predicted by an artificial neural network trained with the Levenberg-Marquardt (LM) algorithm. The predicted values were compared with the results of the Back Propagation (BP) method, which is used extensively in geotechnical engineering. Both sets of results were also compared with values estimated by verified experimental methods in order to evaluate the accuracy of each approach. The experimental methods generally showed higher error than either artificial neural network method. The compression index predicted by the LM algorithm showed better convergence than the BP algorithm, while the accuracy of the two was similar.
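
A minimal illustration of the Levenberg-Marquardt iteration itself, here fitting a two-parameter exponential model rather than training a network: the damping factor lam blends gradient descent (large lam) with Gauss-Newton (small lam), which is the convergence behavior the abstract credits over back-propagation.

```python
import math

def levenberg_marquardt(xs, ys, a, b, iters=100, lam=1e-3):
    """Fit y = a*exp(b*x) by the LM algorithm: solve
    (J^T J + lam*I) d = J^T r each step, accepting only steps that
    reduce the squared-error cost."""
    def cost(a, b):
        return sum((y - a * math.exp(b * x)) ** 2 for x, y in zip(xs, ys))
    for _ in range(iters):
        # Accumulate J^T J and J^T r for the 2-parameter model.
        g11 = g12 = g22 = r1 = r2 = 0.0
        for x, y in zip(xs, ys):
            f = a * math.exp(b * x)
            ja, jb = math.exp(b * x), x * f   # partial derivatives wrt a, b
            r = y - f
            g11 += ja * ja; g12 += ja * jb; g22 += jb * jb
            r1 += ja * r;   r2 += jb * r
        # Solve the damped 2x2 normal equations by Cramer's rule.
        det = (g11 + lam) * (g22 + lam) - g12 * g12
        da = ((g22 + lam) * r1 - g12 * r2) / det
        db = ((g11 + lam) * r2 - g12 * r1) / det
        if cost(a + da, b + db) < cost(a, b):
            a, b, lam = a + da, b + db, lam * 0.5   # accept: toward Gauss-Newton
        else:
            lam *= 10.0                             # reject: toward gradient descent
    return a, b
```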

A Fast Algorithm for Fractal Image Coding

  • Kim, Jeong-Il;Kwak, Seung-Uk;Jeong, Keun-Won;Song, In-Keun;Yoo, Choong-Yeol;Lee, Kwang-Bae;Kim, Hyen-Ug
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference
    • /
    • 1998.06a
    • /
    • pp.521-525
    • /
    • 1998
  • In this paper, we propose a fast algorithm for fractal image coding to shorten the long encoding time of fractal image compression. For performance evaluation, the algorithm is compared with other traditional fractal coding methods. In traditional fractal image coding, an original image is contracted by a factor in order to make an image to be matched, and then the whole area of the contracted image is searched to find the contractive transformation point of the original image corresponding to the contracted image. This requires a lot of search time during encoding and limits the improvement of the compression ratio. The proposed algorithm, however, not only considerably reduces the encoding time by using a scaling method and a limited-search-area method, but also improves the compression ratio by using bit-planes. Compared with Jacquin's method, the proposed algorithm provides a much shorter encoding time and a better compression ratio, with only a slight degradation of decoded image quality.
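
The limited-search-area idea can be sketched in one dimension: for each range block, only domain blocks within a window around the range position are tried, and each double-size domain block is contracted by pairwise averaging before matching. The window size and the 1-D simplification are illustrative choices, not the paper's exact method.

```python
def best_domain(image, r0, rsize, window):
    """For the range block starting at r0, search only domain blocks
    within +/-window of r0 (the limited-search idea) instead of the
    whole image; a 1-D signal is used for brevity."""
    range_block = image[r0:r0 + rsize]
    best, best_err = None, float("inf")
    lo = max(0, r0 - window)
    hi = min(len(image) - 2 * rsize, r0 + window)
    for d0 in range(lo, hi + 1):
        # Contract the double-size domain block by averaging pairs.
        domain = image[d0:d0 + 2 * rsize]
        contracted = [(domain[2 * i] + domain[2 * i + 1]) / 2
                      for i in range(rsize)]
        err = sum((a - b) ** 2 for a, b in zip(range_block, contracted))
        if err < best_err:
            best, best_err = d0, err
    return best, best_err
```

Shrinking the window trades a little match quality for a large cut in search time, since the candidate set grows linearly with the window.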

A Buffer-constrained Adaptive Quantization Algorithm for Image Compression (버퍼제약에 의한 영상압축 적응양자화 알고리듬)

  • 박대철;정두영
    • Journal of Korea Multimedia Society
    • /
    • v.5 no.3
    • /
    • pp.249-254
    • /
    • 2002
  • We consider a buffer-constrained adaptive quantization algorithm for image compression. Buffer control has been studied together with source coding by several researchers, and a formal rate-distortion description of the algorithm has recently been developed. We propose a buffer control algorithm that incorporates the buffer occupancy into the Lagrange-multiplier form of a rate-distortion cost measure. Although the proposed algorithm provides suboptimal performance compared with the optimal Viterbi algorithm, it can be implemented with very low computational complexity. In addition, the stability of the buffer control algorithm is briefly discussed using Lyapunov stability theory.
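
The core of the buffer-constrained scheme, choosing a quantizer by a Lagrangian cost D + λR in which λ grows with buffer occupancy, can be sketched as below. The particular coupling of occupancy to λ is an illustrative assumption, not the paper's formula.

```python
def choose_quantizer(options, occupancy, capacity, base_lambda=0.1):
    """Pick the quantizer minimising the Lagrangian cost D + lambda*R,
    where lambda grows with buffer occupancy so that a fuller buffer
    favours lower-rate (coarser) quantizers."""
    lam = base_lambda / max(1e-9, 1.0 - occupancy / capacity)
    return min(options, key=lambda q: q["dist"] + lam * q["rate"])
```

With a nearly empty buffer the cost is dominated by distortion, so the fine quantizer wins; as the buffer fills, λ inflates the rate term and the coarse quantizer takes over, which is exactly the feedback that keeps the buffer from overflowing.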

Vector Quantization for Medical Image Compression Based on DCT and Fuzzy C-Means

  • Supot, Sookpotharom;Nopparat, Rantsaena;Surapan, Airphaiboon;Manas, Sangworasil
    • Proceedings of the IEEK Conference
    • /
    • 2002.07a
    • /
    • pp.285-288
    • /
    • 2002
  • Compression of magnetic resonance images (MRI) has proved to be more difficult than that of other medical imaging modalities. In an average-sized hospital, many terabytes of digital imaging data (MRI) are generated every year, almost all of which have to be kept. Medical image compression is currently performed with various algorithms. In this paper, the Fuzzy C-Means (FCM) algorithm is used for Vector Quantization (VQ). First, a digital image is divided into subblocks of fixed size, consisting of 4${\times}$4 blocks of pixels. By performing the 2-D Discrete Cosine Transform (DCT), we select six DCT coefficients to form the feature vector, and the FCM algorithm is then used to construct the VQ codebook. In this way, the algorithm achieves good image quality and reduces the processing time for constructing the VQ codebook.
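
The feature-extraction step the abstract describes (2-D DCT of a 4×4 block, then six coefficients as the feature vector) can be sketched directly; the particular zigzag choice of the six coefficients is an assumption.

```python
import math

def dct2_4x4(block):
    """2-D DCT-II of a 4x4 pixel block (direct separable formula)."""
    N = 4
    def c(k):
        return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                    for x in range(N) for y in range(N))
            out[u][v] = c(u) * c(v) * s
    return out

# Six low-frequency coefficients in zigzag order (which six the paper
# selects is not stated; this choice is an assumption).
ZIGZAG6 = [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]

def feature_vector(block):
    """Feature vector handed to FCM clustering for codebook design."""
    d = dct2_4x4(block)
    return [d[u][v] for u, v in ZIGZAG6]
```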

An Image Compression Technique with Lifting Scheme and PVQ (Lifting Scheme과 PVQ를 이용한 영상압축 기법)

  • 정전대;김학렬;신재호
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1996.06a
    • /
    • pp.159-163
    • /
    • 1996
  • In this paper, a new image compression technique using the lifting scheme and pyramid vector quantization is proposed. The lifting scheme is a recent technique for generating wavelets and performing the wavelet transform, and pyramid vector quantization is a kind of vector quantization that requires neither a codebook nor a codebook generation algorithm. To achieve a higher compression rate, an arithmetic entropy coder is used. The proposed algorithm is compared with other wavelet-based image coders and with JPEG, which uses the DCT and an adaptive Huffman entropy coder. Simulation results show that the proposed algorithm performs much better than the others in terms of PSNR and bpp.
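
The lifting scheme can be illustrated with its simplest instance, one Haar level built from split, predict, and update steps; the paper's actual lifting filters are not specified in the abstract.

```python
def haar_lifting_forward(signal):
    """One level of the Haar wavelet via lifting on an even-length
    signal: split into even/odd samples, predict the odds from the
    evens (detail), then update the evens (approximation)."""
    even, odd = signal[0::2], signal[1::2]
    detail = [o - e for o, e in zip(odd, even)]        # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]  # update step
    return approx, detail

def haar_lifting_inverse(approx, detail):
    """Undo the update then the predict step, and re-interleave."""
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [d + e for d, e in zip(detail, even)]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out
```

Because each lifting step is inverted exactly by running it backwards with the sign flipped, the transform is perfectly reconstructing by construction, which is a key attraction of the scheme.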

Predictor Switching Algorithm for Lossless Compression (무손실 압축을 위한 예측기 스위칭 알고리즘)

  • Kim, Young-Ro;Yi, Joon-Hwan
    • Journal of the Institute of Electronics Engineers of Korea IE (전자공학회논문지 IE)
    • /
    • v.47 no.2
    • /
    • pp.27-31
    • /
    • 2010
  • In this paper, a predictor switching algorithm for lossless compression is proposed. It adaptively selects one of two predictors based on the errors obtained by the MED (median edge detector) and GAP (gradient adaptive prediction). The reduction in error is measured by an existing entropy method. Experimental results show that the proposed algorithm achieves higher compression than existing predictive methods.
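
The switching idea can be sketched as follows. The MED predictor is the standard JPEG-LS median edge detector; the second predictor here is a simple average used as a hedged stand-in for GAP, whose full gradient-context rules are beyond this sketch.

```python
def med_predict(a, b, c):
    """Median edge detector (JPEG-LS/LOCO-I): a = left neighbour,
    b = above, c = above-left."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

def avg_predict(a, b, c):
    """Averaging predictor, a simplified stand-in for GAP."""
    return (a + b) // 2

def switch_predict(a, b, c, err_med, err_avg):
    """Choose whichever predictor has the smaller accumulated error so
    far, mirroring the paper's per-pixel switching idea."""
    if err_med <= err_avg:
        return med_predict(a, b, c)
    return avg_predict(a, b, c)
```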