• Title/Summary/Keyword: image data compression

Search Results: 561

BTC Algorithm Utilizing Compression Method of Bitmap and Quantization data for Image Compression (비트맵과 양자화 데이터 압축 기법을 사용한 BTC 영상 압축 알고리즘)

  • Cho, Moonki;Yoon, Yungsup
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.49 no.10
    • /
    • pp.135-141
    • /
    • 2012
  • To reduce frame memory usage in LCD overdrive, block truncation coding (BTC) image compression is commonly used. To maximize the compression ratio, BTC image compression needs to compress the bitmap or quantization data further. In this paper, we propose the CMBQ-BTC algorithm (CMBQ: compression method for bitmap and quantization data) to achieve a high compression ratio. Experimental results show that the proposed algorithm is efficient in terms of PSNR and compression ratio compared with the conventional BTC method.
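
The abstract above builds on standard BTC, which can be sketched as follows (a minimal illustration of classic two-level BTC, not the paper's CMBQ extension; function names are ours):

```python
import math

def btc_encode(block):
    """Encode one pixel block with classic Block Truncation Coding:
    keep a 1-bit bitmap plus two quantization levels chosen to
    preserve the block's mean and standard deviation."""
    n = len(block)
    mean = sum(block) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in block) / n)
    bitmap = [1 if x >= mean else 0 for x in block]
    q = sum(bitmap)            # pixels at or above the mean
    p = n - q
    if q == 0 or p == 0:       # flat block: one level suffices
        low = high = mean
    else:
        low = mean - std * math.sqrt(q / p)
        high = mean + std * math.sqrt(p / q)
    return bitmap, low, high

def btc_decode(bitmap, low, high):
    return [high if b else low for b in bitmap]
```

For the block `[2, 2, 8, 8]` this yields bitmap `[0, 0, 1, 1]` with levels 2 and 8, so the block reconstructs exactly; real blocks reconstruct only approximately.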

LOSSY JPEG CHARACTERISTIC ANALYSIS OF METEOROLOGICAL SATELLITE IMAGE

  • Kim, Tae-Hoon;Jeon, Bong-Ki;Ahn, Sang-Il;Kim, Tae-Young
    • Proceedings of the KSRS Conference
    • /
    • v.1
    • /
    • pp.282-285
    • /
    • 2006
  • This paper analyzes the characteristics of lossy JPEG compression of meteorological satellite images and determines the compression quality appropriate for the LRIT (Low Rate Information Transmission) service provided to the SDUS (Small-scale Data Utilization Station) system of COMS (Communication, Ocean and Meteorological Satellite). Since COMS was not scheduled to begin operation until after 2008, we collected MTSAT-1R (Multi-functional Transport Satellite-1R) data for the analysis. After forming the original images to be used for LRIT for each channel and time zone of the satellite data, we compressed them with lossy JPEG at different quality settings. For the characteristic analysis, we measured PSNR (Peak Signal-to-Noise Ratio), compression ratio, and compression time at each JPEG quality setting. The analysis of the MTSAT-1R satellite image data found the ideal lossy JPEG quality to be 90% for the VIS channel, 85% for IR1, 80% for IR2, 90% for IR3, and 90% for IR4.

  • PDF
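
The quality metric used throughout the analysis, PSNR, can be computed as follows (a minimal sketch over flat pixel sequences; the `peak` default of 255 assumes 8-bit imagery):

```python
import math

def psnr(original, compressed, peak=255):
    """Peak Signal-to-Noise Ratio between two equal-length pixel
    sequences, the metric used to rate lossy JPEG quality settings."""
    mse = sum((a - b) ** 2 for a, b in zip(original, compressed)) / len(original)
    if mse == 0:
        return float("inf")    # identical images
    return 10 * math.log10(peak ** 2 / mse)
```

Higher quality settings yield lower MSE and hence higher PSNR; the paper's channel-specific quality choices balance PSNR against compression ratio.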

Directional Postprocessing Techniques to Improve Image Quality in Wavelet-based Image Compression (웨이블릿 기반 압축영상의 화질 향상을 위한 방향성 후처리 기법)

  • 김승종;정제창
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.25 no.6B
    • /
    • pp.1028-1040
    • /
    • 2000
  • Since image data is large, proper compression is necessary to transmit and store it efficiently. Image compression reduces the bit rate but introduces artifacts: blocking artifacts and mosquito noise, observed in DCT-based compressed images, and ringing artifacts, perceived around edges in wavelet-based compressed images. In this paper, we propose a directional postprocessing technique that improves decoded image quality, exploiting the fact that human vision is sensitive to ringing artifacts around image edges. First, we detect the edge direction in each block; then we perform directional postprocessing according to the detected direction. If the correlation coefficients are equivalent in all directions, postprocessing is not performed, which shortens the postprocessing time.

  • PDF
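
The per-block edge-direction test can be sketched roughly as follows (an illustrative stand-in using mean absolute differences along four orientations rather than the paper's exact correlation coefficients; all names are ours):

```python
def edge_direction(block):
    """Pick the dominant edge direction of a 2-D block by comparing
    average absolute pixel differences along four orientations; a
    smoothing filter would then run along the chosen direction.
    Returns None when no direction clearly dominates (the paper's
    'skip postprocessing' case)."""
    h, w = len(block), len(block[0])
    dirs = {"horizontal": (0, 1), "vertical": (1, 0),
            "diag_down": (1, 1), "diag_up": (-1, 1)}
    scores = {}
    for name, (dy, dx) in dirs.items():
        diffs = []
        for y in range(h):
            for x in range(w):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    diffs.append(abs(block[y][x] - block[ny][nx]))
        scores[name] = sum(diffs) / len(diffs)
    best = min(scores, key=scores.get)
    others = [v for k, v in scores.items() if k != best]
    if min(others) - scores[best] < 1e-6:   # no dominant direction
        return None
    return best
```

A block with a sharp horizontal edge has near-zero variation along rows, so `"horizontal"` wins; a flat block returns `None` and would be skipped.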

A study on the Image Signal Compress using SOM with Isometry (Isometry가 적용된 SOM을 이용한 영상 신호 압축에 관한 연구)

  • Chang, Hae-Ju;Kim, Sang-Hee;Park, Won-Woo
    • Proceedings of the KIEE Conference
    • /
    • 2004.11c
    • /
    • pp.358-360
    • /
    • 2004
  • Digital images contain a significant amount of redundancy and require a large amount of data for storage and transmission; image compression is therefore necessary to handle digital images efficiently. The goal of image compression is to reduce the number of bits required for their representation. Image compression can reduce the size of image data using a contractive mapping of the original image; among compression methods, this mapping is an affine transformation that finds the block (called a range block) most similar to the original image block. In this paper, we apply a neural network (SOM) in the encoding stage. To improve compression performance, we reduce similar and unnecessary entries in the codebook compared with the originals. In standard image coding, the affine transform is performed with the eight isometries used to map domain blocks onto range blocks.

  • PDF
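
The eight isometries mentioned in the last sentence are the four rotations of a block, each with and without mirroring; a minimal sketch:

```python
def isometries(block):
    """Generate the eight isometries (4 rotations x optional flip)
    used to map a domain block onto a range block in fractal-style
    block coding."""
    def rot90(b):   # rotate 90 degrees clockwise
        return [list(row) for row in zip(*b[::-1])]
    def flip(b):    # horizontal mirror
        return [row[::-1] for row in b]
    out = []
    cur = [list(r) for r in block]
    for _ in range(4):
        out.append(cur)
        out.append(flip(cur))
        cur = rot90(cur)
    return out
```

For an asymmetric block all eight variants are distinct, which is why trying every isometry improves the domain-to-range match.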

An implementation of DWT Encoder design for image compression (영상 압축을 위한 DWT Encoder 설계)

  • 이강현
    • Proceedings of the IEEK Conference
    • /
    • 1999.11a
    • /
    • pp.491-494
    • /
    • 1999
  • Digital communication networks such as the Integrated Services Digital Network (ISDN) and digital storage media have developed rapidly. Because of the large amount of image data, compression is the key technique for transmitting and storing still images and video with digital signal processing. Digital image compression provides solutions for the many image applications that require large amounts of data. In this paper, the proposed DWT (Discrete Wavelet Transform) filter bank has a simple architecture, yet it is designed so that a user can obtain a desired compression rate with only an input parameter. When implemented in an FPGA chip, the designed encoder operates at 12 MHz.

  • PDF
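
A one-level discrete wavelet transform, the core operation of such an encoder, can be illustrated with the Haar filter pair (the simplest DWT filter bank; the paper's actual filters are not specified in the abstract):

```python
def haar_dwt_1d(signal):
    """One level of the Haar discrete wavelet transform: split the
    input into low-pass averages and high-pass differences. Small
    detail coefficients can then be quantized or discarded to reach
    a target compression rate."""
    avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    det = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avg, det

def haar_idwt_1d(avg, det):
    """Inverse transform: each (average, detail) pair restores the
    original sample pair exactly."""
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]
    return out
```

Applying the transform to rows and then columns gives the usual 2-D subbands; repeating on the low-pass subband adds decomposition levels.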

Block Truncation Coding using Reduction Method of Chrominance Data for Color Image Compression (색차 데이터 축소 기법을 사용한 BTC (Block Truncation Coding) 컬러 이미지 압축)

  • Cho, Moon-Ki;Yoon, Yung-Sup
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.49 no.3
    • /
    • pp.30-36
    • /
    • 2012
  • Block truncation coding (BTC) is known as a simple and efficient image compression algorithm. In this paper, we propose the RMC-BTC algorithm (RMC: reduction method for chrominance data) for color image compression. To compress the chrominance data, in every BTC block the RMC-BTC coder represents the chrominance data by its block average and reuses the luminance bitmap to represent the chrominance bitmap. Experimental results show the efficiency of the proposed algorithm in terms of PSNR and compression ratio compared with the conventional BTC method.
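
The chrominance-reduction idea can be sketched roughly as follows (an illustration of sharing the luminance bitmap across channels; the exact RMC-BTC scheme may differ, and all names here are ours):

```python
def rmc_btc_block(lum, chroma):
    """Sketch of chrominance reduction in BTC: the luminance channel
    defines the block bitmap, and each chrominance channel stores only
    two averages taken over that same bitmap, so no separate
    chrominance bitmap is transmitted."""
    n = len(lum)
    mean = sum(lum) / n
    bitmap = [1 if x >= mean else 0 for x in lum]
    hi = [c for c, b in zip(chroma, bitmap) if b]
    lo = [c for c, b in zip(chroma, bitmap) if not b]
    c_hi = sum(hi) / len(hi) if hi else 0.0
    c_lo = sum(lo) / len(lo) if lo else 0.0
    # the decoder rebuilds chrominance from the shared luminance bitmap
    chroma_rec = [c_hi if b else c_lo for b in bitmap]
    return bitmap, chroma_rec
```

Only the two chrominance averages are stored per block, which is where the extra compression over plain per-channel BTC comes from.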

An Efficient Medical Image Compression Considering Brain CT Images with Bilateral Symmetry (뇌 CT 영상의 대칭성을 고려한 관심영역 중심의 효율적인 의료영상 압축)

  • Jung, Jae-Sung;Lee, Chang-Hun
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.12 no.5
    • /
    • pp.39-54
    • /
    • 2012
  • The Picture Archiving and Communication System (PACS) has become one of the key infrastructures with the overall improvement of medical informatization and the recent trend toward digital hospitals. The variety and volume of digital medical imagery are also increasing rapidly, which emphasizes medical image compression for storing large-scale image data. Digital Imaging and Communications in Medicine (DICOM), the de facto standard for digital medical imagery, specifies Run Length Encoding (RLE), a typical lossless compression technique, for medical image compression. However, RLE is not an appropriate approach for medical image data with the bilateral symmetry of the human body. We suggest two preprocessing algorithms: one detects the region of interest (the minimum bounding rectangle) in a medical image to enhance compression efficiency, and the other re-codes pixel values according to the symmetry characteristics in that region to reduce data size. We also present an improved compression technique for brain CT imagery, which has high bilateral symmetry. Experimental results show that the suggested approach achieves a higher compression ratio than the DICOM-standard RLE compression, even without the region-of-interest detection step.
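
The baseline the paper improves on, Run Length Encoding, works as follows (a minimal sketch; DICOM's RLE additionally packs runs into a byte format, which is omitted here):

```python
def rle_encode(pixels):
    """Run Length Encoding: each run of identical values becomes a
    [count, value] pair, so large uniform regions (e.g. the background
    around a brain CT) shrink dramatically."""
    runs = []
    for p in pixels:
        if runs and runs[-1][1] == p:
            runs[-1][0] += 1
        else:
            runs.append([1, p])
    return runs

def rle_decode(runs):
    out = []
    for count, value in runs:
        out += [value] * count
    return out
```

RLE only exploits runs along the scan order, which is why the paper's symmetry-aware re-coding of pixel values can improve on it for bilaterally symmetric anatomy.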

Adaptive Prediction for Lossless Image Compression

  • Park, Sang-Ho
    • Proceedings of the Korea Society of Information Technology Applications Conference
    • /
    • 2005.11a
    • /
    • pp.169-172
    • /
    • 2005
  • A genetic-algorithm-based predictor for lossless image compression is proposed. We describe a genetic algorithm that learns a predictive model for lossless image compression. The resulting error image can be further compressed using entropy coding such as Huffman or arithmetic coding. We show that the proposed algorithm is feasible for lossless image compression.

  • PDF
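
The predict-then-entropy-code pipeline can be illustrated with a fixed left-neighbor predictor (a stand-in for the genetic-algorithm-learned predictor, which the abstract does not specify):

```python
def prediction_residuals(pixels):
    """Lossless predictive coding sketch: predict each pixel from its
    left neighbor and keep only the prediction error. The residual
    stream is typically far more skewed toward zero than the raw
    pixels, so an entropy coder (Huffman, arithmetic) compresses it
    better."""
    residuals = [pixels[0]]                  # first pixel sent as-is
    for prev, cur in zip(pixels, pixels[1:]):
        residuals.append(cur - prev)
    return residuals

def reconstruct(residuals):
    """The decoder inverts the predictor exactly, so the scheme is
    lossless."""
    out = [residuals[0]]
    for r in residuals[1:]:
        out.append(out[-1] + r)
    return out
```

A learned predictor would replace the left-neighbor rule with a context model chosen to minimize residual entropy.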

Multi-Description Image Compression Coding Algorithm Based on Depth Learning

  • Yong Zhang;Guoteng Hui;Lei Zhang
    • Journal of Information Processing Systems
    • /
    • v.19 no.2
    • /
    • pp.232-239
    • /
    • 2023
  • To address the poor compression quality of traditional image compression coding (ICC) algorithms, a multi-description ICC algorithm based on deep learning is put forward in this study. First, an image compression algorithm was designed based on multiple description coding theory; image compression samples were collected, and the measurement matrix was calculated. Then, the multi-description ICC sample set was processed with a convolutional self-coding neural network from deep learning. Compressing the wavelet coefficients after coding and synthesizing the multi-description image band sparse matrix yielded the multi-description ICC sequence. Finally, averaging the multi-description image coding data according to the positions of the effective single points realizes the compression coding of multi-description images. According to the experimental results, the designed algorithm consumes less time for image compression and exhibits better compression quality and image reconstruction.
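
The general idea of multiple description coding, independent of the deep-learning stages, can be sketched as follows (a two-description even/odd split; purely illustrative, not the paper's construction):

```python
def make_descriptions(pixels):
    """Multiple description coding sketch: even- and odd-indexed
    samples travel as separate descriptions, so either one alone
    yields a coarse image and both together restore the original."""
    return pixels[0::2], pixels[1::2]

def merge_descriptions(even, odd):
    """Interleave the two descriptions back into the original order;
    handles an odd-length source whose last sample sits in `even`."""
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    out += even[len(odd):] + odd[len(even):]
    return out
```

If one description is lost, the receiver can interpolate the missing samples from the surviving one, which is the robustness multi-description schemes trade bitrate for.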

A Preprocessing Algorithm for Efficient Lossless Compression of Gray Scale Images

  • Kim, Sun-Ja;Hwang, Doh-Yeun;Yoo, Gi-Hyoung;You, Kang-Soo;Kwak, Hoon-Sung
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.2485-2489
    • /
    • 2005
  • This paper introduces a new preprocessing scheme that replaces the original data of gray-scale images with particular ordered data so that lossless compression performance can be improved. As a preprocessing technique for maximizing the performance of an entropy encoder, the proposed method converts the input image data into a more compressible form. Before encoding the input image stream, the proposed preprocessor counts co-occurrence frequencies of neighboring pixel pairs; it then replaces each pair of adjacent gray values with particular ordered numbers based on the investigated co-occurrence frequencies. When the reordered image is compressed with an entropy encoder, a higher compression rate can be expected because of the enhanced statistical features of the input image. We show that the lossless compression rate increases by up to 37.85% when comparing compressed preprocessed and non-preprocessed image data using entropy encoders such as Huffman and arithmetic coding.

  • PDF
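
The co-occurrence-based reordering can be sketched roughly as follows (an illustrative, non-invertible simplification: frequent adjacent pairs map to small rank symbols; the paper's actual ordered-number scheme may differ, and all names are ours):

```python
from collections import Counter

def reorder_pairs(pixels):
    """Preprocessing sketch: count how often each adjacent pixel pair
    occurs, then replace every pair with its frequency rank so the
    most common pairs get the smallest symbols. A downstream entropy
    coder (Huffman, arithmetic) then sees a more skewed, hence more
    compressible, symbol distribution."""
    pairs = list(zip(pixels, pixels[1:]))
    freq = Counter(pairs)
    # rank pairs: most frequent first; ties broken by pair value
    ranking = {p: r for r, (p, _) in
               enumerate(sorted(freq.items(), key=lambda kv: (-kv[1], kv[0])))}
    return [ranking[p] for p in pairs], ranking
```

The skew toward small symbols is what lets the entropy coder assign shorter codes to common pairs; a practical version would also transmit the ranking table so the transform can be inverted.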