• Title/Summary/Keyword: lossless data compression

An Image Data Compression Algorithm for a Home-Use Digital VCR Using SBC with Block-Adaptive Quantization (SBC와 블럭 적응 양자화를 이용한 가정용 디지탈 VCR 영상 압축 알고리듬)

  • 김주희;서정태;박용철;이제형;윤대희
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.31B no.9
    • /
    • pp.124-132
    • /
    • 1994
  • An image data compression method for a digital VCR must satisfy special requirements such as high-speed playback, various editing capabilities, and error concealment to provide immunity to tape dropouts. Taking these requirements into consideration, this paper proposes a new interframe subband coding algorithm for a digital VCR. In the proposed method, continuous input images are first partitioned into four frequency bands. The lowest-frequency subband is coded with 3-D block-adaptive quantization that removes the redundancy within each level. The other, higher-frequency subbands are coded by an intraframe coding method that exploits properties of the human visual system. To keep reasonable image quality in high-speed playback, a segment-forming method in the frequency domain is also proposed. Computer simulation results demonstrate that the proposed algorithm has the potential of achieving virtually lossless compression in normal play and produces images with fewer mosaic errors in high-speed play.

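The four-band split and block-adaptive quantization described in the abstract above can be pictured with a small sketch. The code below is only an illustrative assumption: a plain Haar-style 2-D analysis stands in for the paper's subband filter bank, and the block quantizer simply scales its step with block activity rather than implementing the paper's 3-D block-adaptive scheme.

```python
import numpy as np

def haar_subband_split(img):
    """Split an even-sized grayscale image into LL, LH, HL, HH bands (toy Haar analysis)."""
    a = img.astype(np.float64)
    lo_h = (a[:, 0::2] + a[:, 1::2]) / 2.0   # horizontal low-pass
    hi_h = (a[:, 0::2] - a[:, 1::2]) / 2.0   # horizontal high-pass
    ll = (lo_h[0::2, :] + lo_h[1::2, :]) / 2.0
    lh = (lo_h[0::2, :] - lo_h[1::2, :]) / 2.0
    hl = (hi_h[0::2, :] + hi_h[1::2, :]) / 2.0
    hh = (hi_h[0::2, :] - hi_h[1::2, :]) / 2.0
    return ll, lh, hl, hh

def block_adaptive_quantize(band, block=8, base_step=1.0):
    """Quantize each block with a step scaled by its activity (coarser for busy blocks)."""
    out = np.empty_like(band)
    h, w = band.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            blk = band[y:y+block, x:x+block]
            step = base_step * (1.0 + blk.std())      # crude activity-adaptive step
            out[y:y+block, x:x+block] = np.round(blk / step) * step
    return out

if __name__ == "__main__":
    img = np.random.randint(0, 256, (64, 64))
    ll, lh, hl, hh = haar_subband_split(img)
    ll_q = block_adaptive_quantize(ll)
    print(ll.shape, float(np.abs(ll - ll_q).max()))
```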

Side-Channel Archive Framework Using Deep Learning-Based Leakage Compression (딥러닝을 이용한 부채널 데이터 압축 프레임 워크)

  • Sangyun Jung;Sunghyun Jin;Heeseok Kim
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.34 no.3
    • /
    • pp.379-392
    • /
    • 2024
  • With the rapid increase in data, saving storage space and improving the efficiency of data transmission have become critical issues, making research on efficient data compression technologies increasingly important. Lossless algorithms can precisely restore the original data but have limited compression ratios, whereas lossy algorithms provide higher compression rates at the expense of some data loss. There has been active research on data compression using deep learning-based algorithms, especially the autoencoder model. This study proposes a new compressor for side-channel analysis data that utilizes autoencoders and achieves higher compression rates than Deflate while maintaining the characteristics of side-channel data. The encoder, which uses locally connected layers, effectively preserves the temporal characteristics of side-channel traces, and the decoder, a multi-layer perceptron, maintains fast decompression times. Correlation power analysis confirms that the proposed compressor compresses the data without losing the characteristics of the side-channel traces.
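
A rough idea of the architecture the abstract describes (locally connected encoder, MLP decoder) can be sketched in PyTorch. Everything below is an assumption for illustration: Conv1d layers stand in for true locally connected layers (which PyTorch does not provide out of the box), and the trace length, layer sizes, and latent dimension are invented.

```python
import torch
import torch.nn as nn

class TraceAutoencoder(nn.Module):
    """Toy autoencoder for 1-D side-channel traces: conv-style encoder, MLP decoder."""
    def __init__(self, trace_len=1000, latent_dim=100):
        super().__init__()
        # Encoder: strided 1-D convolutions approximate the locally connected layers
        # described in the abstract (true locally connected layers would not share weights).
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=11, stride=5, padding=5), nn.ReLU(),
            nn.Conv1d(8, 4, kernel_size=11, stride=5, padding=5), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(4 * (trace_len // 25), latent_dim),
        )
        # Decoder: plain multi-layer perceptron, cheap to evaluate at decompression time.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, trace_len),
        )

    def forward(self, x):                 # x: (batch, trace_len)
        z = self.encoder(x.unsqueeze(1))  # latent code plays the role of the compressed trace
        return self.decoder(z)

if __name__ == "__main__":
    model = TraceAutoencoder()
    traces = torch.randn(16, 1000)
    recon = model(traces)
    loss = nn.functional.mse_loss(recon, traces)
    print(recon.shape, loss.item())
```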

An Efficient Medical Image Compression Considering Brain CT Images with Bilateral Symmetry (뇌 CT 영상의 대칭성을 고려한 관심영역 중심의 효율적인 의료영상 압축)

  • Jung, Jae-Sung;Lee, Chang-Hun
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.12 no.5
    • /
    • pp.39-54
    • /
    • 2012
  • Picture Archiving and Communication System (PACS) has become one of the key infrastructures of hospitals as medical informatization advances and hospitals go digital. The kinds and volume of digital medical imagery are also increasing rapidly, which makes medical image compression essential for storing large-scale image data. Digital Imaging and Communications in Medicine (DICOM), the de facto standard for digital medical imagery, specifies Run Length Encoding (RLE), a typical lossless compression technique, for medical image compression. However, RLE is not an appropriate approach for medical image data exhibiting the bilateral symmetry of the human body. We suggest two preprocessing algorithms: one detects the region of interest, the minimum bounding rectangle, in a medical image to enhance compression efficiency, and the other re-codes image pixel values according to the symmetry within that region to reduce data size. We also present an improved compression technique for brain CT imagery, which has high bilateral symmetry. Experimental results show that the suggested approach achieves a higher data compression ratio than the RLE compression of the DICOM standard, even without detecting the region of interest in the image.
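
The two preprocessing steps described above, region-of-interest detection via a minimum bounding rectangle and symmetry-based re-coding of pixel values, can be sketched as follows. This is not the paper's exact re-coding rule; it simply stores one half of the region plus its difference from the mirrored other half (assuming an even-width region), which leaves long zero runs for a standard RLE when the slice is highly symmetric.

```python
import numpy as np

def roi_bounding_box(img, background=0):
    """Minimum bounding rectangle of all non-background pixels."""
    ys, xs = np.nonzero(img != background)
    return ys.min(), ys.max() + 1, xs.min(), xs.max() + 1

def symmetry_recode(roi):
    """Keep the left half; replace the right half by its difference from the mirrored left half.
    Assumes an even region width for brevity. For a highly symmetric brain CT slice the
    difference is mostly zero, which is exactly what RLE compresses well."""
    h, w = roi.shape
    half = w // 2
    left = roi[:, :half]
    right = roi[:, w - half:]
    diff = right.astype(np.int16) - left[:, ::-1].astype(np.int16)
    return left, diff                      # lossless: right = mirrored left + diff

def run_length_encode(arr):
    """Simple RLE over a flattened array: list of (value, run length) pairs."""
    flat = arr.ravel()
    runs, start = [], 0
    for i in range(1, len(flat) + 1):
        if i == len(flat) or flat[i] != flat[start]:
            runs.append((int(flat[start]), i - start))
            start = i
    return runs

if __name__ == "__main__":
    img = np.zeros((64, 64), dtype=np.uint8)
    img[16:48, 12:52] = 100                 # fake symmetric "brain" region
    y0, y1, x0, x1 = roi_bounding_box(img)
    left, diff = symmetry_recode(img[y0:y1, x0:x1])
    print(len(run_length_encode(diff)), "runs in the symmetry-difference half")
```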

Comparison and analysis of compression algorithms to improve transmission efficiency of manufacturing data (제조 현장 데이터 전송효율 향상을 위한 압축 알고리즘 비교 및 분석)

  • Lee, Min Jeong;Oh, Sung Bhin;Kim, Jin Ho
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.1
    • /
    • pp.94-103
    • /
    • 2022
  • As large amounts of data generated by sensors and devices on the manufacturing floor are transmitted to servers or clients, network processing delays and storage costs increase. To address this problem, and considering that real-time responsiveness and non-disruptive processes are essential at the manufacturing site, the QRC (Quotient Remainder Compression) and BL_beta algorithms, which enable real-time lossless compression, were applied to actual manufacturing-site sensor data for the first time. In the experiments, BL_beta achieved a higher compression rate than QRC. When the same data were compressed with the data size of QRC slightly adjusted, the compression rate of the adjusted QRC algorithm was 35.48% and 20.3% higher than that of the original QRC and BL_beta compression algorithms, respectively.
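
The internals of QRC and BL_beta are not given in the abstract, so they are not reproduced here. The sketch below only illustrates the kind of compression-ratio comparison the paper performs, using Python standard-library codecs as stand-ins and a synthetic sensor signal as hypothetical input.

```python
import bz2
import lzma
import struct
import zlib

def compression_ratio(raw: bytes, compressed: bytes) -> float:
    """Compression ratio as original size divided by compressed size."""
    return len(raw) / len(compressed)

def pack_samples(samples):
    """Pack sensor readings as little-endian 32-bit floats, a common raw wire format."""
    return struct.pack(f"<{len(samples)}f", *samples)

if __name__ == "__main__":
    # Synthetic stand-in for manufacturing sensor data: a slowly drifting, repetitive signal.
    samples = [20.0 + 0.001 * i + 0.05 * (i % 7) for i in range(10_000)]
    raw = pack_samples(samples)
    for name, codec in (("zlib", zlib.compress), ("bz2", bz2.compress), ("lzma", lzma.compress)):
        print(f"{name:5s} ratio = {compression_ratio(raw, codec(raw)):.2f}")
```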

Overlapped-Subcube: A Lossless Compression Method for Prefix-Sum Cubes (중첩된-서브큐브: 전위-합 큐브를 위한 손실 없는 압축 방법)

  • 강흠근;민준기;전석주;정진완
    • Journal of KIISE:Databases
    • /
    • v.30 no.6
    • /
    • pp.553-560
    • /
    • 2003
  • Range-sum queries are very common and important for finding trends and discovering relationships between attributes in diverse database applications. A range-sum query sums over the selected cells of an OLAP data cube, where the target cells are determined by the specified query ranges. Answering such a query directly on the data cube forces too many cells to be accessed and therefore incurs severe overhead. The prefix-sum cube was proposed for the efficient processing of range-sum queries in OLAP environments. However, the prefix-sum cube has been criticized for its space requirement. In this paper, we propose a lossless compression method called the overlapped-subcube, developed for compressing prefix-sum cubes. A distinguishing feature of the overlapped-subcube is that searches can be performed without decompression. The overlapped-subcube reduces the space required for storing prefix-sum cubes and improves query performance.
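
The prefix-sum cube that the overlapped-subcube method compresses can be illustrated in the 2-D case: every range-sum query is answered with at most four lookups by inclusion-exclusion. The compression method itself is not reproduced; this is only the baseline structure the abstract refers to.

```python
import numpy as np

def build_prefix_sum_cube(cube):
    """2-D prefix-sum cube: P[i, j] = sum of cube[0..i, 0..j]."""
    return cube.cumsum(axis=0).cumsum(axis=1)

def range_sum(P, r1, r2, c1, c2):
    """Range sum over cube[r1..r2, c1..c2] with at most four lookups (inclusion-exclusion)."""
    total = P[r2, c2]
    if r1 > 0:
        total -= P[r1 - 1, c2]
    if c1 > 0:
        total -= P[r2, c1 - 1]
    if r1 > 0 and c1 > 0:
        total += P[r1 - 1, c1 - 1]
    return total

if __name__ == "__main__":
    cube = np.random.randint(0, 10, (6, 6))
    P = build_prefix_sum_cube(cube)
    assert range_sum(P, 1, 4, 2, 5) == cube[1:5, 2:6].sum()
    print("range-sum query answered from the prefix-sum cube")
```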

Context-Based Minimum MSE Prediction and Entropy Coding for Lossless Image Coding

  • Musik-Kwon;Kim, Hyo-Joon;Kim, Jeong-Kwon;Kim, Jong-Hyo;Lee, Choong-Woong
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1999.06a
    • /
    • pp.83-88
    • /
    • 1999
  • In this paper, a novel gray-scale lossless image coder combining context-based minimum mean squared error (MMSE) prediction and entropy coding is proposed. To obtain the prediction context, the paper first defines directional differences according to edge sharpness and local gradients of the image data. Classifying the four directional differences forms a "geometry context" model that characterizes general two-dimensional image behavior such as directional edge regions, smooth regions, or texture. Based on this context model, adaptive DPCM prediction coefficients are calculated in the MMSE sense and the prediction is performed. The MMSE method on a context-by-context basis is more in accord with the minimum-entropy condition, which is one of the major objectives of predictive coding. In the entropy coding stage, context modeling also improves performance. To reduce the statistical redundancy of the residual image, many contexts are preset to take full advantage of conditional probabilities in entropy coding and are then merged into a small number of contexts to keep the complexity low. The proposed lossless coding scheme slightly outperforms CALIC, the state of the art, in compression ratio.
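
A minimal sketch of context-based MMSE prediction is given below. The context function, the causal neighborhood (W, N, NW, NE), and the single-pass least-squares fit are all simplifying assumptions; the paper's directional-difference classification and its entropy-coding stage are not reproduced.

```python
import numpy as np

def causal_neighbors(img, y, x):
    """Causal neighbors used by the predictor: W, N, NW, NE."""
    return np.array([img[y, x - 1], img[y - 1, x], img[y - 1, x - 1], img[y - 1, x + 1]])

def context_of(img, y, x, n_ctx=4):
    """Toy geometry context: quantize the horizontal-vs-vertical difference of local gradients."""
    dh = abs(float(img[y, x - 1]) - float(img[y - 1, x - 1]))
    dv = abs(float(img[y - 1, x]) - float(img[y - 1, x - 1]))
    return min(n_ctx - 1, int(abs(dh - dv) // 16))

def fit_context_predictors(img, n_ctx=4):
    """Least-squares (MMSE) DPCM coefficients estimated separately for each context."""
    A = [[] for _ in range(n_ctx)]
    b = [[] for _ in range(n_ctx)]
    h, w = img.shape
    for y in range(1, h):
        for x in range(1, w - 1):
            c = context_of(img, y, x, n_ctx)
            A[c].append(causal_neighbors(img, y, x))
            b[c].append(float(img[y, x]))
    coeffs = []
    for c in range(n_ctx):
        if A[c]:
            coeffs.append(np.linalg.lstsq(np.array(A[c]), np.array(b[c]), rcond=None)[0])
        else:
            coeffs.append(np.full(4, 0.25))   # fallback: plain averaging predictor
    return coeffs

def prediction_residuals(img, coeffs, n_ctx=4):
    """Residuals that a context-conditioned entropy coder would then encode."""
    h, w = img.shape
    res = np.zeros((h, w))
    for y in range(1, h):
        for x in range(1, w - 1):
            c = context_of(img, y, x, n_ctx)
            res[y, x] = img[y, x] - causal_neighbors(img, y, x) @ coeffs[c]
    return res

if __name__ == "__main__":
    img = np.random.randint(0, 256, (32, 32)).astype(np.float64)
    coeffs = fit_context_predictors(img)
    print("mean |residual| =", float(np.abs(prediction_residuals(img, coeffs)).mean()))
```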

Performance Evaluation of ECG Compression Algorithms using Classification of Signals based on PQRST Wave Features (PQRST파 특징 기반 신호의 분류를 이용한 심전도 압축 알고리즘 성능 평가)

  • Koo, Jung-Joo;Choi, Goang-Seog
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.4C
    • /
    • pp.313-320
    • /
    • 2012
  • ECG (electrocardiogram) compression can increase system processing speed as well as reduce the amount of transmitted signal data and the storage required for long-term records. Whereas conventional performance evaluations of lossy or lossless compression algorithms measure PRD (Percent RMS Difference) and CR (Compression Ratio) from the engineer's viewpoint, this paper focuses on evaluating compression algorithms from the viewpoint of the diagnostician who interprets the ECG. In general, for compression not to affect the diagnosis, the position, length, amplitude, and waveform of the PQRST waves in the restored signal should not be damaged. AZTEC, a typical ECG compression algorithm, has had its effectiveness validated under the conventional performance evaluation. In this paper, we propose a novel performance evaluation of AZTEC from the diagnostician's viewpoint.
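
The two conventional metrics named in the abstract, PRD and CR, are easy to state directly. Note that PRD conventions vary (some authors subtract the signal mean in the denominator); the sketch below uses the simplest form and toy data.

```python
import numpy as np

def percent_rms_difference(original, reconstructed):
    """PRD: 100 * sqrt(sum((x - x_hat)^2) / sum(x^2)), the simplest common convention."""
    original = np.asarray(original, dtype=np.float64)
    reconstructed = np.asarray(reconstructed, dtype=np.float64)
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2) / np.sum(original ** 2))

def compression_ratio(original_bits, compressed_bits):
    """CR: size of the original record divided by size of the compressed record."""
    return original_bits / compressed_bits

if __name__ == "__main__":
    x = np.sin(np.linspace(0, 8 * np.pi, 1000))            # toy stand-in for an ECG segment
    x_hat = x + np.random.normal(0, 0.01, x.shape)          # toy reconstruction error
    print(f"PRD = {percent_rms_difference(x, x_hat):.2f}%  CR = {compression_ratio(11 * 1000, 2500):.1f}")
```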

Progressive Image Transmission Using Hierarchical Pyramid Structure and Classified Vector Quantizer in DCT Domain (계층적 피라미드 구조와 DCT 영역에서의 분류 벡터 양자화기를 이용한 점진적 영상전송)

  • 박섭형;이상욱
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.26 no.8
    • /
    • pp.1227-1237
    • /
    • 1989
  • In this paper, we propose a lossless progressive image transmission scheme using a hierarchical pyramid structure and a classified vector quantizer in the DCT domain. By applying the DCT to the hierarchical pyramid signals, we can reduce the spatial redundancy. Moreover, the DCT coefficients can be encoded efficiently by using a classified vector quantizer in the DCT domain. The classifier is simply based on the variance of a subblock. Also, adding the mirrored set of the training images improves the robustness of the codebooks. Progressive image transmission is achieved by proceeding from the top to the bottom plane of the pyramid and, within a plane, from the high- to the low-AC-variance class. Simulation results with real images show that the proposed coding scheme yields good performance below 0.3 bpp and excellent results at 0.409 bpp. The proposed coding scheme is well suited for lossless progressive image transmission as well as image data compression.

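A sketch of the pyramid construction and the variance-based block classification mentioned above is given below. It uses a simple mean pyramid and ranks DCT blocks by AC-coefficient variance; the paper's actual pyramid, codebook training, and classified vector quantizer are not reproduced.

```python
import numpy as np
from scipy.fft import dctn

def mean_pyramid(img, levels=3):
    """Hierarchical pyramid: each level is a 2x2 mean-downsampled version of the previous one."""
    planes = [img.astype(np.float64)]
    for _ in range(levels - 1):
        a = planes[-1]
        planes.append((a[0::2, 0::2] + a[0::2, 1::2] + a[1::2, 0::2] + a[1::2, 1::2]) / 4.0)
    return planes[::-1]          # coarsest (top) plane first, as it would be transmitted first

def classify_blocks_by_ac_variance(plane, block=8, n_classes=4):
    """DCT each block and classify blocks by AC-coefficient variance, high variance first.
    A classified VQ would use a separate codebook per class; only the classifier is shown."""
    h, w = plane.shape
    variances, coords = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            coeffs = dctn(plane[y:y+block, x:x+block], norm="ortho")
            ac = coeffs.copy()
            ac[0, 0] = 0.0                              # drop the DC term
            variances.append(ac.var())
            coords.append((y, x))
    order = np.argsort(variances)[::-1]                 # high-variance blocks first
    classes = {}
    for rank, idx in enumerate(order):
        classes[coords[idx]] = rank * n_classes // len(order)
    return classes

if __name__ == "__main__":
    img = np.random.randint(0, 256, (64, 64))
    planes = mean_pyramid(img, levels=3)
    finest_classes = classify_blocks_by_ac_variance(planes[-1])
    print([p.shape for p in planes], "classes on finest plane:", set(finest_classes.values()))
```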

A Lossless and Lossy Audio Compression using Prediction Model and Wavelet Transform

  • Park, Se-Yil;Park, Se-Hyoung;Lim, Dae-Sik;Jaeho Shin
    • Proceedings of the IEEK Conference
    • /
    • 2002.07c
    • /
    • pp.2063-2066
    • /
    • 2002
  • In this paper, we propose a structure for a lossless audio coding method. A prediction model is used in the wavelet transform domain: after the DWT, the wavelet coefficients are quantized and decorrelated by prediction modeling. The DWT can be constructed to follow the critical bands. This yields a lower-data-rate representation of the audio signal whose quality is comparable to that of perceptual coding. The prediction errors are then coded efficiently with the Golomb coding method. The prediction coefficients are fixed in order to reduce the computational burden of estimating them.

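Golomb (Rice) coding of prediction residuals, the final stage described in the abstract, can be sketched as follows. The fixed first-order predictor, the signed-to-unsigned mapping, and the Rice parameter k are assumptions for illustration; the wavelet-domain prediction itself is omitted.

```python
def zigzag_map(r: int) -> int:
    """Map signed residuals to non-negative integers (0, -1, 1, -2, ... -> 0, 1, 2, 3, ...)."""
    return 2 * r if r >= 0 else -2 * r - 1

def rice_encode(value: int, k: int) -> str:
    """Golomb-Rice code with divisor 2**k: unary-coded quotient, then k-bit remainder."""
    q, rem = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(rem, f"0{k}b")

def predict_and_encode(samples, k=3):
    """Fixed predictor (previous sample) plus Rice coding of the residuals,
    in the spirit of coding prediction errors with Golomb codes."""
    bits, prev = [], 0
    for s in samples:
        residual = s - prev           # fixed first-order predictor
        bits.append(rice_encode(zigzag_map(residual), k))
        prev = s
    return "".join(bits)

if __name__ == "__main__":
    samples = [0, 3, 6, 8, 9, 9, 8, 6, 3, 0, -3, -6]   # toy integer audio samples
    stream = predict_and_encode(samples)
    print(len(stream), "bits for", len(samples), "samples")
```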

A study on optimal Image Data Multiresolution Representation and Compression Through Wavelet Transform (Wavelet 변환을 이용한 최적 영상 데이터 다해상도 표현 및 압축에 관한 연구)

  • Kang, Gyung-Mo;Jeoung, Ki-Sam;Lee, Myoung-Ho
    • Proceedings of the KOSOMBE Conference
    • /
    • v.1994 no.12
    • /
    • pp.31-38
    • /
    • 1994
  • This paper proposes signal decomposition and multiresolution representation through the wavelet transform using a wavelet orthonormal basis. It also suggests the most appropriate filter for the scaling function in the multiresolution representation and compares two compression methods, arithmetic coding and Huffman coding. The results are as follows. 1. The Daub18 coefficients are the most appropriate in terms of computing time, energy compaction, and image quality. 2. For image browsing, where images should be small in size and easy to recognize, it is reasonable to decompose to 3 scales using the pyramidal algorithm. 3. For progressive transmission, which requires the most faithful image reconstruction from the fewest samples or reconstruction at any target rate, the data were embedded in order of significance after scaling to 5 steps. 4. Medical images, for which information loss is fatal, have to be compressed by a lossless method. Compressing the 5-scale data with arithmetic coding and Huffman coding showed that arithmetic coding is better than Huffman coding in processing time and compression ratio; with arithmetic coding, the data could be compressed to 38% of the original image size.

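A sketch of the 3-scale multiresolution decomposition discussed above, assuming the PyWavelets library and reading "Daub18" as the 18-tap Daubechies filter ("db9" in PyWavelets naming). The arithmetic-versus-Huffman comparison is not reproduced; only the decomposition and a crude energy-compaction measure are shown.

```python
import numpy as np
import pywt

def multiresolution_decompose(img, wavelet="db9", levels=3):
    """3-scale 2-D wavelet decomposition with the pyramidal (Mallat) algorithm.
    'db9' has 18 filter taps, which is how 'Daub18' is interpreted here."""
    return pywt.wavedec2(img.astype(np.float64), wavelet, level=levels)

def energy_compaction(coeffs):
    """Fraction of total signal energy captured by the coarsest approximation band."""
    approx_energy = float((coeffs[0] ** 2).sum())
    detail_energy = sum(float((band ** 2).sum()) for level in coeffs[1:] for band in level)
    return approx_energy / (approx_energy + detail_energy)

if __name__ == "__main__":
    img = np.random.rand(128, 128)
    coeffs = multiresolution_decompose(img)
    print("approximation shape:", coeffs[0].shape,
          "energy in approximation band: %.1f%%" % (100 * energy_compaction(coeffs)))
```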