• Title/Summary/Keyword: Lossless compression


An Improvement on Computation Cost and Compression Ratio of Vector Quantization (벡터양자화에서의 계산량과 압축률의 개선)

  • Jung, Il-Hwan;Hong, Choong-Seon;Lee, Dae-Young
    • The Transactions of the Korea Information Processing Society / v.7 no.11 / pp.3462-3469 / 2000
  • In this paper, a new image vector quantization method for improving computation cost and compression ratio is proposed. The proposed method saves the computation cost of codebook generation and encoding by using partial codebook search, partial codevector elements, and an interruption criterion. To improve the compression ratio of codebook-index lossless coding, codebook rearrangement and a variable-length coding scheme are used.
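
The "interruption criterion" described above is, in essence, the partial distance search used in fast VQ encoders. As a hedged illustration (not the paper's exact algorithm), the sketch below abandons a codevector's distance computation as soon as the running sum exceeds the best distance found so far; all names and sizes are illustrative.

```python
import numpy as np

def vq_encode_pds(blocks, codebook):
    """Nearest-codevector encoding with partial distance search: the
    running squared distance is abandoned (interruption criterion) as
    soon as it exceeds the best distance found so far."""
    indices = np.empty(len(blocks), dtype=np.int64)
    for n, x in enumerate(blocks):
        best_d, best_i = np.inf, 0
        for i, c in enumerate(codebook):
            d = 0.0
            for xe, ce in zip(x, c):        # accumulate element by element
                d += (xe - ce) ** 2
                if d >= best_d:             # cannot beat the current best:
                    break                   # interrupt this codevector early
            else:                           # full scan finished below best_d
                best_d, best_i = d, i
        indices[n] = best_i
    return indices

# toy usage: 16 four-dimensional blocks against a random 8-entry codebook
rng = np.random.default_rng(0)
print(vq_encode_pds(rng.random((16, 4)), rng.random((8, 4))))
```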


An Efficient Medical Image Compression Considering Brain CT Images with Bilateral Symmetry (뇌 CT 영상의 대칭성을 고려한 관심영역 중심의 효율적인 의료영상 압축)

  • Jung, Jae-Sung;Lee, Chang-Hun
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.12 no.5 / pp.39-54 / 2012
  • Picture Archiving and Communication System (PACS) has become one of the key infrastructures, following the overall improvement in medical informatization standards and the recent trend toward digital hospitals. The kinds and volume of digital medical imagery are also increasing rapidly. This trend underscores the importance of medical image compression for storing large-scale medical image data. Digital Imaging and Communications in Medicine (DICOM), the de facto standard for digital medical imagery, specifies Run-Length Encoding (RLE), a typical lossless data compression technique, for medical image compression. However, RLE is not an appropriate approach for medical image data with the bilateral symmetry of the human body. We suggest two preprocessing algorithms: one detects the interested area (the minimum bounding rectangle) in a medical image to enhance compression efficiency, and the other re-codes image pixel values to reduce data size according to the symmetry characteristics in the interested area. We also present an improved image compression technique for brain CT imagery with high bilateral symmetry. Experimental results show that the suggested approach achieves a higher data compression ratio than the RLE compression in the DICOM standard, which does not detect interested areas in images.
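
To make the symmetry idea concrete, here is a minimal sketch assuming 8-bit slices with a zero background: it crops the minimum bounding rectangle, replaces the right half by its difference from the mirrored left half so a nearly symmetric slice becomes mostly zeros, and run-length encodes the result. This follows the abstract's outline, not the paper's exact algorithm.

```python
import numpy as np

def roi_bounding_box(img, background=0):
    """Minimum bounding rectangle of the non-background pixels (the
    'interested area' in the abstract)."""
    rows = np.any(img != background, axis=1)
    cols = np.any(img != background, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return img[r0:r1 + 1, c0:c1 + 1]

def symmetry_recode(roi):
    """Replace the right half by its difference from the mirrored left
    half; for a nearly symmetric brain slice the result is mostly zeros,
    which RLE compresses far better than the raw pixels."""
    h, w = roi.shape
    half = w // 2
    out = roi.astype(np.int32).copy()
    mirrored = roi[:, :half][:, ::-1].astype(np.int32)
    out[:, w - half:] = roi[:, w - half:].astype(np.int32) - mirrored
    return out

def rle(flat):
    """Plain run-length encoding: (value, run) pairs."""
    runs, prev, count = [], flat[0], 1
    for v in flat[1:]:
        if v == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = v, 1
    runs.append((prev, count))
    return runs

slice8 = np.zeros((6, 7), dtype=np.uint8)
slice8[1:5, 1:6] = 40                      # a crude symmetric "brain"
roi = roi_bounding_box(slice8)
print(rle(symmetry_recode(roi).ravel()))   # long zero runs on the right half
```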

Comparison and analysis of compression algorithms to improve transmission efficiency of manufacturing data (제조 현장 데이터 전송효율 향상을 위한 압축 알고리즘 비교 및 분석)

  • Lee, Min Jeong;Oh, Sung Bhin;Kim, Jin Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.1 / pp.94-103 / 2022
  • As large amounts of data generated by sensors and devices at manufacturing sites are transmitted to servers or clients, problems arise in network processing delay and increased storage resource cost. To solve this problem, considering manufacturing sites where real-time responsiveness and non-disruptive processes are essential, the QRC (Quotient Remainder Compression) and BL_beta compression algorithms, which enable real-time lossless compression, were applied to actual manufacturing-site sensor data for the first time. In the experiments, BL_beta achieved a higher compression ratio than QRC. When the same data were tested with the data size of QRC slightly adjusted, the compression ratio of the QRC algorithm with the adjusted data size was 35.48% and 20.3% higher than those of the original QRC and BL_beta compression algorithms, respectively.
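
The abstract does not define QRC's internals; as a hedged sketch of the general quotient/remainder idea such schemes build on (as in Rice/Golomb coding), the snippet below splits each reading into a quotient and remainder with an illustrative divisor m=16 that is not taken from the paper.

```python
def quotient_remainder_split(values, m=16):
    """Split each non-negative sensor reading v into (v // m, v % m).
    Smooth time series yield small, repetitive quotients that compress
    well; m is a tunable parameter -- the abstract's finding that
    adjusting the data size raised the compression ratio suggests this
    kind of parameter matters. m=16 is an illustrative choice, not the
    paper's."""
    return [(v // m, v % m) for v in values]

readings = [1021, 1023, 1022, 1025, 1024, 1024]
print(quotient_remainder_split(readings))
# quotients: 63, 63, 63, 64, 64, 64 -- long runs, easy to compress
```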

Framework Implementation of Image-Based Indoor Localization System Using Parallel Distributed Computing (병렬 분산 처리를 이용한 영상 기반 실내 위치인식 시스템의 프레임워크 구현)

  • Kwon, Beom;Jeon, Donghyun;Kim, Jongyoo;Kim, Junghwan;Kim, Doyoung;Song, Hyewon;Lee, Sanghoon
    • The Journal of Korean Institute of Communications and Information Sciences / v.41 no.11 / pp.1490-1501 / 2016
  • In this paper, we propose an image-based indoor localization system using parallel distributed computing. To reduce the computation time for indoor localization, the scale-invariant feature transform (SIFT) algorithm is performed in parallel using Apache Spark. Toward this goal, we propose a novel image processing interface for Apache Spark. The experimental results show that the proposed system is about 3.6 times faster than the conventional system.
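
The paper's own Spark image-processing interface is not shown in the abstract; below is a minimal sketch of the general pattern, assuming OpenCV with SIFT available and illustrative (hypothetical) file paths, distributing per-image feature extraction with parallelize/map.

```python
import cv2
from pyspark import SparkContext

def extract_sift(path):
    """Run SIFT on one image; returns the descriptor matrix shape."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:                          # missing/unreadable file
        return path, (0, 0)
    sift = cv2.SIFT_create()                 # created per task (not serializable)
    keypoints, descriptors = sift.detectAndCompute(img, None)
    return path, descriptors.shape if descriptors is not None else (0, 0)

if __name__ == "__main__":
    sc = SparkContext("local[4]", "sift-indoor-localization")
    paths = ["frames/img_%04d.png" % i for i in range(100)]   # hypothetical paths
    results = sc.parallelize(paths).map(extract_sift).collect()
    sc.stop()
```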

Near-lossless Coding of Multiview Texture and Depth Information for Graphics Applications (그래픽스 응용을 위한 다시점 텍스처 및 깊이 정보의 근접 무손실 부호화)

  • Yoon, Seung-Uk;Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.1 / pp.41-48 / 2009
  • This paper introduces representation and coding schemes for multiview texture and depth data of complex three-dimensional scenes. We represent input color and depth images using compressed texture and depth map pairs. The proposed X-codec encodes them further to increase the compression ratio in a near-lossless way. Our system resolves two problems. First, rendering time and output visual quality depend on input image resolutions rather than scene complexity, since a depth image-based rendering technique is used. Second, the random access problem of conventional image-based rendering can be effectively solved using our image block-based compression schemes. Experimental results show that the proposed approach is useful for graphics applications because it provides multiview rendering, selective decoding, and scene manipulation functionalities.
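
The X-codec itself is not specified in the abstract; as a hedged sketch of what "near-lossless" conventionally means, the snippet below quantizes prediction residuals so the per-pixel reconstruction error is bounded by a chosen delta (delta = 0 degenerates to lossless). This is the generic scheme, not the paper's codec.

```python
import numpy as np

def near_lossless_residuals(img, delta=2):
    """Left-neighbor prediction with residual quantization bounding the
    per-pixel reconstruction error by `delta`. Prediction uses the
    reconstructed (not original) neighbor so the bound holds end to end."""
    img = img.astype(np.int32)
    recon = np.zeros_like(img)
    q_res = np.zeros_like(img)
    step = 2 * delta + 1
    for r in range(img.shape[0]):
        pred = 0
        for c in range(img.shape[1]):
            e = img[r, c] - pred
            q = (e + delta) // step if e >= 0 else -((-e + delta) // step)
            q_res[r, c] = q
            pred = pred + q * step            # decoder-side reconstruction
            recon[r, c] = pred
    return q_res, recon

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(8, 8))
q, recon = near_lossless_residuals(img, delta=2)
assert np.max(np.abs(recon - img)) <= 2       # error bound holds
```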

A Design of Hybrid Lossless Audio Coder (Hybrid 무손실 오디오 부호화기의 설계)

  • 박세형;신재호
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.6 / pp.253-260 / 2004
  • This paper proposes a novel algorithm for hybrid lossless audio coding, which employs an integer wavelet transform and a linear prediction model. The proposed algorithm divides the input signal into frames of proper length, decorrelates the framed data using the integer wavelet transform and linear prediction, and finally entropy-codes the frame data. In particular, the adaptive Golomb-Rice coding method used for the entropy coding selects the optimal option that gives the best compression efficiency. Since the proposed algorithm uses integer operations, it significantly improves the computation speed in comparison with algorithms using real or floating-point operations. When the coding algorithm is implemented in hardware, the system complexity as well as the power consumption is remarkably reduced. Finally, because each frame is independently coded and byte-aligned with respect to the frame header, it is convenient to move, search, and edit the coded, compressed data.
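
The adaptive Golomb-Rice selection can be illustrated with a short sketch: for each frame, try every parameter k and keep the one that minimizes the coded size. This mirrors the "optimal option" selection described above; the code-length formula below is the standard Rice one, not necessarily the paper's exact variant.

```python
def rice_code_length(residuals, k):
    """Total Golomb-Rice bits for signed residuals at parameter k:
    zigzag-map each value to a non-negative u, then each codeword costs
    unary(u >> k) + 1 stop bit + k remainder bits."""
    total = 0
    for r in residuals:
        u = 2 * r if r >= 0 else -2 * r - 1   # zigzag: sign interleaving
        total += (u >> k) + 1 + k
    return total

def best_rice_k(residuals, k_max=15):
    """Per-frame adaptive parameter: pick the k minimizing coded size."""
    return min(range(k_max + 1), key=lambda k: rice_code_length(residuals, k))

frame = [3, -1, 0, 7, -12, 2, 5, -4]
k = best_rice_k(frame)
print(k, rice_code_length(frame, k))
```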

53.1 Low power and low EMI display technologies based on the total image systematic approach

  • Okumura, Haruhiko;Baba, Masahiro;Takagi, Ayako;Sasaki, Hisashi;Matsuba, Mitsunori
    • Korean Information Display Society: Conference Proceedings / 2009.10a / pp.1081-1085 / 2009
  • We have already developed EMI-reduction techniques using lossless compression with the vertically differential EMI suppression method (VDE [1]). It applies lossless modulo reduction and data bit mapping optimization to low-voltage differential signaling (LVDS) transmission lines, which reduces the probability of bit transitions and reduces EMI by 12 dB [6][7]. We also improved and optimized the VDE for a low-power LCD interface. With this modified VDE algorithm [8], measurements on the developed FPGA showed that the power consumption of the LCD circuit was reduced by more than 15% compared to conventional methods in the case of a 14-in LCD with SXGA resolution. The VDE algorithm is based on the total image systematic approach. In the VDE method, the 1H-delayed image signals are subtracted from the present image signals, and the differences are transferred to a column driver through a PCB. As the vertical correlation of image signals is very high, we expected that most of the vertically subtracted image signals would remain at the 0 level and that transient cycles would become very long. As a result, the power consumption and EMI of the transferred image signals on a PCB are greatly reduced. In this paper, we discuss our proposed method, emphasizing that a systematic approach is important not only from the display point of view but also from the total system point of view.
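
A minimal sketch of the vertical differencing idea, assuming 8-bit pixels: each line is replaced by its modulo-2^8 difference from the previous (1H-delayed) line, which is losslessly invertible and leaves mostly zero codes on vertically correlated images. The bit-mapping optimization of the actual VDE is omitted here.

```python
import numpy as np

def vde_encode(frame, bits=8):
    """Vertical differential with modulo reduction: each line becomes
    (line - previous_line) mod 2^bits. High vertical correlation makes
    most codes 0, so few bits toggle on the transmission lines. The
    first line is sent as-is; the transform is losslessly invertible."""
    mod = 1 << bits
    out = frame.astype(np.int32).copy()
    out[1:] = (frame[1:].astype(np.int32) - frame[:-1].astype(np.int32)) % mod
    return out.astype(np.uint8)

def vde_decode(coded, bits=8):
    mod = 1 << bits
    out = coded.astype(np.int32).copy()
    for r in range(1, len(out)):
        out[r] = (out[r] + out[r - 1]) % mod   # cumulative undo, line by line
    return out.astype(np.uint8)

frame = np.tile(np.arange(16, dtype=np.uint8), (4, 1))   # vertically uniform
coded = vde_encode(frame)
assert np.array_equal(vde_decode(coded), frame)
print(np.count_nonzero(coded[1:]))    # 0: all differential lines are zero
```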


Lossless Image Compression Based on Deep Learning (딥 러닝 기반의 무손실 영상압축 방법)

  • Rhee, Hochang;Cho, Nam Ik
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.06a / pp.67-70 / 2022
  • With recent advances in deep learning, deep-learning-based algorithms have shown large performance gains over earlier methods across image processing and computer vision. In lossy image compression, encoder-decoder networks have been replacing the transform used in image compression, and with an additional encoder-decoder network for entropy coding of the transform outputs they reach performance comparable to HEVC. In lossless compression, performing per-pixel prediction with a CNN greatly improves prediction accuracy over conventional predictors, and such methods have been reported to outperform non-deep-learning codecs such as JPEG-2000 Lossless, FLIF, and JPEG-XL. However, computing a prediction for every pixel requires one CNN evaluation per pixel, which is known to be impractical: even the fastest known method takes over an hour for an HD-size image. Methods that trade some performance for practical speed have therefore been proposed. Early ones fell short of FLIF and JPEG-XL, which made them still impractical in that they used a GPU yet performed worse than existing codecs. More recently, methods that better exploit signal characteristics have been proposed; they trail per-pixel CNN prediction in compression but beat FLIF and JPEG-XL within a short time. This work reviews several of these recent methods, examines auxiliary techniques that can further improve them, and evaluates their performance on raw images.
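
For a concrete reference point, the kind of classical predictor that CNN-based lossless coders improve upon can be sketched briefly. Below is the MED predictor from JPEG-LS (not the paper's method); a learned CNN predictor shrinks these residuals further, at the cost of network evaluations.

```python
import numpy as np

def med_predict(img):
    """JPEG-LS MED predictor: predict each pixel from its left (a),
    above (b), and above-left (c) neighbors. CNN-based lossless coders
    replace exactly this hand-crafted rule with a learned one."""
    img = img.astype(np.int32)
    pred = np.zeros_like(img)
    h, w = img.shape
    for r in range(h):
        for col in range(w):
            a = img[r, col - 1] if col > 0 else 0
            b = img[r - 1, col] if r > 0 else 0
            c = img[r - 1, col - 1] if r > 0 and col > 0 else 0
            if c >= max(a, b):
                pred[r, col] = min(a, b)
            elif c <= min(a, b):
                pred[r, col] = max(a, b)
            else:
                pred[r, col] = a + b - c
    return pred

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(16, 16))
residuals = img.astype(np.int32) - med_predict(img)
print(residuals.std())   # smaller spread than raw pixels -> fewer bits
```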


Side-Channel Archive Framework Using Deep Learning-Based Leakage Compression (딥러닝을 이용한 부채널 데이터 압축 프레임 워크)

  • Sangyun Jung;Sunghyun Jin;Heeseok Kim
    • Journal of the Korea Institute of Information Security & Cryptology / v.34 no.3 / pp.379-392 / 2024
  • With the rapid increase in data, saving storage space and improving the efficiency of data transmission have become critical issues, making research on efficient data compression technologies increasingly important. Lossless algorithms can precisely restore original data but have limited compression ratios, whereas lossy algorithms provide higher compression ratios at the expense of some data loss. Data compression using deep-learning-based algorithms, especially the autoencoder model, has been actively researched. This study proposes a new side-channel analysis data compressor utilizing autoencoders. The compressor achieves higher compression ratios than Deflate while maintaining the characteristics of side-channel data. The encoder, using locally connected layers, effectively preserves the temporal characteristics of side-channel data, and the decoder maintains fast decompression times with a multi-layer perceptron. Correlation power analysis confirms that the proposed compressor compresses data without losing the characteristics of side-channel data.
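
A minimal PyTorch sketch of the described architecture follows. Note that the paper's encoder uses locally connected (unshared-weight) layers, which this sketch approximates with a shared-weight Conv1d, and all sizes are hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn

TRACE_LEN, LATENT = 1024, 128     # hypothetical trace length / code size

class SideChannelAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=16, stride=8),   # local temporal windows
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(8 * ((TRACE_LEN - 16) // 8 + 1), LATENT),
        )
        self.decoder = nn.Sequential(                    # plain MLP: fast decode
            nn.Linear(LATENT, 512), nn.ReLU(),
            nn.Linear(512, TRACE_LEN),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = SideChannelAE()
trace = torch.randn(4, 1, TRACE_LEN)      # batch of 4 dummy traces
recon = model(trace)
loss = nn.functional.mse_loss(recon, trace.squeeze(1))
print(recon.shape, float(loss))
```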

A study on optimal Image Data Multiresolution Representation and Compression Through Wavelet Transform (Wavelet 변환을 이용한 최적 영상 데이터 다해상도 표현 및 압축에 관한 연구)

  • Kang, Gyung-Mo;Jeoung, Ki-Sam;Lee, Myoung-Ho
    • Proceedings of the KOSOMBE Conference / v.1994 no.12 / pp.31-38 / 1994
  • This paper proposes signal decomposition and multiresolution representation through the wavelet transform using a wavelet orthonormal basis. It suggests the most appropriate filter for the scaling function in multiresolution representation and compares two compression methods, arithmetic coding and Huffman coding. The results are as follows. 1) The Daub18 coefficient is the most appropriate in terms of computing time, energy compaction, and image quality. 2) For image browsing, where images should be small in size and easy to recognize, it is reasonable to decompose to 3 scales using the pyramidal algorithm. 3) For progressive transmission, which requires the most graceful image reconstruction from the least number of samples, or reconstruction at any target rate, the data are embedded in order of significance after scaling to 5 steps. 4) Medical images, for which information loss is fatal, have to be compressed by a lossless method. Compressing 5-scale data through arithmetic coding and Huffman coding showed that arithmetic coding is better than Huffman coding in processing time and compression ratio; with arithmetic coding, the data could be compressed to 38% of the original image size.
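
A hedged sketch with PyWavelets, assuming "Daub18" denotes the 18-tap Daubechies filter ('db9' in pywt, since dbN has 2N taps): a 3-scale pyramidal decomposition whose coarsest approximation serves as the small browsing image.

```python
import numpy as np
import pywt

img = np.random.default_rng(3).random((256, 256))
coeffs = pywt.wavedec2(img, wavelet='db9', level=3)   # pyramidal, 3 scales
approx = coeffs[0]                                    # small browsing image
print(approx.shape)   # coarse scale: ~1/8 resolution plus boundary padding

# perfect (floating-point) reconstruction back to the original
recon = pywt.waverec2(coeffs, wavelet='db9')
print(np.allclose(recon[:256, :256], img))
```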
