• Title/Summary/Keyword: Lempel-Ziv

28 search results

An Efficient Hardware Architecture of Lempel-Ziv Compressor for Real Time Data Compression (실시간 데이터 압축을 위한 Lempel-Ziv 압축기의 효과적인 구조의 제안)

  • 진용선;정정화
    • Journal of the Institute of Electronics Engineers of Korea TE / v.37 no.3 / pp.37-44 / 2000
  • In this paper, an efficient hardware architecture of a Lempel-Ziv compressor for real-time data compression is proposed. The accumulated shift operations in the Lempel-Ziv algorithm are a major bottleneck, because many shifts are needed to prepare the dictionary buffer and the matching symbols. The proposed architecture speeds up the algorithm with three techniques: an optimized dictionary size, a new multi-symbol comparison method, and a rotational FIFO structure that makes the shift operations easy to control. For functional verification, the architecture was modeled in the C language and its operation was verified on a commercial DSP processor. The overall architecture was designed in VHDL and synthesized on a commercial FPGA chip. Critical-path analysis shows that the architecture sustains an input bit rate of 256 kbps at a 33 MHz clock frequency.
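
The rotational FIFO mentioned above is essentially a ring buffer: appending a symbol moves an index instead of shifting the whole dictionary. Below is a minimal C++ sketch of that idea; the class and names are illustrative, not the paper's VHDL design.

    #include <algorithm>
    #include <cstdint>
    #include <iostream>
    #include <string>
    #include <vector>

    // Ring-buffer dictionary: push() moves an index, so no buffer-wide shifts.
    class RingDictionary {
    public:
        explicit RingDictionary(std::size_t size) : buf_(size, 0), head_(0), filled_(0) {}

        void push(uint8_t sym) {
            buf_[head_] = sym;
            head_ = (head_ + 1) % buf_.size();
            if (filled_ < buf_.size()) ++filled_;
        }

        // Longest prefix of `data` found anywhere in the dictionary (naive scan).
        std::size_t longestMatch(const std::string& data) const {
            std::size_t best = 0;
            for (std::size_t start = 0; start < filled_; ++start) {
                std::size_t len = 0;
                while (len < data.size() && start + len < filled_ &&
                       at(start + len) == static_cast<uint8_t>(data[len]))
                    ++len;
                best = std::max(best, len);
            }
            return best;
        }

    private:
        // i == 0 addresses the oldest symbol still held in the dictionary.
        uint8_t at(std::size_t i) const {
            std::size_t oldest = (head_ + buf_.size() - filled_) % buf_.size();
            return buf_[(oldest + i) % buf_.size()];
        }
        std::vector<uint8_t> buf_;
        std::size_t head_, filled_;
    };

    int main() {
        RingDictionary dict(8);
        for (char c : std::string("abcabc")) dict.push(static_cast<uint8_t>(c));
        std::cout << dict.longestMatch("abcx") << "\n";  // prints 3
    }

In hardware, the modulo arithmetic becomes simple pointer wiring, which is why the rotational structure avoids the accumulated shifts.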


DEM_Comp Software for Effective Compression of Large DEM Data Sets (대용량 DEM 데이터의 효율적 압축을 위한 DEM_Comp 소프트웨어 개발)

  • Kang, In-Gu;Yun, Hong-Sik;Wei, Gwang-Jae;Lee, Dong-Ha
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.28 no.2 / pp.265-271 / 2010
  • This paper discusses a new software package, DEM_Comp, developed to compress large digital elevation model (DEM) data sets effectively using Lempel-Ziv-Welch (LZW) compression and Huffman coding. DEM_Comp was written in C++ for Windows operating systems and was tested on sites with differing terrain attributes, and the results were evaluated. High-resolution DEMs are now produced with technologies such as LiDAR (Light Detection and Ranging) and SAR (Synthetic Aperture Radar), so DEM compression is useful for reducing disk space and transmission bandwidth. In general, data compression divides into two processes: i) analyzing the relationships within the data and ii) deciding on the compression and storage methods. DEM_Comp applies a three-step algorithm: pre-processing of the regular-grid DEM, Lempel-Ziv compression, and Huffman coding. With pre-processing alone, the efficiency on high- and low-relief terrain was approximately 83%; after all three steps it reached 97%, approximately 14% better than general commercial compression software. DEM_Comp thus offers a more efficient way to distribute, store, and manage large high-resolution DEMs.
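
The LZW stage of such a pipeline is compact enough to show in full. Here is a self-contained C++ sketch of a textbook LZW encoder (an illustration of the technique, not the DEM_Comp source):

    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    // Textbook LZW: grow a dictionary of phrases, emit the code of the
    // longest known phrase each time it can no longer be extended.
    std::vector<int> lzwEncode(const std::string& input) {
        std::map<std::string, int> dict;
        for (int i = 0; i < 256; ++i) dict[std::string(1, static_cast<char>(i))] = i;
        int nextCode = 256;
        std::vector<int> out;
        std::string w;
        for (char c : input) {
            std::string wc = w + c;
            if (dict.count(wc)) {
                w = wc;                      // phrase still known; keep extending
            } else {
                out.push_back(dict[w]);      // emit longest known phrase
                dict[wc] = nextCode++;       // learn the new phrase
                w = std::string(1, c);
            }
        }
        if (!w.empty()) out.push_back(dict[w]);
        return out;
    }

    int main() {
        for (int code : lzwEncode("ABABABA")) std::cout << code << ' ';
        std::cout << '\n';  // prints: 65 66 256 258 (A, B, AB, ABA)
    }

The pre-processing step of DEM_Comp matters because LZW rewards exactly this kind of repetitiveness: the flatter the residuals in the grid, the longer the phrases the dictionary can reuse.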

A Lossless Vector Data Compression Using the Hybrid Approach of BytePacking and Lempel-Ziv in Embedded DBMS (임베디드 DBMS에서 바이트패킹과 Lempel-Ziv 방법을 혼합한 무손실 벡터 데이터 압축 기법)

  • Moon, Gyeong-Gi;Joo, Yong-Jin;Park, Soo-Hong
    • Spatial Information Research / v.19 no.1 / pp.107-116 / 2011
  • With the growth of the wireless Internet, location-based services built on spatial data have multiplied, from real-time traffic information to car navigation systems (CNS) that guide mobile users to their destinations. Applications that adopt a file-based system, however, are limited in how much spatial data they can store and manage, so research on managing large amounts of spatial data in an embedded database system is in demand. This study therefore proposes a lossless compression technique that combines BytePacking with the Lempel-Ziv method and can be applied inside a DBMS to store mass spatial data efficiently. We applied the proposed technique to data covering the Seoul and Incheon metropolitan areas and compared it with existing methods on the same data, analyzing the query-processing time up to reconstruction. The comparison shows that the proposed technique outperforms previous ones on spatial data that demands high positional accuracy.
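
The paper's exact packed format is not reproduced here, but the byte-packing idea itself is simple: consecutive vertex coordinates differ by small deltas that fit in one or two bytes rather than a full 32-bit word, and the packed stream can then be handed to a general Lempel-Ziv coder. A hypothetical C++ sketch of such a packer (details assumed for illustration):

    #include <cstdint>
    #include <iostream>
    #include <vector>

    // Pack a signed delta into a variable number of bytes:
    // 7 payload bits per byte, high bit set on all but the last byte.
    void packDelta(int32_t delta, std::vector<uint8_t>& out) {
        // Zig-zag encoding maps small negative deltas to small unsigned values.
        uint32_t zz = (static_cast<uint32_t>(delta) << 1) ^ (delta >> 31);
        do {
            uint8_t b = zz & 0x7F;
            zz >>= 7;
            if (zz) b |= 0x80;           // continuation bit
            out.push_back(b);
        } while (zz);
    }

    int main() {
        std::vector<int32_t> xs = {123456, 123460, 123455, 123470};  // vertex X coords
        std::vector<uint8_t> packed;
        packDelta(xs[0], packed);                    // first value stored whole
        for (std::size_t i = 1; i < xs.size(); ++i)
            packDelta(xs[i] - xs[i - 1], packed);    // small deltas -> 1 byte each
        std::cout << packed.size() << " bytes instead of "
                  << xs.size() * sizeof(int32_t) << "\n";  // 6 bytes instead of 16
    }

A hybrid scheme like the one the paper describes would apply a pass of this kind first and then run Lempel-Ziv over the packed bytes, which still contain exploitable repetition.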

Channel Allocation Using Mobile Mobility and Neural Net Spectrum Hole Prediction in Cellular-Based Wireless Cognitive Radio Networks (셀룰러 기반 무선 인지망에서 모바일 이동성과 신경망 스펙트럼 홀 예측에 의한 채널할당)

  • Lee, Jin-yi
    • Journal of Advanced Navigation Technology / v.21 no.4 / pp.347-352 / 2017
  • In this paper, we propose a method that reduces the handover call dropping probability of mobile users by applying cognitive radio (CR) technology in cellular-based wireless cognitive radio networks. The proposed method predicts the next cell a user will visit with the Ziv-Lempel algorithm; when the allocated channels in that cell run short, it supports the user by predicting spectrum holes with CR technology. A neural network predicts the spectrum-hole resources, and handover calls use those resources before initial calls do. Simulation results show that CR technology can reduce the handover call dropping probability in cellular mobile communication networks.
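
A Ziv-Lempel (LZ78-style) mobility predictor can be sketched as a trie of movement phrases: the history of visited cells is parsed into phrases, and the next cell is predicted by the most frequent continuation. A toy C++ illustration, assuming single-character cell IDs (not the paper's implementation):

    #include <iostream>
    #include <map>
    #include <memory>
    #include <string>

    struct TrieNode {
        int count = 0;
        std::map<char, std::unique_ptr<TrieNode>> children;
    };

    // Parse the movement history into LZ78 phrases stored in the trie.
    void build(TrieNode& root, const std::string& history) {
        TrieNode* cur = &root;
        for (char c : history) {
            auto& child = cur->children[c];
            bool isNew = !child;
            if (isNew) child = std::make_unique<TrieNode>();
            ++child->count;
            // A new node means the phrase just became novel: LZ78 restarts at the root.
            cur = isNew ? &root : child.get();
        }
    }

    // Predict the cell after `currentCell`: its most frequent continuation.
    char predictNext(const TrieNode& root, char currentCell) {
        auto it = root.children.find(currentCell);
        if (it == root.children.end()) return '?';
        char best = '?';
        int bestCount = 0;
        for (const auto& [cell, node] : it->second->children)
            if (node->count > bestCount) { bestCount = node->count; best = cell; }
        return best;
    }

    int main() {
        TrieNode root;
        build(root, "abcabdabcab");   // history of visited cell IDs
        std::cout << "after cell a, predict cell " << predictNext(root, 'a') << "\n";  // b
    }

The predicted cell is where spectrum-hole resources would be reserved ahead of the handover.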

Nonlinear Quality Indices Based on a Novel Lempel-Ziv Complexity for Assessing Quality of Multi-Lead ECGs Collected in Real Time

  • Zhang, Yatao;Ma, Zhenguo;Dong, Wentao
    • Journal of Information Processing Systems / v.16 no.2 / pp.508-521 / 2020
  • We compared a novel encoding Lempel-Ziv complexity (ELZC) with three common complexity algorithms, i.e., approximate entropy (ApEn), sample entropy (SampEn), and classic Lempel-Ziv complexity (CLZC), to determine a suitable complexity measure and corresponding quality indices for assessing the quality of multi-lead electrocardiograms (ECGs). First, we ran the four algorithms on six artificial time series to compare how well they discern randomness and inherent irregularity within a time series. Then, to analyze their sensitivity to the level of different noises within the ECG, we examined how they change across five artificial synthetic noisy ECGs containing different noises at several signal-to-noise ratios. Finally, three quality indices based on the ELZC of the multi-lead ECG were proposed and used to assess the quality of 862 real 12-lead ECGs from the MIT databases. The results showed that the ELZC discerns randomness and inherent irregularity in the six artificial time series and reflects the level of different noises in the five synthetic ECGs. The AUCs of the three ELZC quality indices were statistically significant (>0.500). The ELZC and its three indices are therefore more suitable for multi-lead ECG quality assessment than the other three algorithms.
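
For reference, the classic Lempel-Ziv complexity (CLZC) that ELZC is compared against counts the phrases in an LZ76-style parse of a binarized signal; ELZC differs in how the ECG is encoded into symbols. A short C++ sketch of the classic count:

    #include <cstddef>
    #include <iostream>
    #include <string>

    // Classic (LZ76) complexity: number of phrases, where each phrase is the
    // shortest extension not yet seen in the preceding text.
    int lzComplexity(const std::string& s) {
        int complexity = 0;
        std::size_t i = 0;
        while (i < s.size()) {
            // Grow the phrase while its copy (minus the last symbol) occurs earlier.
            std::size_t len = 1;
            while (i + len <= s.size() &&
                   s.substr(0, i + len - 1).find(s.substr(i, len)) != std::string::npos)
                ++len;
            ++complexity;   // one new phrase ends here
            i += len;
        }
        return complexity;
    }

    int main() {
        std::cout << lzComplexity("0001101001000101") << "\n";  // prints 6
        std::cout << lzComplexity("0000000000000000") << "\n";  // prints 2 (low complexity)
    }

A clean, regular ECG binarizes into a low-complexity sequence, while noise drives the phrase count up, which is what makes such measures usable as quality indices.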

Performance Improvement of LZ77 Algorithm using a Strategy Table and a Genetic Algorithm (전략 테이블과 유전 알고리즘을 이용한 LZ77 알고리즘의 성능 개선)

  • Jung Soonchul;Seo Dong-Il;Moon Byung-Ro
    • Journal of KIISE: Software and Applications / v.31 no.12 / pp.1628-1636 / 2004
  • Data compression techniques have been studied for decades because they save space and time and thereby reduce costs. Lempel-Ziv 77 (LZ77) is a dictionary-based, lossless compression algorithm. Its dictionary size is fixed, and the algorithm's performance depends heavily on that size. In this paper, we propose a dynamic LZ77 algorithm that changes its dictionary size during compression, together with a genetic algorithm that evolves the dictionary-resizing strategies. The suggested algorithm outperformed the original version by up to about 16%.
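
The trade-off the strategy table tunes is visible in a toy experiment: the window (dictionary) size bounds how far back the match search may look, so enlarging it finds longer matches at the cost of search time and offset bits. A C++ sketch with a made-up strategy table (illustrative values, not an evolved strategy):

    #include <cstddef>
    #include <iostream>
    #include <string>

    // Longest match of data[pos..] inside the last `window` bytes before pos.
    std::size_t longestMatch(const std::string& data, std::size_t pos, std::size_t window) {
        std::size_t start = pos > window ? pos - window : 0;
        std::size_t best = 0;
        for (std::size_t i = start; i < pos; ++i) {
            std::size_t len = 0;
            while (pos + len < data.size() && data[i + len] == data[pos + len]) ++len;
            if (len > best) best = len;
        }
        return best;
    }

    int main() {
        const std::size_t strategy[] = {4, 16};   // per-block window sizes (illustrative)
        std::string data = "abcabcabcabcXabcabc";
        std::size_t pos = 13;                     // start of the run after 'X'
        for (std::size_t w : strategy)
            std::cout << "window " << w << ": match length "
                      << longestMatch(data, pos, w) << "\n";  // 3, then 6
    }

A dynamic scheme of the kind the paper proposes would consult its (evolved) table per block and resize the window accordingly instead of fixing it for the whole stream.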

Finding the longest match in data compression using suffix trees (접미사 트리를 이용한 압축 기법에서 가장 긴 매치 찾기)

  • 나중채;박근수
    • Proceedings of the Korean Information Science Society Conference / 1999.10a / pp.658-660 / 1999
  • Ziv-Lempel coding compresses a string by replacing each repeated substring with a pointer to its earlier occurrence, so it requires a dictionary that holds the previously seen text together with fast string matching. The suffix tree, a data structure efficient for both, has therefore been applied to Ziv-Lempel coding: Fiala and Greene, and later Larsson, used McCreight's and Ukkonen's suffix tree construction algorithms, respectively, for LZ77 coding. Ziv-Lempel coding over a suffix tree includes a step that finds the longest match between the dictionary built so far (the suffix tree) and the text yet to be compressed. One can simply search downward from the root of the suffix tree, but the time this takes is governed by the branching decisions made at each node: if branching takes more than constant time, finding the longest match likewise takes more than linear time overall. Moreover, this approach cannot exploit self-overlapping matches. Rodeh, Pratt, and Even showed that with McCreight's construction algorithm the longest match can be found directly, but no such method has been known for Ukkonen's algorithm. This paper adds a small amount of extra work to Ukkonen's algorithm so that the longest match is found in overall linear time.
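
To make the longest-match step concrete: given a suffix structure over the already-processed text, the match is found by walking down from the root while lookahead symbols keep matching. The C++ sketch below substitutes a naive quadratic-time suffix trie for the linear-time McCreight/Ukkonen suffix tree; the root-to-leaf walk it demonstrates is the same.

    #include <cstddef>
    #include <iostream>
    #include <map>
    #include <memory>
    #include <string>

    struct Node { std::map<char, std::unique_ptr<Node>> children; };

    // Naive O(n^2) suffix trie: insert every suffix of the dictionary text.
    void buildSuffixTrie(Node& root, const std::string& text) {
        for (std::size_t i = 0; i < text.size(); ++i) {
            Node* cur = &root;
            for (std::size_t j = i; j < text.size(); ++j) {
                auto& child = cur->children[text[j]];
                if (!child) child = std::make_unique<Node>();
                cur = child.get();
            }
        }
    }

    // Walk from the root, consuming lookahead symbols while an edge exists.
    std::size_t findLongestMatch(const Node& root, const std::string& lookahead) {
        const Node* cur = &root;
        std::size_t len = 0;
        for (char c : lookahead) {
            auto it = cur->children.find(c);
            if (it == cur->children.end()) break;
            cur = it->second.get();
            ++len;
        }
        return len;
    }

    int main() {
        Node root;
        buildSuffixTrie(root, "abcabx");                      // already-compressed text
        std::cout << findLongestMatch(root, "abcd") << "\n";  // prints 3 ("abc")
    }

The paper's contribution is getting this answer as a by-product of Ukkonen's online construction, so the walk costs no separate search.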


An Efficient Bit-Level Lossless Grayscale Image Compression Based on Adaptive Source Mapping

  • Al-Dmour, Ayman;Abuhelaleh, Mohammed;Musa, Ahmed;Al-Shalabi, Hasan
    • Journal of Information Processing Systems / v.12 no.2 / pp.322-331 / 2016
  • Image compression is an essential technique for saving time and storage space given the gigantic amount of data generated by images. This paper introduces an adaptive source-mapping scheme that greatly improves bit-level lossless grayscale image compression. In the proposed scheme, the frequency of occurrence of each symbol in the original image is computed, and the symbols are sorted in descending order of frequency. Based on this order, each symbol is replaced by an 8-bit weighted fixed-length code. The replacement yields an equivalent binary source with longer runs of identical symbols (0s or 1s). Experiments running Lempel-Ziv lossless compression algorithms on the generated binary source show that the proposed mapping scheme achieves dramatic improvements in compression ratio.
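
The mapping stage can be sketched directly from that description: rank symbols by frequency, rank the 256 byte codes by Hamming weight, and pair them so frequent symbols become mostly-zero bytes. The exact code ordering in the paper is assumed here to be weight-then-value; a C++ illustration:

    #include <algorithm>
    #include <array>
    #include <bitset>
    #include <cstdint>
    #include <iostream>
    #include <numeric>
    #include <vector>

    int main() {
        std::vector<uint8_t> image = {200, 200, 200, 17, 200, 17, 90};  // toy "image"

        // 1. Frequency of each symbol.
        std::array<int, 256> freq{};
        for (uint8_t px : image) ++freq[px];

        // 2. Symbols sorted by descending frequency.
        std::array<int, 256> symbols;
        std::iota(symbols.begin(), symbols.end(), 0);
        std::stable_sort(symbols.begin(), symbols.end(),
                         [&](int a, int b) { return freq[a] > freq[b]; });

        // 3. Codes sorted by ascending Hamming weight (0x00, 0x01, 0x02, 0x04, ...).
        std::array<int, 256> codes;
        std::iota(codes.begin(), codes.end(), 0);
        std::stable_sort(codes.begin(), codes.end(), [](int a, int b) {
            return std::bitset<8>(a).count() < std::bitset<8>(b).count();
        });

        // 4. Map the i-th most frequent symbol to the i-th lightest code.
        std::array<uint8_t, 256> mapTo{};
        for (int i = 0; i < 256; ++i) mapTo[symbols[i]] = static_cast<uint8_t>(codes[i]);

        for (uint8_t px : image) std::cout << std::bitset<8>(mapTo[px]) << ' ';
        std::cout << '\n';  // long 0-runs: ready for a bit-level Lempel-Ziv coder
    }

Because the mapping is a fixed-length permutation of byte values, it is trivially invertible, which is what keeps the overall scheme lossless.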

Impacts of Non-Uniform Source on BER for SSC NOMA (Part I): Optimal MAP Receiver's Perspective

  • Chung, Kyuhyuk
    • International Journal of Internet, Broadcasting and Communication / v.13 no.4 / pp.39-47 / 2021
  • Lempel-Ziv coding is one of the most famous source coding schemes. Its output is usually a non-uniform code, which requires additional source coding, such as arithmetic coding, to remove the redundancy; this extra stage increases complexity and decoding latency. This paper therefore proposes the optimal maximum a posteriori (MAP) receiver for non-uniform-source non-orthogonal multiple access (NOMA) with symmetric superposition coding (SSC). First, we derive an analytical expression for the bit-error rate (BER) of non-uniform-source NOMA with SSC. Monte Carlo simulations then demonstrate that the BER of the optimal MAP receiver for the non-uniform source improves slightly on that of the conventional receiver designed for a uniform source. We also show that an approximate analytical BER expression agrees well with the simulated BER. The proposed optimal MAP receiver could thus be a promising scheme for NOMA with SSC, reducing the complexity and decoding latency caused by additional source coding.
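
The effect of a non-uniform source on the decision rule can be seen in a much simpler setting than SSC NOMA: for binary antipodal signaling in AWGN with bit prior p1, the MAP threshold shifts away from zero to (sigma^2 / (2*A)) * ln(p0/p1). A hedged C++ Monte Carlo sketch of that toy model (not the paper's system or its derivation):

    #include <cmath>
    #include <iostream>
    #include <random>

    int main() {
        const double p1 = 0.2, p0 = 1.0 - p1;   // non-uniform source (assumed values)
        const double A = 1.0, sigma = 0.8;      // amplitude and noise std deviation
        // MAP rule: decide "1" iff y > (sigma^2 / (2A)) * ln(p0 / p1).
        const double mapThr = sigma * sigma / (2 * A) * std::log(p0 / p1);

        std::mt19937 rng(42);
        std::bernoulli_distribution bit(p1);
        std::normal_distribution<double> noise(0.0, sigma);

        long long errMap = 0, errMl = 0, trials = 1'000'000;
        for (long long t = 0; t < trials; ++t) {
            int b = bit(rng);
            double y = (b ? A : -A) + noise(rng);
            errMap += (y > mapThr) != b;   // MAP receiver (prior-aware threshold)
            errMl  += (y > 0.0)    != b;   // conventional ML receiver (uniform prior)
        }
        std::cout << "BER (MAP): " << static_cast<double>(errMap) / trials << "\n"
                  << "BER (ML) : " << static_cast<double>(errMl)  / trials << "\n";
    }

With these toy parameters the MAP receiver's BER comes out visibly below the ML receiver's, mirroring in miniature the gain the paper reports for its SSC NOMA setting.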