• Title/Summary/Keyword: Compression Algorithm

Search Results: 1,096

Vector Map Data compression based on Douglas Peucker Simplification Algorithm and Bin Classification (Douglas Peucker 근사화 알고리즘과 빈 분류 기반 벡터 맵 데이터 압축)

  • Park, Jin-Hyeok;Jang, Bong Joo;Kwon, Oh Jun;Jeong, Jae-Jin;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.18 no.3 / pp.298-311 / 2015
  • Vector data represents a map by coordinates, whereas raster data represents it by pixels. Because both data types are very large, compression is a mandatory step. This paper compares the results of three methodologies: GIS (Geographic Information System) vector map data compression using the DP (Douglas-Peucker) simplification algorithm, vector data compression based on bin classification, and the combination of the two. The results show that the combined method performs best among the three tested methods, achieving a compression ratio of 4-9%, while the other methods perform worse.
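
The simplification stage of the first methodology is the classical Douglas-Peucker algorithm. A minimal recursive sketch is given below, assuming the polyline is a list of (x, y) tuples and `epsilon` is a user-chosen tolerance; the function names and the example data are illustrative, not taken from the paper.

```python
import math

def perpendicular_distance(pt, start, end):
    """Distance from pt to the infinite line through start and end."""
    (x, y), (x1, y1), (x2, y2) = pt, start, end
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:                      # degenerate segment
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def douglas_peucker(points, epsilon):
    """Drop points whose removal changes the polyline by at most epsilon."""
    if len(points) < 3:
        return list(points)
    # Find the point farthest from the chord joining the endpoints.
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax > epsilon:
        # Recurse on both halves and merge, dropping the duplicated split point.
        left = douglas_peucker(points[:index + 1], epsilon)
        right = douglas_peucker(points[index:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]

# Example: simplify a small polyline with tolerance 1.0
print(douglas_peucker([(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)], 1.0))
```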

A Study on Header Compression Algorithm for the Effective Multimedia Transmission over Wireless Network (무선망에서 효율적인 멀티미디어 전송을 위한 헤더압축 알고리즘 연구)

  • Yun, Sung-Yeol;Park, Seok-Cheon
    • Journal of the Korea Institute of Information and Communication Engineering / v.14 no.2 / pp.296-304 / 2010
  • MoIP is a technology for transmitting various kinds of multimedia over IP, but it requires more bandwidth than traditional voice services, and radio resources in wireless environments have already reached their limits. Header compression has therefore been studied extensively as a way to address this issue. SCTP header compression using ROHC-SCTP has been investigated, but ROHC-SCTP applies the ROHC algorithm, which was designed for different packet structures, to SCTP headers and consequently performs poorly in many cases. To solve these problems, this paper designs an improved header compression algorithm. The proposed algorithm was evaluated by modeling the header compression operation in an NS-2 simulation environment. The evaluation results show that the designed algorithm has a lower overhead rate than ROHC-SCTP and a smaller total header size, especially when the data types vary widely.
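
The paper's algorithm itself is not reproduced here; the sketch below only illustrates the general idea behind header compression that the abstract relies on: send a full header once, then transmit only the fields that changed relative to the shared context. The field names and the dict-based encoding are assumptions for illustration, not the ROHC-SCTP packet format.

```python
# Generic delta-style header compression sketch (not the ROHC-SCTP format):
# the first packet carries the full header, later packets only the changed fields.

def compress_header(header, context):
    """Return (compressed, new_context). With no context, send the full header."""
    if context is None:
        return dict(header), dict(header)
    delta = {k: v for k, v in header.items() if context.get(k) != v}
    new_context = dict(context)
    new_context.update(delta)
    return delta, new_context

def decompress_header(compressed, context):
    """Rebuild the full header from the delta and the receiver's context."""
    full = dict(context) if context else {}
    full.update(compressed)
    return full, full

# Example: only the changing sequence number is sent after the first packet.
ctx_tx = ctx_rx = None
for seq in (1, 2, 3):
    hdr = {"src_port": 5000, "dst_port": 6000, "verification_tag": 0xABCD, "seq": seq}
    wire, ctx_tx = compress_header(hdr, ctx_tx)
    rebuilt, ctx_rx = decompress_header(wire, ctx_rx)
    assert rebuilt == hdr
    print(wire)
```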

A New ROM Compression Method for Continuous Data (연속된 데이터를 위한 새로운 롬 압축 방식)

  • 양병도;김이섭
    • Journal of the Institute of Electronics Engineers of Korea SD / v.40 no.5 / pp.354-360 / 2003
  • A new ROM compression method for continuous data is proposed. It is based on two proposed ROM compression algorithms. The first is a region-select ROM compression algorithm that divides the data into many small regions by magnitude and address and stores only the regions that contain data. The second is a quantization-ROM and error-ROM compression algorithm that splits the data into quantized values and their errors. Using these algorithms, ROM size reductions of 40~60% are achieved for various continuous data.
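
The second algorithm splits each sample into a coarse quantized value and a small residual error, so that each ROM stores narrower words. The numeric sketch below shows that split; the step size and the data are illustrative assumptions, not values from the paper.

```python
# Quantization-ROM / error-ROM split: store a coarse quantized value per sample
# plus a small residual, instead of the full-width original word.

STEP = 16  # illustrative quantization step, not a value from the paper

def split(samples):
    quantized = [s // STEP for s in samples]                     # quantization ROM
    errors = [s - q * STEP for s, q in zip(samples, quantized)]  # error ROM
    return quantized, errors

def reconstruct(quantized, errors):
    return [q * STEP + e for q, e in zip(quantized, errors)]

data = [512, 518, 523, 530, 541, 555]   # slowly varying ("continuous") data
q, e = split(data)
assert reconstruct(q, e) == data
# The residuals stay in a narrow range, so the error ROM needs only a few bits per entry.
print(q, e)
```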

PoW-BC: A PoW Consensus Protocol Based on Block Compression

  • Yu, Bin;Li, Xiaofeng;Zhao, He
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.4 / pp.1389-1408 / 2021
  • Proof-of-Work (PoW) is the first and still the most common consensus protocol in blockchain, but it is costly and energy intensive. To address these problems, we propose a consensus algorithm named Proof-of-Work-and-Block-Compression (PoW-BC). PoW-BC is an improvement of PoW that compresses blocks and adjusts consensus parameters. The algorithm is designed to encourage the reduction of block size, which improves transmission efficiency and reduces the disk space needed to store blocks. A transaction optimization model and a block compression model are proposed to compress block data with a smaller compression ratio and shorter compression/decompression time. The block compression ratio is used to adjust the mining difficulty and transaction count of the PoW-BC consensus protocol according to the consensus parameter adjustment model. Experiments and analysis show that PoW-BC improves transaction throughput and reduces block interval and energy consumption.
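
The abstract states that the block compression ratio feeds back into mining difficulty and transaction count. The sketch below is only a schematic of that feedback loop under assumed linear adjustment rules; the actual PoW-BC adjustment model is defined in the paper, not here.

```python
# Schematic of the PoW-BC feedback idea: a better (smaller) compression ratio
# earns an easier mining target and a larger transaction allowance. The linear
# rules below are illustrative assumptions, not the paper's adjustment model.

def adjust(base_difficulty, base_tx_count, compression_ratio):
    """compression_ratio = compressed_size / original_size, in (0, 1]."""
    difficulty = base_difficulty * compression_ratio       # smaller ratio -> easier mining
    tx_count = int(base_tx_count / compression_ratio)      # smaller ratio -> more transactions
    return difficulty, tx_count

for ratio in (1.0, 0.6, 0.3):
    print(ratio, adjust(base_difficulty=1_000_000.0, base_tx_count=2_000, compression_ratio=ratio))
```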

A Study of Big Time Series Data Compression based on CNN Algorithm (CNN 기반 대용량 시계열 데이터 압축 기법연구)

  • Sang-Ho Hwang;Sungho Kim;Sung Jae Kim;Tae Geun Kim
    • IEMEK Journal of Embedded Systems and Applications / v.18 no.1 / pp.1-7 / 2023
  • In this paper, we implement a lossless compression technique for time-series data generated by IoT (Internet of Things) devices in order to reduce disk space. The proposed technique reduces the size of the encoded data by selectively applying CNN (Convolutional Neural Network) based prediction or Delta encoding, depending on the situation, within a forecasting algorithm that predicts the time-series data. In addition, the proposed technique sequentially performs zigzag encoding, splitting, and bit packing to increase the compression ratio. We show that the proposed method achieves a compression ratio of up to 1.60 relative to the original data.
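
The post-processing chain named in the abstract (prediction residuals, zigzag encoding, then bit packing) can be illustrated with a short sketch. The delta predictor and the fixed 4-bit packing width below are assumptions for the example, not the paper's format.

```python
# Residuals -> zigzag (signed-to-unsigned) -> fixed-width bit packing.

def delta_encode(values):
    """Replace each value with its difference from the previous one."""
    return [values[0]] + [b - a for a, b in zip(values, values[1:])]

def zigzag(n):
    """Map signed residuals to non-negative ints: 0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4."""
    return 2 * n if n >= 0 else -2 * n - 1

def bit_pack(values, width):
    """Pack small non-negative integers into a bytes object, `width` bits each."""
    bits = 0
    for v in values:
        assert v < (1 << width)
        bits = (bits << width) | v
    nbits = width * len(values)
    return bits.to_bytes((nbits + 7) // 8, "big")

samples = [100, 101, 103, 102, 102, 105]
residuals = delta_encode(samples)[1:]          # keep the first sample separately
packed = bit_pack([zigzag(r) for r in residuals], width=4)
print(residuals, packed.hex())
```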

Improvement of OPW-TR Algorithm for Compressing GPS Trajectory Data

  • Meng, Qingbin;Yu, Xiaoqiang;Yao, Chunlong;Li, Xu;Li, Peng;Zhao, Xin
    • Journal of Information Processing Systems / v.13 no.3 / pp.533-545 / 2017
  • Massive volumes of GPS trajectory data bring challenges to storage and processing. These issues can be addressed by compression algorithms that reduce the size of the trajectory data. A key requirement for a GPS trajectory compression algorithm is to reduce the data size while minimizing the loss of information. Synchronized Euclidean distance (SED) is the error measure adopted by most existing algorithms. In order to further reduce the SED error, an improved version of the open window time ratio (OPW-TR) algorithm, called local optimum open window time ratio (LO-OPW-TR), is proposed. To keep the SED error small, anchor points are selected by calculating each point's accumulated synchronized Euclidean distance (ASED). A variety of error metrics are used to evaluate the algorithm. The experimental results show that, at the same compression ratio, our algorithm yields smaller SED and speed errors than the existing algorithms.
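
The SED error measure used throughout the abstract can be stated compactly: a removed point is compared with the position linearly interpolated on the simplified segment at the same timestamp, and ASED sums these distances over the removed points. A small sketch, assuming each point is a (t, x, y) triple:

```python
import math

def sed(point, seg_start, seg_end):
    """Synchronized Euclidean distance: compare `point` with the position
    interpolated on the segment (seg_start, seg_end) at the same timestamp."""
    t, x, y = point
    t1, x1, y1 = seg_start
    t2, x2, y2 = seg_end
    ratio = 0.0 if t2 == t1 else (t - t1) / (t2 - t1)
    sx = x1 + ratio * (x2 - x1)          # synchronized (time-matched) position
    sy = y1 + ratio * (y2 - y1)
    return math.hypot(x - sx, y - sy)

def ased(points, seg_start, seg_end):
    """Accumulated SED over the points dropped between the two segment endpoints."""
    return sum(sed(p, seg_start, seg_end) for p in points)

# Example: a point measured at t=5 is compared with where the simplified
# segment says the object should have been at t=5.
print(sed((5, 4.0, 4.5), (0, 0.0, 0.0), (10, 10.0, 10.0)))
```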

ECG Data Compression Using Adaptive Fractal Interpolation (적응 프랙탈 보간을 이용한 심전도 데이터 압축)

  • 전영일;윤영로
    • Journal of Biomedical Engineering Research / v.17 no.1 / pp.121-128 / 1996
  • This paper presents an ECG data compression method referred to as the adaptive fractal interpolation (AFI) algorithm. In the previous piecewise fractal interpolation (PFI) algorithm, the range size is fixed, so the reconstruction error of the PFI algorithm is distributed nonuniformly over the original ECG signal. To address this problem, the AFI algorithm uses variable range sizes: if the predetermined tolerance is not satisfied, the range is subdivided into two equal-size blocks. Large ranges are used for encoding smooth waveforms to yield high compression efficiency, and smaller ranges are used for encoding rapidly varying parts of the signal to preserve signal quality. The suggested algorithm was evaluated using the MIT/BIH arrhythmia database. The AFI algorithm was found to yield a lower reconstruction error for a given compression ratio than the PFI algorithm. In applications where a PRD of about 7.13% was acceptable, the AFI algorithm yielded a compression ratio as high as 10.51 without any entropy coding of the fractal code parameters.
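
The adaptive part of the algorithm is the range subdivision rule: a range that cannot be matched within the tolerance is split into two equal halves and each half is tried again. The sketch below shows only that control flow, with a toy error function standing in for the fractal (domain-to-range) matching described in the paper.

```python
def encode_adaptive(signal, lo, hi, tolerance, fit_error, out):
    """Recursively split the range [lo, hi) until each piece meets the tolerance.
    `fit_error(signal, lo, hi)` is a placeholder for the fractal matching error."""
    if hi - lo <= 2 or fit_error(signal, lo, hi) <= tolerance:
        out.append((lo, hi))                 # this range is coded as one fractal piece
        return
    mid = (lo + hi) // 2                     # tolerance not met: split into two equal halves
    encode_adaptive(signal, lo, mid, tolerance, fit_error, out)
    encode_adaptive(signal, mid, hi, tolerance, fit_error, out)

# Toy error: peak deviation from a straight line across the range (illustrative only).
def toy_error(sig, lo, hi):
    n = hi - lo
    return max(abs(sig[lo + i] - (sig[lo] + (sig[hi - 1] - sig[lo]) * i / (n - 1)))
               for i in range(n))

ecg_like = [0, 1, 2, 3, 10, 30, 8, 3, 2, 2, 1, 1, 0, 0, 0, 0]
ranges = []
encode_adaptive(ecg_like, 0, len(ecg_like), tolerance=1.5, fit_error=toy_error, out=ranges)
print(ranges)   # small ranges cluster around the spike, large ranges cover the flat parts
```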

The Compression of Normal Vectors to Prevent Visual Distortion in Shading 3D Mesh Models (3D 메쉬 모델의 쉐이딩 시 시각적 왜곡을 방지하는 법선 벡터 압축에 관한 연구)

  • Mun, Hyun-Sik;Jeong, Chae-Bong;Kim, Jay-Jung
    • Korean Journal of Computational Design and Engineering / v.13 no.1 / pp.1-7 / 2008
  • Data compression is an increasingly important issue for reducing data storage space as well as transmission time in network environments. In 3D geometric models, the normal vectors of faces or meshes take up a major portion of the data, so compressing these vectors, which involves a trade-off between image distortion and compression ratio, plays a key role in reducing model size. It is therefore important both to raise the compression ratio of the normal vectors and to minimize the visual distortion of the shaded model after compression. Recent papers show that normal vector compression is useful for raising the compression ratio and improving memory efficiency, but studies of the shading distortion caused by normal vector compression are relatively rare. In this paper, a new normal vector compression method is proposed that clusters the normal vectors, assigns a Representative Normal Vector (RNV) to each cluster, and uses the angular deviation from the actual normal vector. Using this method, a Visually Undistinguishable Lossy Compression (VULC) algorithm is developed, in which the shading distortion caused by the angular deviation of the normal vectors cannot be identified visually. Applied to complicated shape models, the algorithm proved effective.
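
A minimal sketch of the clustering idea in the abstract: group unit normals, pick a representative normal vector (RNV) per cluster, and bound the angular deviation between each original normal and its RNV. The greedy clustering and the 5-degree threshold below are illustrative assumptions, not the paper's method.

```python
import math

def angular_deviation(n1, n2):
    """Angle in degrees between two unit normal vectors."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(n1, n2))))
    return math.degrees(math.acos(dot))

def cluster_normals(normals, max_dev_deg):
    """Greedy clustering: each normal joins the first RNV within max_dev_deg,
    otherwise it starts a new cluster. (Illustrative, not the paper's method.)"""
    rnvs, labels = [], []
    for n in normals:
        for i, rnv in enumerate(rnvs):
            if angular_deviation(n, rnv) <= max_dev_deg:
                labels.append(i)
                break
        else:
            rnvs.append(n)
            labels.append(len(rnvs) - 1)
    return rnvs, labels

normals = [(0, 0, 1), (0.05, 0, 0.99875), (1, 0, 0), (0.99875, 0.05, 0)]
rnvs, labels = cluster_normals(normals, max_dev_deg=5.0)
print(labels)   # each face now stores only a short cluster index instead of a full vector
```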

FDR Test Compression Algorithm based on Frequency-ordered (Frequency-ordered 기반 FDR 테스트패턴 압축 알고리즘)

  • Mun, Changmin;Kim, Dooyoung;Park, Sungju
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.5 / pp.106-113 / 2014
  • Recently, to reduce test cost by efficiently compressing test patterns for SoCs (System-on-a-Chip), various compression techniques have been proposed, including the FDR (frequency-directed run-length) algorithm. FDR has been extended to EFDR (Extended FDR), SAFDR (Shifted-Alternate FDR), and VPDFDR (Variable Prefix Dual FDR) to improve the compression ratio. In this paper, a frequency-ordered modification is proposed to further increase the compression ratios of FDR, EFDR, SAFDR, and VPDFDR. The compression ratio can be maximized by using the frequency-ordered method, and consequently the overall manufacturing test cost and time can be reduced significantly.
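
Standard FDR coding, which all of these variants build on, maps each run of 0s terminated by a 1 to a variable-length codeword: group k uses a prefix of (k-1) ones followed by a 0 plus a k-bit tail, and covers run lengths 2^k - 2 through 2^(k+1) - 3. The sketch below shows that baseline mapping only, not the paper's frequency-ordered modification.

```python
def fdr_codeword(run_length):
    """FDR codeword for a run of `run_length` zeros followed by a single 1.
    Group k: prefix = (k-1) ones + '0', tail = k-bit offset within the group."""
    k = 1
    while run_length > 2 ** (k + 1) - 3:   # find the group covering this run length
        k += 1
    prefix = "1" * (k - 1) + "0"
    offset = run_length - (2 ** k - 2)
    tail = format(offset, "b").zfill(k)
    return prefix + tail

def fdr_encode(bits):
    """Encode a binary test-pattern string as concatenated FDR codewords.
    Assumes the string ends with a 1 (a trailing run needs a separate convention)."""
    out, run = [], 0
    for b in bits:
        if b == "0":
            run += 1
        else:
            out.append(fdr_codeword(run))
            run = 0
    return "".join(out)

print(fdr_codeword(0), fdr_codeword(1), fdr_codeword(2), fdr_codeword(6))  # 00 01 1000 110000
print(fdr_encode("00000010011"))
```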

Block Truncation Coding using Reduction Method of Chrominance Data for Color Image Compression (색차 데이터 축소 기법을 사용한 BTC (Block Truncation Coding) 컬러 이미지 압축)

  • Cho, Moon-Ki;Yoon, Yung-Sup
    • Journal of the Institute of Electronics Engineers of Korea SD / v.49 no.3 / pp.30-36 / 2012
  • Block truncation coding (BTC) is known as a simple and efficient image compression algorithm. In this paper, we propose the RMC-BTC algorithm (RMC: reduction method for chrominance data) for color image compression. To compress the chrominance data, in every BTC block the RMC-BTC coding represents the chrominance data by its average and uses the luminance bit-map method to represent the chrominance bit-map. Experimental results show the efficiency of the proposed algorithm in terms of PSNR and compression ratio compared with the conventional BTC method.
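
Standard BTC, which the proposed method builds on, replaces each pixel block with two levels and a bit-map. A minimal grayscale sketch follows; the 4x4 block is the usual choice, and the two levels are simply the means of the pixels above and below the block mean (the AMBTC-style simplification), which is one common variant rather than the paper's exact formulation.

```python
def btc_encode_block(block):
    """Encode one grayscale block as (low, high, bitmap).
    Levels are the means of the two pixel groups (AMBTC-style simplification)."""
    pixels = [p for row in block for p in row]
    mean = sum(pixels) / len(pixels)
    bitmap = [[1 if p >= mean else 0 for p in row] for row in block]
    ones = [p for p in pixels if p >= mean]
    zeros = [p for p in pixels if p < mean]
    high = round(sum(ones) / len(ones)) if ones else round(mean)
    low = round(sum(zeros) / len(zeros)) if zeros else round(mean)
    return low, high, bitmap

def btc_decode_block(low, high, bitmap):
    return [[high if b else low for b in row] for row in bitmap]

block = [[12, 15, 200, 210],
         [10, 14, 205, 208],
         [11, 16, 199, 202],
         [13, 12, 201, 207]]
low, high, bitmap = btc_encode_block(block)
print(low, high)                 # two levels plus a 16-bit map replace 16 pixel bytes
print(btc_decode_block(low, high, bitmap))
```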