• Title/Abstract/Keyword: Compression Algorithm


A Coding Mode Image Characteristics-based Fast Direct Mode Decision Algorithm

  • 최영호;한수희;김낙교
    • 전기학회논문지 / Vol. 61 No. 8 / pp. 1199-1203 / 2012
  • H.264 adopts many compression tools to increase image data compression efficiency, such as bi-directional B-frame prediction and direct mode coding. Despite its high compression efficiency, H.264 suffers from long coding times due to its complicated tools. To realize a high-performance H.264 encoder, several fast algorithms have been proposed. One of them is the adaptive fast direct mode decision algorithm using mode and Lagrangian cost prediction for B frames in H.264/AVC (MLP), which can determine the direct coding mode for macroblocks without a complex mode decision process. However, macroblocks that do not satisfy the conditions of the MLP algorithm still require the complex mode decision calculation and thus suffer a long coding time. To overcome this problem, this paper proposes a fast direct mode prediction algorithm. Simulation results show that the proposed algorithm can determine direct mode coding without a complex mode decision process for 42% more macroblocks, and can reduce coding time by up to 23% compared with Jin's algorithm. This enables B frames to be encoded quickly with little quality degradation.

Blind Classification of Speech Compression Methods Using Structural Analysis of Bitstreams

  • 유훈;박철순;박영미;김종호
    • 한국정보통신학회논문지 / Vol. 16 No. 1 / pp. 59-64 / 2012
  • This paper proposes a method for estimating and classifying the coding scheme of a speech signal by analyzing the structure of an arbitrary compressed speech bitstream. A variety of vocoder-based speech compression methods have been developed for low-bit-rate transmission and storage, and all of them contain a block structure. To distinguish the coding schemes, this paper uses the Measure of Inter-Block Correlation (MIBC) to detect the presence of a block structure and to determine the block length; for coding schemes with the same block length, it exploits the fact that each scheme exhibits a different correlation distribution at each bit position within the compressed stream, identifying the scheme accurately. Experimental results show that the proposed bitstream analysis method detects robustly across various speech signal types, signal lengths, and noise environments.
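
The MIBC statistic itself is not specified in this listing, so the following is a hypothetical sketch of the idea only: slice the bitstream at a candidate block length and measure how often corresponding bit positions of consecutive blocks agree; for a block-structured vocoder stream, the true frame length should maximize this agreement. The function names and the agreement measure are assumptions for illustration.

```python
def bits(data: bytes):
    # Expand a byte string into a list of bits, MSB first.
    return [(b >> (7 - i)) & 1 for b in data for i in range(8)]

def block_correlation(bitstream, length):
    # Average agreement between corresponding bit positions of
    # consecutive candidate blocks of `length` bits.
    n_blocks = len(bitstream) // length
    if n_blocks < 2:
        return 0.0
    agree = total = 0
    for k in range(n_blocks - 1):
        a = bitstream[k * length:(k + 1) * length]
        b = bitstream[(k + 1) * length:(k + 2) * length]
        agree += sum(x == y for x, y in zip(a, b))
        total += length
    return agree / total

def estimate_block_length(data: bytes, candidates):
    # Pick the candidate block length (in bits) with the highest
    # inter-block agreement.
    stream = bits(data)
    return max(candidates, key=lambda length: block_correlation(stream, length))
```

Because fixed header fields recur at the same offset in every frame, the correct length stands out even when the payload bits look random.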

Rebuilding of an Image Compression Algorithm Based on the DCT (Discrete Cosine Transform)

  • 남수태;진찬용
    • 한국정보통신학회논문지 / Vol. 23 No. 1 / pp. 84-89 / 2019
  • JPEG is the most widely used image compression standard. This paper introduces the JPEG image compression algorithm and describes each step of compression and decompression. Image compression is the process of applying data compression to a digital image. The discrete cosine transform (DCT) converts a signal from the spatial domain to the frequency domain. First, the image is divided into 8 by 8 pixel blocks. Second, proceeding from top to bottom and left to right, the DCT is applied to each block. Third, each block is compressed through quantization. Fourth, the matrix of compressed blocks that makes up the image is stored in a greatly reduced amount of space. Finally, when desired, the image is reconstructed through decompression, a process that uses the inverse discrete cosine transform (IDCT). The purpose of this study is to compress, decompress, and reconstruct images using the DCT technique.
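
The block/DCT/quantize/IDCT steps above can be sketched end-to-end. This is a minimal, unoptimized illustration of one 8x8 block only: a naive O(N^4) orthonormal DCT-II rather than a fast transform, and a single flat quantization step `q` in place of JPEG's quantization matrix and entropy coding.

```python
import math

N = 8  # JPEG block size

def _c(u):
    # Orthonormal DCT scale factor.
    return math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)

def dct2(block):
    # Naive 2-D DCT-II of an 8x8 block (spatial -> frequency domain).
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = _c(u) * _c(v) * s
    return out

def idct2(coeffs):
    # Inverse transform (frequency -> spatial domain).
    out = [[0.0] * N for _ in range(N)]
    for x in range(N):
        for y in range(N):
            s = 0.0
            for u in range(N):
                for v in range(N):
                    s += (_c(u) * _c(v) * coeffs[u][v]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[x][y] = s
    return out

def quantize(coeffs, q=16):
    # Lossy step: small coefficients round to zero.
    return [[round(c / q) for c in row] for row in coeffs]

def dequantize(qcoeffs, q=16):
    return [[c * q for c in row] for row in qcoeffs]
```

A flat block survives the full round trip exactly (only the DC coefficient is nonzero), while textured blocks lose high-frequency detail in proportion to `q`.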

An Improved LZW Compression Algorithm for Mixed Text Files in Embedded Systems

  • 조미남;지유강
    • 한국콘텐츠학회논문지 / Vol. 10 No. 12 / pp. 70-76 / 2010
  • In recent information and communication terminals such as smartphones and embedded systems, improving the compression ratio, i.e., reducing data size, has become very important for data transmission, reception, and distributed processing. The LZW (Lempel-Ziv-Welch) algorithm is generally used for text compression. However, while LZW is efficient for 1-byte composable text (e.g., alphabetic text), its compression ratio degrades markedly for 2-byte precomposed text (e.g., Hangul). To overcome this, this paper proposes an extended ELZW (EBCDIC Lempel-Ziv-Welch) algorithm that uses a 2-byte prefix field and a 1-byte suffix field holding a repeat count. The proposed algorithm builds a compression dictionary to increase the compression ratio, packing alphabetic characters, Hangul, and pointers into different bit strings as appropriate. To analyze its performance, English, Korean, and mixed Korean-English texts of 140,355 bytes each were compared experimentally; the results show that the compression ratio of the proposed ELZW algorithm is 5.22% better than that of the conventional 1-byte LZW algorithm and 8.96% better than that of the 2-byte LZW algorithm.
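
The ELZW variant with its prefix/suffix packing is specific to the paper, but it builds on the textbook LZW scheme, which can be sketched as follows (byte-oriented, dictionary initialized to all 256 single-byte strings; the paper's 2-byte field layout is not reproduced):

```python
def lzw_compress(data: bytes):
    # Textbook LZW: emit the dictionary code of the longest known prefix,
    # then register that prefix extended by one byte as a new entry.
    dictionary = {bytes([i]): i for i in range(256)}
    w = b""
    codes = []
    for b in data:
        wb = w + bytes([b])
        if wb in dictionary:
            w = wb
        else:
            codes.append(dictionary[w])
            dictionary[wb] = len(dictionary)
            w = bytes([b])
    if w:
        codes.append(dictionary[w])
    return codes

def lzw_decompress(codes):
    # Rebuild the same dictionary on the fly from the code stream.
    dictionary = {i: bytes([i]) for i in range(256)}
    w = dictionary[codes[0]]
    out = bytearray(w)
    for code in codes[1:]:
        if code in dictionary:
            entry = dictionary[code]
        else:
            # Code not yet defined: the classic "cScSc" special case.
            entry = w + w[:1]
        out += entry
        dictionary[len(dictionary)] = w + entry[:1]
        w = entry
    return bytes(out)
```

On 2-byte precomposed text every Hangul syllable spans two dictionary symbols, which is exactly the inefficiency the paper's 2-byte prefix field targets.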

Object-Based Image Compression Using the QP (Quadratic Programming) Method

  • 최유태;이상엽;곽대호;김시내;송문호
    • 대한전자공학회:학술대회논문집 / 2000 Fall Conference Proceedings (4) / pp. 175-178 / 2000
  • Object-level image compression is a useful technique for reducing the amount of data required and for manipulating individual objects. In this paper, we propose a new image object compression algorithm that uses the quadratic programming (QP) method to reduce the compressed data. The results indicate the superiority of the proposed QP-based algorithm over the low-pass extrapolation (LPE) method of MPEG-4.


Segmented Douglas-Peucker Algorithm Based on the Node Importance

  • Wang, Xiaofei;Yang, Wei;Liu, Yan;Sun, Rui;Hu, Jun;Yang, Longcheng;Hou, Boyang
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14 No. 4 / pp. 1562-1578 / 2020
  • Vector data compression algorithms can meet requirements at different levels and scales by reducing the data volume of vector graphics, thereby reducing transmission time, processing time, and storage overhead. Because a large threshold leads to comparatively large error in the Douglas-Peucker vector data compression algorithm, which has difficulty maintaining shape features and selecting thresholds, a segmented Douglas-Peucker algorithm based on node importance is proposed. First, the algorithm uses the vertical chord ratio as the main feature to detect and extract the critical points that contribute most to the shape of the curve, ensuring its basic shape. Then, combined with a radial distance constraint, it selects maximum points as critical points and introduces a scale-related threshold to merge and adjust the critical points, realizing local feature extraction between pairs of critical points to meet the accuracy requirements. Finally, the improved algorithm is analyzed and evaluated qualitatively and quantitatively on a large number of different vector data sets. Experimental results indicate that the improved vector data compression algorithm outperforms the Douglas-Peucker algorithm in shape retention, compression error, simplification of results, and time efficiency.
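
For reference, the baseline Douglas-Peucker routine that the paper improves on can be sketched as follows (the segmented/node-importance extensions are not reproduced):

```python
import math

def _point_line_dist(p, a, b):
    # Perpendicular distance from point p to the line through a and b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * (px - ax) - dx * (py - ay)) / math.hypot(dx, dy)

def douglas_peucker(points, epsilon):
    # Keep the endpoints; if the farthest intermediate point deviates
    # more than epsilon from the chord, keep it and recurse on both halves.
    if len(points) < 3:
        return list(points)
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = _point_line_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, idx = d, i
    if dmax <= epsilon:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:idx + 1], epsilon)
    right = douglas_peucker(points[idx:], epsilon)
    return left[:-1] + right  # drop the duplicated split point
```

The threshold sensitivity the paper criticizes is visible here: one global `epsilon` decides every keep/drop, with no notion of a point's importance to the overall shape.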

An Efficient DNA Sequence Compression using Small Sequence Pattern Matching

  • Murugan, A.;Punitha, K.
    • International Journal of Computer Science & Network Security / Vol. 21 No. 8 / pp. 281-287 / 2021
  • Bioinformatics blends biology and informatics technologies, employing statistical methods and approaches to address issues in nutrition, medical research, and the study of the living environment. The ceaseless growth of DNA sequencing technologies has produced voluminous genomic data, especially DNA sequences, calling for increased storage and bandwidth. Bioinformatics currently confronts the major hurdle of managing, interpreting, and accurately preserving this hefty information, and compression is a beacon of hope for resolving these issues. To keep storage efficient, a methodology is recommended, together with a competent algorithm that aids exact matching of small patterns. The DNA representation sequence is then used to find matches of 2 to 6 bases against the remaining input sequence. The process transforms the DNA sequence into ASCII symbols in the first level, compresses it with the LZ77 method in the second level, and then forms grid variables of size 3 to hold 100 characters each; in the third level, the compressed output is held in the grid variables. The proposed S_Pattern DNA algorithm gives an average compression ratio of 93%, better than existing compression algorithms on the datasets from the UCI repository.
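
The multi-level S_Pattern pipeline is specific to the paper, but the foundational step shared by most DNA compressors, packing the four-letter alphabet into 2 bits per base, can be sketched like this (function names are illustrative, not the paper's):

```python
def pack_dna(seq: str) -> bytes:
    # Pack A/C/G/T into 2 bits each, four bases per byte. The final byte
    # is zero-padded, so the original length must be stored alongside.
    code = {"A": 0, "C": 1, "G": 2, "T": 3}
    out = bytearray()
    acc = nbits = 0
    for base in seq:
        acc = (acc << 2) | code[base]
        nbits += 2
        if nbits == 8:
            out.append(acc)
            acc = nbits = 0
    if nbits:
        out.append(acc << (8 - nbits))  # left-align the partial byte
    return bytes(out)

def unpack_dna(data: bytes, length: int) -> str:
    # Reverse the packing; `length` trims the padding bases.
    inv = "ACGT"
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(inv[(byte >> shift) & 3])
    return "".join(bases[:length])
```

This alone gives a fixed 4:1 reduction over one-byte-per-base text; the paper's pattern matching and LZ77 levels then exploit repeats on top of such a compact representation.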

Improving the Compression Ratio Using the Variance of the Luminance Component of Intra Macroblocks

  • 김준;김영섭
    • 반도체디스플레이기술학회지 / Vol. 12 No. 1 / pp. 35-39 / 2013
  • H.264/AVC is an authoritative international video coding standard that achieves higher coding efficiency than previous video standards. Above all, the block mode decision of H.264/AVC contributes significantly to its high compression efficiency. However, parts that could yield slightly higher compression efficiency are sometimes overlooked. Taking note of this point, we aim to obtain higher compression efficiency by gathering up these overlooked parts. This paper proposes an algorithm that achieves an efficient performance improvement by using the histogram of luminance in each prediction macroblock and applying specific thresholds. The experimental results prove that the proposed technique increases the compression efficiency of the existing H.264/AVC algorithm by 0.4% without increasing the overall encoding time or complexity and without PSNR loss.
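
The exact thresholds and histogram scheme are not given in this summary; as a hypothetical illustration, the basic measurement involved, the variance of the luminance samples in a macroblock, used to separate flat regions from textured ones, looks like this (the threshold value is an assumption):

```python
def luma_variance(mb):
    # Population variance of the luminance samples in a macroblock,
    # given as a list of rows (e.g., 16x16 for H.264 macroblocks).
    samples = [p for row in mb for p in row]
    n = len(samples)
    mean = sum(samples) / n
    return sum((p - mean) ** 2 for p in samples) / n

def is_flat(mb, threshold=25.0):
    # Hypothetical decision rule: low-variance (flat) macroblocks can
    # often be coded with larger partitions / fewer candidate modes.
    return luma_variance(mb) < threshold
```

Such a cheap pre-classification is a common way to recover efficiency without adding encoding time, since the variance is computed once per macroblock.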

Data Compression Algorithm with Two-Byte Codeword Representation

  • 양영일;김도현
    • 전자공학회논문지C / Vol. 34C No. 3 / pp. 23-36 / 1997
  • In this paper, a new data model for the hardware implementation of the Lempel-Ziv compression algorithm is proposed. The traditional model generates a codeword consisting of 3 bytes: the last symbol, the position, and the matched length. The MSB (most significant bit) of the last symbol is the compression flag, and the remaining seven bits represent the character. We confine the value of the matched length to 128 instead of 256, so that it can be coded with seven bits only. In the proposed model, the codeword consists of 2 bytes: the merged symbol and the position. The MSB of the merged symbol is the compression flag, and the remaining seven bits represent either the character or the matched length, according to the value of the compression flag. The proposed model reduces the compression ratio by 5% compared with the traditional model and can be adopted in existing hardware architectures. The factors affecting the compression ratio are also analyzed in this paper.
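
The two-byte codeword layout described above can be sketched directly: the compression flag in the MSB, a 7-bit field holding either a literal character or a matched length up to 128, and a position byte (field and function names are illustrative):

```python
def pack_codeword(is_match: bool, value: int, position: int) -> bytes:
    # First byte: compression flag (MSB) merged with a 7-bit value that
    # is a literal character (flag=0) or a matched length (flag=1).
    # Second byte: the window position.
    assert 0 <= value < 128 and 0 <= position < 256
    return bytes([(0x80 if is_match else 0x00) | value, position])

def unpack_codeword(cw: bytes):
    # Recover (flag, value, position) from a two-byte codeword.
    first, position = cw[0], cw[1]
    return bool(first & 0x80), first & 0x7F, position
```

Capping the matched length at 128 is what frees the flag bit, shrinking every codeword from 3 bytes to 2 at the cost of splitting longer matches.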


Fast 3D Mesh Compression Using Shared Vertex Analysis

  • Jang, Euee-Seon;Lee, Seung-Wook;Koo, Bon-Ki;Kim, Dai-Yong;Son, Kyoung-Soo
    • ETRI Journal / Vol. 32 No. 1 / pp. 163-165 / 2010
  • A trend in 3D mesh compression is codec design with low computational complexity that preserves the input vertex and face order; however, preserving this order adds complexity. We present a fast 3D mesh compression method that compresses the redundant shared vertex information between neighboring faces using simple first-order differential coding followed by fast entropy coding with a fixed-length prefix. Our algorithm is feasible for low-complexity designs and maintains the order, and it is now part of the MPEG-4 scalable complexity 3D mesh compression standard. The proposed algorithm is 30 times faster than the MPEG-4 3D mesh coding extension.
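
The first-order differential coding of shared vertex indices mentioned above reduces to delta coding of the index stream; a minimal sketch (the subsequent entropy coding with a fixed-length prefix is omitted):

```python
def delta_encode(indices):
    # First-order differential coding: store the first index, then
    # successive differences. For meshes whose neighboring faces share
    # vertices, the differences cluster near zero, which a subsequent
    # entropy coder can exploit.
    out = [indices[0]]
    out += [b - a for a, b in zip(indices, indices[1:])]
    return out

def delta_decode(deltas):
    # Invert by cumulative summation, restoring the original order.
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out
```

Because decoding is a single cumulative pass, the scheme fits the low-complexity, order-preserving goal the abstract describes.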