• Title/Summary/Keyword: CODEBOOK

Search Results: 346

Vector Quantization for Medical Image Compression Based on DCT and Fuzzy C-Means

  • Supot, Sookpotharom;Nopparat, Rantsaena;Surapan, Airphaiboon;Manas, Sangworasil
    • Proceedings of the IEEK Conference / 2002.07a / pp.285-288 / 2002
  • Compression of magnetic resonance images (MRI) has proved to be more difficult than that of other medical imaging modalities. In an average-sized hospital, many terabytes of digital imaging data (MRI) are generated every year, almost all of which has to be kept. Medical image compression is currently performed using a variety of algorithms. In this paper, the Fuzzy C-Means (FCM) algorithm is used for Vector Quantization (VQ). First, a digital image is divided into fixed-size subblocks of 4×4 pixels. A 2-D Discrete Cosine Transform (DCT) is applied to each block, and six DCT coefficients are selected to form the feature vector. The FCM algorithm is then used to construct the VQ codebook. In this way, the algorithm achieves good image quality and reduces the processing time needed to construct the VQ codebook. (A rough code sketch of this pipeline is given below.)

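The pipeline described in this abstract (4×4 blocks, 2-D DCT, six retained coefficients, FCM clustering of the resulting feature vectors) can be sketched roughly as follows. This is a minimal NumPy/SciPy illustration, not the authors' implementation; the particular six coefficient positions, the fuzzifier m=2, the codebook size, and the iteration count are all assumptions.

```python
import numpy as np
from scipy.fft import dctn

def block_features(image, block=4, n_coef=6):
    """Split the image into 4x4 blocks, apply a 2-D DCT to each block, and
    keep n_coef low-frequency coefficients (assumed zig-zag positions)."""
    h, w = image.shape
    zigzag = [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]  # assumed selection
    feats = []
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            coefs = dctn(image[i:i + block, j:j + block], norm='ortho')
            feats.append([coefs[r, c] for r, c in zigzag[:n_coef]])
    return np.asarray(feats)

def fcm_codebook(X, n_codewords=64, m=2.0, n_iter=50, seed=0):
    """Fuzzy C-Means: alternate membership and centroid updates; the final
    cluster centres serve as the VQ codebook."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_codewords))
    U /= U.sum(axis=1, keepdims=True)                 # fuzzy memberships
    for _ in range(n_iter):
        Um = U ** m
        C = (Um.T @ X) / Um.sum(axis=0)[:, None]      # weighted centroids
        d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=1, keepdims=True)
    return C                                          # codebook of DCT-domain codevectors

# Usage (hypothetical input): codebook = fcm_codebook(block_features(mri_slice), n_codewords=64)
```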

A Comparison of Discrete and Continuous Hidden Markov Models for Korean Digit Recognition (한국어 숫자음 인식을 위한 이산분포 HMM과 연속분포 HMM의 성능 비교 연구)

  • 홍형진
    • Proceedings of the Acoustical Society of Korea Conference / 1994.06c / pp.157-160 / 1994
  • In this paper, the recognition performance of discrete-density HMMs and continuous-density HMMs for Korean digit recognition is compared. In general, a continuous-density HMM requires a large amount of computation and is very sensitive to its initial values during training, but it can improve the recognition rate by removing the distortion that VQ introduces into a discrete-density HMM. For the comparison, recognition performance was measured while varying the mel-cepstrum analysis order, the codebook size of the discrete HMM, and the number of mixtures of the continuous HMM. The experiments showed that the discrete HMM performed best with a 14th-order mel-cepstrum vector and a codebook of size 64, while the continuous HMM obtained its best results with a 16th-order mel-cepstrum vector and 3 mixtures. In particular, when only a small amount of training data was available, the continuous HMM showed a higher recognition rate than the discrete HMM. (A sketch of the VQ front end used by the discrete HMM follows below.)

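The discrete-HMM front end compared above needs a VQ codebook over the mel-cepstrum vectors. The paper does not detail how the codebook was built; a common choice is LBG/k-means. Below is a minimal sketch under that assumption, with 14-dimensional frames and a 64-entry codebook taken from the reported best configuration.

```python
import numpy as np

def train_codebook(frames, size=64, n_iter=20, seed=0):
    """Plain k-means (a stand-in for LBG) over mel-cepstrum frames;
    the resulting centroids form the discrete-HMM observation codebook."""
    rng = np.random.default_rng(seed)
    codebook = frames[rng.choice(len(frames), size, replace=False)].astype(float)
    for _ in range(n_iter):
        dists = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        for k in range(size):
            members = frames[labels == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook

def quantize(frames, codebook):
    """Map each frame to its nearest codeword index, giving the discrete
    observation (symbol) sequence fed to the HMM."""
    dists = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=2)
    return np.argmin(dists, axis=1)

# frames: (n_frames, 14) array of 14th-order mel-cepstrum vectors
# symbols = quantize(frames, train_codebook(frames, size=64))
```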

An Image Compression Technique with Lifting Scheme and PVQ (Lifting Scheme과 PVQ를 이용한 영상압축 기법)

  • 정전대;김학렬;신재호
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1996.06a / pp.159-163 / 1996
  • In this paper, a new image compression technique using the lifting scheme and pyramid vector quantization (PVQ) is proposed. The lifting scheme is a recent technique for generating wavelets and performing the wavelet transform, and pyramid vector quantization is a kind of vector quantization that requires neither a codebook nor a codebook generation algorithm. To achieve a higher compression ratio, an arithmetic entropy coder is used. The proposed algorithm is compared with other wavelet-based image coders and with JPEG, which uses the DCT and an adaptive Huffman entropy coder. Simulation results show that the proposed algorithm performs much better than the others in terms of PSNR and bits per pixel (bpp). (A minimal lifting example is shown below.)

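The lifting scheme itself is easy to illustrate. Below is a minimal one-level 1-D Haar lifting step (split, predict, update) with its inverse; the Haar wavelet is only an example, since the abstract does not state which wavelet was used, and the PVQ and arithmetic-coding stages are omitted.

```python
import numpy as np

def haar_lift_forward(x):
    """One level of 1-D lifting with the Haar wavelet: split into even/odd
    samples, predict the odd from the even, then update the even with the detail."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even            # predict step: odd samples predicted by even ones
    approx = even + detail / 2.0   # update step: preserves the running average
    return approx, detail

def haar_lift_inverse(approx, detail):
    """Undo the lifting steps in reverse order."""
    even = approx - detail / 2.0
    odd = detail + even
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 9.0])
a, d = haar_lift_forward(x)
assert np.allclose(haar_lift_inverse(a, d), x)   # perfect reconstruction
```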

A study on the Image Signal Compress using SOM with Isometry (Isometry가 적용된 SOM을 이용한 영상 신호 압축에 관한 연구)

  • Chang, Hae-Ju;Kim, Sang-Hee;Park, Won-Woo
    • Proceedings of the KIEE Conference / 2004.11c / pp.358-360 / 2004
  • Digital images contain a significant amount of redundancy and require a large amount of data for storage and transmission; image compression is therefore necessary to handle digital images efficiently. The goal of image compression is to reduce the number of bits required for their representation. Image compression can reduce the size of image data by using a contractive mapping of the original image. In these methods, the mapping is an affine transformation used to find the block (called a range block) that is most similar to the original image block. In this paper, we apply a neural network (SOM) in the encoding stage. To improve compression performance, we aim to reduce redundant and unnecessary codebook entries by comparing them with the original blocks. In standard image coding of this kind, the affine transform is performed with eight isometries that are used to approximate domain blocks by range blocks. (These isometries are illustrated in the sketch below.)

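The eight isometries mentioned in the abstract are the standard symmetries of a square block: the identity, three rotations, and four mirrored versions. A small sketch of generating them and choosing the best one for a given range block is shown below; the SOM-based codebook construction itself is not reproduced here.

```python
import numpy as np

def isometries(block):
    """The eight standard isometries of a square block: identity, rotations
    by 90/180/270 degrees, and the mirrored version of each."""
    rotations = [np.rot90(block, k) for k in range(4)]
    return rotations + [np.fliplr(r) for r in rotations]

def best_isometry(domain_block, range_block):
    """Pick the isometry of the domain block that best approximates the range block."""
    candidates = isometries(domain_block)
    errors = [np.sum((c - range_block) ** 2) for c in candidates]
    k = int(np.argmin(errors))
    return k, candidates[k]

# Usage: idx, transformed = best_isometry(domain_block, range_block)
```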

Novelty Detection using SOM-based Methods (자기구성지도 기반 방법을 이용한 이상 탐지)

  • Lee, Hyeong-Ju;Jo, Seong-Jun
    • Proceedings of the Korean Operations and Management Science Society Conference / 2005.05a / pp.599-606 / 2005
  • Novelty detection involves identifying novel patterns, which are usually not available during training. Even when a few are available, the imbalance in data quantity leads to low classification accuracy when a supervised learning scheme is employed. Thus, an unsupervised learning scheme is often employed, ignoring the few novel patterns. In this paper, we propose two ways to make use of the few available novel patterns. First, a scheme for determining local thresholds on the Self-Organizing Map boundary is proposed. Second, a modification of the Learning Vector Quantization learning rule is proposed that keeps codebook vectors as far from novel patterns as possible. Experimental results are quite promising. (The modified update rule is illustrated below.)

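A rough illustration of the second idea is given below: an LVQ1-style update in which the winning codebook vector is pulled toward normal patterns and pushed away from novel ones. The learning rate and the exact form of the update are assumptions, not the authors' modified rule.

```python
import numpy as np

def lvq_update(codebook, x, is_novel, lr=0.05):
    """LVQ1-style update: the codebook vector nearest to x is pulled toward x
    for a normal pattern and pushed away from x for a novel pattern, keeping
    codebook vectors away from the novelty region."""
    winner = int(np.argmin(np.linalg.norm(codebook - x, axis=1)))
    sign = -1.0 if is_novel else 1.0
    codebook[winner] += sign * lr * (x - codebook[winner])
    return codebook
```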

Improved Wavelet Image Compression Using Correlation of VQ index (VQ 인덱스의 상관도를 이용한 향상된 웨이브렛 영상 압축)

  • Hwang, Jae-Ho;Hong, Chung-Seon;Lee, Dae-Yeong
    • The Transactions of the Korea Information Processing Society / v.7 no.6 / pp.1956-1963 / 2000
  • In this paper, a wavelet image coding scheme exploiting the correlation of neighboring VQ indices in the wavelet domain is proposed. The codewords in each sub-codebook are re-ordered in terms of their energies in order to increase the correlation of the indices. The indices generated by VQ can then be further encoded by a non-adaptive DPCM/Huffman method. The LBG algorithm and a fast PNN algorithm using k-d trees are used for generating a multiresolution codebook. Experimental results show that our scheme outperforms ordinary wavelet VQ and JPEG at low bit rates. (The reordering and DPCM steps are sketched below.)

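The index-correlation idea can be sketched as follows: codewords in each sub-codebook are sorted by energy so that neighboring blocks tend to receive numerically close indices, and the index field is then DPCM-coded. The Huffman stage and the multiresolution codebook structure are omitted, and the details below are illustrative rather than the paper's exact procedure.

```python
import numpy as np

def reorder_by_energy(codebook):
    """Sort codewords by energy so that similar codewords receive nearby indices."""
    order = np.argsort(np.sum(codebook ** 2, axis=1))
    return codebook[order]

def vq_encode(blocks, codebook):
    """Nearest-codeword index for each block vector."""
    dists = np.linalg.norm(blocks[:, None, :] - codebook[None, :, :], axis=2)
    return np.argmin(dists, axis=1)

def dpcm(indices):
    """Non-adaptive DPCM of the index sequence: keep the first index and the
    differences; correlated indices give small differences, which an entropy
    coder (Huffman in the paper) can compress well."""
    return np.concatenate(([indices[0]], np.diff(indices)))

def inverse_dpcm(residual):
    return np.cumsum(residual)
```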

Modified K-means Algorithm (수정된 K-means 알고리즘)

  • 조제황
    • The Journal of the Acoustical Society of Korea / v.19 no.7 / pp.23-27 / 2000
  • We provide a useful method for designing codebooks with better performance than conventional methods. In the proposed method, the new codevectors obtained in the early learning iterations are not the centroid vectors representing the partitions, but vectors adjusted according to the distance between the new and old codevectors. Experimental results show that the codevectors obtained by the proposed method converge to a better locally optimal codebook. (One possible form of this update is sketched below.)

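The abstract does not give the exact manipulation of the codevectors. One plausible reading is an over-relaxed update that, during the early iterations, moves each codevector past its partition centroid along the direction from the old codevector; the sketch below follows that assumption, with the factor alpha and the number of early iterations chosen arbitrarily.

```python
import numpy as np

def modified_kmeans_step(X, codebook, iteration, n_early=5, alpha=0.5):
    """One K-means iteration in which, during the early iterations, the new
    codevector is pushed past the partition centroid along the direction
    (centroid - old codevector); alpha and n_early are assumed values."""
    labels = np.argmin(
        np.linalg.norm(X[:, None, :] - codebook[None, :, :], axis=2), axis=1)
    new_codebook = codebook.astype(float).copy()
    for k in range(len(codebook)):
        members = X[labels == k]
        if len(members) == 0:
            continue                                  # keep empty cells unchanged
        centroid = members.mean(axis=0)
        if iteration < n_early:
            new_codebook[k] = centroid + alpha * (centroid - codebook[k])
        else:
            new_codebook[k] = centroid                # standard K-means update later on
    return new_codebook
```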

A Simple Algorithm for Fast Codebook Search in Image Vector Quantization (벡터 양자화에서 벡터의 특성을 이용한 단축 탐색방법)

  • Koh, Jong-Seog;Kim, Jae-Kyoon;Kim, Seong-Dae
    • Proceedings of the KIEE Conference / 1987.07b / pp.1434-1437 / 1987
  • We present a very simple algorithm for reducing the encoding (codebook search) complexity of vector quantization (VQ) by exploiting some features of the vector currently being encoded. A proposed VQ with a vector dimension of 16 (4×4) and 256 codewords shows only a slight performance degradation of about 0.1-0.9 dB while searching just 16 or 32 of the 256 codewords, i.e., with only 1/16 or 1/8 of the search complexity of a full-search VQ. The proposed VQ scheme is also compared with tree-search VQ and shown to be slightly superior in terms of SNR performance and memory requirements. (A feature-guided partial search of this kind is sketched below.)

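The abstract does not specify which vector feature guides the search. A common choice is the block mean: codewords are sorted by mean, and only a small window of codewords whose means are closest to the input vector's mean is examined. The sketch below uses that assumption with a window of 16 candidates.

```python
import numpy as np

def build_mean_index(codebook):
    """Sort the codewords by their mean value; the mean is the block 'feature'
    assumed here to guide the partial search."""
    means = codebook.mean(axis=1)
    order = np.argsort(means)
    return codebook[order], means[order]

def partial_search(x, sorted_codebook, sorted_means, n_candidates=16):
    """Examine only the n_candidates codewords whose means are closest to the
    input vector's mean, instead of all codewords (full search)."""
    pos = int(np.searchsorted(sorted_means, x.mean()))
    lo = max(0, min(pos - n_candidates // 2, len(sorted_codebook) - n_candidates))
    window = sorted_codebook[lo:lo + n_candidates]
    best = int(np.argmin(np.linalg.norm(window - x, axis=1)))
    return lo + best   # index into the mean-sorted codebook
```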

Adaptive subband vector quantization using motion vector (움직임 벡터를 이용한 적응적 부대역 벡터 양자화)

  • 이성학;이법기;이경환;김덕규
    • Proceedings of the IEEK Conference / 1998.06a / pp.677-680 / 1998
  • In this paper, we propose low bit rate subband coding with adaptive vector quantization that uses the correlation between the motion vector and the block energy in each subband. In this method, the difference between the input signal and the motion-compensated interframe prediction signal is decomposed into several narrow bands using a quadrature mirror filter (QMF) structure. The subband signals are then quantized by adaptive vector quantizers. In the codebook generation process, blocks are classified into regions according to the magnitude of the motion vector and the variance of the subband block, and each region is represented by codevectors generated from blocks of the same region. Because the codebook is generated considering the energy distribution of each region classified by motion vector and subband block variance, this technique gives very good visual quality at low bit rate coding. (The region classification is sketched below.)

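A rough sketch of the region classification and codebook selection is given below. The two-way split on motion-vector magnitude and block variance, the thresholds, and the four-region layout are assumptions for illustration; the paper's actual classification rule may differ.

```python
import numpy as np

def classify_block(motion_vector, block, mv_thresh=2.0, var_thresh=50.0):
    """Assign a subband block to one of four regions using the magnitude of
    its motion vector and its variance; the thresholds are placeholders."""
    high_motion = np.hypot(motion_vector[0], motion_vector[1]) > mv_thresh
    high_energy = block.var() > var_thresh
    return 2 * int(high_motion) + int(high_energy)   # region id in {0, 1, 2, 3}

def adaptive_vq_encode(block, motion_vector, region_codebooks):
    """Quantize the block with the codebook trained for its region."""
    region = classify_block(motion_vector, block)
    codebook = region_codebooks[region]              # one codebook per region
    v = block.ravel()
    idx = int(np.argmin(np.linalg.norm(codebook - v, axis=1)))
    return region, idx
```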

A Packet Loss Concealment Algorithm Based on Multiple Adaptive Codebooks Using Comfort Noise (Comfort Noise를 이용한 다중 적응 코드북 기반 패킷 손실 은닉 알고리즘)

  • Park, Nam-In;Kim, Hong-Kook
    • Proceedings of the IEEK Conference / 2008.06a / pp.873-874 / 2008
  • In this paper, we propose a packet loss concealment (PLC) algorithm for CELP speech coders that recovers lost packets using multiple adaptive codebooks and comfort noise. The multiple adaptive codebooks consist of a conventional adaptive codebook that models the periodic excitation of speech and another adaptive codebook that provides a better estimate of the excitation when packets are lost in a speech onset region. The performance of the proposed PLC algorithm is evaluated by implementing it in the G.729 decoder and comparing it with the PLC algorithm employed in the G.729 decoder by means of perceptual evaluation of speech quality (PESQ). Experiments under bursty packet loss conditions at rates of 3% and 5% show that the proposed PLC algorithm provides higher PESQ scores than the G.729 PLC algorithm. (A heavily simplified concealment sketch follows below.)

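Since the G.729 internals cannot be reproduced here, the sketch below only illustrates the general flavor of such concealment: the excitation of a lost frame is rebuilt from the pitch-lagged (adaptive-codebook) excitation of the previous frame and mixed with comfort noise. The frame length, gains, and mixing rule are illustrative assumptions, not the proposed algorithm or the G.729 PLC.

```python
import numpy as np

def conceal_excitation(prev_excitation, pitch_lag, noise_level,
                       voiced=True, attenuation=0.9, frame_len=80, seed=0):
    """Rebuild the excitation of one lost frame: repeat the pitch-lagged
    (adaptive-codebook) excitation of the previous frame with attenuation,
    and mix in comfort noise scaled to the estimated background level.
    frame_len=80 corresponds to 10 ms at 8 kHz (an assumption)."""
    rng = np.random.default_rng(seed)
    reps = frame_len // pitch_lag + 1
    periodic = np.tile(prev_excitation[-pitch_lag:], reps)[:frame_len]
    comfort = noise_level * rng.standard_normal(frame_len)
    weight = 0.8 if voiced else 0.2   # favour the periodic part in voiced frames
    return attenuation * (weight * periodic + (1.0 - weight) * comfort)
```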