• Title/Summary/Keyword: DCT(coefficient)

Search Results: 113

A New Steganographic Method with Minimum Distortion (최소 왜곡을 위한 새로운 스테가노그래피 방법)

  • Zhang, Rongyue;Md, Amiruzzaman;Kim, Hyoung-Joong
    • 한국정보통신설비학회:학술대회논문집
    • /
    • 2008.08a
    • /
    • pp.201-204
    • /
    • 2008
  • In this paper, a new steganographic method with minimum distortion is presented. The paper focuses on the DCT rounding error and optimizes it in a simple way, so that the resulting stego image has less distortion than those produced by other existing methods. The proposed method is compared with the F5 steganography algorithm and achieves better performance. The DCT rounding error is exploited to obtain lower distortion with a possibly higher embedding capacity.
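Since the abstract turns on exploiting the DCT rounding error, the minimal Python sketch below (an illustration with made-up inputs, not the authors' exact embedding rule) shows how quantized coefficients can be ranked by how cheaply they could be perturbed during embedding:

```python
import numpy as np

def embedding_cost(dct_real, q_step):
    """Rank coefficients by how cheaply they could be changed.

    dct_real : real-valued DCT coefficients before rounding
    q_step   : quantization step size
    Returns (cost, levels): cost is the extra distortion, in step units,
    of moving a coefficient to the next-nearest integer level instead of
    the nearest one; it is small when the value already sits close to a
    rounding boundary.
    """
    scaled = dct_real / q_step
    levels = np.round(scaled)
    err = scaled - levels                      # in [-0.5, 0.5]
    return 1.0 - np.abs(err), levels.astype(int)

# Toy example: pick the 4 positions that are cheapest to modify.
rng = np.random.default_rng(0)
coeffs = rng.normal(scale=20.0, size=64)       # one 8x8 block, flattened
cost, levels = embedding_cost(coeffs, q_step=10.0)
cheapest = np.argsort(cost)[:4]
print("embed here:", cheapest, "cost:", np.round(cost[cheapest], 3))
```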

3D Image Coding Using DCT and Hierarchical Segmentation Vector Quantization (DCT와 계층 분할 벡터 양자화를 이용한 3차원 영상 부호화)

  • Cho Seong Hwan;Kim Eung Sung
    • Journal of Internet Computing and Services
    • /
    • v.6 no.2
    • /
    • pp.59-68
    • /
    • 2005
  • In this paper, for the compression and transmission of 3D images, we propose an algorithm that performs a 3D discrete cosine transform (DCT) on 3D images, hierarchically segments the 3D blocks of an image by comparison with the original image, and applies finite-state vector quantization (FSVQ) to each 3D block. Using the characteristics of the 3D DCT coefficients, a 3D image is segmented hierarchically into large smooth blocks and small edge blocks, and the block hierarchy information is transmitted. Codebooks are constructed for each hierarchy level, and the encoder transmits codeword indices using FSVQ, together with the hierarchical segmentation information, to reduce the number of encoded bits. The new algorithm shows that the quality of the Small Lobster and Head images is increased by 1.91 dB and 1.47 dB, respectively, compared with HFSVQ.
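For reference, the 3D transform step can be pictured as a separable DCT applied along each axis of a block; the numpy sketch below (a generic illustration assuming a cubic block, not the paper's coder) makes that concrete:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    c[0, :] /= np.sqrt(2.0)
    return c

def dct3(block):
    """Separable 3D DCT of a cubic block: transform along every axis."""
    c = dct_matrix(block.shape[0])
    return np.einsum('ai,bj,ck,ijk->abc', c, c, c, block)

cube = np.random.rand(8, 8, 8)             # one 8x8x8 image block
coeffs = dct3(cube)
print(coeffs.shape, coeffs[0, 0, 0])       # DC term grows with the block mean
```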

The Study of Comparison of DCT-based H.263 Quantizer for Computative Quantity Reduction (계산량 감축을 위한 DCT-Based H.263 양자화기의 비교 연구)

  • Shin, Kyung-Cheol
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.9 no.3
    • /
    • pp.195-200
    • /
    • 2008
  • To compress moving-picture data effectively, the spatial and temporal redundancy of the input image data must be reduced. While motion estimation/compensation methods can effectively reduce temporal redundancy, they increase the computational complexity because of the inter-frame prediction, so algorithms for computation reduction and real-time processing are needed. This paper presents a quantizer that effectively quantizes DCT coefficients while taking human visual sensitivity into account. The proposed DCT-based H.263 quantizer can transmit more frames than TMN5 at the same transfer rate and reduces the frame-drop effect. In terms of objective image quality, the luminance signal shows a difference of $-0.3{\sim}+0.65dB$ in average PSNR and the chrominance signal shows an improvement of about 1.73 dB compared with TMN5. In terms of computational load, the proposed method achieves a reduction of $30{\sim}31%$ compared with NTSS and $20{\sim}21%$ compared with 4SS.
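The idea of quantizing DCT coefficients according to visual sensitivity can be sketched with a frequency-dependent step size, as below; the weighting is a generic stand-in for illustration, not the paper's H.263 quantizer:

```python
import numpy as np

def hvs_weighted_quant(dct_block, base_step):
    """Quantize an 8x8 DCT block with coarser steps at high frequencies.

    The step grows with distance from DC, mimicking the eye's lower
    sensitivity to high spatial frequencies (an assumed weighting).
    """
    u, v = np.meshgrid(np.arange(8), np.arange(8), indexing='ij')
    step = base_step * (1.0 + (u + v) / 4.0)   # 1x at DC, 4.5x at (7,7)
    levels = np.round(dct_block / step).astype(int)
    return levels, step                        # reconstruction: levels * step

block = np.random.randn(8, 8) * 50
levels, step = hvs_weighted_quant(block, base_step=8.0)
print(np.count_nonzero(levels), "nonzero levels after quantization")
```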

Vector Quantization Codebook Design Using Unbalanced Binary Tree and DCT Coefficients (불균형 이진트리와 DCT 계수를 이용한 벡터양자화 코드북)

  • 이경환;최정현;이법기;정원식;김경규;김덕규
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.24 no.12B
    • /
    • pp.2342-2348
    • /
    • 1999
  • DCT-based codebook design using a binary tree was previously proposed to reduce computation time and to solve the initial codebook problem. In that method, the DCT coefficient of the training vectors with maximum variance serves as the split key, the mean of the coefficients at that location is used as the split threshold, and a balanced binary tree is formed for the final codebook. However, edge degradation appears in the reconstructed image, since blocks from shaded regions are frequently selected as codevectors. In this paper, we propose a DCT-based vector quantization codebook design using an unbalanced binary tree, in which the node with the largest split key is split first, so that the number of edge codevectors can be increased. Simulation results show that this method reconstructs edge regions faithfully and achieves a higher PSNR than previous methods.
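The splitting rule described above can be illustrated with a short sketch that repeatedly splits the leaf whose dominant DCT coefficient has the largest variance; this is a simplified reconstruction of the general idea with synthetic data, not the authors' exact procedure:

```python
import numpy as np

def unbalanced_tree_codebook(train, n_codes):
    """Grow a codebook by repeatedly splitting the leaf whose dominant
    coefficient has the largest variance; leaf centroids become the
    codevectors.  train is an (N, d) array of DCT-domain vectors."""
    leaves = [train]
    while len(leaves) < n_codes:
        keys = [leaf.var(axis=0).max() if len(leaf) > 1 else -1.0
                for leaf in leaves]
        i = int(np.argmax(keys))               # leaf with the largest split key
        leaf = leaves.pop(i)
        dim = int(leaf.var(axis=0).argmax())   # coefficient with max variance
        thr = leaf[:, dim].mean()              # split threshold
        left, right = leaf[leaf[:, dim] <= thr], leaf[leaf[:, dim] > thr]
        if len(left) == 0 or len(right) == 0:  # degenerate split: stop early
            leaves.append(leaf)
            break
        leaves += [left, right]
    return np.array([leaf.mean(axis=0) for leaf in leaves])

codebook = unbalanced_tree_codebook(np.random.randn(1000, 16), n_codes=8)
print(codebook.shape)                          # -> (8, 16)
```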

A Memory-Efficient VLC Decoder Architecture for MPEG-2 Application

  • Lee, Seung-Joon;Suh, Ki-bum;Chong, Jong-wha
    • Proceedings of the IEEK Conference
    • /
    • 1999.11a
    • /
    • pp.360-363
    • /
    • 1999
  • Video data compression is a key technology in the field of multimedia applications. Variable-length coding is one of the most popular data compression techniques and has been used in many compression standards, such as JPEG and MPEG. In this paper, we present a memory-efficient VLC decoder architecture for MPEG-2 applications that achieves a small memory footprint and high throughput. To reduce the memory size, we propose a new grouping and remainder-generation method and a merged lookup table (LUT) for variable-length decoders (VLDs). In MPEG-2, DCT coefficient tables zero and one are mapped onto a single memory whose space requirement is minimized by an efficient memory-mapping strategy. The resulting memory size is only 256 words, despite mapping two DCT coefficient tables.
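As background for the lookup-table approach, the toy Python sketch below shows direct table lookup for variable-length codes: each fixed-width bit window indexes an entry holding (symbol, code length). The codes are invented for illustration and are not the MPEG-2 tables:

```python
# Toy direct-lookup VLC decoder: a fixed-width window of bits indexes a
# table holding (symbol, code length).
CODES = {'11': 'A', '10': 'B', '01': 'C', '001': 'D', '000': 'E'}
WIDTH = 3                                    # longest code length

LUT = {}
for code, sym in CODES.items():              # every padding of a code maps to it
    for pad in range(2 ** (WIDTH - len(code))):
        LUT[(int(code, 2) << (WIDTH - len(code))) | pad] = (sym, len(code))

def decode(bits):
    out, pos = [], 0
    while pos + WIDTH <= len(bits):          # needs a full window to look up
        sym, length = LUT[int(bits[pos:pos + WIDTH], 2)]
        out.append(sym)
        pos += length                        # advance by the true code length
    return out

print(decode('1101001000'))                  # -> ['A', 'C', 'D', 'E']
```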

DCT Coefficient Block Size Classification for Image Coding (영상 부호화를 위한 DCT 계수 블럭 크기 분류)

  • Gang, Gyeong-In;Kim, Jeong-Il;Jeong, Geun-Won;Lee, Gwang-Bae;Kim, Hyeon-Uk
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.3
    • /
    • pp.880-894
    • /
    • 1997
  • In this paper, we propose a new algorithm that performs the DCT (Discrete Cosine Transform) only within a reduced area, obtained by predicting which quantized coefficients will be zero. The proposed algorithm not only decreases the encoding and decoding time by reducing the amount of FDCT (forward DCT) and IDCT (inverse DCT) computation, but also increases the compression ratio by applying a different horizontal or vertical zig-zag scan, according to the classified block size of each block, during Huffman coding. Traditional image coding performs the same DCT computation and zig-zag scan over all blocks; the proposed algorithm instead reduces the FDCT computation time at the encoder by setting the quantized coefficients outside the classified block size to zero rather than computing them. It also reduces the IDCT computation time at the decoder by performing the IDCT only on the dequantized coefficients within the classified block size. In addition, it shortens the run-lengths by carrying out the horizontal or vertical zig-zag scan appropriate to the classified block characteristics, thereby improving the compression ratio. The algorithm can also be applied to $16{\times}16$ block processing, where the compression ratio and image resolution are optimal but the encoding and decoding times are long, and it can be extended to moving-image coding that requires real-time processing.
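The effect of restricting work to the classified block size can be illustrated by masking everything outside the top-left k x k region of an 8x8 DCT block. The sketch below computes the full transform and masks it for clarity (a real encoder would simply skip the masked positions); the block data and k value are made up:

```python
import numpy as np

def dct_matrix(n):
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    c[0, :] /= np.sqrt(2.0)
    return c

def classified_dct(block, k):
    """Keep only the top-left k x k DCT coefficients of an 8x8 block;
    coefficients outside the classified region are treated as zero."""
    c = dct_matrix(8)
    coeffs = c @ block @ c.T
    masked = np.zeros_like(coeffs)
    masked[:k, :k] = coeffs[:k, :k]
    return masked

block = np.random.randn(8, 8)
print(np.count_nonzero(classified_dct(block, k=3)), "coefficients kept")  # <= 9
```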

Fast Intra Mode Decision for H.264/AVC by Using the Approximation of DCT Coefficient (H.264/AVC에서 DCT 계수의 근사화를 이용한 고속 인트라 모드 결정 기법)

  • La, Byeong-Du;Eom, Min-Young;Choe, Yoon-Sik
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.3
    • /
    • pp.23-32
    • /
    • 2007
  • The H.264/AVC video coding standard uses rate-distortion optimization (RDO) to improve the compression performance of intra prediction. Although this method selects the best coding mode for the current macroblock, it increases the complexity and computational load compared with previous standards. This paper proposes a fast intra mode decision algorithm for the H.264/AVC encoder based on the dominant edge direction (DED), which is obtained from an approximation of the discrete cosine transform (DCT) coefficients. By detecting the DED, 3 modes instead of 9 are chosen for the RDO calculation to decide the best mode of a $4{\times}4$ luma block. For the $16{\times}16$ luma and $8{\times}8$ chroma blocks, only 2 modes instead of 4 are searched. Experimental results show that the computation time of the proposed algorithm is reduced to about 72% of that of the full search method, with negligible quality loss.
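One simple way to read a dominant edge direction from low-frequency DCT terms is sketched below; this is an illustrative approximation (the test block, coefficient choice, and comparison rule are assumptions, not the paper's exact method):

```python
import numpy as np

def dominant_edge_direction(block):
    """Classify a 4x4 block's dominant edge from two low-frequency DCT
    terms: a large |C(0,1)| means intensity changes left-to-right (a
    vertical edge), a large |C(1,0)| means it changes top-to-bottom."""
    n = 4
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    c[0, :] /= np.sqrt(2.0)
    f = c @ block @ c.T
    return 'vertical' if abs(f[0, 1]) > abs(f[1, 0]) else 'horizontal'

# A block whose rows are identical has a vertical edge.
block = np.tile(np.array([0.0, 0.0, 100.0, 100.0]), (4, 1))
print(dominant_edge_direction(block))        # -> vertical
```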

A Study on the Digital Signal Processing for Removing the Bottle-neck Effect (병목현상 제거를 위한 디지틀 신호처리에 관한 연구)

  • 고영욱;김성곤;김환용
    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.1
    • /
    • pp.45-52
    • /
    • 1999
  • In this thesis, a packer operating at a frequency of 54 MHz is proposed and designed, using a new algorithm, to remove the bottleneck effect and to simplify signal processing. To verify the performance of the proposed packer, a DCT coefficient encoding block with a ROM table implemented in combinational logic is designed, and its output data are used as the input data of the packer. The circuits are described in VHDL, and modeling and simulation are performed with the SYNOPSYS tool using a $0.65{\mu}m$ design rule.
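As a rough software model of what a packer does, the sketch below accumulates variable-length codes into fixed-width output words; this is a generic bit-packer for illustration only, not the VHDL design described above, and the code words are invented:

```python
def pack(codes, word_width=32):
    """Accumulate variable-length codes given as (value, bit_length)
    pairs and emit a fixed-width word whenever the accumulator holds
    at least word_width bits."""
    words, acc, nbits = [], 0, 0
    for value, length in codes:
        acc = (acc << length) | (value & ((1 << length) - 1))
        nbits += length
        while nbits >= word_width:
            nbits -= word_width
            words.append((acc >> nbits) & ((1 << word_width) - 1))
            acc &= (1 << nbits) - 1
    return words, acc, nbits                 # leftover bits stay buffered

words, acc, nbits = pack([(0b101, 3), (0b1, 1), (0xFFFF, 16), (0xABCD, 16)])
print([hex(w) for w in words], nbits, "bits left over")   # ['0xbffffabc'] 4
```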

Blind Watermarking System Based on the Modified DCT Coefficient (수정된 DCT 계수 기반의 Blind Watermarking 시스템의 하드웨어 설계 및 구현)

  • 윤승주;정진일;채봉수;조용범
    • Proceedings of the IEEK Conference
    • /
    • 2003.07b
    • /
    • pp.871-874
    • /
    • 2003
  • In this paper, we improve a blind watermarking method that requires neither the original image nor a private key when extracting a watermark embedded in digital data to carry user or copyright information. Conventional watermarking methods extract the watermark either by using the original image or, when the original image is not used, by means of a private key. The proposed watermarking algorithm embeds the watermark into separate frequency bands and is based on modified DCT coefficients; because the embedding and extraction operations avoid complex computation, it is fast and its hardware structure is simple. In addition, embedding the watermark in both the low-frequency and high-frequency bands makes it robust against compression and transmission errors. The implementation and verification of the proposed algorithm on an FPGA through a PCI interface are also discussed.
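A classic way to make DCT-domain watermarking blind is to encode each bit as an ordering between two coefficients, so that extraction needs neither the original image nor a key. The sketch below illustrates that general idea; the coefficient positions and margin are assumptions, and this is not the authors' scheme:

```python
import numpy as np

def dct_pair(block):
    n = block.shape[0]
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    c[0, :] /= np.sqrt(2.0)
    return c, c @ block @ c.T                # (transform matrix, coefficients)

def embed_bit(block, bit, p1=(2, 1), p2=(1, 2), margin=4.0):
    """Force an ordering between two DCT coefficients so the bit can be
    read back without the original image."""
    c, f = dct_pair(block)
    lo, hi = sorted([f[p1], f[p2]])
    f[p1], f[p2] = (hi + margin, lo) if bit else (lo, hi + margin)
    return c.T @ f @ c                       # inverse DCT of the modified block

def extract_bit(block, p1=(2, 1), p2=(1, 2)):
    _, f = dct_pair(block)
    return int(f[p1] > f[p2])

block = np.random.randn(8, 8) * 10
print(extract_bit(embed_bit(block, bit=1)))  # -> 1, no original image needed
```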

Optimal Watermark Coefficient Extraction by Statistical Analysis of DCT Coefficients (DCT 계수의 통계적 분석을 통한 최적의 워터마크 계수 추출)

  • 최병철;김용철
    • Proceedings of the IEEK Conference
    • /
    • 2000.11c
    • /
    • pp.69-72
    • /
    • 2000
  • In this paper, a novel algorithm for digital watermarking is proposed. We use two pattern keys from a BCH(15, 7) code and one randomizing key. In the embedding process, the optimal watermark coefficients are determined by a statistical analysis of the DCT coefficients from the standpoint of the HVS. In detection, the watermark coefficients are restored by correlation matching against the possible pattern keys and by minimizing the estimation errors. The attacks tested in the experiments are image enhancement and image compression (JPEG). Performance is evaluated by the BER of the logo images and the SNR/PSNR of the restored images. The proposed method shows higher performance against JPEG attacks, and an analysis of this performance is included.
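The correlation-matching step can be pictured as picking the candidate key with the highest normalized correlation against the received watermark sequence. The sketch below is a generic detector with synthetic ±1 keys; the paper's keys come from a BCH(15, 7) code and are not reproduced here:

```python
import numpy as np

def detect_pattern(received, patterns):
    """Pick the candidate pattern with the highest normalized correlation
    against the received watermark sequence."""
    r = received - received.mean()
    scores = [np.dot(r, p - p.mean()) /
              (np.linalg.norm(r) * np.linalg.norm(p - p.mean()) + 1e-12)
              for p in patterns]
    return int(np.argmax(scores)), scores

rng = np.random.default_rng(1)
patterns = [rng.choice([-1.0, 1.0], size=15) for _ in range(4)]   # synthetic keys
received = patterns[2] + rng.normal(scale=0.5, size=15)           # noisy copy of key 2
idx, _ = detect_pattern(received, patterns)
print("matched pattern:", idx)               # -> 2 with high probability
```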
