• Title/Summary/Keyword: SPIHT algorithm


EXTRACTION OF WATERMARKS BASED ON INDEPENDENT COMPONENT ANALYSIS

  • Thai, Hien-Duy; Zensho Nakao; Yen-Wei Chen
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2003.09a / pp.407-410 / 2003
  • We propose a new logo watermark scheme for digital images that embeds a watermark by modifying the middle-frequency sub-bands of the wavelet transform. Independent component analysis (ICA) is introduced to extract the watermark for authentication and copyright protection of multimedia products. To exploit the human visual system (HVS) and improve robustness, a perceptual model based on a stochastic noise visibility function (NVF) is applied for adaptive watermarking. Experimental results demonstrate that the watermark is perfectly extracted by the ICA technique with excellent invisibility, and that the scheme is robust against various image and digital processing operators and against almost all compression algorithms, such as JPEG, JPEG 2000, SPIHT, EZW, and principal component analysis (PCA) based compression.
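As a rough illustration of the embedding step described above, the following sketch adds a binary logo to the level-2 detail sub-band of a two-level wavelet decomposition using a single global strength factor. It is only a minimal sketch in Python (NumPy and PyWavelets assumed): the paper's NVF-based perceptual mask and the ICA extraction stage are not reproduced, and the function and parameter names are illustrative.

```python
# Minimal sketch: additive logo embedding in middle-frequency wavelet sub-bands.
# A global strength factor "alpha" stands in for the paper's NVF-based mask.
import numpy as np
import pywt

def embed_watermark(image, logo_bits, alpha=2.0, wavelet="haar"):
    # Two-level decomposition: [cA2, (cH2, cV2, cD2), (cH1, cV1, cD1)]
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=2)
    cH2, cV2, cD2 = coeffs[1]                     # middle-frequency details
    flat = cH2.ravel()
    n = min(len(logo_bits), flat.size)
    # Map {0,1} logo bits to {-1,+1} and add them with strength alpha.
    flat[:n] += alpha * (2.0 * np.asarray(logo_bits[:n], dtype=float) - 1.0)
    coeffs[1] = (flat.reshape(cH2.shape), cV2, cD2)
    return pywt.waverec2(coeffs, wavelet)

host = np.random.rand(64, 64) * 255               # stand-in host image
logo = np.random.randint(0, 2, 128)               # stand-in binary logo
watermarked = embed_watermark(host, logo)
```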


A study on the bit-plane coding improvement of EBCOT algorithm (EBCOT 알고리즘의 bit-plane 부호화 개선에 대한 연구)

  • 이호석
    • Proceedings of the Korean Information Science Society Conference / 2000.10b / pp.281-283 / 2000
  • This paper introduces the EBCOT algorithm and proposes a method for improving it. EBCOT combines the wavelet transform with a block-based bit-plane coding method, referred to as block-based fractional bit-plane coding. Rather than coding an entire bit-plane at once, the method divides it into blocks, and each bit-plane is coded in four passes according to the context of each bit. EBCOT supports resolution scalability through the wavelet transform, SNR scalability through fractional bit-plane coding, and random access to regions of interest (ROI) through block-based coding; after coding is complete, it performs a bit-reduction step. These features are advantages over the earlier EZW and SPIHT methods. However, the efficiency of the bit-plane coding process can still be improved, and this paper proposes a method for doing so.
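For context on the bit-plane coding discussed above, the following sketch (Python/NumPy, toy values) shows the representation that fractional bit-plane passes operate on: the magnitudes of quantized sub-band coefficients scanned one bit-plane at a time from the most significant plane downward. EBCOT's context modelling, the four coding passes, and the improvement proposed in the paper are not reproduced.

```python
# Minimal sketch: bit-plane decomposition of quantized wavelet coefficients.
import numpy as np

block = np.array([[34, -5,  0,  7],
                  [-2, 19, -8,  1],
                  [ 0,  3, 12, -6],
                  [ 9, -1,  4,  0]])      # toy quantized sub-band block

signs = np.sign(block)                    # coded separately in practice
magnitudes = np.abs(block)
num_planes = int(np.max(magnitudes)).bit_length()

# Scan from the most significant bit-plane down to the least significant one.
for p in range(num_planes - 1, -1, -1):
    plane = (magnitudes >> p) & 1         # one bit per coefficient
    print(f"bit-plane {p}:\n{plane}")
```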


A Study on the Data Compression Algorithm for Just-in-Time Rendering of Concentric Mosaic (동심원 모자이크의 실시간 표현을 위한 데이터 압축 알고리즘에 관한 연구)

  • Jee, Inn-Ho; Ahn, Hong-Yeoung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.10 no.1 / pp.91-96 / 2010
  • Concentric mosaics are built by arranging and summing video frames according to a common spatial reference. Compared with previous work on 3-D wavelet transform coding, we make important design choices to enable flexible partial decoding and bit-stream random access, and we develop a just-in-time (JIT) rendering engine for the compressed concentric mosaic. Real-time rendering is still computationally demanding, however, so only the content needed to represent a specific scene is decoded while the rest of the data remains compressed. Our proposed algorithm is thus able to render a real concentric mosaic by using the lifting scheme instead of a conventional wavelet transform.
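To make the lifting idea concrete, the following sketch implements one Haar-style lifting step (split, predict, update) together with its exact inverse. It is a minimal stand-in written in Python/NumPy, not the paper's actual 3-D transform or JIT rendering engine.

```python
# Minimal sketch: one level of Haar-style lifting (split / predict / update).
import numpy as np

def lifting_forward(x):
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even              # predict odd samples from even neighbours
    approx = even + detail / 2.0     # update so the approximation keeps the mean
    return approx, detail

def lifting_inverse(approx, detail):
    even = approx - detail / 2.0
    odd = detail + even
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 2.0, 4.0])
a, d = lifting_forward(x)
assert np.allclose(lifting_inverse(a, d), x)    # perfect reconstruction
```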

Embedded Compression Codec Algorithm for Motion Compensated Wavelet Video Coding System (움직임 보상된 웨이블릿 기반의 비디오 코딩 시스템에 적용 가능한 임베디드 압축 코덱 알고리즘)

  • Kim, Song-Ju
    • The Journal of the Korea Contents Association / v.12 no.3 / pp.77-83 / 2012
  • In this paper, a low-complexity embedded compression (EC) codec algorithm is applied to a wavelet video coder to reduce its excessive external memory requirements. The EC algorithm achieves a fixed compression ratio of 50 % under a near-lossless-compression constraint. Compared with a direct implementation of the wavelet video encoder in this paper, the EC technique cuts the memory required for intermediate low-frequency coefficients across the multiple discrete wavelet transform stages by 50 %. Furthermore, the EC scheme, based on forward adaptive quantization and fixed-length coding, reduces the bandwidth and buffer size between the DWT and SPIHT stages by 50 %. Simulation results show that the EC algorithm incurs an average PSNR degradation of only 0.179 and 0.162 dB when the target bit rates of the video coder are 1 and 0.5 bpp, respectively.
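The following sketch illustrates the general idea of a fixed-ratio embedded compression step of the kind described above: each block of intermediate coefficients gets a block-adaptive shift (a simple form of forward adaptive quantization) and is then stored with a fixed word length, so a 16-bit value always fits in 8 bits. This is a hedged Python/NumPy sketch; the paper's actual quantizer design and bitstream layout are not reproduced, and all names are illustrative.

```python
# Minimal sketch: block-adaptive quantization + fixed-length storage (2:1 ratio).
import numpy as np

def ec_encode_block(block, out_bits=8):
    # Forward adaptive quantization: derive the shift from the block maximum
    # so every quantized value fits in "out_bits" bits.
    shift = max(0, int(np.max(np.abs(block))).bit_length() - (out_bits - 1))
    return (block >> shift).astype(np.int8), shift

def ec_decode_block(q, shift):
    return q.astype(np.int32) << shift          # approximate reconstruction

block = np.random.randint(-20000, 20000, size=64, dtype=np.int32)
q, shift = ec_encode_block(block)
rec = ec_decode_block(q, shift)
print("max reconstruction error:", int(np.max(np.abs(block - rec))))
```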

Wavelet Packet Image Coder Using Coefficients Partitioning For Remote Sensing Images (위성 영상을 위한 계수분할 웨이블릿 패킷 영상 부호화 알고리즘에 관한 연구)

  • 한수영; 조성윤
    • Korean Journal of Remote Sensing / v.18 no.6 / pp.359-367 / 2002
  • In this paper, a new embedded wavelet packet image coder is proposed that exploits the correlation between partitioned coefficients. The algorithm defines a parent-child relationship between individual frequency sub-bands to reduce image reconstruction error, and every coefficient is partitioned and encoded within a zerotree data structure according to that relationship. The proposed coder achieves low bit rates with good rate-distortion performance, yields higher PSNR at the same bit rate, reduces compression time, and offers precise rate control compared with the conventional method. These results show that the encoding and decoding processes of the proposed coder are simpler and more accurate than conventional ones for texture images containing many mid- and high-frequency components, such as aerial and satellite photographs. The experimental results suggest that the proposed method can be applied to real-time vision systems, on-line image processing, and image fusion, which require smaller file sizes and better resolution.
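The following sketch (Python with NumPy and PyWavelets assumed) illustrates the parent-child relationship on which zerotree-style coding rests: a detail coefficient at a coarse scale covers a 2x2 block of children at the next finer scale, and a coefficient whose descendants are all insignificant against a threshold can serve as a zerotree root. This is a generic illustration, not the paper's partitioning algorithm.

```python
# Minimal sketch: parent-child relationship and a zerotree significance test.
import numpy as np
import pywt

image = np.random.rand(64, 64)
coeffs = pywt.wavedec2(image, "haar", level=2)   # [cA2, (H2,V2,D2), (H1,V1,D1)]
parent_band = coeffs[1][0]                       # coarse-scale horizontal detail
child_band = coeffs[2][0]                        # finer-scale horizontal detail

def children(band, i, j):
    """2x2 block in the finer band spatially covered by parent (i, j)."""
    return band[2 * i:2 * i + 2, 2 * j:2 * j + 2]

def is_zerotree_root(i, j, threshold):
    parent_small = abs(parent_band[i, j]) < threshold
    kids_small = np.all(np.abs(children(child_band, i, j)) < threshold)
    return parent_small and kids_small

print(is_zerotree_root(0, 0, threshold=0.5))
```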

New Medical Image Fusion Approach with Coding Based on SCD in Wireless Sensor Network

  • Zhang, De-gan; Wang, Xiang; Song, Xiao-dong
    • Journal of Electrical Engineering and Technology / v.10 no.6 / pp.2384-2392 / 2015
  • The technical development and practical application of big data for health is a hot topic, and big-data medical image fusion is one of its key problems. This paper proposes a new fusion approach for big-data medical images with coding based on the Spherical Coordinate Domain (SCD) in a Wireless Sensor Network (WSN). In this approach, the three high-frequency sub-bands in the wavelet domain of a medical image are pre-processed, which reduces the redundancy of the big-data medical image. First, the high-frequency coefficients are transformed into the spherical coordinate domain to reduce the correlation within the same scale. Then, a multi-scale model product (MSMP) is used to control the shrinkage function so that small wavelet coefficients and some noise are removed. The high-frequency parts in the spherical coordinate domain are coded with an improved SPIHT algorithm. Finally, the image is fused and reconstructed based on its multi-scale edges. Experimental results indicate that the approach is effective and very useful for the transmission of big-data medical images, especially in a wireless environment.
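As a rough illustration of the spherical-coordinate pre-processing step, the following sketch maps the three high-frequency sub-bands of one wavelet decomposition level to a radius and two angles and checks that the mapping inverts exactly. The MSMP shrinkage and the improved SPIHT coder from the paper are not reproduced (Python with NumPy and PyWavelets assumed).

```python
# Minimal sketch: mapping the three detail sub-bands to spherical coordinates.
import numpy as np
import pywt

image = np.random.rand(128, 128)
cA, (cH, cV, cD) = pywt.dwt2(image, "db2")

r = np.sqrt(cH**2 + cV**2 + cD**2)            # radius carries most of the energy
theta = np.arctan2(np.hypot(cH, cV), cD)      # polar angle
phi = np.arctan2(cV, cH)                      # azimuth

# The inverse mapping recovers the original sub-bands for reconstruction.
cH_rec = r * np.sin(theta) * np.cos(phi)
cV_rec = r * np.sin(theta) * np.sin(phi)
cD_rec = r * np.cos(theta)
assert np.allclose(cH, cH_rec)
assert np.allclose(cV, cV_rec)
assert np.allclose(cD, cD_rec)
```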

3-D Wavelet Compression with Lifting Scheme for Rendering Concentric Mosaic Image (동심원 모자이크 영상 표현을 위한 Lifting을 이용한 3차원 웨이브렛 압축)

  • Jang Sun-Bong; Jee Inn-Ho
    • Journal of Broadcast Engineering / v.11 no.2 s.31 / pp.164-173 / 2006
  • The data structure of a concentric mosaic can be regarded as a video sequence captured by a slowly panning camera, and a concentric mosaic is formed by matching and aligning such video sequences. Because a concentric mosaic requires a huge amount of memory, compression is essential, and the compression must allow a scene to be decoded while the data structure remains compressed. In this paper, we use a 3-D lifting transform to compress the concentric mosaic; the lifting transform retains the merits of the wavelet transform while reducing computation and memory. Because neighbouring frames are highly correlated, extracting a scene directly from a 3-D transformed bitstream is complex. Therefore, to improve performance and reduce the complexity of scene extraction, we apply 3-D lifting and then compress the transformed data sequentially on a per-frame basis, giving each frame a flexible bit rate. We also propose an algorithm that keeps the data structure compressed and decodes a scene by exploiting the properties of the lifting structure.
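To illustrate the frame-axis part of a 3-D lifting stage, the following sketch applies one Haar-style lifting step along the frame dimension of a toy frame stack; the per-frame bit allocation and the scene random access described above are not reproduced (Python/NumPy, illustrative names).

```python
# Minimal sketch: one temporal (frame-axis) lifting step over a stack of frames.
import numpy as np

frames = np.random.rand(8, 32, 32)        # toy stack: (frame, row, column)

even, odd = frames[0::2], frames[1::2]
detail = odd - even                       # predict along the frame axis
approx = even + detail / 2.0              # update: temporal low-pass frames

# Each approx/detail frame could now be coded independently (e.g. a 2-D DWT
# followed by SPIHT), which keeps single-frame access cheap at decode time.
print(approx.shape, detail.shape)         # (4, 32, 32) (4, 32, 32)
```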