• Title/Summary/Keyword: Block-based Image Compression


Stereo image compression based on error concealment for 3D television (3차원 텔레비전을 위한 에러 은닉 기반 스테레오 영상 압축)

  • Bak, Sungchul;Sim, Donggyu;Namkung, Jae-Chan;Oh, Seoung-jun
    • Journal of Broadcast Engineering
    • /
    • v.10 no.3
    • /
    • pp.286-296
    • /
    • 2005
  • This paper presents a stereo-based image compression and transmission system for 3D realistic television. In the proposed system, a disparity map is extracted from an input stereo image pair, and the extracted disparity map together with one of the two input images is transmitted or stored at a local or remote site. However, correspondences cannot be determined in occlusion areas, so it is not easy to recover 3D information in such regions. In this paper, a reconstructed-image compensation algorithm based on error-block concealment and in-loop filtering is proposed to minimize the reconstruction error in generating the stereo image pair. The effectiveness of the proposed algorithm is shown in terms of the objective accuracy of the reconstructed image with several real stereo image pairs.
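
The disparity-map pipeline the abstract describes can be sketched as follows. The block size, search range, and SAD matching criterion are illustrative assumptions, not the paper's settings; note how border blocks with no valid candidate exhibit exactly the occlusion problem the paper's concealment step targets.

```python
import numpy as np

def block_disparity(left, right, block=8, max_disp=16):
    """Estimate a per-block horizontal disparity map by SAD block matching."""
    h, w = left.shape
    disp = np.zeros((h // block, w // block), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = left[y:y+block, x:x+block].astype(int)
            best, best_d = None, 0
            # candidate disparities are clipped at the image border;
            # occluded/border blocks are where matching fails
            for d in range(min(max_disp, x) + 1):
                cand = right[y:y+block, x-d:x-d+block].astype(int)
                sad = np.abs(ref - cand).sum()
                if best is None or sad < best:
                    best, best_d = sad, d
            disp[by, bx] = best_d
    return disp

def reconstruct_left(right, disp, block=8):
    """Rebuild the left view from the right view plus the block disparity map."""
    h, w = right.shape
    rec = np.zeros_like(right)
    for by in range(disp.shape[0]):
        for bx in range(disp.shape[1]):
            y, x = by * block, bx * block
            d = disp[by, bx]
            rec[y:y+block, x:x+block] = right[y:y+block, x-d:x-d+block]
    return rec
```

Transmitting only `right` and `disp` (instead of both views) is what yields the compression; the paper's contribution is concealing the blocks where this reconstruction fails.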

An Adaptive BTC Algorithm Using the Characteristics of the Error Signals for Efficient Image Compression (차신호 특성을 이용한 효율적인 적응적 BTC 영상 압축 알고리듬)

  • 이상운;임인칠
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.34S no.4
    • /
    • pp.25-32
    • /
    • 1997
  • In this paper, we propose an adaptive BTC algorithm using the characteristics of the error signals. The BTC algorithm has the advantage of low computational complexity, but the disadvantage that it produces ragged edges in reconstructed images for sloping regions, because it codes the input with 2-level signals. First, the proposed method classifies the input into low-, medium-, and high-activity blocks based on the variance of the input. By using a 1-level quantizer for low-activity blocks, a 2-level quantizer for medium-activity blocks, and a 4-level quantizer for high-activity blocks, the adaptive method reduces bit rates as well as the quantization noise inherent in the 2-level quantizer. Also, for high-activity blocks, we propose a new quantization-level allocation algorithm that uses the characteristics of the error signals between the original signals and the signals reconstructed by the 2-level quantizer, so that it achieves lower bit rates than the conventional 4-level quantizer. In particular, by considering the characteristics of the input block, we reduce the bit rates without incurring visual noise.

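
The classic 2-level BTC step that the abstract builds on, together with a variance-based mode switch in the spirit of the paper, can be sketched as follows; the activity thresholds are illustrative assumptions, not the paper's values.

```python
import numpy as np

def btc_2level(block):
    """Classic 2-level BTC: threshold at the block mean, then pick two
    reconstruction levels that preserve the block mean and variance."""
    m, s = block.mean(), block.std()
    bitmap = block >= m
    q = bitmap.sum()
    n = block.size
    if q in (0, n):                       # flat block: a single level suffices
        return np.full_like(block, m, dtype=float), bitmap
    lo = m - s * np.sqrt(q / (n - q))     # level for pixels below the mean
    hi = m + s * np.sqrt((n - q) / q)     # level for pixels at/above the mean
    return np.where(bitmap, hi, lo), bitmap

def adaptive_btc(block, t_low=4.0):
    """Variance-based mode selection: low-activity blocks get one level
    (the mean), others the 2-level BTC above. The paper additionally uses
    a 4-level quantizer for high-activity blocks; t_low is illustrative."""
    if block.std() < t_low:
        return np.full(block.shape, block.mean())
    rec, _ = btc_2level(block)
    return rec
```

The mean/variance-preserving property is what makes BTC so cheap: per block, only two levels and a one-bit-per-pixel bitmap need to be stored.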

Coding Artifact Reduction for Block-based Image Compression (블록 기반 영상 압축을 위한 부호화 결함 감소)

  • Wee, Young-Cheul
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.48 no.1
    • /
    • pp.60-64
    • /
    • 2011
  • In this paper, we propose a new post-processing technique that removes blocking and ringing artifacts in block discrete cosine transform (BDCT)-coded images using bilateral filtering. The selection of filter parameters is a key issue in the application of a bilateral filter because it significantly affects the result. An efficient method of selecting the bilateral filter parameters is presented. The experimental results show that the proposed approach alleviates the artifacts efficiently in terms of PSNR, MSDS, and SSIM.
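
A brute-force bilateral filter, the building block of the post-processing above, can be sketched as follows; the default `sigma_s`/`sigma_r` values are illustrative, and choosing them well is precisely the parameter-selection problem the paper addresses.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=20.0):
    """Each pixel becomes a weighted average of its neighbours, weighted by
    both spatial distance and intensity difference, so true edges are kept
    while blocking/ringing noise is smoothed away."""
    img = img.astype(float)
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    ys, xs = np.mgrid[-radius:radius+1, -radius:radius+1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))   # fixed kernel
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            patch = pad[y:y+2*radius+1, x:x+2*radius+1]
            # range kernel: down-weight neighbours with different intensity
            rng_w = np.exp(-(patch - img[y, x])**2 / (2 * sigma_r**2))
            wgt = spatial * rng_w
            out[y, x] = (wgt * patch).sum() / wgt.sum()
    return out
```

A small `sigma_r` preserves edges aggressively but removes little noise; a large one degenerates toward a plain Gaussian blur, which is why the parameter choice dominates the result.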

Quasi-Orthogonal Space-Time Block Codes Designs Based on Jacket Transform

  • Song, Wei;Lee, Moon-Ho;Matalgah, Mustafa M.;Guo, Ying
    • Journal of Communications and Networks
    • /
    • v.12 no.3
    • /
    • pp.240-245
    • /
    • 2010
  • Jacket matrices, motivated by the complex Hadamard matrix, have played important roles in signal processing, communications, image compression, cryptography, etc. In this paper, we suggest a novel approach to designing a simple class of space-time block codes (STBCs) that reduces the peak-to-average power ratio. The proposed code provides coding gain due to the characteristics of the complex Hadamard matrix, which is a special case of the Jacket matrices. Also, it can achieve full rate and full diversity with simple decoding. Simulations show the good performance of the proposed codes in terms of symbol error rate. For generality, a quasi-orthogonal STBC of this kind may be designed similarly, with improved performance.
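
The two ingredients the abstract combines can be illustrated separately. Below is the simplest Hadamard/Jacket matrix and the textbook Alamouti code as the classic full-rate, full-diversity orthogonal STBC with symbol-by-symbol decoding; this is background illustration only, not the paper's quasi-orthogonal Jacket-based construction.

```python
import numpy as np

# The 2x2 Hadamard matrix, the simplest Jacket matrix: H @ H^H = 2 I,
# so the transform is invertible up to a scale factor.
H = np.array([[1, 1],
              [1, -1]], dtype=complex)

def alamouti(s1, s2):
    """Textbook 2-antenna orthogonal STBC (Alamouti) transmit matrix.
    Rows are time slots, columns are antennas; the conjugate structure
    makes the columns orthogonal for any symbol pair, which is what
    enables simple (linear) ML decoding."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])
```

Orthogonal columns mean the receiver can decouple `s1` and `s2` with linear processing; quasi-orthogonal designs like the paper's trade a little of that orthogonality for higher rate at more than two antennas.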

A New Embedded Compression Algorithm for Memory Size and Bandwidth Reduction in Wavelet Transform Applicable to JPEG2000 (JPEG2000의 웨이블릿 변환용 메모리 크기 및 대역폭 감소를 위한 새로운 Embedded Compression 알고리즘)

  • Son, Chang-Hoon;Song, Sung-Gun;Kim, Ji-Won;Park, Seong-Mo;Kim, Young-Min
    • Journal of Korea Multimedia Society
    • /
    • v.14 no.1
    • /
    • pp.94-102
    • /
    • 2011
  • To alleviate the memory size and bandwidth requirements of a JPEG2000 system, a new Embedded Compression (EC) algorithm with only a minor drop in image quality is proposed. For both random accessibility and low latency, a very simple and efficient Hadamard transform-based compression algorithm is devised. We reduced the LL intermediate memory and the code-block memory to about half their size, and achieved significant memory bandwidth reductions (about 52~73%) through the proposed multi-mode algorithms, without requiring any modification to the JPEG2000 standard algorithm.
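
A toy version of the Hadamard-based embedded-compression idea can be sketched as follows: transform a short, fixed-size group of samples and drop a fixed number of coefficients, so every group compresses to the same size and stays randomly accessible. The group length and the number of kept coefficients are illustrative; the paper's multi-mode bit allocation is more elaborate.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of the n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def ec_encode(block, keep=6, n=8):
    """Forward Hadamard transform of n samples, keeping only the first
    `keep` coefficients. The output size is fixed regardless of content,
    which is what preserves random access into the compressed memory."""
    coeff = hadamard(n) @ block
    coeff[keep:] = 0
    return coeff

def ec_decode(coeff, n=8):
    # H is symmetric and H @ H = n * I, so the inverse is H / n.
    return (hadamard(n) @ coeff) / n
```

With `keep=n` the round trip is exact; smaller `keep` trades a bounded reconstruction error for the guaranteed memory saving, matching the "minor image quality drop" in the abstract.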

Design of an Efficient Lossless CODEC for Wavelet Coefficients (웨이블릿 계수에 대한 효율적인 무손실 부호화 및 복호화기 설계)

  • Lee, Seonyoung;Cho, Kyeongsoon
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.40 no.5
    • /
    • pp.335-344
    • /
    • 2003
  • Image compression based on the discrete wavelet transform has been widely accepted in industry, since it shows no block artifacts and provides better image quality when compressed to low bits per pixel, compared to traditional JPEG. The coefficients generated by the discrete wavelet transform are quantized to reduce the number of code bits needed to represent them. After quantization, lossless coding is usually applied for further reduction. This paper presents a new and efficient lossless coding algorithm for quantized wavelet coefficients based on the statistical properties of the coefficients. Combined with the discrete wavelet transform and quantization processes, our algorithm has been implemented as an image compression chip using 0.5${\mu}m$ standard cells. The experimental results show the efficiency and performance of the resulting chip.
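
The key statistical property such lossless stages exploit is that quantized wavelet coefficients are mostly zero. A generic zero-run-length pass illustrates the idea; this is only an illustration of exploiting those statistics, not the paper's specific algorithm.

```python
def rle_encode(coeffs):
    """Encode a list of quantized coefficients as (zero_run, value) pairs,
    since long runs of zeros dominate after quantization."""
    out, run = [], 0
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            out.append((run, c))   # `run` zeros followed by a nonzero value
            run = 0
    if run:
        out.append((run, None))    # trailing zeros with no terminating value
    return out

def rle_decode(pairs):
    out = []
    for run, val in pairs:
        out.extend([0] * run)
        if val is not None:
            out.append(val)
    return out
```

In a real codec the (run, value) pairs would be further entropy-coded; the hardware appeal is that both passes are single streaming loops with trivial state.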

The Optimal Thresholding Technique for an Efficient Quadtree Segmentation (효율적인 Quadtree 분할을 위한 최적의 임계값 설정 기술)

  • Lee, Hang-Chan
    • The Transactions of the Korean Institute of Electrical Engineers A
    • /
    • v.48 no.8
    • /
    • pp.1031-1036
    • /
    • 1999
  • A hierarchical vector quantization scheme is implemented, and an optimal thresholding technique for quadtree segmentation that performs high-quality, low-bit-rate image compression is proposed. A mathematical model is constructed under the assumption that the standard deviations of sub-blocks are larger than or equal to the standard deviation of the upper-level block generated by merging the sub-blocks. This thresholding technique, based on the mathematical model, yields about 1 dB better performance in terms of PSNR over most bit-rate ranges than a quadtree coder that uses MSE for its quadtree segmentation.

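
The quadtree segmentation being thresholded can be sketched as a recursive split on block standard deviation; the fixed threshold below is an illustrative stand-in for the model-derived optimum the paper computes.

```python
import numpy as np

def quadtree(img, y, x, size, threshold, min_size=2, leaves=None):
    """Recursive quadtree segmentation: split a block into four sub-blocks
    while its standard deviation exceeds `threshold`; otherwise emit it as
    a leaf (the unit that would then be vector-quantized)."""
    if leaves is None:
        leaves = []
    block = img[y:y+size, x:x+size]
    if size <= min_size or block.std() <= threshold:
        leaves.append((y, x, size))       # homogeneous enough: stop here
        return leaves
    h = size // 2
    for dy in (0, h):
        for dx in (0, h):
            quadtree(img, y + dy, x + dx, h, threshold, min_size, leaves)
    return leaves
```

The threshold directly controls the rate/quality trade-off: flat regions collapse into large leaves (few bits), while detailed regions split down to `min_size`, which is why choosing the threshold optimally matters.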

A study on application of DCT algorithm with MVP(Multimedia Video Processor) (MVP(Multimedia Video Processor)를 이용한 DCT알고리즘 구현에 관한 연구)

  • Kim, Sang-Gi;Jeong, Jin-Hyeon
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1997.10a
    • /
    • pp.1383-1386
    • /
    • 1997
  • The discrete cosine transform (DCT) is the most popular block transform for lossy coding. The DCT is close to the statistically optimal transform, the Karhunen-Loève transform. In this paper, a module for a DCT encoder is implemented with the TMS320C80, based on JPEG and MPEG, which are international standards for image compression. The DCT encoder consists of three parts: a transformer, a vector quantizer, and an entropy encoder.

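
The transformer stage of such an encoder is the standard 8x8 DCT-II, which can be sketched as a pair of matrix multiplications; the orthonormal normalization below is one common convention, not necessarily the one used on the TMS320C80 implementation.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix: C[i, j] = s_i * cos(pi*(2j+1)*i/(2n)),
    with s_0 = 1/sqrt(n) and s_i = sqrt(2/n) otherwise, so C @ C.T = I."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(n)
    C[1:] *= np.sqrt(2 / n)
    return C

def dct2(block):
    """Separable 2D DCT: transform rows, then columns."""
    C = dct_matrix(block.shape[0])
    return C @ block @ C.T

def idct2(coeff):
    C = dct_matrix(coeff.shape[0])
    return C.T @ coeff @ C
```

In the full encoder chain of the abstract, `dct2` output would go to the vector quantizer and then to the entropy coder; the DCT itself is lossless (perfectly invertible), and all loss comes from the quantization step.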

Adaptive Block Watermarking Based on JPEG2000 DWT (JPEG2000 DWT에 기반한 적응형 블록 워터마킹 구현)

  • Lim, Se-Yoon;Choi, Jun-Rim
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.44 no.11
    • /
    • pp.101-108
    • /
    • 2007
  • In this paper, we propose and verify an adaptive block watermarking algorithm based on the JPEG2000 DWT, which determines the watermark for the original image using two scaling factors in order to overcome image degradation and the blocking problem at block edges. The adaptive block watermarking algorithm uses two scaling factors: one is calculated as the ratio of the current block average to the next block average, and the other as the ratio of the total LL subband average to each block average. The adaptive block watermark signals are obtained from the original image itself, and the strength of the watermark is automatically controlled by the image characteristics. Instead of conventional methods that use an identical watermark intensity everywhere, the proposed method uses an adaptive watermark whose intensity is controlled block by block. Thus, the adaptive block watermark improves image quality by 4$\sim$14 dB, and it is robust against attacks such as filtering, JPEG2000 compression, resizing, and cropping. We also implemented the algorithm in an ASIC using Hynix 0.25${\mu}m$ CMOS technology to integrate it into a JPEG2000 codec chip.
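
The two scaling factors described in the abstract can be sketched directly from block averages of the LL subband. The additive embedding rule, the block size, the base strength `alpha`, and the wrap-around definition of the "next" block are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def block_scaling_factors(ll, block=4):
    """Per-block factors following the abstract:
    s1 = (current block mean) / (next block mean),
    s2 = (mean of the whole LL subband) / (block mean)."""
    h, w = ll.shape
    means = ll.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    flat = means.flatten()
    nxt = np.roll(flat, -1)              # "next" block, wrapping at the end
    s1 = (flat / nxt).reshape(means.shape)
    s2 = ll.mean() / means
    return s1, s2

def embed(ll, wm, alpha=0.05, block=4):
    """Generic additive embedding with block-adaptive strength alpha*s1*s2:
    each LL block gets the watermark at an intensity set by its own
    statistics rather than a single global intensity."""
    s1, s2 = block_scaling_factors(ll, block)
    strength = alpha * s1 * s2
    big = np.kron(strength, np.ones((block, block)))   # per-pixel strength map
    return ll + big * wm
```

Because `s2` grows in dark/flat blocks and shrinks in bright/busy ones, the embedding strength automatically adapts to local content, which is the mechanism behind the abstract's quality improvement over a fixed-intensity watermark.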

A Study on Still Image Coding with the TMS320C80 (TMS320C80을 이용한 정지 영상 부호화에 관한 연구)

  • Kim, Sang-Gi;Jeong, Jin-Hyeon
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.4
    • /
    • pp.1106-1111
    • /
    • 1999
  • The discrete cosine transform (DCT) is the most popular block transform for lossy coding. The DCT is close to the statistically optimal transform, the Karhunen-Loève transform. In this paper, a module for a still image encoder is implemented with the TMS320C80, based on JPEG, which is an international standard for image compression. The still image encoder consists of three parts: a transformer, a vector quantizer, and an entropy encoder.
