• Title/Summary/Keyword: block coded image


Blocking Artifact Reduction in Block-Coded Image Using Interpolation and SAF Based on Edge Map

  • Park, Kyung-Nam;Lee, Gun-Woo;Kwon, Kee-Koo;Kim, Bong-Seok;Lee, Kuhn-Il
    • Proceedings of the IEEK Conference
    • /
    • 2002.07b
    • /
    • pp.1007-1010
    • /
    • 2002
  • In this paper, we present a new blocking artifact reduction algorithm using interpolation and a signal adaptive filter (SAF) based on an edge map. Block-based coding, such as JPEG and MPEG, is the most popular image compression approach; however, at high compression ratios it produces noticeable blocking and ringing artifacts in the decoded image. In the proposed method, every block is first classified as a low- or high-frequency block, and an edge map is obtained by applying the Sobel operator to the decoded image. The artifact reduction step then depends on the block type: for low-frequency blocks, interpolation is performed using the four neighboring blocks, guided by the edge map, while ringing artifacts in high-frequency blocks are removed by applying a signal adaptive filter around the edges identified in the edge map. Computer simulation results confirm that the proposed method performs better in both subjective and objective image quality. (A schematic sketch of the classification and edge-map steps follows this entry.)
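
The block-classification and Sobel edge-map steps described above can be illustrated with a short NumPy sketch. This is not the authors' implementation; the 8x8 block size, the variance threshold, and the gradient threshold are illustrative assumptions.

    import numpy as np

    def sobel_edge_map(img, grad_thresh=64.0):
        """Binary edge map of a grayscale image using the Sobel operator."""
        kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
        ky = kx.T
        pad = np.pad(img.astype(float), 1, mode='edge')
        gx = np.zeros(img.shape)
        gy = np.zeros(img.shape)
        for i in range(3):
            for j in range(3):
                patch = pad[i:i + img.shape[0], j:j + img.shape[1]]
                gx += kx[i, j] * patch
                gy += ky[i, j] * patch
        return np.hypot(gx, gy) > grad_thresh

    def classify_blocks(img, block=8, var_thresh=50.0):
        """Label each block as 'low' (smooth) or 'high' (detailed) frequency."""
        labels = {}
        for y in range(0, img.shape[0] - block + 1, block):
            for x in range(0, img.shape[1] - block + 1, block):
                blk = img[y:y + block, x:x + block].astype(float)
                labels[(y, x)] = 'high' if blk.var() > var_thresh else 'low'
        return labels

    # Stand-in for a decoded frame; low-frequency blocks would then be interpolated
    # and high-frequency blocks filtered around the detected edges.
    frame = np.random.randint(0, 256, (64, 64))
    edges = sobel_edge_map(frame)
    labels = classify_blocks(frame)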


Zerotree Entropy Based Coding of Stereo Video Sequences

  • Thanapirom, S.;Fernando, W.A.C.;Edirisinghe, E.A.
    • Proceedings of the IEEK Conference
    • /
    • 2002.07b
    • /
    • pp.908-911
    • /
    • 2002
  • Over the past 30 years, many efficient 2D video coding techniques have been developed by research centers and commercialized. However, directly applying these monocular compression schemes is not optimal for stereo video coding. In this paper, we present a new technique for coding stereo video sequences based on the Discrete Wavelet Transform (DWT). The proposed technique exploits Zerotree Entropy coding (ZTE), which uses the wavelet-block concept to achieve low-bit-rate stereo video coding. One of the two image streams, called the main stream, is coded independently by a modified MPEG-4 encoder; the other, called the auxiliary stream, is coded by prediction from its corresponding main-stream image, its previous image, or its following image. (A toy sketch of this per-block reference selection follows this entry.)
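
A toy sketch of the per-block reference selection implied by the last sentence: each block of the auxiliary stream picks, by smallest SAD, whichever candidate it is predicted from — the corresponding main-stream frame, the previous auxiliary frame, or the following auxiliary frame. The 16x16 block size and the absence of an actual motion/disparity search are simplifications, not details from the paper.

    import numpy as np

    def best_reference(block, candidates):
        """Index of the candidate block with the smallest sum of absolute differences."""
        sads = [np.abs(block.astype(float) - c.astype(float)).sum() for c in candidates]
        return int(np.argmin(sads))

    def select_references(aux, main, prev_aux, next_aux, block=16):
        """Per-block prediction source for the auxiliary frame:
        0 = corresponding main-stream frame, 1 = previous, 2 = following auxiliary frame."""
        choice = {}
        for y in range(0, aux.shape[0] - block + 1, block):
            for x in range(0, aux.shape[1] - block + 1, block):
                sl = (slice(y, y + block), slice(x, x + block))
                choice[(y, x)] = best_reference(aux[sl],
                                                [main[sl], prev_aux[sl], next_aux[sl]])
        return choice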


Block Loss Recovery Using Fractal Extrapolation for Fractal Coded Images (프랙탈 외삽을 이용한 프랙탈 부호화 영상에서의 블록 손실 복구)

  • 노윤호;소현주;김상현;김남철
    • Journal of Broadcast Engineering
    • /
    • v.4 no.1
    • /
    • pp.76-85
    • /
    • 1999
  • The degradation of image quality due to block loss is more serious in fractal coded images than in DCT coded images, because errors propagate through the mappings that reference the lost blocks. Therefore, a new algorithm is presented for recovering blocks of Jacquin fractal coded images that are lost during transmission over a lossy network such as an ATM network. The Jacquin fractal code is divided into two layers, a header code and a main code, according to its importance. The key technique of the proposed BLRA (block loss recovery algorithm) is a fractal extrapolation that estimates the lost pixels by using the contractive mapping parameters of the neighboring range blocks whose characteristics are similar to those of the lost block. The proposed BLRA is applied to the lost blocks within the decoding iteration. Experimental results show that the proposed BLRA yields excellent performance in PSNR as well as in subjective quality. (A heavily simplified sketch of the contractive-map reuse follows this entry.)
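
A heavily simplified sketch of the contractive-map reuse at the core of the BLRA: during the decoding iteration, a lost range block is rebuilt by applying the affine map (scale s, offset o, domain-block position) borrowed from the most similar neighboring range block. The 16-to-8 block geometry, the similarity measure, the omission of isometries, and the lost_mean side information (standing in for the more strongly protected header layer) are all assumptions of this sketch.

    import numpy as np

    def shrink(domain_block):
        """2x2 averaging: map a 16x16 domain block onto an 8x8 range block."""
        d = domain_block.astype(float)
        return 0.25 * (d[0::2, 0::2] + d[1::2, 0::2] + d[0::2, 1::2] + d[1::2, 1::2])

    def apply_map(image, code):
        """One Jacquin-style affine contractive map: range = s * shrink(domain) + o."""
        dy, dx, s, o = code['dy'], code['dx'], code['s'], code['o']
        return s * shrink(image[dy:dy + 16, dx:dx + 16]) + o

    def recover_lost_block(image, neighbor_codes, neighbor_blocks, lost_mean):
        """Fractal extrapolation (sketch): reuse the map of the neighboring range block
        whose mean is closest to an estimate of the lost block's mean (side information)."""
        k = int(np.argmin([abs(b.mean() - lost_mean) for b in neighbor_blocks]))
        return apply_map(image, neighbor_codes[k])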


Postprocessing of Inter-Frame Coded Images Based on Convex Projection and Regularization (POCS와 정규화를 기반으로 한 프레임간 압축 영상의 후처리)

  • Kim, Seong-Jin;Jeong, Si-Chang;Hwang, In-Gyeong;Baek, Jun-Gi
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.39 no.3
    • /
    • pp.58-65
    • /
    • 2002
  • In order to reduce blocking artifacts in inter-frame coded images, we propose a new image restoration algorithm that directly processes differential images before reconstruction. We note that blocking artifacts in inter-frame coded images are caused both by the 8$\times$8 DCT and by 16$\times$16 macroblock-based motion compensation, whereas those in intra-coded images are caused by the 8$\times$8 DCT only. Based on this observation, we propose a new degradation model for differential images and a corresponding restoration algorithm that uses additional constraints and convex sets for discontinuities inside blocks. The proposed restoration algorithm is a modified version of standard regularization that incorporates spatially adaptive lowpass filtering, taking edge directions into account by utilizing a subset of the DCT coefficients. Most video coding standards adopt a hybrid structure of block-based motion compensation and the block discrete cosine transform (BDCT); for this reason, blocking artifacts occur both on block boundaries and in block interiors. For more complete removal of both kinds of artifacts, the restored differential image must satisfy two constraints describing the directional discontinuities on block boundaries and inside blocks, and these constraints are used to define the convex sets for restoring the differential images. (A simplified smoothing-and-projection sketch follows this entry.)
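
The following sketch illustrates one smoothing-plus-projection iteration of the kind described above, applied directly to a differential frame: a lowpass step across the 8-pixel block boundaries followed by projection onto a simple convex set that keeps the result close to the decoded data. The filter, the drift bound, and the single constraint set are illustrative assumptions; the paper's edge-direction-adaptive filter and block-interior constraints are not reproduced.

    import numpy as np

    def smooth_block_boundaries(diff, strength=0.5, block=8):
        """Lowpass the pixel pairs straddling every 8x8 block boundary."""
        out = diff.astype(float).copy()
        h, w = out.shape
        for x in range(block, w, block):                # vertical boundaries
            avg = 0.5 * (out[:, x - 1] + out[:, x])
            out[:, x - 1] = (1 - strength) * out[:, x - 1] + strength * avg
            out[:, x] = (1 - strength) * out[:, x] + strength * avg
        for y in range(block, h, block):                # horizontal boundaries
            avg = 0.5 * (out[y - 1, :] + out[y, :])
            out[y - 1, :] = (1 - strength) * out[y - 1, :] + strength * avg
            out[y, :] = (1 - strength) * out[y, :] + strength * avg
        return out

    def project_near_observation(x, observed, radius=2.0):
        """Projection onto the convex set {x : |x - observed| <= radius}, elementwise."""
        return np.clip(x, observed - radius, observed + radius)

    def postprocess_differential(diff, iters=5):
        """Alternate smoothing and projection on the differential image."""
        x = diff.astype(float)
        for _ in range(iters):
            x = project_near_observation(smooth_block_boundaries(x), diff)
        return x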

Design of an Efficient VLSI Architecture and Verification using FPGA-implementation for HMM(Hidden Markov Model)-based Robust and Real-time Lip Reading (HMM(Hidden Markov Model) 기반의 견고한 실시간 립리딩을 위한 효율적인 VLSI 구조 설계 및 FPGA 구현을 이용한 검증)

  • Lee Chi-Geun;Kim Myung-Hun;Lee Sang-Seol;Jung Sung-Tae
    • Journal of the Korea Society of Computer and Information
    • /
    • v.11 no.2 s.40
    • /
    • pp.159-167
    • /
    • 2006
  • Lipreading has been suggested as one way to improve the performance of speech recognition in noisy environments. However, existing methods have been developed and implemented only in software. This paper presents a hardware design for real-time lipreading. For real-time processing and a feasible implementation, we decompose the lipreading system into three parts: an image acquisition module, a feature vector extraction module, and a recognition module. The image acquisition module captures the input image with a CMOS image sensor. The feature vector extraction module extracts a feature vector from the input image using a parallel block matching algorithm, which is coded and simulated as an FPGA circuit. The recognition module uses an HMM-based recognition algorithm, which is coded and simulated on a DSP chip. The simulation results show that a real-time lipreading system can be implemented in hardware. (A textbook sketch of the HMM decoding step follows this entry.)
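
The recognition module is HMM-based; below is a textbook log-domain Viterbi decoder of the kind such a module would evaluate against each word model. It is a generic software sketch, not the paper's DSP/FPGA architecture, and the model dimensions are placeholders.

    import numpy as np

    def viterbi_log(log_pi, log_A, log_B, obs):
        """Most likely state path and its log-probability for a discrete-output HMM.
        log_pi: (N,) initial log-probs, log_A: (N, N) transition log-probs,
        log_B: (N, M) emission log-probs, obs: sequence of observation symbol indices."""
        N, T = log_pi.shape[0], len(obs)
        delta = np.full((T, N), -np.inf)       # best path score ending in each state
        psi = np.zeros((T, N), dtype=int)      # back-pointers
        delta[0] = log_pi + log_B[:, obs[0]]
        for t in range(1, T):
            scores = delta[t - 1][:, None] + log_A     # scores[i, j]: from state i to state j
            psi[t] = scores.argmax(axis=0)
            delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
        path = [int(delta[-1].argmax())]
        for t in range(T - 1, 0, -1):
            path.append(int(psi[t][path[-1]]))
        return path[::-1], float(delta[-1].max())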


Post-Processing for JPEG-Coded Image Deblocking via Sparse Representation and Adaptive Residual Threshold

  • Wang, Liping;Zhou, Xiao;Wang, Chengyou;Jiang, Baochen
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.3
    • /
    • pp.1700-1721
    • /
    • 2017
  • The problem of blocking artifacts is very common in block-based image and video compression, especially at very low bit rates. In this paper, we propose a post-processing method for deblocking JPEG-coded images via sparse representation and an adaptive residual threshold. The method includes three steps. First, we obtain the dictionary from the compressed images by online dictionary learning; the dictionary is then modified using the histogram of oriented gradients (HOG) feature descriptor and K-means clustering. Second, an adaptive residual threshold for orthogonal matching pursuit (OMP) is proposed and used for sparse coding, in combination with blind image blocking assessment. Finally, to take advantage of the human visual system (HVS), the edge regions of the deblocked image are further corrected using the edge regions of the compressed image. The experimental results show that the proposed method preserves more texture and edge information while reducing blocking artifacts. (A minimal OMP sketch with a residual-threshold stopping rule follows this entry.)
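
The sparse-coding step relies on orthogonal matching pursuit stopped by a residual threshold. The NumPy sketch below shows a plain OMP with that threshold passed in as a parameter; how the paper derives the threshold adaptively from its blind blocking assessment is not reproduced here.

    import numpy as np

    def omp(D, y, residual_thresh, max_atoms=16):
        """Orthogonal matching pursuit: find a sparse x with y ~ D @ x.
        D: (n, K) dictionary with unit-norm columns, y: (n,) signal patch.
        Stops once the l2 norm of the residual falls below residual_thresh."""
        residual = y.astype(float)
        support = []
        coeffs = np.zeros(0)
        while np.linalg.norm(residual) > residual_thresh and len(support) < max_atoms:
            k = int(np.argmax(np.abs(D.T @ residual)))   # atom best correlated with residual
            if k in support:                              # no new atom helps: stop
                break
            support.append(k)
            coeffs, *_ = np.linalg.lstsq(D[:, support], y.astype(float), rcond=None)
            residual = y.astype(float) - D[:, support] @ coeffs
        x = np.zeros(D.shape[1])
        x[support] = coeffs
        return x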

Post-processing Technique based on POCS for visual Enhancement (POCS를 이용한 효과적인 블록 현상 제거 기법)

  • Kim, Yoon;Jung, Jae-Han;Kim, Jae-Won;Ko, Sung-Jea
    • Proceedings of the IEEK Conference
    • /
    • 2001.09a
    • /
    • pp.755-758
    • /
    • 2001
  • In this paper, we propose a postprocessing technique based on the theory of projection onto convex sets (POCS) to reduce the blocking artifacts in decoded HDTV images. In the BDCT used for HDTV, the image is divided into a grid of non-overlapping 8 ${\times}$ 8 blocks and each block is coded separately. A block located one pixel off the BDCT grid will span the boundary of the original 8 ${\times}$ 8 blocks. If a blocking artifact is introduced along that block boundary, this block will have different frequency characteristics from those of the grid-aligned block. Thus, comparing the frequency characteristics of these two overlapping blocks can detect the undesired high-frequency components caused mainly by the blocking artifact. By eliminating these undesired high-frequency components adaptively, a robust smoothing projection operator can be obtained. Simulation results with real image sequences indicate that the proposed method performs better than conventional algorithms. (A simplified DCT-domain clipping sketch follows this entry.)
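
A simplified DCT-domain sketch of the idea above, assuming SciPy is available: the 8x8 block shifted one pixel off the grid (so that it spans a block boundary) is compared with the two grid-aligned blocks it overlaps, and its high-frequency DCT coefficients are clipped to the larger of the two aligned-block magnitudes. The clipping rule and the frequency cut-off are assumptions, not the paper's exact projection operator.

    import numpy as np
    from scipy.fft import dctn, idctn    # assumes SciPy is available

    def clip_shifted_block(img, row, col, block=8, low=2):
        """Smooth one vertical block boundary: (row, col) is a grid-aligned block origin,
        and the boundary treated is the one at col + block."""
        out = img.astype(float).copy()
        shifted = out[row:row + block, col + 1:col + 1 + block]    # straddles the boundary
        aligned_l = out[row:row + block, col:col + block]
        aligned_r = out[row:row + block, col + block:col + 2 * block]

        S = dctn(shifted, norm='ortho')
        bound = np.maximum(np.abs(dctn(aligned_l, norm='ortho')),
                           np.abs(dctn(aligned_r, norm='ortho')))
        u, v = np.meshgrid(np.arange(block), np.arange(block), indexing='ij')
        hf = (u + v) >= low                                         # high-frequency positions
        S[hf] = np.sign(S[hf]) * np.minimum(np.abs(S[hf]), bound[hf])

        out[row:row + block, col + 1:col + 1 + block] = idctn(S, norm='ortho')
        return out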


Multiresolution Wavelet-Based Disparity Estimation for Stereo Image Compression

  • Tengcharoen, Chompoonuch;Varakulsiripunth, Ruttikorn
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings
    • /
    • 2004.08a
    • /
    • pp.1098-1101
    • /
    • 2004
  • An ordinary stereo image of an object consists of left-view and right-view data. The left and right image pairs therefore have to be transmitted simultaneously to display 3-dimensional video at the remote site. However, since this is twice the data of a monoscopic image of the same object, it needs to be compressed for fast transmission and resource saving, which calls for an effective stereo image coding algorithm. It has been found that compressing the left and right frames independently achieves a lower compression ratio than exploiting the redundancy between the two frames. Therefore, in this paper we study a stereo image compression technique based on the multiresolution wavelet transform that uses a varied disparity-block size for estimation and compensation: the disparity-block size in the stereo-pair subbands is scaled following a coarse-to-fine strategy over the wavelet coefficients. Finally, the reference left image and the residual right image obtained after disparity estimation and compensation are coded with SPIHT coding. The method demonstrates good performance for stereo images in both PSNR and visual quality. (A simplified coarse-to-fine disparity-search sketch follows this entry.)
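
A simplified coarse-to-fine disparity search in the spirit of the scaling strategy described above. To stay self-contained it uses 2x2 averaging as a stand-in for the wavelet approximation subband, a purely horizontal search, and fixed block sizes; these and the search range are assumptions of the sketch.

    import numpy as np

    def downsample(img):
        """2x2 averaging, a stand-in for the coarse wavelet approximation subband."""
        d = img.astype(float)
        return 0.25 * (d[0::2, 0::2] + d[1::2, 0::2] + d[0::2, 1::2] + d[1::2, 1::2])

    def block_disparity(left, right, y, x, block, d0, search):
        """Best horizontal disparity for the right-image block at (y, x), near guess d0."""
        ref = right[y:y + block, x:x + block]
        best_d, best_cost = d0, np.inf
        for d in range(d0 - search, d0 + search + 1):
            if x + d < 0 or x + d + block > left.shape[1]:
                continue
            cost = np.abs(left[y:y + block, x + d:x + d + block] - ref).sum()
            if cost < best_cost:
                best_cost, best_d = cost, d
        return best_d

    def coarse_to_fine_disparity(left, right, block=8, search=4):
        """Estimate each block's disparity at the coarse level, then refine at full resolution."""
        L, R = left.astype(float), right.astype(float)
        L1, R1 = downsample(L), downsample(R)
        disparities = {}
        for y in range(0, R.shape[0] - block + 1, block):
            for x in range(0, R.shape[1] - block + 1, block):
                d_coarse = block_disparity(L1, R1, y // 2, x // 2, block // 2, 0, search)
                disparities[(y, x)] = block_disparity(L, R, y, x, block, 2 * d_coarse, search)
        return disparities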


A New Proposal of Adaptive BTC for Image Data Compression (畵像壓縮을 위한 適應 BTC 方法의 提案)

  • Jang, Ki-Soong;Oh, Seong-Mock;Lee, Young-Choul
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.26 no.7
    • /
    • pp.125-131
    • /
    • 1989
  • This paper proposes a new ABTC (Adaptive Block Truncation Coding) algorithm that improves on the BTC algorithm for image data compression. The adaptive block truncation coding adopts a selective coding scheme depending on the local characteristics of the image. The characteristics of the ABTC algorithm can be summarized as a high compression ratio and algorithmic simplicity. Using this algorithm, color images can be coded at a variable bit rate from 1.0 bit/pel to 2.56 bit/pel, and high compression (1.3-105 bit/pel) can be achieved without conspicuous degradation compared with the original images. (A minimal sketch of selective mean-only/BTC coding follows this entry.)
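
The selective scheme can be illustrated with standard moment-preserving BTC plus a simple activity test: low-activity blocks are represented by their mean alone, high-activity blocks by the usual bitmap and two reconstruction levels. The block size and activity threshold below are assumptions, not the values used in the paper.

    import numpy as np

    def btc_block(blk):
        """Moment-preserving BTC of one block: bitmap plus two reconstruction levels."""
        blk = blk.astype(float)
        mean, std = blk.mean(), blk.std()
        bitmap = blk >= mean
        q, m = int(bitmap.sum()), blk.size
        if q in (0, m):                               # flat block: its mean suffices
            return bitmap, mean, mean
        a = mean - std * np.sqrt(q / (m - q))         # level for pixels below the mean
        b = mean + std * np.sqrt((m - q) / q)         # level for pixels at or above the mean
        return bitmap, a, b

    def adaptive_btc(img, block=4, activity_thresh=4.0):
        """ABTC-style selective coding (sketch), returning the reconstructed image."""
        h, w = img.shape
        recon = np.zeros((h, w))
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                blk = img[y:y + block, x:x + block].astype(float)
                if blk.std() < activity_thresh:       # smooth block: mean only
                    recon[y:y + block, x:x + block] = blk.mean()
                else:                                  # detailed block: full BTC
                    bitmap, a, b = btc_block(blk)
                    recon[y:y + block, x:x + block] = np.where(bitmap, b, a)
        return recon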


Frame Rate up-conversion Algorithm using Adaptive Overlapped Block Motion Compensation (적응적 중첩 블록 움직임 보상을 이용한 프레임 율 향상 알고리즘)

  • Lee, Kangjun
    • Journal of Broadcast Engineering
    • /
    • v.24 no.5
    • /
    • pp.785-790
    • /
    • 2019
  • In this paper, a new bilateral frame rate up-conversion algorithm using adaptive overlapped block motion compensation (OBMC) is proposed. The adaptive OBMC is driven by the motion complexity of the reference region. Because the motion complexity is determined from the size of the previously coded motion estimation predictions, the OBMC method can be selected without any additional computational complexity. Experimental results show that the proposed algorithm provides better image quality than conventional methods, both objectively and subjectively. (A simplified adaptive-OBMC blending sketch follows this entry.)
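
A simplified sketch of the adaptive blending step: the interpolated block is first predicted bilaterally from the previous and following frames, and the overlapped blending with the neighbors' motion vectors is applied only when the motion complexity (taken here as the mean magnitude of the already-available neighboring motion vectors) exceeds a threshold. The weights, the threshold, and the complexity measure are assumptions; interior blocks with small motions are assumed so that all slices stay inside the frames.

    import numpy as np

    def bilateral_prediction(prev, nxt, y, x, mv, block=8):
        """Bilateral prediction: average of the block displaced by +mv/2 in the previous
        frame and by -mv/2 in the following frame."""
        dy, dx = mv[0] // 2, mv[1] // 2
        p = prev[y + dy:y + dy + block, x + dx:x + dx + block].astype(float)
        n = nxt[y - dy:y - dy + block, x - dx:x - dx + block].astype(float)
        return 0.5 * (p + n)

    def adaptive_obmc_block(prev, nxt, y, x, mv, neighbor_mvs,
                            block=8, complexity_thresh=2.0):
        """Blend in the neighbors' predictions only when the local motion is complex."""
        own = bilateral_prediction(prev, nxt, y, x, mv, block)
        complexity = (np.mean([np.hypot(v[0], v[1]) for v in neighbor_mvs])
                      if neighbor_mvs else 0.0)
        if complexity <= complexity_thresh:
            return own                                   # simple motion: plain bilateral MC
        preds = [bilateral_prediction(prev, nxt, y, x, v, block) for v in neighbor_mvs]
        return 0.5 * own + 0.5 * np.mean(preds, axis=0)  # overlapped blending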