• Title/Summary/Keyword: Block-based Image Compression

Search Results: 179

Block Based Efficient JPEG Encoding Algorithm for HDR Images (블록별 양자화를 이용한 HDR 영상의 효율적인 JPEG 압축 기법)

  • Lee, Chul;Kim, Chang-Su
    • Journal of IKEEE / v.11 no.4 / pp.219-226 / 2007
  • An efficient block-based two-layer JPEG encoding algorithm is proposed to compress high dynamic range (HDR) images. The algorithm separates an input HDR image into a tone-mapped low dynamic range (LDR) image and a ratio image, which stores the quotient of each original HDR pixel divided by the corresponding tone-mapped LDR pixel. The tone-mapped LDR image is compressed with the standard JPEG scheme to preserve backward compatibility, while the ratio image is encoded with per-block quantization parameters chosen to minimize a cost function that models how the human visual system (HVS) perceives each block. Simulation results show that the proposed algorithm outperforms the conventional method, which encodes the ratio image without any per-block prior information. (A sketch of the two-layer separation follows this entry.)

  • PDF
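
A minimal Python sketch of the two-layer separation described above, assuming a simple global log tone-mapping operator as a placeholder; the paper's actual tone-mapping operator and its per-block HVS-driven quantization of the ratio image are not reproduced here:

```python
import numpy as np

def two_layer_separation(hdr, eps=1e-6):
    """Split an HDR image into a JPEG-compressible LDR layer and a ratio
    layer (original HDR pixel / tone-mapped LDR pixel)."""
    lum = np.maximum(np.asarray(hdr, dtype=np.float64), eps)
    # Placeholder tone mapping: global log compression rescaled to 1..255.
    log_l = np.log(lum)
    span = max(log_l.max() - log_l.min(), eps)
    ldr = np.round(1.0 + 254.0 * (log_l - log_l.min()) / span)
    ratio = lum / ldr            # the second layer the paper encodes per block
    return ldr.astype(np.uint8), ratio

def reconstruct_hdr(ldr, ratio):
    """Decoder side: HDR ~= LDR * ratio (exact before quantization loss)."""
    return ldr.astype(np.float64) * ratio
```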

Fast Disparity Vector Estimation using Motion vector in Stereo Image Coding (스테레오 영상에서 움직임 벡터를 이용한 고속 변이 벡터 추정)

  • Doh, Nam-Keum;Kim, Tae-Yong
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.5 / pp.56-65 / 2009
  • Stereoscopic images consist of a left image and a right image, so they carry much more data than a single image and require efficient compression. Most video coding standards use DPCM-based predictive coding, which requires motion and disparity estimation, and both are typically performed with a block matching algorithm. The baseline block matching method, full search, finds the optimal block by comparing the base block against every block in the search area; it is the most accurate but carries a very large computational load. In this paper, we propose a fast disparity estimation algorithm for stereo image coding that uses the motion and disparity vector information of the prior frame. The search area is reduced by exploiting the global disparity vector, and the computational load is decreased by limiting the search points with the motion and disparity vectors of the prior frame. Experimental results show that the proposed algorithm performs better on simple image sequences than on complex ones, indicating that fast disparity vector estimation with reduced computational complexity is feasible for simple sequences.
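
For reference, a sketch of the full search baseline that the paper accelerates; the proposed method effectively shrinks the search window and candidate set using prior-frame vectors. The function names and the SAD matching criterion below are illustrative assumptions:

```python
import numpy as np

def full_search(base_block, ref_frame, top, left, search_range=7):
    """Exhaustive block matching: test every displacement in a
    (2R+1) x (2R+1) window and keep the minimum-SAD candidate."""
    n = base_block.shape[0]
    base = base_block.astype(np.int32)
    best_sad, best_vec = np.inf, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= ref_frame.shape[0] - n and 0 <= x <= ref_frame.shape[1] - n:
                sad = np.abs(ref_frame[y:y+n, x:x+n].astype(np.int32) - base).sum()
                if sad < best_sad:
                    best_sad, best_vec = sad, (dy, dx)
    return best_vec, best_sad
```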

Method of Lossless Image Compression Using Hybrid Bitplane Coding (비트평면 혼합 코딩을 이용한 무손실 이미지 압축방법)

  • Moon, Young-Ho;Choi, Jong-Bum;Sim, Woo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.10C / pp.961-967 / 2009
  • In this paper, a lossless compression method is proposed for the 8-bit bitplanes of an input image. The lower bitplanes compress poorly because their pixel patterns are irregular. To overcome this, a hybrid coding method is proposed that combines block-based and bit-based lossless compression, drawing on H.264 and JBIG. First, to exploit the characteristics of the bitplanes, the 8 bitplanes are separated into the upper 4 and the lower 4. The upper 4 bitplanes, which have strong inter-bit correlation, are compressed with JBIG. The lower 4 bitplanes are coded with an improved method based on H.264 lossless prediction, after a pre-processing step that converts their irregular pixel-value distribution into a more regular one. The proposed method was tested on various test images; compared with JBIG on 8-bit printer images, it gains 19% on average overall and 11% on average on the lower 4-bit images.
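
A short sketch of the bitplane split (and its lossless inverse) that the hybrid scheme starts from; the JBIG and H.264-prediction coders applied to each half are not reproduced:

```python
import numpy as np

def split_bitplanes(img):
    """Split an 8-bit grayscale image into its 8 binary bitplanes and
    group them as the paper does: lower 4 (irregular) and upper 4
    (well correlated)."""
    img = np.asarray(img, dtype=np.uint8)
    planes = [(img >> b) & 1 for b in range(8)]   # index 0 = LSB
    return planes[:4], planes[4:]                 # (lower, upper)

def merge_bitplanes(lower, upper):
    """Lossless inverse: recombine the planes into the original image."""
    return sum(p.astype(np.uint8) << b for b, p in enumerate(lower + upper))
```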

Cloudy Area Detection in Satellite Image using K-Means & GHA (K-Means 와 GHA를 이용한 위성영상 구름영역 검출)

  • 서석배;김종우;최해진
    • Proceedings of the IEEK Conference / 2003.11a / pp.405-408 / 2003
  • This paper proposes a new algorithm for cloudy area detection using K-Means and the Generalized Hebbian Algorithm (GHA). K-Means is a simple clustering algorithm, and the GHA is an unsupervised neural network used for data compression and pattern classification. The proposed algorithm operates on 16×16 blocks of the image. Experimental results show good cloudy-area detection performance, except for blurred cloud regions. (A block-clustering sketch follows this entry.)

  • PDF
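
A minimal sketch of block-based clustering in the spirit of this paper: per-block features are clustered with K-Means. The mean/variance features, the two-cluster setting, and the cloud labeling below are illustrative assumptions, and the GHA stage is omitted:

```python
import numpy as np

def block_features(img, n=16):
    """Mean and standard deviation of each non-overlapping n x n block."""
    h, w = (s - s % n for s in img.shape)
    blocks = img[:h, :w].reshape(h // n, n, w // n, n).swapaxes(1, 2)
    blocks = blocks.reshape(-1, n * n).astype(np.float64)
    return np.stack([blocks.mean(1), blocks.std(1)], axis=1)

def kmeans(x, k=2, iters=50, seed=0):
    """Plain Lloyd's algorithm; clouds tend to fall in the brighter cluster."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.stack([x[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels
```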

Blocking artifacts reduction for improving visual quality of highly compressed images (압축영상의 화질향상을 위한 블록킹 현상 제거에 관한 연구)

  • 이주홍;김민구;정제창;최병욱
    • The Journal of Korean Institute of Communications and Information Sciences / v.22 no.8 / pp.1677-1690 / 1997
  • Block-transform coding is one of the most popular approaches to image compression; for example, the DCT is widely used in international standards such as MPEG-1, MPEG-2, JPEG, and H.261. In block-based transform coding, blocking artifacts may appear along block boundaries and can cause severe image degradation, especially when the transform coefficients are coarsely quantized. In this paper, we propose a new method for reducing blocking artifacts in transform-coded images. On a block basis, we add a correction term composed of a linear combination of 28 basis images that are orthonormal on the block boundaries. We select 28 DCT kernel functions whose boundary values are linearly independent and apply the Gram-Schmidt process to those boundary values to obtain 28 boundary-orthonormal basis images. A threshold on block discontinuity is introduced to improve visual quality by reducing image blurring. We also investigate how many basis images are needed for efficient blocking artifact reduction as the compression ratio changes. (A Gram-Schmidt sketch follows this entry.)

  • PDF
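
The boundary-orthonormal basis construction rests on the Gram-Schmidt process; a generic sketch is below. Applying it as the paper does only requires stacking the boundary values of the 28 selected DCT kernels as the input rows:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a sequence of vectors (rows) via classical
    Gram-Schmidt, skipping linearly dependent inputs."""
    basis = []
    for v in np.asarray(vectors, dtype=np.float64):
        w = v - sum(np.dot(v, b) * b for b in basis)
        norm = np.linalg.norm(w)
        if norm > 1e-10:
            basis.append(w / norm)
    return np.array(basis)
```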

A Research on the Vector Search Algorithm for the PIV Flow Analysis of image data with large dynamic range (입자의 이동거리가 큰 영상데이터의 PIV 유동 해석을 위한 속도벡터 추적 알고리즘의 연구)

  • Kim Sung Kyun
    • Proceedings of the Korean Society of Computational Fluids Engineering Conference / 1998.11a / pp.13-18 / 1998
  • Practical use of particle image velocimetry (PIV), a whole-field velocity measurement method, requires fast, reliable, computer-based methods for tracking velocity vectors. Full search block matching, the most widely studied and applied technique in both PIV and image coding/compression, is computationally costly, and many cheaper alternatives have been proposed, mostly in the image coding literature. Among others, TSS, NTSS, and HPM have been introduced for fast PIV analysis and found to be successful. However, these algorithms assume a small dynamic range, with a maximum displacement of 7 pixels/frame. To analyze images with large displacements, this paper introduces even/odd field image separation and a simple multi-resolution hierarchical procedure. Comparisons with other algorithms are summarized, and results for a turbulent backward-facing step flow show the improvement of the new algorithm. (A three-step search sketch follows this entry.)

  • PDF
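
TSS, one of the fast block matching methods the paper starts from, can be sketched briefly; its default step pattern is what limits the range to about 7 pixels/frame, the limitation the paper's hierarchical procedure lifts. Names and the SAD criterion are illustrative assumptions:

```python
import numpy as np

def sad(a, b):
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def three_step_search(block, ref, top, left, step=4):
    """TSS: test a 3x3 pattern of displacements, recenter on the best,
    halve the step, and repeat; covers roughly +/-7 px at the default step."""
    n = block.shape[0]
    cy = cx = 0
    while step >= 1:
        best = (np.inf, cy, cx)
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                y, x = top + cy + dy, left + cx + dx
                if 0 <= y <= ref.shape[0] - n and 0 <= x <= ref.shape[1] - n:
                    best = min(best, (sad(block, ref[y:y+n, x:x+n]), cy + dy, cx + dx))
        _, cy, cx = best
        step //= 2
    return cy, cx   # displacement of the best-matching block
```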

APBT-JPEG Image Coding Based on GPU

  • Wang, Chengyou;Shan, Rongyang;Zhou, Xiao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.4 / pp.1457-1470 / 2015
  • In wireless multimedia sensor networks (WMSN), transmission latency is an increasingly serious problem. As resolution improves, the time spent on image and video compression grows, which seriously affects the real-time performance of WMSN. The core of the JPEG system is the DCT, but DCT-based JPEG is not the best choice: block-based DCT coding suffers from serious blocking artifacts when the image is highly compressed at low bit rates. This paper uses the all phase biorthogonal transform (APBT) to address that problem, but APBT has no fast algorithm. We therefore analyze the structure of JPEG and propose a parallel framework that accelerates the JPEG pipeline on the GPU, replacing the discrete cosine transform (DCT) with the APBT for better reconstructed-image quality. The resulting parallel APBT-JPEG addresses both the real-time requirement of WMSN and the blocking artifacts of DCT-JPEG. The parallel algorithm is designed with NVIDIA's CUDA toolkit. Experimental results show that, even on a very low-end GPU, the parallel APBT-JPEG achieves a speedup of more than 100 times over conventional serial APBT-JPEG, and the reconstructed images outperform DCT-JPEG in both objective quality and subjective appearance. The proposed GPU-parallel APBT can also be applied to image compression, video compression, edge detection, and other image processing tasks.
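
The data-parallel structure being exploited can be illustrated in a few lines; numpy's batched matrix product stands in for the CUDA kernel (one image block per GPU thread block), and an orthonormal DCT matrix stands in for the 8×8 APBT matrix, whose coefficients are not reproduced here:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix; used here as a stand-in for the 8x8
    APBT matrix."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

def transform_all_blocks(img, t):
    """Apply T A T^T to every 8x8 block with one batched product; this is
    the per-block independence a CUDA kernel maps onto thread blocks."""
    h, w = (s - s % 8 for s in img.shape)
    blocks = img[:h, :w].reshape(h // 8, 8, w // 8, 8).swapaxes(1, 2)
    return t @ blocks.astype(np.float64) @ t.T
```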

Determination of the Proper Block Size for Estimating the Fractal Dimension (프락탈 디멘션을 근사하기 위한 적당한 브록 크기 결정에 관한 연구)

  • Jang, Jong-Hwan
    • The Journal of Natural Sciences / v.7 / pp.67-73 / 1995
  • In this paper, a new texture segmentation-based image coding technique is presented that performs segmentation according to properties of the human visual system (HVS). It addresses the problems of segmentation-based coding with constant segments by segmenting an image into texturally homogeneous regions with respect to the degree of roughness as perceived by the HVS. Segmentation is accomplished by thresholding the fractal dimension, classifying textural regions into three classes: perceived constant intensity, smooth texture, and rough texture. Choosing the proper block size for estimating the fractal dimension is critical to this scheme. Good-quality reconstructed images are obtained at about 0.1 to 0.25 bits per pixel (bpp) for many different types of imagery. (A box-counting sketch follows this entry.)

  • PDF
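
A sketch of a per-block fractal dimension estimate; differential box-counting is assumed here, since the abstract does not specify the estimator:

```python
import numpy as np

def fractal_dimension(block, scales=(2, 4, 8), gray_levels=256.0):
    """Differential box-counting estimate of the fractal dimension of a
    grayscale block: count boxes N(r) at several grid scales and fit the
    slope of log N(r) against log(1/r)."""
    m = block.shape[0]
    log_n, log_inv_r = [], []
    for s in scales:
        h = gray_levels * s / m                     # box height at scale s
        c = m - m % s
        cells = block[:c, :c].reshape(m // s, s, m // s, s).swapaxes(1, 2)
        counts = (np.floor(cells.max((2, 3)) / h)
                  - np.floor(cells.min((2, 3)) / h) + 1)
        log_n.append(np.log(counts.sum()))
        log_inv_r.append(np.log(m / s))
    return float(np.polyfit(log_inv_r, log_n, 1)[0])
```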

An Image Compression Algorithm Using the WDCT (Warped Discrete Cosine Transform) (WDCT(Warped Discrete Cosine Transform)를 이용한 영상 압축 알고리듬)

    • The Journal of Korean Institute of Communications and Information Sciences / v.24 no.12B / pp.2407-2414 / 1999
  • This paper introduces the concept of the warped discrete cosine transform (WDCT) and an image compression algorithm based on it. The proposed WDCT is a cascade connection of a conventional DCT and all-pass filters whose parameters can be adjusted to provide frequency warping. In the proposed compression scheme, the frequency response of the all-pass filter is controlled by a set of parameters, each covering a specified frequency range. For each image block, the best parameter is chosen from the set and sent to the decoder as side information along with the corresponding WDCT coefficients. For actual implementation, the combination of the all-pass IIR filters and the DCT can be viewed as a cascade of a warping matrix and the DCT matrix, or as a filter bank obtained by warping the frequency response of the DCT filter bank. Hence, the WDCT can be implemented by a single matrix computation, like the DCT. WDCT-based compression outperforms DCT-based compression for high bit-rate applications and for images with high-frequency components. (A parameter-selection sketch follows this entry.)

  • PDF
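
The per-block parameter selection can be sketched as follows; the candidate warping matrices are taken as given inputs (deriving them from the all-pass filters is the part of the paper not reproduced), and the round-trip-error criterion and quantization step are illustrative assumptions:

```python
import numpy as np

def pick_wdct_parameter(block, warp_matrices, dct_m, q=16.0):
    """Try each candidate warped transform T = W @ C, quantize T B T^T,
    and keep the parameter index minimizing reconstruction error; the
    index is the per-block side information sent to the decoder."""
    b = block.astype(np.float64)
    best = None
    for idx, w in enumerate(warp_matrices):
        t = w @ dct_m                          # warping matrix times DCT matrix
        coeffs = np.round((t @ b @ t.T) / q) * q
        rec = np.linalg.solve(t, np.linalg.solve(t, coeffs.T).T)  # T^-1 C T^-T
        err = float(((rec - b) ** 2).sum())
        if best is None or err < best[0]:
            best = (err, idx, coeffs)
    return best[1], best[2]
```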

On Extending JPEG Compression Method Using Line-based Differential Coding (행/열 단위 증분 부호화를 이용한 JPEG 압축 기법 확장에 관한 연구)

  • Park, Dae-Hyun;Ahn, Young-Hoon;Shin, Hyun-Joon;Wee, Young-Cheul
    • Journal of the Korea Computer Graphics Society / v.15 no.3 / pp.11-18 / 2009
  • In this paper, we introduce a method to extend the JPEG standard, the most widely used lossy image compression format, in order to improve its compression ratio. To employ two of the most successful data compression methodologies, differential coding and quantization, simultaneously, we propose a line-based approach: for each line in a block, a one-dimensional discrete cosine transform is applied to the increments between adjacent pixels instead of to the pixel values themselves. The resulting coefficients are quantized and entropy-coded as in the JPEG standard. To further increase the compression ratio, the proposed method is plugged into JPEG to form a new codec in which line-based coding is applied only to selected blocks. Our experiments show that the proposed method outperforms the JPEG standard when the coded images are set to high quality, so it can simply be plugged into the standard to improve its compression ratio for higher-quality images. (A line-based DCT sketch follows this entry.)

  • PDF
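
The row-wise differential 1-D DCT at the core of the method can be sketched directly; the row direction, the ortho normalization, and the zero initial predictor are assumptions, and quantization/entropy coding are omitted:

```python
import numpy as np
from scipy.fft import dct, idct

def encode_block_rows(block):
    """1-D DCT of the horizontal increments of each row (the increment of
    the first pixel is taken against an implicit 0 predictor)."""
    rows = block.astype(np.float64)
    inc = np.diff(rows, axis=1, prepend=0.0)
    return dct(inc, axis=1, norm='ortho')

def decode_block_rows(coeffs):
    """Inverse: 1-D IDCT, then cumulative sum to undo the differencing."""
    return np.cumsum(idct(coeffs, axis=1, norm='ortho'), axis=1)
```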