• Title/Summary/Keyword: Compressed Images

Texture Image Database Retrieval Using JPEG-2000 Partial Entropy Decoding (JPEG-2000 부분 엔트로피 복호화에 의한 질감 영상 데이터베이스 검색)

  • Park, Ha-Joong;Jung, Ho-Youl
    • The Journal of Korean Institute of Communications and Information Sciences / v.32 no.5C / pp.496-512 / 2007
  • In this paper, we propose a novel JPEG-2000 compressed-image retrieval system that uses feature vectors extracted through partial entropy decoding. The main idea of the proposed method is to utilize the context information generated during entropy encoding/decoding. In the JPEG-2000 framework, the context of a current coefficient is determined by the significance and/or sign pattern of its neighbors across three bit-plane coding passes and four coding modes. The contexts provide a model for estimating the probability of each symbol to be coded, and they can efficiently describe texture images with different patterns because they capture the local properties of an image. In addition, our system can search images directly in the JPEG-2000 compressed domain without full decompression, so the proposed scheme accelerates image retrieval. We create various distortion and similarity image databases from MIT VisTex texture images for simulation and evaluate the proposed algorithm against previous methods. Through simulations, we demonstrate that our method achieves good performance in terms of both retrieval accuracy and computational complexity.
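
The idea of using entropy-coding contexts as a texture descriptor can be illustrated with a minimal sketch: a normalized histogram of context labels (JPEG-2000/EBCOT defines 19 of them) serves as the feature vector, and retrieval ranks database images by L1 distance between histograms. The function names and the L1 metric are illustrative assumptions; a real implementation would collect the labels during partial entropy decoding of the codestream rather than receive them as a ready-made array.

```python
import numpy as np

NUM_CONTEXTS = 19  # total number of EBCOT context labels in JPEG-2000

def context_histogram(context_labels):
    """Normalized histogram of context labels observed during bit-plane coding.

    `context_labels` is assumed to be a flat array of integer labels collected
    while partially entropy-decoding one code-block or subband.
    """
    hist = np.bincount(np.ravel(context_labels), minlength=NUM_CONTEXTS).astype(float)
    return hist / max(hist.sum(), 1.0)

def retrieve(query_labels, database_labels, top_k=5):
    """Rank database images by L1 distance between context histograms."""
    q = context_histogram(query_labels)
    dists = [np.abs(q - context_histogram(d)).sum() for d in database_labels]
    return np.argsort(dists)[:top_k]
```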

Efficient Compression Schemes for Double Random Phase-encoded Data for Image Authentication

  • Gholami, Samaneh;Jaferzadeh, Keyvan;Shin, Seokjoo;Moon, Inkyu
    • Current Optics and Photonics / v.3 no.5 / pp.390-400 / 2019
  • Encrypted images obtained through double random phase-encoding (DRPE) occupy considerable storage space. We propose efficient compression schemes to reduce the size of the encrypted data. In the proposed schemes, two state-of-the-art compression methods, JPEG and JP2K, are applied to the quantized encrypted phase images obtained by combining the DRPE algorithm with the virtual photon counting imaging technique. We compute the nonlinear cross-correlation between the registered reference images and the compressed input images to verify the performance of the compression of double random phase-encoded images. We show quantitatively through experiments that considerable compression of the encrypted image data can be achieved while the security and authentication factors are completely preserved.
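
The authentication step described above relies on nonlinear cross-correlation. A minimal sketch of the commonly used k-th law form is given below; the exponent k and the peak-to-mean metric are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def nonlinear_cross_correlation(reference, test, k=0.3):
    """k-th law nonlinear cross-correlation between two images.

    k controls the nonlinearity strength (k=1 reduces to ordinary
    matched-filter correlation); a sharp, dominant correlation peak
    indicates that the test image matches the registered reference.
    """
    R = np.fft.fft2(reference)
    S = np.fft.fft2(test)
    product = R * np.conj(S)
    nl = np.abs(product) ** k * np.exp(1j * np.angle(product))
    corr = np.abs(np.fft.ifft2(nl)) ** 2
    return np.fft.fftshift(corr)

def peak_to_mean_ratio(corr):
    """Simple authentication metric: correlation peak relative to the mean."""
    return corr.max() / corr.mean()
```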

Content Analysis-based Adaptive Filtering in The Compressed Satellite Images (위성영상에서의 적응적 압축잡음 제거 알고리즘)

  • Choi, Tae-Hyeon;Ji, Jeong-Min;Park, Joon-Hoon;Choi, Myung-Jin;Lee, Sang-Keun
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.5 / pp.84-95 / 2011
  • In this paper, we present a deblocking algorithm that removes the grid and staircase noise, known as "blocking artifacts", that occurs in compressed satellite images. In particular, the given satellite images are compressed with equal quantization coefficients within each row according to region complexity, and more complex regions are compressed more strongly. However, this approach has the problem that relatively less complex regions lying in the same row as complex regions exhibit blocking artifacts. Removing these artifacts with a general deblocking algorithm can also blur complex regions where blurring is undesirable, and such a filter fails to preserve curved edges. Therefore, the proposed algorithm provides an adaptive filtering scheme that removes blocking artifacts while preserving image details, including curved edges, using the given quantization step size and content analysis. In particular, WLFPCA (weighted lowpass filter using principal component analysis) is employed to reduce the artifacts around edges. Experimental results show that the proposed method outperforms SA-DCT in terms of subjective image quality.
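
As a loose illustration of content-adaptive filtering (not the paper's WLFPCA), the toy sketch below smooths only regions whose local variance is small relative to the quantization step, so detailed regions and edges are left untouched. The 5x5 window and the variance threshold are arbitrary assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_deblock(image, q_step, activity_threshold=None):
    """Toy content-adaptive smoothing (illustrative only, not WLFPCA).

    Flat regions (low local variance relative to the quantization step)
    are replaced by a low-pass filtered value; detailed regions are kept
    unchanged so that edges and texture are not blurred.
    """
    img = image.astype(float)
    mean = uniform_filter(img, size=5)
    var = uniform_filter(img * img, size=5) - mean * mean
    if activity_threshold is None:
        # Heuristic: blocking artifacts have amplitude on the order of q_step
        activity_threshold = (q_step ** 2) / 4.0
    flat = var < activity_threshold
    out = img.copy()
    out[flat] = mean[flat]
    return out
```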

A Robust License Plate Extraction Method for Low Quality Images (저화질 영상에서 강건한 번호판 추출 방법)

  • Lee, Yong-Woo;Kim, Hyun-Soo;Kang, Woo-Yun;Kim, Gyeong-Hwan
    • Journal of the Institute of Electronics Engineers of Korea SC / v.45 no.2 / pp.8-17 / 2008
  • This paper proposes a robust license plate extraction method for images taken under unconstrained environments. By using color and edge information in a complementary fashion, the proposed method deals not only with various lighting conditions but also with the blocking artifacts frequently observed in compressed images. Computational complexity is significantly reduced by applying the Hough transform, used to estimate the skew angle, and the subsequent de-skewing procedure only to the candidate regions. The true plate region is determined from the candidates using clues including the aspect ratio, the number of zero crossings along vertical scan lines, and the number of connected components. The performance of the proposed method is evaluated using compressed images collected under various realistic circumstances. The experimental results show a correct license plate extraction rate of 94.9%.
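
Two of the clues mentioned above, zero crossings along scan lines and the aspect ratio, can be sketched as follows. The gradient threshold, aspect-ratio bounds, and crossing count are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def zero_crossings_per_line(gray_region, threshold=20):
    """Count sign changes of the horizontal gradient along each scan line.

    License-plate candidates tend to produce many crossings per line because
    of the alternating character strokes and background.
    """
    region = gray_region.astype(float)
    grad = np.diff(region, axis=1)                      # horizontal gradient
    grad = np.where(np.abs(grad) < threshold, 0, np.sign(grad))
    crossings = []
    for row in grad:
        nz = row[row != 0]                              # ignore flat runs
        crossings.append(int(np.sum(nz[1:] * nz[:-1] < 0)))
    return np.array(crossings)

def plausible_plate(candidate_bbox, crossings, min_aspect=2.0, max_aspect=6.0,
                    min_crossings=6):
    """Combine simple clues: aspect ratio and average zero-crossing count."""
    x, y, w, h = candidate_bbox
    aspect = w / float(h)
    return (min_aspect <= aspect <= max_aspect) and crossings.mean() >= min_crossings
```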

A Real-Time Video Stitching Algorithm in H.264/AVC Compressed Domain (실시간 H.264/AVC 압축 영역에서의 영상 합성 알고리즘)

  • Gankhuyag, Ganzorig;Hong, Eun Gi;Kim, Giyeol;Kim, Younghwan;Choe, Yoonsik
    • The Journal of Korean Institute of Communications and Information Sciences / v.39C no.6 / pp.503-511 / 2014
  • In this paper, a novel real-time video stitching algorithm in the H.264/AVC compressed domain is proposed. It enables viewers to watch multiple video contents on a single device. The basic concept is that the server combines multiple streams into one bit-stream entirely in the compressed domain. In other words, this paper presents a new compressed-domain combiner that operates on the boundary macroblocks of the input videos by re-calculating intra prediction modes and intra prediction MVDs, re-allocating the coefficient table, and applying border extension methods. The remaining macroblocks of the input videos are handled simply by copying them. Simulation experiments demonstrate the feasibility and effectiveness of the proposed algorithm by showing that it can generate more than 103 frames per second while stitching four 480p-sized images into each frame.
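
The division of work implied above (re-process only seam macroblocks, copy the rest) can be illustrated with a small sketch that enumerates which 16x16 macroblocks of a 2x2 stitched layout of four 480p inputs touch an internal seam. This is only a layout calculation under assumed tile dimensions; the actual H.264 bitstream operations (intra mode and MVD re-calculation, coefficient re-allocation) are not reproduced.

```python
# Illustrative only: find which macroblocks of a 2x2 stitched frame lie next to
# an internal stitching boundary; all other macroblocks can be copied untouched.

MB = 16
TILE_W, TILE_H = 640, 480              # one 480p input tile (assumed size)
MBS_X, MBS_Y = TILE_W // MB, TILE_H // MB

def boundary_macroblocks():
    """Return the set of (mb_x, mb_y) positions adjacent to an internal seam."""
    total_x, total_y = 2 * MBS_X, 2 * MBS_Y
    seams_x = {MBS_X - 1, MBS_X}       # columns left/right of the vertical seam
    seams_y = {MBS_Y - 1, MBS_Y}       # rows above/below the horizontal seam
    return {(x, y)
            for y in range(total_y)
            for x in range(total_x)
            if x in seams_x or y in seams_y}

if __name__ == "__main__":
    b = boundary_macroblocks()
    total = (2 * MBS_X) * (2 * MBS_Y)
    print(f"{len(b)} of {total} macroblocks touch a seam "
          f"({100 * len(b) / total:.1f}%); the rest are copied.")
```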

Understanding on the Principle of Image Compression Algorithm Using on the DCT (discrete cosine transform) (이산여현변환을 이용한 이미지 압축 알고리즘 원리에 관한 연구)

  • Nam, Soo-tai;Kim, Do-goan;Jin, Chan-yong;Shin, Seong-yoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.05a / pp.107-110 / 2018
  • Image compression is the application of data compression to digital images. The discrete cosine transform (DCT) is a technique for converting a signal from the time domain to the frequency domain, and it is widely used in image compression. First, the image is divided into 8x8 pixel blocks. The DCT is applied to each block, processing blocks from left to right and from top to bottom. Each block is then compressed through quantization, so the storage space occupied by the compressed block array constituting the image is greatly reduced. The image is reconstructed through the IDCT. The purpose of this research is to understand the compression and decompression of images using the DCT method.
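
A minimal sketch of the 8x8 block DCT round trip described above is given below, using a single uniform quantization step as a stand-in for a real quantization table.

```python
import numpy as np
from scipy.fft import dctn, idctn

def block_dct_roundtrip(image, q=20, block=8):
    """Compress/decompress a grayscale image with a simple 8x8 block DCT scheme.

    Each block is transformed with a 2-D DCT, uniformly quantized with step q
    (a stand-in for a real quantization table), de-quantized, and inverse
    transformed. Larger q discards more high-frequency detail.
    """
    h, w = image.shape
    h, w = h - h % block, w - w % block          # crop to a multiple of 8
    img = image[:h, :w].astype(float) - 128      # level shift as in JPEG
    out = np.empty_like(img)
    for y in range(0, h, block):
        for x in range(0, w, block):
            coeffs = dctn(img[y:y+block, x:x+block], norm='ortho')
            coeffs = np.round(coeffs / q) * q    # quantize, then de-quantize
            out[y:y+block, x:x+block] = idctn(coeffs, norm='ortho')
    return np.clip(out + 128, 0, 255).astype(np.uint8)
```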

Spatial Error Concealment Technique for Losslessly Compressed Images Using Data Hiding in Error-Prone Channels

  • Kim, Kyung-Su;Lee, Hae-Yeoun;Lee, Heung-Kyu
    • Journal of Communications and Networks / v.12 no.2 / pp.168-173 / 2010
  • Error concealment techniques are significant due to the growing interest in imagery transmission over error-prone channels. This paper presents a spatial error concealment technique for losslessly compressed images that uses least significant bit (LSB)-based data hiding to reconstruct a close approximation of image blocks lost during transmission. Before transmission, block description information (BDI) is generated by applying quantization after a discrete wavelet transform; this is then embedded into the LSB plane of the original image itself at the encoder. At the decoder, the BDI is used to conceal blocks that may have been dropped during transmission. Although the original image is modified slightly by the message embedding process, no perceptible artifacts are introduced and the visual quality is sufficient for analysis and diagnosis. In comparisons with previous methods at various loss rates, the proposed technique is shown to be promising due to its good performance for losses of both isolated and contiguous blocks.
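
The LSB embedding and extraction primitive that the scheme builds on can be sketched as follows; how the BDI itself is produced (DWT followed by quantization) is not shown, and the payload layout is an illustrative assumption.

```python
import numpy as np

def embed_lsb(image, bits):
    """Embed a bit sequence into the least significant bits of the pixels."""
    flat = image.astype(np.uint8).ravel().copy()
    if len(bits) > flat.size:
        raise ValueError("payload does not fit in the cover image")
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.asarray(bits, dtype=np.uint8)
    return flat.reshape(image.shape)

def extract_lsb(image, n_bits):
    """Recover the first n_bits embedded by embed_lsb."""
    return (image.ravel()[:n_bits] & 1).astype(np.uint8)
```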

A Study of Regularized Iterative Postprocessing of Wavelet-compressed Images (웨이블릿 압축된 영상의 정칙화 기반 후처리에 관한 연구)

  • Jung, Jung-Hoon;Jung, Shi-Chang;Paik, JoonKi
    • Journal of the Korean Institute of Telematics and Electronics S / v.36S no.11 / pp.44-53 / 1999
  • This paper proposes an algorithm that post-processes wavelet-compressed images using regularized iterative image restoration. First, the image degradation caused by the wavelet-compression system is appropriately modeled. Then, a method that uses a nonlinear function as the constraint in regularized iterative restoration is proposed in order to efficiently remove coding artifacts, such as ringing and blocking, that result from the loss of high-frequency coefficients. Lastly, experimental results show the superiority of the proposed algorithm compared with an existing algorithm.
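
A minimal sketch of a regularized iterative restoration step is shown below, using a plain Laplacian smoothness constraint in place of the paper's nonlinear constraint; the step size and regularization weight are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import laplace

def regularized_restore(decoded, n_iter=50, lam=0.05, beta=0.5):
    """Iteratively post-process a decoded (artifact-laden) image.

    Each step balances fidelity to the decoded image against a smoothness
    penalty (a plain Laplacian operator C stands in for the constraint),
    i.e. a gradient step on ||y - x||^2 + lam * ||C x||^2:
        x <- x + beta * [(y - x) - lam * laplace(laplace(x))]
    """
    y = decoded.astype(float)
    x = y.copy()
    for _ in range(n_iter):
        x = x + beta * ((y - x) - lam * laplace(laplace(x)))
    return np.clip(x, 0, 255)
```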

A Novel Reversible Data Hiding Scheme for VQ-Compressed Images Using Index Set Construction Strategy

  • Qin, Chuan;Chang, Chin-Chen;Chen, Yen-Chang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.8 / pp.2027-2041 / 2013
  • In this paper, we propose a novel reversible data hiding scheme for the index tables of vector quantization (VQ) compressed images based on an index set construction strategy. On the sender side, three index sets are constructed: the first set and the second set include the indices with higher and lower occurrence counts in the given VQ index table, respectively. Index values in the index table that belong to the second set are given prefixes from the third set to eliminate collisions with the two mapping sets derived from the first set, and this prefixing operation additionally provides data hiding capability. The main data embedding procedure is achieved simply by mapping the index values in the first set to the corresponding values in the two derived mapping sets. Reconstructing the same three index sets on the receiver side ensures correct extraction of the secret data and lossless recovery of the index table. Experimental results demonstrate the effectiveness of the proposed scheme.
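
The first step described above, partitioning codebook indices by occurrence count, can be sketched as follows; the derivation of the mapping sets and the prefixing of the second set are not reproduced, and the split point n_frequent is an assumption.

```python
from collections import Counter

def build_index_sets(index_table, n_frequent):
    """Split codebook indices into high- and low-occurrence sets.

    `index_table` is the flat list of VQ indices of one compressed image;
    the n_frequent most common indices form the first set (used for the
    main embedding mapping), and the remainder form the second set.
    """
    counts = Counter(index_table)
    ranked = [idx for idx, _ in counts.most_common()]
    first_set = set(ranked[:n_frequent])
    second_set = set(ranked[n_frequent:])
    return first_set, second_set
```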