• Title/Summary/Keyword: Image Compression/Reconstruction

Demosaicing based Image Compression with Channel-wise Decoder

  • Indra Imanuel;Suk-Ho Lee
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.15 no.4
    • /
    • pp.74-83
    • /
    • 2023
  • In this paper, we propose an image compression scheme that uses a demosaicing network and a channel-wise decoder in the decoding network. For the demosaicing network, we use a colored mosaic pattern as input rather than the well-known Bayer pattern. The colored mosaic pattern retains more information about the original image in the mosaiced image and therefore contributes to better color reconstruction. The channel-wise decoder is composed of multiple decoders, each responsible for one channel of the color image, i.e., the R, G, or B channel. The encoder and decoder are both implemented as wavelet-based auto-encoders for better performance. Experimental results verify that the separate channel-wise decoders and the colored mosaic pattern produce a better reconstructed color image than a single decoder. When the colored CFA is combined with the multi-decoder, the PSNR increases by over 2 dB at three-times compression and by approximately 0.6 dB at twelve-times compression compared with the Bayer CFA and a single decoder. The achievable compression rate is therefore also higher with the proposed method than with a single decoder operating on the Bayer-patterned mosaic image.
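
As a rough illustration of the channel-wise decoding idea, the sketch below builds one small decoder branch per R, G, and B channel and concatenates their outputs. It is a minimal PyTorch sketch under assumed layer sizes and latent shape; the paper's wavelet-based auto-encoder and colored-CFA demosaicing network are not reproduced.

```python
# Minimal sketch of a channel-wise decoder: one small decoder per color
# channel, outputs concatenated into an RGB image. The layer sizes and the
# `latent_ch` parameter are illustrative assumptions.
import torch
import torch.nn as nn

class ChannelWiseDecoder(nn.Module):
    def __init__(self, latent_ch=64):
        super().__init__()
        # One independent decoder branch per output channel (R, G, B).
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.ConvTranspose2d(latent_ch, 32, kernel_size=4, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),
            )
            for _ in range(3)
        ])

    def forward(self, z):
        # Each branch reconstructs a single channel from the shared latent code.
        return torch.cat([branch(z) for branch in self.branches], dim=1)

if __name__ == "__main__":
    z = torch.randn(1, 64, 32, 32)          # dummy latent code
    print(ChannelWiseDecoder()(z).shape)    # torch.Size([1, 3, 128, 128])
```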

Adaptive Importance Channel Selection for Perceptual Image Compression

  • He, Yifan;Li, Feng;Bai, Huihui;Zhao, Yao
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.9
    • /
    • pp.3823-3840
    • /
    • 2020
  • Recently, the auto-encoder has emerged as the most popular approach to convolutional neural network (CNN) based image compression and has achieved impressive performance. In a traditional auto-encoder based compression model, the encoder simply sends the features of the last layer to the decoder, which cannot allocate bits over different spatial regions efficiently. Moreover, these methods do not fully exploit the contextual information under different receptive fields for better reconstruction performance. In this paper, to address these issues, a novel auto-encoder model is designed for image compression which can effectively transmit the hierarchical features of the encoder to the decoder. Specifically, we first propose an adaptive bit-allocation strategy that adaptively selects an importance channel. We then multiply the generated importance mask with the features of the last layer of the proposed encoder to achieve efficient bit allocation. Moreover, we present an additional perceptual loss function for more accurate image details. Extensive experiments demonstrate that the proposed model achieves significant gains over JPEG and JPEG2000 in both subjective and objective quality, and it also outperforms state-of-the-art CNN-based image compression methods in terms of PSNR.
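
The adaptive bit allocation described above can be pictured as an element-wise multiplication between an importance mask and the encoder's last-layer features. The PyTorch sketch below assumes a simple sigmoid mask predictor and arbitrary channel counts; it is not the paper's architecture.

```python
# Illustrative sketch of importance-mask-based bit allocation: a small
# sub-network predicts an importance mask, which is multiplied element-wise
# with the encoder's last-layer features before quantization.
import torch
import torch.nn as nn

class ImportanceMaskedEncoderHead(nn.Module):
    def __init__(self, feat_ch=128):
        super().__init__()
        self.importance = nn.Sequential(
            nn.Conv2d(feat_ch, feat_ch, kernel_size=3, padding=1),
            nn.Sigmoid(),            # importance values in [0, 1]
        )

    def forward(self, features):
        mask = self.importance(features)
        # Element-wise multiplication: low-importance channels/regions are
        # suppressed, so fewer bits are spent on them after quantization.
        return features * mask, mask

if __name__ == "__main__":
    f = torch.randn(1, 128, 16, 16)
    masked, mask = ImportanceMaskedEncoderHead()(f)
    print(masked.shape, mask.shape)
```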

A Visual Reconstruction of Core Algorithm for Image Compression Based on the DCT (discrete cosine transform) (이산코사인변환 기반 이미지 압축 핵심 알고리즘 시각적 재구성)

  • Jin, Chan-yong;Nam, Soo-tai
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2018.10a
    • /
    • pp.180-181
    • /
    • 2018
  • JPEG is the most widely used standard image compression technology. This research introduces the JPEG image compression algorithm and describes each step of compression and decompression. Image compression is the application of data compression to digital images. The DCT (discrete cosine transform) is a technique for converting a signal from the spatial or time domain to the frequency domain. First, the image is divided into 8-by-8 pixel blocks. Second, working from left to right and top to bottom, the DCT is applied to each block. Third, each block is compressed through quantization. Fourth, the array of compressed blocks that makes up the image is stored in a greatly reduced amount of space. Finally, if desired, the image is reconstructed through decompression, a process that uses the IDCT (inverse discrete cosine transform).
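
The steps above map directly onto a short NumPy/SciPy round-trip: block splitting, 2-D DCT, quantization, and IDCT-based reconstruction. The uniform quantization step below is a placeholder for the JPEG quantization tables.

```python
# Minimal sketch of the JPEG-style steps described above: split into 8x8
# blocks, 2-D DCT, quantization, then IDCT for reconstruction. The step
# size `q` is a placeholder, not the standard JPEG tables.
import numpy as np
from scipy.fft import dctn, idctn

def jpeg_like_roundtrip(img, q=16):
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for y in range(0, h, 8):
        for x in range(0, w, 8):
            block = img[y:y+8, x:x+8].astype(float) - 128.0   # level shift
            coeffs = dctn(block, norm="ortho")                # 2-D DCT
            quantized = np.round(coeffs / q)                  # lossy step
            recon = idctn(quantized * q, norm="ortho")        # dequant + IDCT
            out[y:y+8, x:x+8] = recon + 128.0
    return np.clip(out, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    rec = jpeg_like_roundtrip(img)
    print("max abs error:", np.abs(rec.astype(int) - img.astype(int)).max())
```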

An Industry-Strength DVR System using an Efficient Compression Algorithm (효율적인 압축 알고리즘을 이용한 실용화 수준의 DVR 시스템)

  • 박영철;안재기
    • Journal of KIISE: Computing Practices and Letters
    • /
    • v.10 no.3
    • /
    • pp.243-250
    • /
    • 2004
  • We describe a practical implementation of a DVR (Digital Video Recording) system and propose a new image compression algorithm in which the input video signal is divided into two parts, a moving target and a non-moving background, to achieve efficient compression of image sequences. The algorithm reorganizes the target area and the background area in macroblock (MB) units within the encoding scheme. The proposed algorithm allows high-quality image reconstruction at low bit rates.
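
A minimal sketch of the target/background split is given below: each macroblock is labeled by thresholding its mean difference against the previous frame. The 16x16 block size and the threshold are assumptions, and the subsequent encoding is omitted.

```python
# Rough illustration of splitting a frame into macroblocks and labeling each
# block as "moving target" or "background" by thresholding the difference
# against the previous frame. Not the paper's actual encoding scheme.
import numpy as np

def classify_macroblocks(prev_frame, cur_frame, mb=16, threshold=10.0):
    h, w = cur_frame.shape
    labels = {}
    for y in range(0, h, mb):
        for x in range(0, w, mb):
            diff = np.abs(cur_frame[y:y+mb, x:x+mb].astype(float)
                          - prev_frame[y:y+mb, x:x+mb].astype(float))
            labels[(y, x)] = "target" if diff.mean() > threshold else "background"
    return labels
```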

Adaptively Compensated-Disparity Prediction Scheme for Stereo Image Compression and Reconstruction (스테레오 영상 압축 및 복원을 위한 적응적 변이보상 예측기법)

  • 배경훈;김은수
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.27 no.7A
    • /
    • pp.676-682
    • /
    • 2002
  • In this paper, an effective stereo image compression and reconstruction technique using a new adaptively compensated-disparity prediction scheme is proposed. By adaptively predicting the mutual correlation between the stereo image pair with the proposed method, the bandwidth of the stereo input can be compressed to the level of a conventional 2D image, and the predicted image can be effectively reconstructed at the receiver from the transmitted reference image and disparity data. In particular, in the proposed method, feature values are first extracted from the input stereo image, and the matching window size for reconstructing the predicted image is then adaptively selected according to the magnitude of these feature values. This adaptive disparity estimation is expected to reduce the probability of mismatched disparity vectors and, as a result, to improve the quality of the reconstructed image. In experiments using the CCETT stereo images 'Fichier', 'Manege', and 'Tunnel', the proposed method improves the PSNR of the reconstructed image by about 9.08 dB on average compared with conventional methods, and, unlike with the conventional methods, there is almost no difference between the original image and the predicted image reconstructed by the proposed method.
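
The adaptive matching-window idea can be sketched as follows: a local feature value (here, patch variance, which is an assumption) selects a small or large window, which is then used for SAD-based block matching. This is an illustrative sketch, not the paper's estimator.

```python
# Sketch of disparity estimation with an adaptively sized matching window.
# Window sizes, thresholds, and the search range are illustrative assumptions,
# and pixel positions are assumed to lie away from image borders.
import numpy as np

def adaptive_window(left_patch, small=5, large=11, var_threshold=100.0):
    # High local variance -> fine detail -> smaller matching window.
    return small if left_patch.var() > var_threshold else large

def disparity_at(left, right, y, x, max_disp=32):
    w = adaptive_window(left[max(0, y-5):y+6, max(0, x-5):x+6])
    half = w // 2
    ref = left[y-half:y+half+1, x-half:x+half+1].astype(float)
    best, best_d = np.inf, 0
    for d in range(min(max_disp, x - half) + 1):
        cand = right[y-half:y+half+1, x-d-half:x-d+half+1].astype(float)
        sad = np.abs(ref - cand).sum()      # sum of absolute differences
        if sad < best:
            best, best_d = sad, d
    return best_d
```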

QuadTree-Based Lossless Image Compression and Encryption for Real-Time Processing (실시간 처리를 위한 쿼드트리 기반 무손실 영상압축 및 암호화)

  • Yoon, Jeong-Oh;Sung, Woo-Seok;Hwang, Chan-Sik
    • The KIPS Transactions: Part C
    • /
    • v.8C no.5
    • /
    • pp.525-534
    • /
    • 2001
  • Generally, compression and encryption are performed independently in lossless image compression and encryption. When compression is followed by encryption, the compressed stream already has the property of randomness, because compression removes redundancy. However, when the full data is compressed with an image compression method and then encrypted with an encryption algorithm, real-time processing is unrealistic due to the time delay involved. In this paper, we propose combining compression and encryption to reduce the overall processing time. The method decomposes a gray-scale image with a quadtree compression algorithm and encrypts only the structural part. Moreover, the lossless compression ratio can be increased using a transform that yields a decorrelated image with homogeneous regions, and the encryption security can be improved by reconstructing the unencrypted quadtree data at each level. We confirmed the increased compression ratio, improved encryption security, and real-time processing through computer simulations.
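
The sketch below illustrates the general idea of quadtree decomposition with encryption restricted to the structural (split/leaf) bits. The homogeneity test and the toy XOR keystream are placeholders, not the paper's transform or cipher.

```python
# Toy sketch: quadtree-decompose a gray-scale image and encrypt only the
# structural bits rather than the full stream. Placeholder logic only.
import numpy as np

def quadtree(block, threshold=4):
    """Return a nested structure: a leaf mean, or 4 sub-trees."""
    if block.max() - block.min() <= threshold or block.shape[0] <= 2:
        return float(block.mean())                      # homogeneous leaf
    h, w = block.shape
    return [quadtree(block[:h//2, :w//2], threshold),
            quadtree(block[:h//2, w//2:], threshold),
            quadtree(block[h//2:, :w//2], threshold),
            quadtree(block[h//2:, w//2:], threshold)]

def structure_bits(tree):
    """1 = split node, 0 = leaf; only these bits would be encrypted."""
    if isinstance(tree, list):
        return [1] + [b for sub in tree for b in structure_bits(sub)]
    return [0]

def xor_encrypt(bits, key=0b1011):
    # Placeholder stream cipher: XOR each bit with a repeating key bit.
    key_bits = [(key >> i) & 1 for i in range(4)]
    return [b ^ key_bits[i % 4] for i, b in enumerate(bits)]

if __name__ == "__main__":
    img = np.random.randint(0, 256, (16, 16))
    tree = quadtree(img)
    print(xor_encrypt(structure_bits(tree))[:16])
```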

Super-resolution of compressed image by deep residual network

  • Jin, Yan;Park, Bumjun;Jeong, Jechang
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2018.11a
    • /
    • pp.59-61
    • /
    • 2018
  • Highly compressed images typically not only have low resolution but are also affected by compression artifacts. Performing image super-resolution (SR) directly on a highly compressed image would also magnify the blocking artifacts. In this paper, an SR method based on deep learning is proposed. The method is an end-to-end trainable deep convolutional neural network that performs SR on compressed images so as to reduce compression artifacts and improve image resolution. The proposed network is divided into a compression artifact removal (CAR) part and an SR reconstruction part, and it is trained with a three-step training method to optimize the training procedure. Experiments on JPEG-compressed images with quality factors of 10, 20, and 30 demonstrate the effectiveness of the proposed method on commonly used test images and image sets.
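
A minimal two-stage network matching the described CAR-then-SR split is sketched below in PyTorch. The residual blocks, channel counts, and x2 PixelShuffle upscaling are assumptions, and the three-step training procedure is not reproduced.

```python
# Minimal two-stage sketch: a compression artifact removal (CAR) stage
# followed by a super-resolution (SR) stage. Layer choices are assumptions.
import torch
import torch.nn as nn

def conv_block(ch):
    return nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True))

class CARSRNet(nn.Module):
    def __init__(self, ch=64, scale=2):
        super().__init__()
        self.head = nn.Conv2d(3, ch, 3, padding=1)
        self.car = nn.Sequential(*[conv_block(ch) for _ in range(4)])   # artifact removal
        self.sr = nn.Sequential(                                        # upscaling stage
            *[conv_block(ch) for _ in range(4)],
            nn.Conv2d(ch, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, x):
        feat = self.head(x)
        feat = feat + self.car(feat)      # residual connection around CAR stage
        return self.sr(feat)

if __name__ == "__main__":
    y = CARSRNet()(torch.randn(1, 3, 48, 48))
    print(y.shape)   # torch.Size([1, 3, 96, 96])
```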

Hardware Architecture and its Design of Real-Time Video Compression Processor for Motion JPEG2000 (Motion JPEG2000을 위한 실시간 비디오 압축 프로세서의 하드웨어 구조 및 설계)

  • 서영호;김동욱
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.53 no.1
    • /
    • pp.1-9
    • /
    • 2004
  • In this paper, we propose a hardware (H/W) structure that can compress and reconstruct the input image in real time and implement it on an FPGA platform using VHDL (VHSIC Hardware Description Language). All image processing elements needed for both compression and reconstruction in the FPGA were considered, and each was mapped to hardware with a structure efficient for an FPGA. We use the DWT (discrete wavelet transform), which transforms data from the spatial domain to the frequency domain, because Motion JPEG2000 is the target application. The implemented hardware is separated into a data path part and a control part. The data path part consists of image processing blocks and data processing blocks. The image processing blocks consist of the DWT kernel for DWT filtering, the quantizer/Huffman encoder, the inverse adder/buffer for adding the low-frequency coefficients to the high-frequency ones in the inverse DWT operation, and the Huffman decoder. There are also interface blocks for communicating with the external application environment and timing blocks for buffering between the internal blocks. The global operations of the designed hardware are image compression and reconstruction, and it operates in units of a field, synchronized with the A/D converter. The implemented hardware uses 54% (12,943) of the LABs (Logic Array Blocks) and 9% (28,352) of the ESBs (Embedded System Blocks) in ALTERA's APEX20KC EP20K600CB652-7 FPGA and operates stably at a 70 MHz clock frequency, verifying real-time operation, i.e., processing 60 fields/sec (30 frames/sec).
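
A software approximation of the transform/quantization data path (DWT, quantization, dequantization, inverse DWT) is sketched below with PyWavelets. The Haar wavelet, single decomposition level, and step size are assumptions; the Huffman coder, field-synchronized control logic, and FPGA implementation are outside the scope of the sketch.

```python
# Software approximation of the DWT-based data path described above:
# forward DWT -> uniform quantization -> dequantization -> inverse DWT.
import numpy as np
import pywt

def dwt_roundtrip(img, step=8.0):
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), "haar")    # forward DWT
    # Uniform quantization of each subband (the lossy step).
    q = [np.round(c / step) * step for c in (cA, cH, cV, cD)]
    return pywt.idwt2((q[0], (q[1], q[2], q[3])), "haar")       # inverse DWT

if __name__ == "__main__":
    img = np.random.randint(0, 256, (64, 64))
    rec = dwt_roundtrip(img)
    print("RMSE:", np.sqrt(np.mean((rec - img) ** 2)))
```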

A Concept of Fuzzy Wavelets based on Rank Operators and Alpha-Bands

  • Nobuhara, Hajime;Hirota, Kaoru
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2003.09a
    • /
    • pp.46-49
    • /
    • 2003
  • A concept of fuzzy wavelets is proposed by fuzzifying morphological wavelets. In the proposed fuzzy wavelets, the analysis and synthesis schemes can be formulated as operations of fuzzy relational calculus. To enable efficient compression and reconstruction, an alpha-band is also proposed as a soft thresholding of the wavelet coefficients. In an image compression/reconstruction experiment using test images extracted from the Standard Image DataBase (SIDBA), it is confirmed that the root mean square error (RMSE) of the proposed soft thresholding is reduced to 87.3% of that of conventional hard thresholding.
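
For orientation, the sketch below compares hard and soft thresholding of ordinary wavelet coefficients with PyWavelets. It is only a generic illustration of the thresholding comparison; the fuzzy-relational analysis/synthesis and the alpha-band construction are not reproduced.

```python
# Generic hard vs. soft thresholding of wavelet detail coefficients,
# using ordinary (non-fuzzy, non-morphological) wavelets for illustration.
import numpy as np
import pywt

def threshold_roundtrip(img, thr=20.0, mode="soft"):
    cA, details = pywt.dwt2(img.astype(float), "haar")
    # pywt.threshold implements both hard and soft shrinkage.
    details = tuple(pywt.threshold(d, thr, mode=mode) for d in details)
    return pywt.idwt2((cA, details), "haar")

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

if __name__ == "__main__":
    img = np.random.randint(0, 256, (64, 64)).astype(float)
    print("hard:", rmse(threshold_roundtrip(img, mode="hard"), img))
    print("soft:", rmse(threshold_roundtrip(img, mode="soft"), img))
```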

Non-Iterative Threshold based Recovery Algorithm (NITRA) for Compressively Sensed Images and Videos

  • Poovathy, J. Florence Gnana;Radha, S.
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.10
    • /
    • pp.4160-4176
    • /
    • 2015
  • Data compression, including image and video compression, has come a long way since the introduction of Compressive Sensing (CS), which compresses sparse signals such as images and videos into very few samples, i.e., M < N measurements. At the receiver end, a robust and efficient recovery algorithm estimates the original image or video. Many prominent algorithms solve a least-squares problem (LSP) iteratively in order to reconstruct the signal and hence consume considerable processing time. In this paper, a non-iterative threshold-based recovery algorithm (NITRA) is proposed for the recovery of images and videos without solving an LSP, offering reduced complexity and better reconstruction quality. The elapsed time for images and videos using NITRA is in the ㎲ range, about 100 times less than that of other existing algorithms. The peak signal-to-noise ratio (PSNR) is above 30 dB, and the structural similarity (SSIM) and structural content (SC) are about 99%.
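
The M < N measurement model and a simple non-iterative, threshold-based estimate can be illustrated as below. This is a generic sketch (back-projection followed by keeping the K largest coefficients), not the authors' NITRA.

```python
# Generic compressive-sensing illustration: take M < N random measurements
# y = Phi @ x of a sparse signal, then form a rough iteration-free estimate
# by back-projection followed by hard thresholding. Not the authors' NITRA.
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 64, 5                      # signal length, measurements, sparsity

x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.normal(size=K)    # K-sparse signal

Phi = rng.normal(size=(M, N)) / np.sqrt(M)                  # measurement matrix
y = Phi @ x                                                 # M < N measurements

proxy = Phi.T @ y                                           # back-projection
x_hat = np.where(np.abs(proxy) >= np.sort(np.abs(proxy))[-K], proxy, 0.0)

print("support recovered:", set(np.flatnonzero(x_hat)) == set(np.flatnonzero(x)))
```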