• Title/Summary/Keyword: Quantization Level

Search Results: 171

Numerical analysis of quantization-based optimization

  • Jinwuk Seok;Chang Sik Cho
    • ETRI Journal
    • /
    • v.46 no.3
    • /
    • pp.367-378
    • /
    • 2024
  • We propose a number-theory-based quantized mathematical optimization scheme for various NP-hard and similar problems. Conventional global optimization schemes, such as simulated annealing and quantum annealing, assume stochastic properties that require multiple attempts. Although our quantization-based optimization also depends on stochastic features (i.e., the white-noise hypothesis), it provides more reliable optimization performance. Our numerical analysis equates quantization-based optimization to quantum annealing, and its quantization property effectively provides global optimization by decreasing the measure of the level sets associated with the objective function. Consequently, the proposed combinatorial optimization method allows the acceptance probability used in conventional heuristic algorithms to be removed, providing more effective optimization. Numerical experiments show that the proposed algorithm finds the global optimum in less operational time than conventional schemes.
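
The abstract describes the scheme only at a high level; as a rough illustration of the general idea (not the authors' algorithm), the sketch below minimizes a function on a grid whose quantization step shrinks over time and accepts only improving moves, so no Metropolis-style acceptance probability appears. All names and parameters are hypothetical.

```python
import numpy as np

def quantized_search(f, x0, steps=300, q0=1.0, decay=0.98, seed=0):
    """Toy coarse-to-fine quantized minimization (illustrative only).

    Candidates are snapped to a grid of step size q; the grid is refined
    each iteration, and only strictly improving moves are accepted, so
    no acceptance probability is involved.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    best = f(x)
    q = q0
    for _ in range(steps):
        cand = np.round((x + rng.normal(scale=q, size=x.shape)) / q) * q
        v = f(cand)
        if v < best:
            x, best = cand, v
        q *= decay  # finer quantization level as the search proceeds
    return x, best

# A multimodal 1-D objective whose global minimum lies below zero near x ~ -0.19
f = lambda x: float(np.sum(x**2 + 0.3 * np.sin(8 * x)))
x_opt, v_opt = quantized_search(f, np.array([3.0]))
```

The coarse grid early on lets the search hop between basins; the shrinking step then refines the estimate, loosely mirroring the annealing analogy drawn in the paper.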

An Adaptive BTC Algorithm Using the Characteristics of the Error Signals for Efficient Image Compression (차신호 특성을 이용한 효율적인 적응적 BTC 영상 압축 알고리듬)

  • 이상운;임인칠
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.34S no.4
    • /
    • pp.25-32
    • /
    • 1997
  • In this paper, we propose an adaptive BTC algorithm that uses the characteristics of the error signals. The BTC algorithm has the advantage of low computational complexity, but the disadvantage of producing ragged edges in the reconstructed images for sloping regions, because it codes the input with 2-level signals. First, the proposed method classifies the input into low-, medium-, and high-activity blocks based on the variance of the input. By using a 1-level quantizer for low-activity blocks, a 2-level quantizer for medium-activity blocks, and a 4-level quantizer for high-activity blocks, this adaptive method reduces the bit rate and the quantization noise inherent in the 2-level quantizer. Also, for high-activity blocks, we propose a new quantization level allocation algorithm that uses the characteristics of the error signals between the original signals and the signals reconstructed by the 2-level quantizer, in order to achieve a lower bit rate than the conventional 4-level quantizer. In particular, by considering the characteristics of the input block, we reduce the bit rate without incurring visible noise.

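
For context, the 2-level baseline that this paper adapts can be sketched as follows (a generic BTC encoder/decoder, not the paper's adaptive variant; the level formulas are the standard ones that preserve the block mean and variance):

```python
import numpy as np

def btc_encode(block):
    """Standard 2-level BTC for one image block.

    Pixels are split by the block mean; the two quantization levels are
    chosen so that the decoded block keeps the original mean and variance."""
    m = block.size
    mu, sigma = block.mean(), block.std()
    bitmap = block >= mu
    q = int(bitmap.sum())            # number of "high" pixels
    if q in (0, m):                  # flat block: one level suffices
        return bitmap, mu, mu
    low  = mu - sigma * np.sqrt(q / (m - q))
    high = mu + sigma * np.sqrt((m - q) / q)
    return bitmap, low, high

def btc_decode(bitmap, low, high):
    """Reconstruct the block from the bitmap and the two levels."""
    return np.where(bitmap, high, low)

block = np.array([[10, 12, 200, 210],
                  [11, 13, 205, 208],
                  [ 9, 14, 199, 206],
                  [12, 10, 203, 207]], dtype=float)
bm, lo, hi = btc_encode(block)
rec = btc_decode(bm, lo, hi)
```

The ragged edges the abstract mentions come from forcing every pixel onto one of only two levels; the paper's contribution is to vary the number of levels per block by activity.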

Error Analysis for Optical Security by means of 4-Step Phase-Shifting Digital Holography

  • Lee, Hyun-Jin;Gil, Sang-Keun
    • Journal of the Optical Society of Korea
    • /
    • v.10 no.3
    • /
    • pp.118-123
    • /
    • 2006
  • We present an optical security method for binary data information using 4-step phase-shifting digital holography, and we analyze the tolerance error for the decrypted data. The 4-step phase-shifting digital holograms are acquired by moving the PZT mirror with equidistant phase steps of π/2 in a Mach-Zehnder type interferometer. The digital hologram in this method is a Fourier transform hologram and is quantized with 256 gray levels. The decryption performance for the binary data information is analyzed. One of the most important errors is the quantization error in detecting the hologram intensity on the CCD: as the number of quantization-error pixels and the variation in gray level increase, the number of error bits in the decryption increases. Computer experiments show the results of encryption and decryption with the proposed method, along with a graph analyzing the tolerance of the quantization error in the system.
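
The 4-step reconstruction formula itself is standard; as an illustrative sketch (the object wave and array sizes are hypothetical), the complex object wave can be recovered exactly from the four interferograms, and quantizing the recorded intensities to 256 gray levels introduces the small error the paper analyzes:

```python
import numpy as np

def reconstruct_4step(I0, I1, I2, I3, R=1.0):
    """Recover the object wave O from four interferograms taken with
    reference phase steps 0, pi/2, pi, 3*pi/2 (reference amplitude R):
    I_k = |R*exp(i*delta_k) + O|^2  =>  O = ((I0 - I2) + i*(I1 - I3)) / (4R)."""
    return ((I0 - I2) + 1j * (I1 - I3)) / (4.0 * R)

rng = np.random.default_rng(1)
obj = 0.5 * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, (8, 8)))  # hypothetical object wave
R = 1.0
holos = [np.abs(R * np.exp(1j * d) + obj) ** 2
         for d in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]

rec = reconstruct_4step(*holos)          # exact recovery from ideal intensities

# 256-gray-level quantization of the recorded intensities, the error source analyzed
scale = max(h.max() for h in holos)
holos_q = [np.round(h / scale * 255.0) / 255.0 * scale for h in holos]
rec_q = reconstruct_4step(*holos_q)      # recovery with quantization error
```

With 256 levels the residual error in the recovered wave is small, which is why the paper's tolerance analysis focuses on how many quantized pixels can be disturbed before decrypted bits flip.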

2-step Phase-shifting Digital Holographic Optical Encryption and Error Analysis

  • Jeon, Seok-Hee;Gil, Sang-Keun
    • Journal of the Optical Society of Korea
    • /
    • v.15 no.3
    • /
    • pp.244-251
    • /
    • 2011
  • We propose a new 2-step phase-shifting digital holographic optical encryption technique and analyze the tolerance error for this cipher system. The 2-step phase-shifting digital holograms are acquired by moving the PZT mirror with a phase step of 0 or π/2 in the reference beam path of a Mach-Zehnder type interferometer. The digital hologram carrying the encrypted information is a Fourier transform hologram and is recorded on a CCD camera with 256 gray-level quantized intensities. The decryption performance for binary bit data and image data is analyzed by considering the error factors. One of the most important errors is the quantization error in detecting the digital hologram intensity on the CCD: as the number of quantization-error pixels and the variation in gray level increase, the number of error bits in the decryption increases. Computer experiments show the results of encryption and decryption with the proposed method, along with a graph analyzing the tolerance of the quantization error in the system.

Reversible Data Hiding in Block Truncation Coding Compressed Images Using Quantization Level Swapping and Shifting

  • Hong, Wien;Zheng, Shuozhen;Chen, Tung-Shou;Huang, Chien-Che
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.6
    • /
    • pp.2817-2834
    • /
    • 2016
  • Existing reversible data hiding methods for block truncation coding (BTC) compressed images often use difference expansion or histogram shifting for data embedding. Although these methods effectively embed data into the compressed codes, the embedding operations may swap the numerical order of the higher and lower quantization levels. Since the numerical order of these two quantization levels can be exploited to carry additional data without degrading the quality of the decoded image, the existing methods fail to take advantage of this property to embed data more efficiently. In this paper, we embed data by shifting the higher and lower quantization levels in opposite directions. Because the embedding does not change the numerical order of the quantization levels, we exploit this property to carry additional data without further reducing the image quality. The proposed method performs distortion-free embedding when the payload is small and reversible data embedding for large payloads. Experimental results show that the proposed method offers better embedding performance than prior works in terms of payload and image quality.
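
The order trick at the heart of the method can be shown in isolation (a minimal sketch, not the paper's full shifting scheme; flat blocks where the two levels coincide are ignored): swapping the two BTC quantization levels while complementing the bitmap decodes to a bit-for-bit identical block, so the numerical order of the levels carries one extra bit for free.

```python
def embed_order_bit(low, high, bitmap, bit):
    """Hide one bit in the numerical order of the two BTC levels.
    Swapping the levels and complementing the bitmap decodes identically."""
    if bit:
        return high, low, [1 - m for m in bitmap]  # swapped order encodes 1
    return low, high, bitmap                        # natural order encodes 0

def extract_order_bit(a, b):
    """The hidden bit is simply whether the stored levels are out of order."""
    return 1 if a > b else 0

def decode_block(a, b, bitmap):
    """BTC decoding convention: bitmap bit 0 -> first level, 1 -> second."""
    return [b if m else a for m in bitmap]

bitmap = [1, 0, 1, 1, 0, 0, 1, 0]
plain = decode_block(20, 200, bitmap)               # reference decoding

a1, b1, m1 = embed_order_bit(20, 200, bitmap, 1)    # carries a 1
a0, b0, m0 = embed_order_bit(20, 200, bitmap, 0)    # carries a 0
```

This is the distortion-free part of the scheme; for larger payloads the paper additionally shifts the two levels in opposite directions, which is reversible but not sketched here.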

The Binary Tree Vector Quantization Using Human Visual Properties (인간의 시각 특성을 이용한 이진 트리 벡터 양자화)

  • 유성필;곽내정;박원배;안재형
    • Journal of Korea Multimedia Society
    • /
    • v.6 no.3
    • /
    • pp.429-435
    • /
    • 2003
  • In this paper, we propose an improved binary tree vector quantization that takes into account spatial sensitivity, one of the properties of human vision. In the node-splitting process of binary tree vector quantization, which uses the eigenvector, we incorporate weights reflecting the response of the human visual system to changes in the three primary colors within image blocks. We also propose a novel quality measure for the quantized images that applies the MTF (modulation transfer function) to the luminance of the quantization error of the color image. Test results show that the proposed method generates quantized images with good color fidelity and clusters similar regions better than the conventional method. The proposed method also yields images with fewer quantization levels and reduces the resources occupied by the quantized image.

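
The underlying splitting procedure (without the paper's HVS weighting, which is its actual contribution) might be sketched like this: the highest-variance leaf is repeatedly split along the principal eigenvector of its data until the desired codebook size is reached. All names are hypothetical.

```python
import numpy as np

def binary_tree_vq(vectors, n_leaves):
    """Plain binary-tree vector quantization: repeatedly split the
    highest-variance leaf along the principal eigenvector of its
    centered data; the codebook is the set of leaf centroids."""
    leaves = [np.asarray(vectors, dtype=float)]
    while len(leaves) < n_leaves:
        i = max(range(len(leaves)), key=lambda k: leaves[k].var(axis=0).sum())
        cluster = leaves.pop(i)
        centered = cluster - cluster.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        proj = centered @ vt[0]                     # principal-axis projection
        leaves += [cluster[proj <= 0], cluster[proj > 0]]
    return np.array([leaf.mean(axis=0) for leaf in leaves])

pts = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.2, 4.9]])
codebook = binary_tree_vq(pts, 2)                   # one centroid per cluster
```

The paper's variant weights the split by the visual system's sensitivity to each primary color, so perceptually important directions are split first.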

A Quantizer Reconstruction Level Control Method for Block Artifact Reduction in DCT Image Coding (양자화 재생레벨 조정을 통한 DCT 영상 코오딩에서의 블록화 현상 감소 방법)

  • 김종훈;황찬식;심영석
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.28B no.5
    • /
    • pp.318-326
    • /
    • 1991
  • A quantizer reconstruction level control method for reducing block artifacts in DCT image coding is described. In our scheme, the reconstruction level is controlled by adding the quantization step size to the optimum quantization level in the direction that reduces the block artifact, minimizing the mean square error (MSE) and the error-difference (EDF) distribution at the block boundary, without any additional bits. In simulation results, although the signal-to-noise ratio is slightly degraded, the mean square of the error difference at block boundaries and the mean square error related to block artifacts are greatly reduced. Subjective image quality is improved compared with other block artifact reduction methods such as post-filtering and transform coding with block overlapping. However, additional 1-dimensional DCT calculations are required in the coding process to determine the reconstruction level.


Identifying Friendly and Foe Using a Watermarking Technique During Military Communication (군 통신상에서 워터마킹 기술을 이용한 피아식별 방법)

  • Lee, Jong-Kwan;Choi, Hyun-Joo
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.9 no.4
    • /
    • pp.81-89
    • /
    • 2006
  • In this paper, a watermarking technique for identifying friend and foe during communication is proposed. The speech signal is processed in several stages. First, the speech signal is partitioned into small time frames, and the frames are transformed into the frequency domain using the DFT (Discrete Fourier Transform). The DFT coefficients are quantized, and the watermark signal is embedded into the quantized DFT coefficients. At the destination, the quantization errors of the received signal are regarded as the watermark signal. Friend or foe is identified by correlating the detected watermark with the original watermark. As in most watermarking techniques, this method involves a trade-off between noise robustness and quality. This is resolved by partial quantization and a noise-level-dependent quantization step. Simulation results in various noisy environments show that the proposed method is reliable for identifying friend and foe.
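
The embed-then-read-the-quantization-error idea can be illustrated with a generic quantization-index scheme (a sketch of the principle only; the paper applies it to DFT coefficients of speech frames with a noise-level-dependent step, whereas the step here is fixed and all names are hypothetical):

```python
import numpy as np

def qim_embed(x, bits, step=0.5):
    """Snap each host sample onto one of two interleaved quantization
    lattices, chosen by the watermark bit (offset 0 or step/2)."""
    y = np.asarray(x, dtype=float).copy()
    for i, b in enumerate(bits):
        d = b * step / 2.0
        y[i] = np.round((y[i] - d) / step) * step + d
    return y

def qim_extract(y, n_bits, step=0.5):
    """Read each bit back from the quantization error: samples lying far
    from the base lattice (error > step/4) carry a 1."""
    y = np.asarray(y, dtype=float)[:n_bits]
    r = np.abs(y - np.round(y / step) * step)
    return (r > step / 4.0).astype(int).tolist()

host = np.array([0.31, -1.27, 2.04, 0.66, -0.12])   # hypothetical coefficients
bits = [1, 0, 1, 1, 0]
marked = qim_embed(host, bits)
noisy = marked + 0.05   # channel noise below step/4 keeps the bits decodable
```

A larger step survives more noise but distorts the host more, which is exactly the trade-off the paper manages by tying the step to the measured noise level.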

Secret Data Communication Method using Quantization of Wavelet Coefficients during Speech Communication (음성통신 중 웨이브렛 계수 양자화를 이용한 비밀정보 통신 방법)

  • Lee, Jong-Kwan
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2006.10d
    • /
    • pp.302-305
    • /
    • 2006
  • In this paper, we propose a novel method for secret data communication that uses quantization of wavelet coefficients. First, the speech signal is partitioned into small time frames, and the frames are transformed into the frequency domain using a WT (Wavelet Transform). We quantize the wavelet coefficients and embed the secret data into the quantized coefficients. The destination regards the quantization errors of the received speech as the secret data. Like most speech watermarking techniques, our method involves a trade-off between noise robustness and speech quality; we resolve it with partial quantization and a noise-level-dependent threshold. In addition, we improve the speech quality with a wavelet-based de-noising method; since the signal is already processed in the wavelet domain, this de-noising is easy to incorporate. Simulation results in various noisy environments show that the proposed method is reliable for secret communication.


Automatic Music Summarization Using Similarity Measure Based on Multi-Level Vector Quantization (다중레벨 벡터양자화 기반의 유사도를 이용한 자동 음악요약)

  • Kim, Sung-Tak;Kim, Sang-Ho;Kim, Hoi-Rin
    • The Journal of the Acoustical Society of Korea
    • /
    • v.26 no.2E
    • /
    • pp.39-43
    • /
    • 2007
  • Music summarization refers to a technique that automatically extracts the most important and representative segments of music content. In this paper, we propose and evaluate a technique that provides the repeated part of music content as the music summary. To extract a repeated segment, the proposed algorithm uses a weighted sum of similarity measures based on multi-level vector quantization, for either a fixed-length or an optimal-length summary. Two similarity measures are proposed: a count-based measure and a distance-based measure. These use, respectively, the number of matching codewords and the Mahalanobis distance between features that share a codeword at the same position in the segments. The fixed-length music summary is evaluated by measuring the overlap between hand-annotated repeated parts and automatically generated ones. The optimal-length summary is evaluated by how much of the repeated parts of the music content it includes. The experiments show that, in terms of summary length, the optimal-length summary captures the repeated parts more effectively than the fixed-length summary.
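
The count-based similarity can be sketched as follows (a minimal illustration; the codebooks, weights, and frame features are hypothetical, and the paper additionally proposes a Mahalanobis-distance-based variant not shown here):

```python
import numpy as np

def vq_codes(frames, codebook):
    """Index of the nearest codeword for each frame vector."""
    d = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=2)
    return d.argmin(axis=1)

def multilevel_count_similarity(seg_a, seg_b, codebooks, weights):
    """Weighted sum, over several codebook sizes, of the number of frame
    positions where both segments quantize to the same codeword."""
    return sum(w * int(np.sum(vq_codes(seg_a, cb) == vq_codes(seg_b, cb)))
               for cb, w in zip(codebooks, weights))

cb2 = np.array([[0.0, 0.0], [1.0, 1.0]])                          # 2-level codebook
cb4 = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])  # 4-level codebook
seg_a = np.array([[0.1, 0.1], [0.9, 0.8], [0.1, 0.8]])
seg_b = np.array([[0.2, 0.0], [1.0, 1.0], [0.8, 0.1]])

score = multilevel_count_similarity(seg_a, seg_b, [cb2, cb4], [0.5, 0.5])
```

Coarse codebooks tolerate small feature differences while fine ones reward near-exact repeats, so the weighted sum balances the two when scoring candidate repeated segments.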