• Title/Summary/Keyword: quantizing

Search results: 46

A Performance Evaluation of QE-MMA Adaptive Equalization Algorithm by Quantizer Bit Number (양자화기 비트수에 의한 QE-MMA 적응 등화 알고리즘 성능 평가)

  • Lim, Seung-Gag
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.19 no.1, pp.57-62, 2019
  • This paper evaluates the QE-MMA (Quantized Error MMA) adaptive equalization algorithm as a function of the number of quantizer bits. The algorithm compensates for the intersymbol interference introduced by the channel when transmitting spectrally efficient nonconstant-modulus signals. An adaptive equalizer needs the error signal to update its tap coefficients; for ease of hardware implementation, QE-MMA uses only the polarity of the error signal together with a correlation multiplier that incorporates a nonlinear, finite-bit power-of-two quantizer. The equalization performance differs with the number of quantizer bits, and these differences were evaluated by computer simulation, using the equalizer output signal constellation, residual ISI, maximum distortion, MSE, and SER as performance indices. The simulations show that a larger number of quantizer bits improves the steady-state equalization performance and reduces the equalization noise, but slows convergence to the steady state.
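The polarity-plus-power-of-two error quantization that makes such an equalizer hardware-friendly can be sketched as follows. This is an illustrative real-valued sketch, not the paper's exact update; the step size `mu`, the bit budget, and the exponent range are assumptions:

```python
import math

def pow2_quantize(e, bits=4, max_exp=0):
    """Quantize the error to a signed power of two using a finite number of
    exponent levels (2**bits levels), keeping only the sign of e otherwise.
    Returns sign(e) * 2**k with k clamped to [max_exp - 2**bits + 1, max_exp]."""
    if e == 0.0:
        return 0.0
    sign = 1.0 if e > 0 else -1.0
    k = round(math.log2(abs(e)))                     # nearest power-of-two exponent
    k = max(min(k, max_exp), max_exp - 2**bits + 1)  # clamp to the finite bit range
    return sign * 2.0**k

def lms_step(taps, x, d, mu=0.125, bits=4):
    """One quantized-error tap update (real-valued sketch).
    x: input vector (len == len(taps)), d: desired output.
    With a power-of-two error, the multiply becomes a bit shift in hardware."""
    y = sum(w * xi for w, xi in zip(taps, x))
    e = d - y
    q = pow2_quantize(e, bits)
    return [w + mu * q * xi for w, xi in zip(taps, x)], e
```

Fewer bits mean a coarser error and noisier steady state but cheaper hardware; more bits approach the unquantized update, matching the trade-off the abstract reports.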

HS Implementation Based on Music Scale (음계를 기반으로 한 HS 구현)

  • Lee, Tae-Bong
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology, v.15 no.5, pp.299-307, 2022
  • Harmony Search (HS) is a relatively recently developed meta-heuristic optimization algorithm, and various studies have been conducted on it. HS is modeled on a musician's improvisation, with the objective variables playing the role of the instruments. However, each instrument is given only a sound range; there is no concept of a scale, which can be said to be the basis of music. In this study, the performance of the algorithm is improved by introducing a scale into HS and quantizing the bandwidth. The scale is used to initialize the harmony memory (HM) in place of the conventional random initialization over the sound range. The quantization step can be set arbitrarily: a relatively large bandwidth at the beginning of the algorithm improves exploration, and a small bandwidth in the second half improves exploitation. Introducing the scale and quantizing the bandwidth reduced the performance deviation caused by initial values and improved the convergence speed and success rate compared with conventional HS. The results were confirmed by comparing optimization values for various test functions against the conventional method; specific comparative values are given in the simulations.
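The quantized bandwidth idea, large early for exploration and small late for exploitation, can be sketched as a piecewise-constant schedule. The exponential decay shape, the `steps` count, and the bounds here are illustrative assumptions, not the paper's formula:

```python
def quantized_bandwidth(iteration, max_iter, bw_max=1.0, bw_min=0.01, steps=4):
    """Piecewise-constant (quantized) bandwidth schedule for HS pitch
    adjustment: a continuous decay from bw_max to bw_min is snapped to
    `steps` discrete levels, so the bandwidth stays large for the first
    portion of the run and drops toward bw_min in the final portion."""
    frac = iteration / max_iter
    level = min(int(frac * steps), steps - 1)       # which quantization step we are in
    t = level / (steps - 1) if steps > 1 else 1.0   # 0 → bw_max, 1 → bw_min
    return bw_max * (bw_min / bw_max) ** t          # geometric interpolation
```

Inside HS, this value would replace the fixed bandwidth when perturbing a pitch drawn from the harmony memory.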

A Novel Distributed Secret Key Extraction Technique for Wireless Network (무선 네트워크를 위한 분산형 비밀 키 추출 방식)

  • Im, Sanghun;Jeon, Hyungsuk;Ha, Jeongseok
    • The Journal of Korean Institute of Communications and Information Sciences, v.39A no.12, pp.708-717, 2014
  • In this paper, we present a secret key distribution protocol that does not resort to a key management infrastructure, aiming to provide a low-complexity distributed solution for wireless networks. The proposed scheme extracts a secret key from the random fluctuation of wireless channels. By exploiting time-division duplex transmission, two legitimate users, Alice and Bob, obtain highly correlated channel gains thanks to channel reciprocity, and a pair of random bit sequences can be generated by quantizing those gains. We propose a novel adaptive quantization scheme that adjusts the quantization thresholds according to channel variations and reduces the mismatch probability between the bit sequences generated by Alice and Bob. BCH codes, as a low-complexity and practical approach, are employed to correct the remaining mismatches and produce a secret key shared by Alice and Bob. To maximize the secret key extraction rate, the quantization levels and BCH code rates are jointly optimized.
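A common sketch of this kind of channel-gain quantization uses two adaptive thresholds around the local statistics and drops samples in the guard band between them, which is one way to lower the Alice/Bob mismatch probability. The mean/stddev thresholds and the `alpha` factor are assumptions, not necessarily the paper's exact scheme:

```python
from statistics import mean, stdev

def quantize_gains(gains, alpha=0.5):
    """Two-threshold adaptive quantizer for channel-gain key extraction.
    Thresholds track mean +/- alpha*stddev of the observed gains; samples
    falling between the thresholds are discarded (guard band), since those
    are the ones most likely to flip between Alice's and Bob's estimates.
    Returns the bit sequence and the indices of the samples that were kept,
    so both sides can agree on which samples contributed bits."""
    m, s = mean(gains), stdev(gains)
    upper, lower = m + alpha * s, m - alpha * s
    bits, kept = [], []
    for i, g in enumerate(gains):
        if g > upper:
            bits.append(1); kept.append(i)
        elif g < lower:
            bits.append(0); kept.append(i)
        # samples inside the guard band are dropped
    return bits, kept
```

The residual mismatches that survive the guard band are what the BCH code then reconciles.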

Signatures Verification by Using Nonlinear Quantization Histogram Based on Polar Coordinate of Multidimensional Adjacent Pixel Intensity Difference (다차원 인접화소 간 명암차의 극좌표 기반 비선형 양자화 히스토그램에 의한 서명인식)

  • Cho, Yong-Hyun
    • Journal of the Korean Institute of Intelligent Systems, v.26 no.5, pp.375-382, 2016
  • In this paper, we present a signature verification method that uses a nonlinear quantization histogram in polar coordinates, based on multi-dimensional adjacent-pixel intensity differences. The multi-dimensional adjacent-pixel intensity difference is calculated between pairs of pixels in the horizontal, vertical, diagonal, and opposite-diagonal directions around a reference pixel. The polar coordinates are obtained from the rectangular coordinates by pairing the horizontal with the vertical difference and the diagonal with the opposite-diagonal difference. The nonlinear quantization histogram is then computed by nonuniformly quantizing the polar-coordinate values with the Lloyd algorithm, a recursive method. The polar-coordinate histogram of the 4-directional intensity differences both captures more of the correlation between pixels and reduces the computational load by decreasing the number of histograms. The nonlinear quantization further reflects the nature of intensity variations between pixels while yielding a low-level histogram. The proposed method was applied to 90 signature images (3 persons × 30 signatures per person) of 256×256 pixels, using city-block, Euclidean, ordinal-value, and normalized cross-correlation matching measures. The experimental results show that the proposed method is superior to the linear quantization histogram and that Euclidean distance is the optimal matching measure.
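The recursive Lloyd algorithm the abstract relies on for nonuniform quantization alternates between assigning samples to the nearest codeword and moving each codeword to the mean of its cell. A minimal one-dimensional sketch (the uniform initialization and fixed iteration count are assumptions):

```python
def lloyd_quantizer(samples, levels=4, iters=50):
    """1-D Lloyd algorithm: returns `levels` codewords adapted to the
    sample distribution, so dense regions get closely spaced codewords
    (a nonuniform quantizer) rather than a fixed uniform grid."""
    lo, hi = min(samples), max(samples)
    # start from a uniform codebook over the sample range
    codebook = [lo + (hi - lo) * (i + 0.5) / levels for i in range(levels)]
    for _ in range(iters):
        cells = [[] for _ in range(levels)]
        for s in samples:
            nearest = min(range(levels), key=lambda k: abs(s - codebook[k]))
            cells[nearest].append(s)
        # centroid step: move each codeword to the mean of its cell
        codebook = [sum(c) / len(c) if c else codebook[j]
                    for j, c in enumerate(cells)]
    return sorted(codebook)
```

Applied to the polar-coordinate difference values, the resulting codewords define the nonuniform histogram bins.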

Adaptive Irregular Binning and Its Application to Video Coding Scheme Using Iterative Decoding (적응 불규칙 양자화와 반복 복호를 이용한 비디오 코딩 방식에의 응용)

  • Choi Kang-Sun
    • The Journal of Korean Institute of Communications and Information Sciences, v.31 no.4C, pp.391-399, 2006
  • We propose a novel low-complexity video encoder, at the expense of a more complex decoder, in which video frames are intra-coded periodically and the frames between successive intra-coded frames are coded efficiently with a proposed irregular binning technique. We investigate how to form an irregular binning that can quantize any value effectively with only a small number of bins by exploiting the correlation between successive frames. This correlation is further exploited at the decoder, where the quality of the reconstructed frames is enhanced gradually by applying POCS (projection onto convex sets). After an image frame is reconstructed from the irregular binning information at the proposed decoder, the quality can be improved further by modifying the reconstructed image with motion-compensated components from the neighboring frames, which are considered to contain image detail. The proposed decoder can invoke several iterations of these modification and re-projection steps. Experimental results show that the performance of the proposed coding scheme is comparable to that of H.264/AVC coding in m mode. Since the proposed video coding does not require motion estimation at the encoder, it can be considered an alternative to some versions of H.264/AVC in applications that require a simple encoder.
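One way to picture an irregular binning is to place bin edges densely around a prediction taken from the previous frame and coarsely far from it, so a handful of bins still quantize likely values finely. The tan-based edge placement below is purely illustrative; the paper's actual bin construction is not specified in the abstract:

```python
import math

def irregular_bins(prediction, spread=8.0, n_bins=8, value_range=(0.0, 255.0)):
    """Sketch of irregular binning: edges cluster around `prediction`
    (the temporally predicted value) and widen toward the range limits.
    `spread` controls how tight the central bins are."""
    lo, hi = value_range
    half_angle = math.atan((hi - lo) / spread)   # < pi/2, keeps tan monotone
    edges = []
    for i in range(n_bins + 1):
        u = 2.0 * i / n_bins - 1.0               # uniform grid on [-1, 1]
        x = prediction + spread * math.tan(u * half_angle)  # warp toward prediction
        edges.append(min(max(x, lo), hi))        # clip to the value range
    return edges

def bin_index(value, edges):
    """Index of the bin containing `value` (edges assumed sorted)."""
    for j in range(len(edges) - 1):
        if value <= edges[j + 1]:
            return j
    return len(edges) - 2
```

With 8 bins over a 0–255 range, values near the prediction land in bins only a few levels wide, while outliers fall into wide outer bins, which is the effect the abstract describes.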

Video Compression using Characteristics of Wavelet Coefficients (웨이브렛 계수의 특성을 이용한 비디오 영상 압축)

  • 문종현;방만원
    • Journal of Broadcast Engineering, v.7 no.1, pp.45-54, 2002
  • This paper proposes a video compression algorithm that uses the characteristics of wavelet coefficients. The proposed algorithm provides a lower bit rate and faster running time while preserving reconstructed image quality as judged by the human visual system. Each video sequence is decomposed into a pyramid of subimages at various resolutions, using the multiresolution capability of the discrete wavelet transform. Similarities between two neighboring frames are then obtained from the low-frequency subband, which contains the important information of an image, and motion information is extracted from the similarity criteria. Four region selection filters are designed according to the similarity criteria, and compression is carried out by encoding the coefficients in the preservation and replacement regions of the high-frequency subbands. The region selection filters classify the high-frequency subbands into preservation and replacement regions based on the similarity criteria; the coefficients in replacement regions are replaced by those of a reference frame or set to zero according to block-based similarities between the reference frame and successive frames. Encoding quantizes and arithmetic-codes the wavelet coefficients in the preservation and replacement regions separately. The reference frame is updated at the minimum point if the curve of similarity rates shows a concave pattern. Simulation results show that the proposed algorithm provides a high compression ratio with acceptable image quality. It also outperforms the previous Milton's algorithm in image quality, compression ratio, and running time, achieving a compression ratio below 0.2 bpp, a PSNR of 32 dB, and a running time of 10 ms for a standard 352×240-pixel video image.
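The block-based classification of high-frequency coefficients into preservation and replacement regions can be sketched as follows. The SAD similarity measure, block size, and threshold are illustrative assumptions standing in for the paper's four filters:

```python
def classify_blocks(subband, reference, block=4, threshold=10.0):
    """Sketch of region selection: for each block of a high-frequency
    subband, compute the sum of absolute differences (SAD) against the
    same block in the reference frame. Similar blocks become 'replacement'
    regions (coefficients reused from the reference or zeroed); dissimilar
    blocks become 'preservation' regions that are quantized and
    arithmetic-coded. Returns two lists of (row, col) block origins."""
    h, w = len(subband), len(subband[0])
    preserve, replace = [], []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            sad = sum(abs(subband[y][x] - reference[y][x])
                      for y in range(by, min(by + block, h))
                      for x in range(bx, min(bx + block, w)))
            (replace if sad < threshold else preserve).append((by, bx))
    return preserve, replace
```

Only the preservation blocks then need to be sent, which is where the bit-rate saving comes from.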