• Title/Summary/Keyword: entropy criterion


Decision Feedback Equalizer Algorithms based on Error Entropy Criterion (오차 엔트로피 기준에 근거한 결정 궤환 등화 알고리듬)

  • Kim, Nam-Yong
    • Journal of Internet Computing and Services / v.12 no.4 / pp.27-33 / 2011
  • To compensate for channel distortion caused by multipath fading and impulsive noise, a decision feedback equalizer (DFE) algorithm based on minimization of error entropy (MEE) is proposed. The MEE criterion has not previously been studied for DFE structures or for impulsive-noise environments. By minimizing the error entropy with respect to the equalizer weights in a decision feedback structure, the proposed algorithm is shown, in simulation environments with severe multipath and impulsive noise, to have superior capability of cancelling residual intersymbol interference.
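
The MEE criterion used in this and the later entries is typically estimated nonparametrically: Renyi's quadratic entropy of the error is approximated with a Parzen (Gaussian-kernel) estimate, whose argument is the so-called information potential. A minimal sketch, assuming a Gaussian kernel of width `sigma` (the equalizer-weight update itself is omitted):

```python
import numpy as np

def information_potential(errors, sigma=1.0):
    """Parzen estimate of the quadratic information potential:
    IP = (1/N^2) * sum_i sum_j G_{sigma*sqrt(2)}(e_i - e_j)."""
    e = np.asarray(errors, dtype=float)
    diff = e[:, None] - e[None, :]
    # Gaussian kernel with variance 2*sigma^2 (convolution of two kernels)
    kernel = np.exp(-diff**2 / (4 * sigma**2)) / (2 * sigma * np.sqrt(np.pi))
    return kernel.mean()

def renyi_quadratic_entropy(errors, sigma=1.0):
    """H2(e) = -log(IP); minimizing error entropy maximizes IP."""
    return -np.log(information_potential(errors, sigma))
```

Tightly clustered errors give a larger information potential, hence lower entropy, which is what an MEE-trained equalizer drives toward.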

A Study on the Surface Asperities Assessment by Fractal Analysis (프랙탈 해석을 이용한 표면 미세형상 평가 기법에 관한 연구)

  • 조남규
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.7 no.5 / pp.7-14 / 1998
  • In this paper, fractal analysis is applied to the evaluation of machined surface profiles. The spectrum method was used to calculate the fractal dimension of surface profiles generated by the Weierstrass-Mandelbrot fractal function. To avoid estimation errors caused by the low-frequency characteristics of the FFT, the maximum entropy method (MEM) was examined, and a new criterion for choosing the MEM order m is suggested. The MEM power spectrum computed with this criterion is shown to be advantageous by comparison with experimental results.
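
The spectrum method mentioned above rests on the fact that a fractal profile has a power-law spectrum, S(f) ~ f^(-beta), with the common relation D = (5 - beta)/2 for a profile. A minimal FFT-based sketch (not the paper's MEM estimator, which replaces the FFT spectrum):

```python
import numpy as np

def spectral_fractal_dimension(profile, dt=1.0):
    """Fit the log-log slope of the power spectrum and convert it to a
    fractal dimension via D = (5 - beta) / 2 (profile convention)."""
    z = np.asarray(profile, dtype=float) - np.mean(profile)
    spec = np.abs(np.fft.rfft(z))**2
    freqs = np.fft.rfftfreq(len(z), d=dt)
    mask = freqs > 0                      # drop the DC bin before the log fit
    slope, _ = np.polyfit(np.log(freqs[mask]), np.log(spec[mask] + 1e-30), 1)
    beta = -slope
    return (5.0 - beta) / 2.0
```

The low-frequency bins dominate such a least-squares fit, which is exactly the FFT weakness that motivated the authors' MEM alternative.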

A Study on the Minimum Error Entropy - related Criteria for Blind Equalization (블라인드 등화를 위한 최소 에러 엔트로피 성능기준들에 관한 연구)

  • Kim, Namyong;Kwon, Kihyun
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.2 no.3 / pp.87-95 / 2009
  • As information-theoretic learning techniques, the minimum error entropy (MEE) criterion and the maximum cross-correntropy criterion (MCC) have been studied in depth for supervised learning. The MEE criterion leads to maximization of the information potential, and the MCC criterion leads to maximization of the cross-correlation between the output and input random processes. A weighted combination of these two criteria, minimization of error entropy with fiducial points (MEEF), has been introduced and developed by many researchers. As an approach to unsupervised, blind channel equalization, we investigate the possibility of applying the constant modulus error (CME) to the MEE criterion, along with some problems of this method. We also study the application of CME to MEEF for blind equalization and find that MEE-CME loses the constant-modulus information. This causes MEE-CME and MEEF-CME not to converge, or to converge more slowly than other algorithms that depend on the constant modulus.

MEC; A new decision tree generator based on multi-base entropy (다중 엔트로피를 기반으로 하는 새로운 결정 트리 생성기 MEC)

  • 전병환;김재희
    • The Journal of Korean Institute of Communications and Information Sciences / v.22 no.3 / pp.423-431 / 1997
  • A new decision tree generator, MEC, is proposed in this paper; it uses the difference of multi-base entropy as a consistent criterion for both discretization and attribute selection. To evaluate its performance, the proposed generator is compared with other generators that use entropy-based criteria and adopt different discretization styles. Experimental results show that the proposed generator produces the most efficient classifiers, having the fewest leaves at the same error rate, regardless of whether the attribute values in the training set are discrete or continuous.
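
For orientation, the baseline that multi-base entropy generalizes is the ordinary Shannon-entropy splitting criterion (information gain). A minimal sketch of that baseline; the `base` parameter only hints at the multi-base idea, where the log base is tied to the number of classes, and is not the paper's exact MEC criterion:

```python
import math
from collections import Counter

def entropy(labels, base=2):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log(c / n, base) for c in Counter(labels).values())

def information_gain(rows, labels, attr_index):
    """Reduction in label entropy after partitioning on one discrete attribute."""
    total = entropy(labels)
    by_value = {}
    for row, y in zip(rows, labels):
        by_value.setdefault(row[attr_index], []).append(y)
    remainder = sum(len(part) / len(labels) * entropy(part)
                    for part in by_value.values())
    return total - remainder
```

A tree generator picks, at each node, the attribute with the highest such score; the paper's contribution is using one multi-base-entropy measure consistently for this selection and for discretizing continuous attributes.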

An Adaptive Data Compression Algorithm for Video Data (사진데이타를 위한 한 Adaptive Data Compression 방법)

  • 김재균
    • Journal of the Korean Institute of Telematics and Electronics / v.12 no.2 / pp.1-10 / 1975
  • This paper presents an adaptive data compression algorithm for video data. The coding complexity due to the high correlation in the given data sequence is alleviated by coding the difference data sequence rather than the data sequence itself. Adaptation to the nonstationary statistics of the data is confined to a code set consisting of two constant-length codes and six modified Shannon-Fano codes. It is assumed that the probability distributions of the difference data sequence and of the data entropy are Laplacian and Gaussian, respectively. The adaptive coding performance is compared for two code-selection criteria: entropy and $P_0$ = Pr[difference value = 0]. It is shown that a compression ratio of 2:1 is achievable with adaptive coding. The gain of adaptive over fixed coding is about 10% in compression ratio and 15% in code efficiency. In addition, $P_0$ is found to be not only a convenient criterion for code selection but also so efficient a parameter that it performs almost like entropy.
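
The two code-selection criteria compared above are both cheap to compute from the difference sequence. A minimal sketch of the quantities involved (the actual Shannon-Fano code tables and the adaptive switching logic are not reproduced here):

```python
import math
from collections import Counter

def difference_sequence(data):
    """First-order differences; high correlation in the source
    concentrates the differences near zero, favoring short codes."""
    return [b - a for a, b in zip(data, data[1:])]

def empirical_entropy(seq):
    """Empirical entropy in bits per symbol, a lower bound on code length."""
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in Counter(seq).values())

def p_zero(seq):
    """P0 = Pr[difference == 0], the paper's simpler selection criterion."""
    return seq.count(0) / len(seq)
```

A code selector would bucket each block by its entropy (or, more cheaply, by its P0) and pick the matching member of the eight-code set.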

A Simple Stopping Criterion for the MIN-SUM Iterative Decoding Algorithm on SCCC and Turbo code (반복 복호의 계산량 감소를 위한 간단한 복호 중단 판정 알고리즘)

  • Heo, Jun;Chung, Kyu-Hyuk
    • Journal of the Institute of Electronics Engineers of Korea TC / v.41 no.4 / pp.11-16 / 2004
  • A simple stopping criterion for iterative decoding based on min-sum processing is presented. While most stopping criteria suggested in the literature are based on cross entropy (CE) or its simplifications, the proposed criterion checks whether the decoded sequence is a valid codeword along the encoder trellis structure. This new stopping criterion requires less computation and saves memory compared with conventional stopping rules. Numerical results are presented for the 3GPP turbo code and a serially concatenated convolutional code (SCCC).
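
The idea of the validity check can be illustrated with a toy rate-1/2 convolutional code: re-encode the hard-decision information bits and stop iterating once the regenerated parity matches the decoded parity. This sketch uses an assumed feedforward generator g(D) = 1 + D + D^2 as a stand-in; the paper's check runs along the actual SCCC/turbo encoder trellis:

```python
def reencode_parity(info_bits):
    """Parity stream of a toy rate-1/2 convolutional encoder, g(D)=1+D+D^2."""
    state = [0, 0]                       # two-bit shift register
    out = []
    for u in info_bits:
        out.append(u ^ state[0] ^ state[1])
        state = [u, state[0]]
    return out

def is_valid_codeword(info_hard, parity_hard):
    """Stopping test: the hard-decision sequence is a valid codeword
    iff re-encoding the information bits reproduces the parity bits."""
    return reencode_parity(info_hard) == parity_hard
```

Compared with cross-entropy rules, this needs no soft-value bookkeeping between iterations, which is the complexity and memory saving the abstract claims.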

Optimum Solutions of Minimum Error Entropy Algorithm (최소 오차 엔트로피 알고리듬의 최적해)

  • Kim, Namyong;Lee, Gyoo-yeong
    • Journal of Internet Computing and Services / v.17 no.3 / pp.19-24 / 2016
  • The minimum error entropy (MEE) algorithm is known to be superior in impulsive-noise environments. In this paper, the optimum solutions and properties of the MEE algorithm are studied with regard to robustness against impulsive noise. From an analysis of the behavior of the optimum weights and of the factors that mitigate the influence of large errors, it is revealed that the magnitude-controlled input entropy plays the main role in keeping the optimum weights of MEE undisturbed by impulsive noise. In simulations, the optimum weights of MEE are shown to be the same as those of the MSE criterion.

Comparison of Objective Functions for Feed-forward Neural Network Classifiers Using Receiver Operating Characteristics Graph

  • Oh, Sang-Hoon;Wakuya, Hiroshi
    • International Journal of Contents / v.10 no.1 / pp.23-28 / 2014
  • When developing classifiers using various objective functions, it is important to compare their performances. Although statistical analyses of objective functions for classifiers exist, simulation results provide direct comparisons, and in that case the comparison criterion is critical. A receiver operating characteristics (ROC) graph is a simulation technique for comparing classifiers and selecting the better one based on performance. In this paper, we adopt the ROC graph to compare classifiers trained with the mean-squared error, cross-entropy error, classification figure of merit, and n-th-order extension of cross-entropy error functions. After training feed-forward neural networks on the CEDAR database, the ROC graphs are plotted to identify which objective function is better.
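
An ROC graph is built by sweeping a decision threshold over the classifier's output scores and recording the (false-positive rate, true-positive rate) pair at each step. A minimal sketch for binary labels (1 = positive, 0 = negative); the paper's multi-class setting would apply this one class versus the rest:

```python
import numpy as np

def roc_points(scores, labels):
    """(FPR, TPR) pairs obtained by sweeping the threshold from the
    highest score downward, suitable for plotting an ROC graph."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    order = np.argsort(-scores)          # descending score order
    labels = labels[order]
    tps = np.cumsum(labels == 1)         # true positives at each threshold
    fps = np.cumsum(labels == 0)         # false positives at each threshold
    tpr = tps / max(tps[-1], 1)
    fpr = fps / max(fps[-1], 1)
    return list(zip(fpr, tpr))
```

A classifier whose curve passes closer to the top-left corner (FPR 0, TPR 1) dominates, which is how the compared objective functions are ranked.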

Neural Network Modeling for the Superheated, Saturated and Compressed Region of Steam Table (증기표의 과열, 포화 및 압축영역의 신경회로망 모델링)

  • Lee, Tae-Hwan;Park, Jin-Hyun
    • Journal of the Korean Society of Mechanical Technology / v.20 no.6 / pp.872-878 / 2018
  • Steam tables covering the superheated, saturated, and compressed regions were modeled simultaneously using neural networks. Pressure and temperature were the two inputs for the superheated and compressed regions, while pressure and dryness fraction were the inputs for the saturated region. The outputs were specific volume, specific enthalpy, and specific entropy. The neural network models were compared with a linear interpolation model in terms of percentage relative error, with a relative error of 1% taken as the criterion of judgement. In conclusion, the neural networks gave better results than interpolation for all data in the superheated and compressed regions and for the specific volume of the saturated region, and similar results for the specific enthalpy and entropy of the saturated region.

A Study on Incremental Learning Model for Naive Bayes Text Classifier (Naive Bayes 문서 분류기를 위한 점진적 학습 모델 연구)

  • 김제욱;김한준;이상구
    • The Journal of Information Technology and Database / v.8 no.1 / pp.95-104 / 2001
  • In the text classification domain, labeling training documents is an expensive process because it requires human expertise and is a tedious, time-consuming task. It is therefore important to reduce the manual labeling of training documents while improving the text classifier. Selective sampling, a form of active learning, reduces the number of training documents that need to be labeled by examining the unlabeled documents and selecting the most informative ones for manual labeling. We apply this methodology to Naive Bayes, a classifier renowned as a successful method in text classification. One of the most important issues in selective sampling is the criterion for selecting training documents from the large pool of unlabeled documents. In this paper, we propose two measures for this criterion: the mean absolute deviation (MAD) and the entropy measure. Experimental results on the Reuters-21578 corpus show that the proposed learning method improves the Naive Bayes text classifier more than existing ones.
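
The entropy measure proposed above ranks unlabeled documents by how uncertain the current classifier is about them: a document whose Naive Bayes class posterior is nearly uniform is the most informative to label next. A minimal sketch of that selection step, assuming the posteriors have already been computed (the MAD measure and the Naive Bayes model itself are omitted):

```python
import math

def posterior_entropy(probs):
    """Entropy of a class-posterior distribution; higher = more uncertain."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def select_most_informative(posteriors, k=1):
    """Selective sampling: return indices of the k documents whose
    posteriors have the highest entropy, for manual labeling."""
    ranked = sorted(range(len(posteriors)),
                    key=lambda i: posterior_entropy(posteriors[i]),
                    reverse=True)
    return ranked[:k]
```

Retraining on the newly labeled documents and repeating the selection is what lets the classifier improve with far fewer manual labels than random sampling.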