Title/Summary/Keyword: Kullback-Leibler information measure

Analysis of Large Contingency Tables

  • Choi, Hyun-Jip
    • The Korean Journal of Applied Statistics, v.18 no.2, pp.395-410, 2005
  • For the analysis of large tables formed by many categorical variables, we suggest a method that groups the variables into several disjoint groups within which the variables are completely associated. We use a simple function of the Kullback-Leibler divergence as a similarity measure to find the groups. Since the groups are complete hierarchical sets, the association structure of the large table can be identified by marginal log-linear models. Examples are introduced to illustrate the suggested method.
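
A minimal sketch of the kind of similarity the abstract describes, assuming the similarity between two categorical variables is the KL divergence between their joint distribution and the product of their marginals (their mutual information), computed from a two-way contingency table. The function name and the toy tables are illustrative, not the authors' exact procedure.

```python
import numpy as np

def kl_similarity(table):
    """KL divergence between the joint distribution of two categorical
    variables and the product of their marginals (mutual information).
    `table` is a 2-D contingency table of counts."""
    p = table / table.sum()                # joint cell probabilities
    px = p.sum(axis=1, keepdims=True)      # row marginals
    py = p.sum(axis=0, keepdims=True)      # column marginals
    mask = p > 0                           # skip empty cells (0 log 0 = 0)
    return float((p[mask] * np.log(p[mask] / (px @ py)[mask])).sum())

# A strongly associated pair scores high; a near-independent pair scores near 0.
strong = np.array([[40.0, 2.0], [3.0, 55.0]])
weak = np.array([[25.0, 24.0], [26.0, 25.0]])
print(kl_similarity(strong))  # large: candidates for the same group
print(kl_similarity(weak))    # near 0: keep in separate groups
```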

A View on Extension of Utility-Based on Links with Information Measures

  • Hoseinzadeh, A.R.;Borzadaran, G.R.Mohtashami;Yari, G.H.
    • Communications for Statistical Applications and Methods, v.16 no.5, pp.813-820, 2009
  • In this paper, we review the utility-based generalization of the Shannon entropy and Kullback-Leibler information measure as the U-entropy and the U-relative entropy that was introduced by Friedman et al. (2007). Then, we derive some relations between the U-relative entropy and other information measures based on a parametric family of utility functions.
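
For orientation, the base case being generalized: with logarithmic utility, the U-entropy and U-relative entropy of Friedman et al. (2007) reduce to the ordinary Shannon entropy and KL information. A sketch of that log-utility special case only; the utility-based generalizations themselves are not reproduced here.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy H(p) = -sum p log p: the log-utility special case
    of the U-entropy."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def kl_information(p, q):
    """KL information D(p||q) = sum p log(p/q): the log-utility special
    case of the U-relative entropy."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return float((p[mask] * np.log(p[mask] / q[mask])).sum())

print(shannon_entropy([0.5, 0.5]))              # log 2
print(kl_information([0.8, 0.2], [0.5, 0.5]))   # > 0, zero iff p = q
```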

Generalized Kullback-Leibler information and its extensions to censored and discrete cases

  • Park, Sangun
    • Journal of the Korean Data and Information Science Society, v.23 no.6, pp.1223-1229, 2012
  • In this paper, we propose a generalized Kullback-Leibler (KL) information for measuring the distance between two distribution functions where the extension to the censored case is immediate. The generalized KL information has the nonnegativity and characterization properties, and its censored version has the additional property of monotonic increase. We also extend the discussion to the discrete case and propose a generalized censored measure which is comparable to Pearson's chi-square statistic.
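
The comparison with Pearson's chi-square can be made concrete in the uncensored discrete case. A sketch, assuming the familiar likelihood-ratio form 2n D(p_hat || p0) of discrete KL information, which is asymptotically equivalent to Pearson's statistic under the null; the paper's generalized and censored measures are not reproduced here.

```python
import numpy as np

def kl_statistic(observed, null_probs):
    """2n * D(p_hat || p0): discrete KL information between the empirical
    distribution and the null, scaled to a chi-square-like statistic."""
    n = observed.sum()
    p_hat = observed / n
    mask = p_hat > 0
    return float(2 * n * (p_hat[mask] * np.log(p_hat[mask] / null_probs[mask])).sum())

def pearson_chi2(observed, null_probs):
    """Pearson's chi-square statistic for the same null hypothesis."""
    expected = observed.sum() * null_probs
    return float(((observed - expected) ** 2 / expected).sum())

obs = np.array([18, 25, 30, 27])
p0 = np.full(4, 0.25)
print(kl_statistic(obs, p0), pearson_chi2(obs, p0))  # close under H0
```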

Kullback-Leibler Information of the Equilibrium Distribution Function and its Application to Goodness of Fit Test

  • Park, Sangun;Choi, Dongseok;Jung, Sangah
    • Communications for Statistical Applications and Methods, v.21 no.2, pp.125-134, 2014
  • Kullback-Leibler (KL) information is a measure of discrepancy between two probability density functions. Because KL information is not well defined on the empirical distribution function (EDF), several nonparametric density function estimators have been considered for estimating it. In this paper, we consider the KL information of the equilibrium distribution function, which is well defined on the EDF, and propose an EDF-based goodness-of-fit test statistic. We evaluate the performance of the proposed test statistic for an exponential distribution with Monte Carlo simulation, and we extend the discussion to the censored case.
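
The construction is easy to illustrate: the equilibrium distribution function F_e(x) = E[min(X, x)] / E[X] can be evaluated directly on the EDF, and for an exponential distribution F_e = F, which is what a goodness-of-fit statistic can exploit. The KL-type discrepancy below (discrete KL between bin probabilities induced by the equilibrium EDF and by the fitted exponential) is an illustrative stand-in, not the authors' exact statistic.

```python
import numpy as np

def equilibrium_cdf(sample, t):
    """Equilibrium distribution F_e(t) = E[min(X, t)] / E[X], evaluated
    directly from the sample, i.e. from the EDF."""
    return np.minimum(sample[:, None], t[None, :]).mean(axis=0) / sample.mean()

def exp_gof_discrepancy(sample, n_bins=20):
    """Discrete KL between bin probabilities induced by the equilibrium EDF
    and by the exponential CDF fitted via the sample mean. Near 0 for
    exponential data, since then F_e = F."""
    t = np.quantile(sample, np.linspace(0.05, 0.95, n_bins))
    fe = np.concatenate(([0.0], equilibrium_cdf(sample, t), [1.0]))
    f0 = np.concatenate(([0.0], 1 - np.exp(-t / sample.mean()), [1.0]))
    p, q = np.diff(fe), np.diff(f0)
    mask = p > 0
    return float((p[mask] * np.log(p[mask] / q[mask])).sum())

rng = np.random.default_rng(0)
print(exp_gof_discrepancy(rng.exponential(2.0, 500)))   # small
print(exp_gof_discrepancy(rng.uniform(0.0, 4.0, 500)))  # larger: not exponential
```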

On Information Theoretic Index for Measuring the Stochastic Dependence Among Sets of Variates

  • Kim, Hea-Jung
    • Journal of the Korean Statistical Society, v.26 no.1, pp.131-146, 1997
  • In this paper, the problem of measuring the stochastic dependence among sets of random variates is considered, and attention is specifically directed to forming a single well-defined measure of the dependence among sets of normal variates. A new information theoretic measure of the dependence, called the dependence index (DI), is introduced and several of its properties are studied. The development of DI is based on the generalization and normalization of the mutual information introduced by Kullback (1968). For data analysis, a minimum cross entropy estimator of DI is suggested, and its asymptotic distribution is obtained for testing the existence of the dependence. Monte Carlo simulations demonstrate the performance of the estimator, and show that it is useful not only for evaluating the dependence, but also for independence model testing.
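
The mutual information being generalized has a closed form under normality: partitioning a Gaussian vector into sets with covariance blocks Sigma_ii gives I = -(1/2) log(det(Sigma) / prod_i det(Sigma_ii)). A sketch of that quantity; the normalization into the DI and the minimum cross entropy estimation are the paper's contribution and are not reproduced here.

```python
import numpy as np

def gaussian_mutual_information(cov, blocks):
    """Mutual information among sets of normal variates:
    I = -0.5 * log(det(Sigma) / prod_i det(Sigma_ii)),
    where `blocks` lists the variable indices of each set."""
    cov = np.asarray(cov, dtype=float)
    log_det_full = np.linalg.slogdet(cov)[1]
    log_det_blocks = sum(np.linalg.slogdet(cov[np.ix_(b, b)])[1] for b in blocks)
    return 0.5 * (log_det_blocks - log_det_full)

# Two sets of two variates with one cross-set correlation of 0.5
cov = np.eye(4)
cov[0, 2] = cov[2, 0] = 0.5
print(gaussian_mutual_information(cov, [[0, 1], [2, 3]]))        # > 0: dependent sets
print(gaussian_mutual_information(np.eye(4), [[0, 1], [2, 3]]))  # 0: independent sets
```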

Image Restoration Algorithms Using Fisher Information

  • 오춘석;이현민;신승중;유영기
    • Journal of the Institute of Electronics Engineers of Korea SP, v.41 no.6, pp.89-97, 2004
  • An object that reflects or emits light is captured by an imaging system as a distorted image, owing to various sources of distortion. Image restoration is the task of estimating the original object by removing this distortion. Restoration methods fall into two categories: deterministic and stochastic. In this paper, image restoration using Minimum Fisher Information (MFI), derived from the work of B. Roy Frieden, is proposed. For MFI restoration, experimental results were investigated as a function of the noise control parameter, and cross entropy (Kullback-Leibler entropy) was used as a standard measure of restoration accuracy. It is confirmed that restoration results using MFI exhibit varying degrees of roughness according to the noise control parameter.
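
The two ingredients the abstract relies on are easy to state. A sketch, assuming the Frieden-style Fisher information of a nonnegative image, I(x) = sum |grad x|^2 / x, as the roughness functional that MFI penalizes, and cross entropy between unit-mass images as the accuracy measure; the restoration loop itself and the exact role of the noise control parameter are not reproduced here.

```python
import numpy as np

def fisher_information(img, eps=1e-8):
    """Frieden-style Fisher information of a nonnegative image,
    I(x) = sum |grad x|^2 / x: large for rough images, small for smooth ones."""
    gy, gx = np.gradient(img)
    return float(((gy**2 + gx**2) / (img + eps)).sum())

def cross_entropy(restored, original, eps=1e-12):
    """Cross entropy (KL) between images normalized to unit mass,
    used as a measure of restoration accuracy."""
    p = original / original.sum()
    q = restored / restored.sum()
    return float((p * np.log((p + eps) / (q + eps))).sum())

rng = np.random.default_rng(1)
clean = np.ones((64, 64))
clean[16:48, 16:48] = 5.0                                     # toy object
noisy = np.clip(clean + rng.normal(0, 0.5, clean.shape), 1e-3, None)
print(fisher_information(noisy) > fisher_information(clean))  # True: noise adds roughness
print(cross_entropy(noisy, clean))                            # accuracy relative to the original
```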

Clustering-based Object Feature Matching for Multi-camera Systems

  • Kim, Hyun-Soo;Kim, Gyeong-Hwan
    • Proceedings of the IEEK Conference, 2008.06a, pp.915-916, 2008
  • We propose clustering-based object feature matching for identifying the same object across cameras in a multi-camera system. The method focuses on ease of system initialization and extension. Clustering is used to estimate the parameters of Gaussian mixture models of the objects, and the similarity between models is determined by the Kullback-Leibler divergence. The method can also be applied to the occlusion problem in tracking.
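
A minimal sketch of the matching step, assuming scikit-learn's GaussianMixture for the clustering-based parameter estimation and a Monte Carlo approximation of the KL divergence between the fitted mixtures (KL between Gaussian mixtures has no closed form); the feature extraction and any matching threshold are outside the sketch.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_object_model(features, n_components=3, seed=0):
    """Cluster an object's feature vectors into a Gaussian mixture model."""
    return GaussianMixture(n_components=n_components, random_state=seed).fit(features)

def kl_between_gmms(gmm_p, gmm_q, n_samples=5000):
    """Monte Carlo estimate of KL(p||q) = E_p[log p(x) - log q(x)]."""
    x, _ = gmm_p.sample(n_samples)
    return float(np.mean(gmm_p.score_samples(x) - gmm_q.score_samples(x)))

rng = np.random.default_rng(0)
cam1 = rng.normal([0.0, 0.0], 1.0, size=(400, 2))   # object features, camera 1
cam2 = rng.normal([0.2, 0.0], 1.0, size=(400, 2))   # same object, camera 2
other = rng.normal([5.0, 5.0], 1.0, size=(400, 2))  # a different object
p, q, r = (fit_object_model(f) for f in (cam1, cam2, other))
print(kl_between_gmms(p, q))  # small: likely the same object
print(kl_between_gmms(p, r))  # large: different objects
```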

Measure of Departure from Quasi-Symmetry and Bradley-Terry Models for Square Contingency Tables with Nominal Categories

  • Kouji Tahata;Nobuko Miyamoto;Sadao Tomizawa
    • Journal of the Korean Statistical Society, v.33 no.1, pp.129-147, 2004
  • For square contingency tables with nominal categories, this paper proposes a measure to represent the degree of departure from the quasi-symmetry (QS) model and the Bradley-Terry (BT) model. The proposed measure is expressed using the power divergence of Cressie and Read (1984) or the diversity index of Patil and Taillie (1982). The measure lies between 0 and 1, and is useful for comparing the degree of departure from QS or BT across several tables.
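
The building block is the power-divergence family itself. A sketch of that family; the normalization to [0, 1] and the QS/BT-specific construction follow the paper and are not reproduced here.

```python
import numpy as np

def power_divergence(p, q, lam):
    """Cressie-Read (1984) power divergence
    I(p:q; lam) = (1 / (lam * (lam + 1))) * sum p * ((p/q)**lam - 1).
    lam -> 0 recovers KL information; lam = 1 gives half of Pearson's chi-square."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    if abs(lam) < 1e-12:                 # KL limit as lam -> 0
        mask = p > 0
        return float((p[mask] * np.log(p[mask] / q[mask])).sum())
    return float((p * ((p / q) ** lam - 1)).sum() / (lam * (lam + 1)))

p = np.array([0.5, 0.3, 0.2])
q = np.array([1/3, 1/3, 1/3])
for lam in (0.0, 2/3, 1.0):              # 2/3 is Cressie and Read's recommendation
    print(lam, power_divergence(p, q, lam))
```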

Generalized Measure of Departure From Global Symmetry for Square Contingency Tables with Ordered Categories

  • Tomizawa, Sadao;Saitoh, Kayo
    • Journal of the Korean Statistical Society, v.27 no.3, pp.289-303, 1998
  • For square contingency tables with ordered categories, Tomizawa (1995) considered two kinds of measures to represent the degree of departure from global symmetry, which means that the probability that an observation falls in one of the cells of the upper-right triangle of the square table equals the probability that it falls in one of the cells of the lower-left triangle. This paper proposes a generalization of those measures. The proposed measure is expressed using the power divergence of Cressie and Read (1984) or the diversity index of Patil and Taillie (1982); special cases of it include Tomizawa's measures. The proposed measure would be useful for comparing the degree of departure from global symmetry across several tables.
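
For global symmetry the comparison reduces to two probabilities. An illustrative sketch, assuming the measure contrasts the conditional probability that an off-diagonal observation falls in the upper-right triangle against the symmetric reference 1/2, using the KL member of the divergence family normalized by log 2; the paper's full power-divergence and diversity-index versions are not reproduced here.

```python
import numpy as np

def global_symmetry_departure(table):
    """Illustrative departure-from-global-symmetry measure: KL divergence
    between (delta, 1 - delta) and (1/2, 1/2), normalized by log 2 so that
    0 means global symmetry and 1 means all off-diagonal mass on one side."""
    table = np.asarray(table, dtype=float)
    upper = np.triu(table, k=1).sum()    # upper-right triangle
    lower = np.tril(table, k=-1).sum()   # lower-left triangle
    p = np.array([upper, lower]) / (upper + lower)
    mask = p > 0
    return float((p[mask] * np.log(p[mask] / 0.5)).sum() / np.log(2))

symmetric = np.array([[10, 5, 2], [5, 10, 5], [2, 5, 10]])
skewed = np.array([[10, 9, 8], [1, 10, 9], [1, 1, 10]])
print(global_symmetry_departure(symmetric))  # 0: globally symmetric
print(global_symmetry_departure(skewed))     # clearly positive: departure
```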

Direct Divergence Approximation between Probability Distributions and Its Applications in Machine Learning

  • Sugiyama, Masashi;Liu, Song;du Plessis, Marthinus Christoffel;Yamanaka, Masao;Yamada, Makoto;Suzuki, Taiji;Kanamori, Takafumi
    • Journal of Computing Science and Engineering, v.7 no.2, pp.99-111, 2013
  • Approximating a divergence between two probability distributions from their samples is a fundamental challenge in statistics, information theory, and machine learning. A divergence approximator can be used for various purposes, such as two-sample homogeneity testing, change-point detection, and class-balance estimation. Furthermore, an approximator of a divergence between the joint distribution and the product of marginals can be used for independence testing, which has a wide range of applications, including feature selection and extraction, clustering, object matching, independent component analysis, and causal direction estimation. In this paper, we review recent advances in divergence approximation. Our emphasis is that directly approximating the divergence without estimating probability distributions is more sensible than a naive two-step approach of first estimating probability distributions and then approximating the divergence. Furthermore, despite the overwhelming popularity of the Kullback-Leibler divergence as a divergence measure, we argue that alternatives such as the Pearson divergence, the relative Pearson divergence, and the $L^2$-distance are more useful in practice because of their computationally efficient approximability, high numerical stability, and superior robustness against outliers.
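
A compact sketch of the direct approach the paper advocates, assuming a least-squares density-ratio estimator in the style of uLSIF: the ratio r(x) = p(x)/q(x) is modeled with Gaussian kernel basis functions and fit by regularized least squares, and the Pearson divergence PE(p||q) = (1/2) * integral of q * (p/q - 1)^2 is then read off as E_p[r]/2 - 1/2, with no density estimation anywhere. The bandwidth and regularizer below are fixed for brevity; in practice they would be chosen by cross-validation.

```python
import numpy as np

def pearson_divergence(x_p, x_q, sigma=1.0, lam=1e-3, n_basis=100, seed=0):
    """uLSIF-style direct estimate of the Pearson divergence PE(p||q),
    via a kernel model of the density ratio r(x) = p(x)/q(x)."""
    rng = np.random.default_rng(seed)
    centers = x_p[rng.choice(len(x_p), size=min(n_basis, len(x_p)), replace=False)]

    def design(x):  # Gaussian kernel features, one column per center
        d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2 * sigma**2))

    phi_p, phi_q = design(x_p), design(x_q)
    H = phi_q.T @ phi_q / len(x_q)   # approximates E_q[phi phi^T]
    h = phi_p.mean(axis=0)           # approximates E_p[phi]
    theta = np.linalg.solve(H + lam * np.eye(len(h)), h)
    return float((phi_p @ theta).mean() / 2 - 0.5)  # PE = E_p[r]/2 - 1/2

rng = np.random.default_rng(1)
same = pearson_divergence(rng.normal(0, 1, (500, 1)), rng.normal(0, 1, (500, 1)))
shift = pearson_divergence(rng.normal(1, 1, (500, 1)), rng.normal(0, 1, (500, 1)))
print(same, shift)  # near 0 vs. clearly positive
```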