• Title/Abstract/Keywords: Hellinger Divergence

Search results: 12 (processing time 0.024 s)

SOME INEQUALITIES FOR THE CSISZÁR Φ-DIVERGENCE

  • Dragomir, S.S.
    • Journal of the Korean Society for Industrial and Applied Mathematics / Vol. 7, No. 1 / pp.63-77 / 2003
  • Some inequalities for the Csiszár Φ-divergence and applications for the Kullback-Leibler, Rényi, Hellinger and Bhattacharyya distances in Information Theory are given.

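The distances named in this abstract are closely related for discrete distributions: the squared Hellinger distance equals 1 − BC, where BC is the Bhattacharyya coefficient. A minimal sketch (not from the paper) illustrating the three quantities:

```python
import numpy as np

def bhattacharyya_coefficient(p, q):
    """BC(P, Q) = sum_i sqrt(p_i * q_i); equals 1 iff P == Q."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(np.sqrt(p * q)))

def bhattacharyya_distance(p, q):
    """D_B(P, Q) = -ln BC(P, Q)."""
    return -np.log(bhattacharyya_coefficient(p, q))

def hellinger_distance(p, q):
    """H(P, Q) = sqrt(0.5 * sum_i (sqrt(p_i) - sqrt(q_i))^2) = sqrt(1 - BC)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print(hellinger_distance(p, p))   # 0.0 for identical distributions
print(hellinger_distance(p, q))   # small positive value
```

Because both probability vectors sum to one, expanding the square gives H² = 1 − Σ√(p·q), which is the identity relating the two distances.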

Minimum Hellinger Distance Estimation and Minimum Density Power Divergence Estimation in Estimating Mixture Proportions

  • Pak, Ro-Jin
    • Journal of the Korean Data and Information Science Society / Vol. 16, No. 4 / pp.1159-1165 / 2005
  • Basu et al. (1998) proposed a new density-based estimator, called the minimum density power divergence estimator (MDPDE), which avoids nonparametric density estimation and associated complications such as bandwidth selection. Woodward et al. (1995) examined the minimum Hellinger distance estimator (MHDE), proposed by Beran (1977), for estimating the mixture proportion in a mixture of two normals. In this article, we introduce the MDPDE for a mixture proportion and show that the MDPDE and the MHDE have the same asymptotic distribution at the model. A simulation study identifies some cases where the MHDE is consistently better than the MDPDE in terms of bias.

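As an illustration of the MDPDE idea (a sketch, not the authors' code): for a normal location model with known scale, the integral term of the density power divergence objective does not depend on the mean, so minimizing the divergence reduces to maximizing the sum of α-powered densities. The grid search, data, and function name below are assumptions for illustration:

```python
import numpy as np

def mdpde_normal_mean(x, alpha=0.5, sigma=1.0, grid_size=1000):
    """MDPDE of a normal mean with known sigma (illustrative sketch).

    For fixed sigma, the integral term of the density power divergence
    objective is constant in mu, so minimizing the divergence is
    equivalent to maximizing sum_i f_mu(x_i)^alpha.  Raising the density
    to the power alpha downweights observations far from mu, which is
    the source of the estimator's robustness."""
    mus = np.linspace(np.min(x), np.max(x), grid_size)
    z = (x[None, :] - mus[:, None]) / sigma
    dens = np.exp(-0.5 * z ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    objective = np.sum(dens ** alpha, axis=1)     # maximize over the grid
    return float(mus[np.argmax(objective)])

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 180),    # clean data centered at 0
                    np.full(20, 10.0)])           # 10% gross outliers at 10
print(np.mean(x))             # sample mean, dragged toward the outliers
print(mdpde_normal_mean(x))   # robust MDPDE estimate, near 0
```

As α → 0 the objective behaves like the log-likelihood (full efficiency, little robustness); larger α trades efficiency for robustness, which is the single-parameter trade-off the abstracts refer to.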

Empirical Comparisons of Disparity Measures for Partial Association Models in Three Dimensional Contingency Tables

  • Jeong, D.B.; Hong, C.S.; Yoon, S.H.
    • Communications for Statistical Applications and Methods / Vol. 10, No. 1 / pp.135-144 / 2003
  • This work is concerned with comparing recently developed disparity measures for the partial association model in three-dimensional categorical data. Data are generated by simulating each term of the log-linear model equation under the partial association model, a method proposed in this paper. These alternative Monte Carlo methods are explored to study the behavior of disparity measures such as the power divergence statistic I(λ), the Pearson chi-square statistic $X^2$, the likelihood ratio statistic $G^2$, the blended weight chi-square statistic BWCS(λ), the blended weight Hellinger distance statistic BWHD(λ), and the negative exponential disparity statistic NED(λ) for moderate sample sizes. We find that the power divergence statistic I(2/3) and the blended weight Hellinger distance family BWHD(1/9) are the best tests with respect to size and power.
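The Cressie-Read power divergence family I(λ) that appears throughout these comparisons unifies several of the listed statistics: λ = 1 recovers the Pearson chi-square and λ → 0 the likelihood ratio $G^2$. A minimal sketch with assumed observed/expected counts:

```python
import numpy as np

def power_divergence(obs, exp, lam):
    """Cressie-Read power divergence statistic
    I(lambda) = 2 / (lam * (lam + 1)) * sum_i O_i * ((O_i / E_i)^lam - 1).
    lam = 1 gives Pearson X^2; the lam -> 0 limit gives G^2."""
    obs = np.asarray(obs, float)
    exp = np.asarray(exp, float)
    if abs(lam) < 1e-12:                     # continuous limit at lam = 0
        return 2.0 * np.sum(obs * np.log(obs / exp))
    return (2.0 / (lam * (lam + 1.0))) * np.sum(obs * ((obs / exp) ** lam - 1.0))

obs = np.array([30.0, 14.0, 34.0, 45.0, 27.0])
exp = np.full(5, np.sum(obs) / 5.0)          # uniform null: E_i = n / 5 = 30

pearson = np.sum((obs - exp) ** 2 / exp)
print(power_divergence(obs, exp, 1.0), pearson)   # the two agree
print(power_divergence(obs, exp, 2.0 / 3.0))      # member recommended by Cressie and Read
```

The I(2/3) member singled out in the abstract is this same formula evaluated at λ = 2/3.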

The Minimum Squared Distance Estimator and the Minimum Density Power Divergence Estimator

  • Pak, Ro-Jin
    • Communications for Statistical Applications and Methods / Vol. 16, No. 6 / pp.989-995 / 2009
  • Basu et al. (1998) proposed a minimum divergence estimation method that is free of the burdensome kernel density estimator. Their class of density power divergences is indexed by a single parameter $\alpha$ which controls the trade-off between robustness and efficiency. In this article, (1) we introduce a large new class, the minimum squared distance estimators, which ranges from the minimum Hellinger distance to the minimum $L_2$ distance, and show that under certain conditions the minimum density power divergence estimator (MDPDE) and the minimum squared distance estimator (MSDE) are asymptotically equivalent; and (2) in finite samples the MDPDE generally performs better than the MSDE, but there are cases where the MSDE performs better when estimating a location parameter or a proportion of mixed distributions.

A Combined Method of Rule Induction Learning and Instance-Based Learning

  • Lee, Chang-Hwan
    • The Transactions of the Korea Information Processing Society / Vol. 4, No. 9 / pp.2299-2308 / 1997
  • Most machine learning research has focused on a single method in isolation. However, the demand for effectively combining two or more machine learning methods is growing, and this paper therefore presents a system that integrates rule induction and instance-based learning. In the rule induction stage, rules are generated automatically using the Hellinger measure, a kind of entropy function; in the instance-based learning stage, a new instance-based learning method is presented that remedies shortcomings of existing algorithms. The developed system was evaluated on several kinds of data and compared with other machine learning methods.

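The abstract does not give the exact Hellinger quantity used for rule generation. Purely as an illustration (the function name and formulation below are assumptions, not the paper's definition), a Hellinger-distance criterion can score how strongly a candidate rule's condition shifts the class distribution away from the prior:

```python
import numpy as np

def hellinger_gain(prior, posterior):
    """Hellinger distance between the prior class distribution and the
    class distribution after conditioning on a rule's antecedent.
    Larger values indicate a more informative condition."""
    p = np.asarray(prior, float)
    q = np.asarray(posterior, float)
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

prior = [0.5, 0.5]                         # balanced two-class problem
print(hellinger_gain(prior, [0.5, 0.5]))   # 0.0: condition tells us nothing
print(hellinger_gain(prior, [0.9, 0.1]))   # larger: strongly predictive condition
```

A rule inducer could greedily add the condition with the largest such gain, analogously to entropy-based splitting in decision trees.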

Bayesian Model Selection in the Unbalanced Random Effect Model

  • Kim, Dal-Ho; Kang, Sang-Gil; Lee, Woo-Dong
    • Journal of the Korean Data and Information Science Society / Vol. 15, No. 4 / pp.743-752 / 2004
  • In this paper, we develop a Bayesian model selection procedure using the reference prior for comparing two nested models, the independent and intraclass models, using the distance or divergence between the two as the basis of comparison. A suitable criterion is the power divergence measure introduced by Cressie and Read (1984), which includes the Kullback-Leibler divergence and the Hellinger divergence as special cases. For this problem, the power divergence measure turns out to be a function solely of $\rho$, the intraclass correlation coefficient. This function is convex, and its minimum is attained at $\rho=0$. We use the reference prior for $\rho$. Due to the duality between hypothesis tests and set estimation, the hypothesis testing problem can also be solved by solving a corresponding set estimation problem. The present paper develops a Bayesian method based on the Kullback-Leibler and Hellinger divergence measures, rejecting $H_0:\rho=0$ when the specified divergence measure exceeds some number d. This number d is chosen so that the resulting credible interval for the divergence measure has specified coverage probability $1-{\alpha}$. The length of such an interval is compared with the equal two-tailed credible interval and the HPD credible interval for $\rho$ with the same coverage probability, which can also be inverted into acceptance regions of $H_0:\rho=0$. An example is considered where the HPD interval based on the one-at-a-time reference prior turns out to be the shortest credible interval having the same coverage probability.


Empirical Comparisons of Disparity Measures for Three Dimensional Log-Linear Models

  • Park, Y.S.; Hong, C.S.; Jeong, D.B.
    • Journal of the Korean Data and Information Science Society / Vol. 17, No. 2 / pp.543-557 / 2006
  • This paper is concerned with the applicability of the chi-square approximation to six disparity statistics: the Pearson chi-square, the generalized likelihood ratio, the power divergence, the blended weight chi-square, the blended weight Hellinger distance, and the negative exponential disparity statistic. Three-dimensional contingency tables of small and moderate sample sizes are generated and fitted to all possible hierarchical log-linear models: the completely independent model, the conditionally independent model, the partial association models, and the model with one variable independent of the other two. For models with direct solutions for expected cell counts, point estimates and confidence intervals of the 90th and 95th percentage points of the six statistics are explored. For models without direct solutions, the empirical significance levels and empirical powers of the six statistics for testing the significance of the three-factor interaction are computed and compared.


The Estimating Equations Induced from the Minimum Distance Estimation

  • Pak, Ro-Jin
    • Journal of the Korean Data and Information Science Society / Vol. 14, No. 3 / pp.687-696 / 2003
  • This article presents a new family of estimating functions related to minimum distance estimation and discusses its relationship to the family of minimum density power divergence estimating equations. Two representative minimum distance estimations, the minimum $L_2$ distance estimation and the minimum Hellinger distance estimation, are studied in light of the theory of estimating equations. Despite their desirable properties, minimum distance estimations are not widely used by general researchers because the related theory is complex and hard to implement computationally in real problems. We hope this article helps in understanding minimum distance estimations better.


Signed Hellinger Measure for Directional Association

  • Park, Hee-Chang
    • Journal of the Korean Data and Information Science Society / Vol. 27, No. 2 / pp.353-362 / 2016
  • Data mining seeks to discover new rules or latent knowledge in big data and to use them as a basis for decision making. According to Wikipedia, association rules, one of the data mining techniques, find relationships among items of interest using association criteria, and many researchers have developed interestingness measures for evaluating such associations. Among these, the Hellinger measure has many advantages over other interestingness measures, but it cannot determine the direction of an association. To address this problem, this paper proposes a signed Hellinger measure and examines its usefulness through several examples. The proposed signed Hellinger measure turns out to be positive for positively associated items and negative for negatively associated items. Moreover, as the co-occurrence frequency, the co-non-occurrence frequency, or the mismatch frequency increases, the basic association criteria and the signed Hellinger measure increase or decrease together.
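The abstract describes the sign behavior but not the exact formula. One plausible formulation (an assumption, not necessarily Park's exact definition): take the Hellinger distance between the observed 2x2 cell probabilities and the cell probabilities under independence, and attach the sign of P(AB) − P(A)P(B):

```python
import numpy as np

def signed_hellinger(n11, n10, n01, n00):
    """Sketch of a signed Hellinger measure for an association A - B,
    from a 2x2 contingency table.
    n11: A and B;  n10: A without B;  n01: B without A;  n00: neither."""
    n = n11 + n10 + n01 + n00
    joint = np.array([n11, n10, n01, n00]) / n            # observed cell probs
    pa, pb = (n11 + n10) / n, (n11 + n01) / n             # marginals P(A), P(B)
    indep = np.array([pa * pb, pa * (1 - pb),
                      (1 - pa) * pb, (1 - pa) * (1 - pb)])  # cells under independence
    h = np.sqrt(0.5 * np.sum((np.sqrt(joint) - np.sqrt(indep)) ** 2))
    sign = np.sign(joint[0] - indep[0])                   # P(AB) vs P(A)P(B)
    return float(sign * h)

print(signed_hellinger(60, 10, 10, 20))   # positive: A and B co-occur often
print(signed_hellinger(10, 40, 40, 10))   # negative: A and B repel each other
```

Under exact independence the measure is zero, and its sign matches the direction of association, which is the behavior the abstract reports.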

A New Importance Measure of Association Rules Using Information Theory

  • Lee, Chang-Hwan; Bae, Joo-Hyun
    • KIPS Transactions on Software and Data Engineering / Vol. 3, No. 1 / pp.37-42 / 2014
  • Classification learning based on association rules has recently been an active research area. For such classification, computing a numerical importance for each association rule is essential. This paper proposes a new rule-importance measure, called the H measure, based on information theory; specifically, the importance of an association rule is computed using the Hellinger quantity. We analyze various properties of the proposed H measure and compare the classification performance obtained with the H measure against that obtained with other rule measures.