• Title/Summary/Keyword: Kullback-Leibler cross entropy


Kullback-Leibler Information of Consecutive Order Statistics

  • Kim, Ilmun;Park, Sangun
    • Communications for Statistical Applications and Methods / v.22 no.5 / pp.487-494 / 2015
  • Calculating the Kullback-Leibler information of consecutive order statistics is complicated because it depends on a multi-dimensional integral. Park (2014) discussed a representation of the Kullback-Leibler information of the first r order statistics in terms of the hazard function and simplified the r-fold integral to a single integral. In this paper, we first express the Kullback-Leibler information in terms of the reversed hazard function. We then generalize the result of Park (2014) to arbitrary consecutive order statistics, deriving a single-integral form of the Kullback-Leibler information of an arbitrary block of order statistics; its relation to the Fisher information of order statistics is also discussed, with numerical examples provided.
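As a hedged, self-contained illustration of the quantity the abstract refers to (not the paper's single-integral representation itself), the sketch below computes the Kullback-Leibler information between two exponential densities both in closed form and by direct numerical integration of f(x) log(f(x)/g(x)); the rates 2.0 and 1.0 are arbitrary choices for the example.

```python
import math

# KL(f||g) for exponential densities f (rate a) and g (rate b)
# has the closed form log(a/b) + b/a - 1.
def kl_exponential_closed_form(a, b):
    return math.log(a / b) + b / a - 1

# Direct midpoint-rule integration of f(x) * log(f(x)/g(x)) over [0, upper].
def kl_exponential_numeric(a, b, upper=50.0, n=200000):
    h = upper / n
    total = 0.0
    for i in range(1, n + 1):
        x = (i - 0.5) * h            # midpoint of the i-th subinterval
        f = a * math.exp(-a * x)     # density of Exp(a)
        g = b * math.exp(-b * x)     # density of Exp(b)
        total += f * math.log(f / g) * h
    return total

closed = kl_exponential_closed_form(2.0, 1.0)
numeric = kl_exponential_numeric(2.0, 1.0)
print(closed, numeric)  # both ≈ log 2 - 0.5 ≈ 0.1931
```

The point of representations such as Park's is precisely to avoid the multi-dimensional analogue of this brute-force integral when order statistics are involved.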

Image Restoration Algorithms by using Fisher Information (피셔 인포메이션을 이용한 영상 복원 알고리즘)

  • 오춘석;이현민;신승중;유영기
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.6 / pp.89-97 / 2004
  • An object that reflects or emits light is captured by an imaging system as a distorted image, owing to various sources of distortion. Image restoration estimates the original object by removing this distortion. Restoration methods fall into two categories: deterministic methods and stochastic methods. In this paper, image restoration using Minimum Fisher Information (MFI), derived from B. Roy Frieden's work, is proposed. In MFI restoration, experimental results were investigated as a function of the noise control parameter, and cross entropy (Kullback-Leibler entropy) was used as a standard measure of restoration accuracy. It is confirmed that restoration results using MFI exhibit varying degrees of roughness according to the noise control parameter.
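The abstract uses cross entropy (Kullback-Leibler entropy) as a restoration-accuracy measure; a minimal sketch of that idea (my own illustration, not the paper's code) is to normalize the restored and reference images into discrete probability distributions and compute the KL value between them, where smaller means closer. The toy 1-D "images" below are invented for the example.

```python
import math

# KL-based restoration-accuracy measure: both images are normalized
# to sum to 1 and compared as discrete distributions.
def kl_accuracy(restored, reference, eps=1e-12):
    s_r = sum(restored)
    s_f = sum(reference)
    total = 0.0
    for r, f in zip(restored, reference):
        p = r / s_r + eps   # normalized restored intensity
        q = f / s_f + eps   # normalized reference intensity
        total += p * math.log(p / q)
    return total

reference = [1.0, 4.0, 9.0, 4.0, 1.0]   # hypothetical original object
good = [1.1, 3.9, 9.2, 3.8, 1.0]        # close restoration
poor = [5.0, 5.0, 5.0, 5.0, 5.0]        # flat, over-smoothed restoration
print(kl_accuracy(good, reference) < kl_accuracy(poor, reference))  # True
```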

On Information Theoretic Index for Measuring the Stochastic Dependence Among Sets of Variates

  • Kim, Hea-Jung
    • Journal of the Korean Statistical Society / v.26 no.1 / pp.131-146 / 1997
  • In this paper the problem of measuring the stochastic dependence among sets of random variates is considered, and attention is specifically directed to forming a single well-defined measure of the dependence among sets of normal variates. A new information-theoretic measure of the dependence, called the dependence index (DI), is introduced and several of its properties are studied. The development of DI is based on the generalization and normalization of the mutual information introduced by Kullback (1968). For data analysis, a minimum cross entropy estimator of DI is suggested, and its asymptotic distribution is obtained for testing the existence of the dependence. Monte Carlo simulations demonstrate the performance of the estimator and show that it is useful not only for evaluating the dependence, but also for independence model testing.
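For jointly normal variates split into two sets, the mutual information underlying index constructions of this kind has the closed form I = -0.5 log(det R / (det R11 · det R22)), where R is the joint correlation matrix; for two scalar sets with correlation rho this reduces to -0.5 log(1 - rho²). The sketch below shows only this standard closed form, not the paper's DI normalization.

```python
import math

# Mutual information between two jointly normal scalar variates
# with correlation rho: I = -0.5 * log(1 - rho^2).
def mutual_info_bivariate_normal(rho):
    return -0.5 * math.log(1.0 - rho ** 2)

print(mutual_info_bivariate_normal(0.0))             # 0.0: independence gives zero
print(round(mutual_info_bivariate_normal(0.9), 4))   # grows without bound as |rho| -> 1
```

The unboundedness as |rho| → 1 is one reason a normalized index such as DI is attractive: it maps the dependence onto a fixed, interpretable scale.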


A study on the performance improvement of learning based on consistency regularization and unlabeled data augmentation (일치성규칙과 목표값이 없는 데이터 증대를 이용하는 학습의 성능 향상 방법에 관한 연구)

  • Kim, Hyunwoong;Seok, Kyungha
    • The Korean Journal of Applied Statistics / v.34 no.2 / pp.167-175 / 2021
  • Semi-supervised learning uses both labeled and unlabeled data. Consistency regularization has recently become very popular in semi-supervised learning, and unsupervised data augmentation (UDA), which augments the unlabeled data, is also based on consistency regularization. In UDA training, the Kullback-Leibler divergence is used as the loss for unlabeled data and cross-entropy as the loss for labeled data. UDA uses techniques such as training signal annealing (TSA) and confidence-based masking to improve performance. In this study, we propose using the Jensen-Shannon divergence instead of the Kullback-Leibler divergence, using reverse TSA, and dropping confidence-based masking to improve performance. Through experiments, we show that the proposed technique yields better performance than UDA.
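Under my reading of the abstract, the proposed change replaces the KL consistency loss between a model's predictions on an unlabeled example and on its augmented view with the Jensen-Shannon divergence. The sketch below shows that substitution on toy probability vectors (the prediction values are invented); unlike KL, JS is symmetric and bounded by log 2.

```python
import math

# KL divergence between discrete distributions p and q.
def kl(p, q, eps=1e-12):
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Jensen-Shannon divergence: average KL of p and q to their mixture m.
def js(p, q):
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p_orig = [0.7, 0.2, 0.1]   # prediction on an unlabeled example
p_aug = [0.5, 0.3, 0.2]    # prediction on its augmented view
loss = js(p_orig, p_aug)   # consistency loss for this pair
print(0.0 <= loss <= math.log(2))  # True: JS is bounded, unlike KL
```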

Multi Agents-Multi Tasks Assignment Problem using Hybrid Cross-Entropy Algorithm (혼합 교차-엔트로피 알고리즘을 활용한 다수 에이전트-다수 작업 할당 문제)

  • Kim, Gwang
    • Journal of Korea Society of Industrial Information Systems / v.27 no.4 / pp.37-45 / 2022
  • In this paper, a multi agent-multi task assignment problem, a representative combinatorial optimization problem, is presented. The objective is to determine the coordinated agent-task assignment that maximizes the sum of the achievement rates of the tasks, where each achievement rate is a concave, increasing function of the number of agents assigned to the task. The problem is NP-hard with a non-linear objective function. To solve it, we propose a hybrid cross-entropy algorithm as an effective and efficient solution methodology. The general cross-entropy algorithm can suffer from drawbacks such as slow parameter updates and premature convergence, depending on the problem; the proposed method is designed to be less susceptible to both. Numerical experiments show that the proposed method outperforms the general cross-entropy algorithm.
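To make the baseline concrete, here is a minimal sketch of a plain (not hybrid) cross-entropy method on an assignment problem of this shape. The achievement-rate function 1 - exp(-0.5n), the problem sizes, and all algorithm parameters are my own assumptions for illustration, not the paper's.

```python
import math, random

# Objective: sum over tasks of an assumed achievement rate 1 - exp(-0.5 * n),
# a concave, increasing function of n, the number of agents assigned.
def objective(assignment, n_tasks):
    counts = [0] * n_tasks
    for task in assignment:
        counts[task] += 1
    return sum(1 - math.exp(-0.5 * n) for n in counts)

def ce_assign(n_agents=6, n_tasks=3, samples=200, elite=20,
              iters=30, alpha=0.7, seed=0):
    rng = random.Random(seed)
    # probs[i][j]: probability that agent i is assigned to task j
    probs = [[1.0 / n_tasks] * n_tasks for _ in range(n_agents)]
    best, best_val = None, -1.0
    for _ in range(iters):
        # Sample candidate assignments from the current distribution.
        pop = [[rng.choices(range(n_tasks), weights=probs[i])[0]
                for i in range(n_agents)] for _ in range(samples)]
        pop.sort(key=lambda a: objective(a, n_tasks), reverse=True)
        top = pop[:elite]
        if objective(top[0], n_tasks) > best_val:
            best, best_val = top[0], objective(top[0], n_tasks)
        # Smoothed update toward the elite samples' empirical frequencies;
        # the smoothing factor alpha guards against premature convergence.
        for i in range(n_agents):
            for j in range(n_tasks):
                freq = sum(a[i] == j for a in top) / elite
                probs[i][j] = alpha * freq + (1 - alpha) * probs[i][j]
    return best, best_val

assignment, value = ce_assign()
print(assignment, round(value, 3))
```

With a concave achievement rate and identical tasks, the optimum spreads the six agents evenly (two per task); the slow-update and premature-convergence issues the abstract mentions are what the smoothing factor only partially mitigates, motivating the hybrid design.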