Title/Summary/Keyword: Posterior inference

Posterior Inference in Single-Index Models

  • Park, Chun-Gun; Yang, Wan-Yeon; Kim, Yeong-Hwa
    • Communications for Statistical Applications and Methods / v.11 no.1 / pp.161-168 / 2004
  • A single-index model is useful in fields that employ multidimensional regression models, and many estimation methods have been developed in both parametric and nonparametric approaches. In this paper, posterior inference is considered, and a wavelet series is used to approximate the true function in the single-index model. Posterior inference requires a prior distribution for each estimated parameter, and a hierarchical prior distribution is proposed for each coefficient of the wavelet series. The direction $\beta$ is assumed to be a unit vector and affects the estimate of the true function. Because of this unit-norm constraint, a transformation of the direction to a spherical polar coordinate $\theta$ is required. Since the posterior distribution of the direction is not available in closed form, we apply a Metropolis-Hastings algorithm to generate random samples of the direction. Through a Monte Carlo simulation we investigate the estimates of the true function and the direction; the sampling step is sketched below.
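
A minimal sketch of that sampling step, not the authors' code: in two dimensions the unit-norm constraint lets the direction be written as $\beta = (\cos\theta, \sin\theta)$, so a random-walk Metropolis-Hastings chain on the single angle $\theta$ respects the constraint automatically. The link function (here sin), noise level, and proposal scale are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data from a hypothetical single-index model y = g(beta' x) + noise, g = sin.
n, true_theta = 200, 0.8
X = rng.normal(size=(n, 2))
beta_true = np.array([np.cos(true_theta), np.sin(true_theta)])
y = np.sin(X @ beta_true) + 0.1 * rng.normal(size=n)

def log_posterior(theta):
    """Log posterior of theta up to a constant: Gaussian likelihood with the
    link treated as known, plus a flat prior on the angle."""
    beta = np.array([np.cos(theta), np.sin(theta)])   # unit norm by construction
    resid = y - np.sin(X @ beta)
    return -0.5 * np.sum(resid ** 2) / 0.1 ** 2

# Random-walk Metropolis-Hastings on the spherical polar coordinate theta.
theta, samples = 0.0, []
for _ in range(5000):
    prop = theta + 0.1 * rng.normal()                 # symmetric proposal
    if np.log(rng.uniform()) < log_posterior(prop) - log_posterior(theta):
        theta = prop
    samples.append(theta)

print("posterior mean of theta:", np.mean(samples[1000:]))  # discard burn-in
```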

Bayesian Inference on Variance Components Using Gibbs Sampling with Various Priors

  • Lee, C.; Wang, C.D.
    • Asian-Australasian Journal of Animal Sciences / v.14 no.8 / pp.1051-1056 / 2001
  • Data on teat number for Landrace (L), Yorkshire (Y), a crossbred of Landrace and Yorkshire (LY), and a crossbred of Landrace, Yorkshire, and Chinese indigenous Min pig (LYM) were analyzed using Gibbs sampling. In the Bayesian inference, flat priors and some informative priors were used to examine their influence on the posterior estimates. The posterior mean estimates of heritability with flat priors were $0.661{\pm}0.035$ for L, $0.540{\pm}0.072$ for Y, $0.789{\pm}0.074$ for LY, and $0.577{\pm}0.058$ for LYM, and they did not differ (p>0.05) from the corresponding REML estimates. When inverse-Gamma densities with shape parameter 4 were used as priors for the variance components, the posterior estimates still agreed (p>0.05) with the REML estimates and with the Gibbs sampling means under flat priors. However, when inverse-Gamma densities with shape parameter 10 were used, some posterior estimates differed (p<0.10) from the REML estimates and/or from the other Gibbs mean estimates. A moderate degree of prior belief was thus influential on the posterior estimates, especially for Y and LY, where the data sizes were small. When the data size is small, REML estimates of variance components have unknown distributions, whereas the Bayesian approach gives exact posterior densities of the variance components. Nevertheless, when the data size is small and prior knowledge is lacking, researchers should be careful even with moderate priors. A Gibbs sampler of this kind is sketched below.
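
A minimal sketch of such a sampler, assuming a balanced one-way random-effects model rather than the authors' animal model: the group effects and overall mean have Gaussian full conditionals, the variance components have inverse-Gamma full conditionals, and the prior shape parameter (here 4, echoing the abstract) controls the strength of prior belief.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data: q groups (e.g. sires), n records each.
q, n = 50, 10
u_true = rng.normal(0, np.sqrt(0.6), q)
y = 3.0 + u_true[:, None] + rng.normal(0, np.sqrt(0.4), (q, n))

def inv_gamma(shape, rate):
    """Draw from InvGamma(shape, rate) via the reciprocal of a Gamma draw."""
    return 1.0 / rng.gamma(shape, 1.0 / rate)

a, b = 4.0, 2.0            # inverse-Gamma prior hyperparameters (shape a = 4)
mu, u_hat = y.mean(), np.zeros(q)
sigma_u2, sigma_e2 = 1.0, 1.0
ratio_draws = []

for it in range(3000):
    # Group effects: Gaussian full conditional.
    prec = n / sigma_e2 + 1.0 / sigma_u2
    mean = ((y - mu).sum(axis=1) / sigma_e2) / prec
    u_hat = mean + rng.normal(size=q) / np.sqrt(prec)
    # Overall mean given the group effects.
    resid = y - u_hat[:, None]
    mu = rng.normal(resid.mean(), np.sqrt(sigma_e2 / y.size))
    # Variance components: inverse-Gamma full conditionals.
    sigma_u2 = inv_gamma(a + q / 2, b + 0.5 * np.sum(u_hat ** 2))
    e = y - mu - u_hat[:, None]
    sigma_e2 = inv_gamma(a + y.size / 2, b + 0.5 * np.sum(e ** 2))
    if it >= 500:
        # Variance ratio, the analogue of heritability in this toy model.
        ratio_draws.append(sigma_u2 / (sigma_u2 + sigma_e2))

print("posterior mean variance ratio:", np.mean(ratio_draws))
```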

Variational Expectation-Maximization Algorithm in Posterior Distribution of a Latent Dirichlet Allocation Model for Research Topic Analysis

  • Kim, Jong Nam
    • Journal of Korea Multimedia Society / v.23 no.7 / pp.883-890 / 2020
  • In this paper, we propose a variational expectation-maximization algorithm that computes posterior probabilities for a latent Dirichlet allocation (LDA) model. The algorithm approximates the intractable posterior distribution over a document-term matrix generated from a corpus of 50 papers. It approximates the posterior by searching for local optima of a lower bound on the true posterior distribution; maximizing this lower bound on the log-likelihood is equivalent to minimizing the relative entropy (KL divergence) between the variational distribution and the true posterior. The experimental results indicate that documents clustered into image classification and segmentation are correlated at 0.79, while those clustered into object detection and image segmentation are highly correlated at 0.96. The proposed variational inference algorithm runs efficiently, faster than Gibbs sampling, at a computational time of 0.029 s. An off-the-shelf variational LDA fit is illustrated below.
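
As one illustration of the approach, scikit-learn's LatentDirichletAllocation fits LDA by variational (batch or online) EM rather than Gibbs sampling. The tiny corpus below is an assumption for demonstration, not the 50-paper corpus from the study.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "image classification with deep networks",
    "semantic segmentation of images",
    "object detection and segmentation benchmarks",
    "bayesian topic models for text corpora",
]

X = CountVectorizer().fit_transform(docs)          # document-term matrix
lda = LatentDirichletAllocation(n_components=3, learning_method="batch",
                                max_iter=100, random_state=0)
theta = lda.fit_transform(X)                       # document-topic proportions

# Correlation between two documents' topic mixtures, analogous to the
# correlations between research-topic clusters reported in the abstract.
print(np.corrcoef(theta[1], theta[2])[0, 1])
```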

Posterior density estimation for structural parameters using improved differential evolution adaptive Metropolis algorithm

  • Zhou, Jin; Mita, Akira; Mei, Liu
    • Smart Structures and Systems / v.15 no.3 / pp.735-749 / 2015
  • The major difficulty in using Bayesian probabilistic inference for system identification is obtaining the posterior probability density of the parameters conditioned on the measured response. The posterior density of the structural parameters indicates how plausible each model is when the uncertainty of the prediction errors is taken into account. The Markov chain Monte Carlo (MCMC) method is a widely used tool for posterior inference, but its convergence is often slow. The differential evolution adaptive Metropolis-Hastings (DREAM) algorithm offers a population-based mechanism, which runs multiple different Markov chains simultaneously, and a global optimum exploration ability. This paper proposes an improved differential evolution adaptive Metropolis-Hastings (IDREAM) strategy to estimate the posterior density of structural parameters. The main benefit of IDREAM is its efficient MCMC simulation through its use of the adaptive Metropolis (AM) method with a mutation strategy that ensures quick convergence and robust solutions. Its effectiveness was demonstrated in simulations identifying structural parameters from limited output data and noise-polluted measurements. The population-based proposal is sketched below.
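
A minimal sketch of the differential-evolution Metropolis idea behind DREAM-type samplers (after ter Braak's DE-MC), not the paper's IDREAM code: several chains run in parallel, and each chain's proposal is built from the difference of two other randomly chosen chains. The Gaussian stand-in posterior is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_post(x):
    """Stand-in log posterior for structural parameters (2-D Gaussian)."""
    return -0.5 * np.sum((x - np.array([1.0, -2.0])) ** 2)

d, n_chains, n_iter = 2, 8, 4000
gamma = 2.38 / np.sqrt(2 * d)          # standard DE jump rate
chains = rng.normal(size=(n_chains, d))
history = []

for _ in range(n_iter):
    for i in range(n_chains):
        r1, r2 = rng.choice([j for j in range(n_chains) if j != i],
                            size=2, replace=False)
        # Proposal from the difference of two other chains plus small jitter.
        prop = chains[i] + gamma * (chains[r1] - chains[r2]) \
               + 1e-6 * rng.normal(size=d)
        if np.log(rng.uniform()) < log_post(prop) - log_post(chains[i]):
            chains[i] = prop
    history.append(chains.copy())

samples = np.concatenate(history[1000:])          # discard burn-in
print("posterior mean:", samples.mean(axis=0))
```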

Variational Bayesian inference for binary image restoration using Ising model

  • Jang, Moonsoo; Chung, Younshik
    • Communications for Statistical Applications and Methods / v.29 no.1 / pp.27-40 / 2022
  • In this paper, we focus on removing noise from binary images using the variational Bayesian method with the Ising model. The observation and the latent variable are the degraded image and the original image, respectively. The posterior distribution is built using a Markov random field with the Ising model, so estimating the posterior distribution amounts to reconstructing the degraded image. MCMC and variational Bayesian inference are two methods for estimating the posterior distribution; for computational efficiency, we adopt the variational technique. When the image is restored, an iterative method is used to solve the resulting recursive problem. Since the model has three parameters, restoration is implemented using the VECM algorithm to find appropriate parameters at each state. Finally, we present restoration results that maximize the peak signal-to-noise ratio (PSNR) and the evidence lower bound (ELBO). A mean-field update of this kind is sketched below.
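
A minimal mean-field variational sketch in the spirit of the paper, under simplified assumptions: pixels take values in {-1, +1}, an Ising prior couples neighbours with strength beta, and eta couples each pixel to its noisy observation. The fixed-point update m_i = tanh(beta * sum of neighbour means + eta * y_i) is the standard mean-field iteration for this model; beta and eta are assumed rather than estimated here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Noisy binary image: a square on a background, with 10% of pixels flipped.
img = -np.ones((64, 64)); img[16:48, 16:48] = 1.0
noisy = img * np.where(rng.uniform(size=img.shape) < 0.1, -1.0, 1.0)

beta, eta = 1.0, 1.5            # coupling and observation strengths (assumed)
m = noisy.copy()                # initialize mean-field means at the data

for _ in range(30):
    # Sum of neighbour means (4-neighbourhood, periodic wrap for brevity).
    nb = (np.roll(m, 1, 0) + np.roll(m, -1, 0) +
          np.roll(m, 1, 1) + np.roll(m, -1, 1))
    m = np.tanh(beta * nb + eta * noisy)   # mean-field fixed-point update

restored = np.sign(m)
print("pixel accuracy:", np.mean(restored == img))
```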

A Study on Analysis of Likelihood Principle and its Educational Implications (우도원리에 대한 분석과 그에 따른 교육적 시사점에 대한 연구)

  • Park, Sun Yong; Yoon, Hyoung Seok
    • The Mathematical Education / v.55 no.2 / pp.193-208 / 2016
  • This study analyzes the likelihood principle and draws an educational implication from it. The analysis shows that frequentists and Bayesians interpret the principle differently, assigning it different roles. Frequentists regard it as 'the principle forming a basis for statistical inference using the likelihood ratio', treating the likelihood as a direct tool for statistical inference, whereas Bayesians view it as 'the principle providing a basis for statistical inference using the posterior probability', treating the likelihood as a means of updating. Despite this distinction between the two methods of statistical inference, the two schools find a point of compromise in the use of frequency-based prior probabilities. Based on this result, the study suggests statistics education that helps students build a critical eye by comparing inferences based on the likelihood with those based on the posterior probability while learning the process of updating a frequency-based prior probability to a posterior probability. A small numerical contrast of the two readings is given below.
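
A small worked example (assumed here, not from the paper) contrasting the two uses of the likelihood the abstract describes: as a direct inferential tool via the likelihood ratio, and as the factor that updates a prior to a posterior.

```python
from scipy.stats import binom, beta

n, k = 20, 14                         # 14 successes in 20 trials (assumed data)

# Frequentist reading: compare two hypotheses by their likelihood ratio.
lr = binom.pmf(k, n, 0.7) / binom.pmf(k, n, 0.5)
print(f"likelihood ratio L(0.7)/L(0.5) = {lr:.2f}")

# Bayesian reading: the same likelihood updates a Beta(a, b) prior -- e.g. a
# frequency-based prior -- to a Beta(a + k, b + n - k) posterior.
a, b = 2, 2
posterior = beta(a + k, b + n - k)
print(f"posterior mean = {posterior.mean():.3f}")
print(f"P(theta > 0.5 | data) = {1 - posterior.cdf(0.5):.3f}")
```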

Robust Bayesian inference in finite population sampling with auxiliary information under balanced loss function

  • Kim, Eunyoung; Kim, Dal Ho
    • Journal of the Korean Data and Information Science Society / v.25 no.3 / pp.685-696 / 2014
  • In this paper, we develop Bayesian inference for the finite population mean under the assumption of posterior linearity, rather than normality, of the superpopulation, in the presence of auxiliary information and under the balanced loss function. We compare the performance of the optimal Bayes estimator under the balanced loss function with those of the classical ratio estimator and the usual Bayes estimator in terms of posterior expected losses, risks, and Bayes risks. The form of the optimal estimator is sketched below.
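
A minimal sketch of the Bayes estimator under a Zellner-type balanced loss L(theta, d) = w(d - d0)^2 + (1 - w)(d - theta)^2, whose optimum is the convex combination d* = w*d0 + (1 - w)*E[theta | data]. Here d0 is taken to be the classical ratio estimator of a finite-population mean; the data, the weight w, and the stand-in posterior mean are all assumptions for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Finite population with auxiliary variable x known for all units.
N, n = 1000, 50
x = rng.uniform(1, 5, N)
y = 2.0 * x + rng.normal(0, 1, N)
sample = rng.choice(N, n, replace=False)

# Classical ratio estimator as the target d0.
d0 = (y[sample].mean() / x[sample].mean()) * x.mean()

# Stand-in posterior mean of the population mean (under posterior linearity
# it is linear in the sample mean; the sample mean is used here for brevity).
post_mean = y[sample].mean()

w = 0.3                                   # weight on closeness to d0 (assumed)
balanced_bayes = w * d0 + (1 - w) * post_mean
print("ratio estimator:", round(d0, 3))
print("balanced-loss Bayes estimator:", round(balanced_bayes, 3))
```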

A variational Bayes method for pharmacokinetic model (약물동태학 모형에 대한 변분 베이즈 방법)

  • Park, Sun; Jo, Seongil; Lee, Woojoo
    • The Korean Journal of Applied Statistics / v.34 no.1 / pp.9-23 / 2021
  • In this paper, we introduce a variational Bayes method that approximates posterior distributions by the mean-field method. In particular, we introduce automatic differentiation variational inference (ADVI), which approximates joint posterior distributions using a product of Gaussian distributions after transforming the parameters into real coordinate space, and we apply it to pharmacokinetic models, which describe the time course of drug absorption, distribution, metabolism, and excretion. We analyze real data sets using ADVI and compare the results with those based on Markov chain Monte Carlo. We implement the algorithms using Stan. The core transform-then-Gaussian idea is sketched below.
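
A minimal sketch of the ADVI idea (not Stan's implementation): transform a positive elimination-rate parameter k to the real line via z = log k, posit a Gaussian q(z) = N(mu, sigma^2), and maximize a Monte Carlo estimate of the ELBO using common random numbers. The one-compartment decay model and the log-normal prior on k are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)

# Simulated concentrations from c(t) = D * exp(-k t) + noise.
D, k_true, sigma_obs = 10.0, 0.3, 0.5
t = np.linspace(0.5, 12, 15)
y = D * np.exp(-k_true * t) + rng.normal(0, sigma_obs, t.size)

eps = rng.normal(size=200)                 # common random numbers

def neg_elbo(params):
    mu, log_sd = params
    z = mu + np.exp(log_sd) * eps          # reparameterized draws of log k
    k = np.exp(z)
    # Log joint in z-space: Gaussian likelihood plus a standard-normal prior
    # on log k (the exp-transform Jacobian is absorbed by working in z).
    loglik = np.array([norm.logpdf(y, D * np.exp(-ki * t), sigma_obs).sum()
                       for ki in k])
    logprior = norm.logpdf(z)
    entropy = log_sd + 0.5 * np.log(2 * np.pi * np.e)  # entropy of N(mu, sd^2)
    return -(np.mean(loglik + logprior) + entropy)

res = minimize(neg_elbo, x0=np.array([0.0, -1.0]), method="Nelder-Mead")
mu, sd = res.x[0], np.exp(res.x[1])
print(f"q(k) approx LogNormal: median = {np.exp(mu):.3f}, sd(log k) = {sd:.3f}")
```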

Bayesian Analysis of Randomized Response Models : A Gibbs Sampling Approach

  • Oh, Man-Suk
    • Journal of the Korean Statistical Society / v.23 no.2 / pp.463-482 / 1994
  • In Bayesian analysis of randomized response models, the likelihood function does not combine tractably with typical priors for the parameters of interest, which makes posterior analysis computationally difficult. In this article, the difficulty is resolved by introducing appropriate latent variables into the model and applying the Gibbs sampling algorithm; a sketch of such a data-augmentation sampler follows.
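
A minimal sketch of the latent-variable Gibbs sampler, assuming Warner's randomized response design for concreteness (the paper treats randomized response models generally): each respondent's true sensitive status x_i is introduced as a latent variable, which makes both full conditionals standard.

```python
import numpy as np

rng = np.random.default_rng(6)

p = 0.7                      # probability the sensitive question is asked
n, pi_true = 500, 0.2
x_true = rng.uniform(size=n) < pi_true           # true sensitive statuses
asked_sensitive = rng.uniform(size=n) < p
yes = np.where(asked_sensitive, x_true, ~x_true) # Warner's design

a, b = 1.0, 1.0              # Beta prior on the sensitive proportion pi
pi, draws = 0.5, []
for it in range(5000):
    # Latent true status given answers: Bernoulli full conditional.
    p1 = np.where(yes, pi * p, pi * (1 - p))               # x_i = 1
    p0 = np.where(yes, (1 - pi) * (1 - p), (1 - pi) * p)   # x_i = 0
    x = rng.uniform(size=n) < p1 / (p1 + p0)
    # Sensitive proportion given latent statuses: Beta full conditional.
    pi = rng.beta(a + x.sum(), b + n - x.sum())
    if it >= 1000:
        draws.append(pi)

print("posterior mean of pi:", np.mean(draws))
```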
