• Title/Summary/Keyword: Posterior Probability

Bayesian Inference for Stress-Strength Systems

  • Chang, In-Hong;Kim, Byung-Hwee
    • Proceedings of the Korean Data and Information Science Society Conference
    • /
    • 2005.10a
    • /
    • pp.27-34
    • /
    • 2005
  • We consider the problem of estimating the system reliability under noninformative priors when both stress and strength follow generalized gamma distributions. We first derive Jeffreys' prior, group ordering reference priors, and matching priors. We investigate the propriety of the posterior distributions and provide marginal posterior distributions under those noninformative priors. We also examine whether the reference priors satisfy the probability matching criterion.

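As background for the stress-strength entry above, the following block states the standard definitions of stress-strength reliability and the first-order probability matching criterion. These are generic textbook forms, not the paper's specific derivations for the generalized gamma case; the symbols $F_Y$, $f_X$, and $\theta^{1-\alpha}(\pi; X^{(n)})$ are notation introduced here.

```latex
% Stress-strength reliability for independent strength X and stress Y:
R \;=\; P(Y < X) \;=\; \int_0^{\infty} F_Y(x)\, f_X(x)\, dx .

% First-order probability matching criterion: a prior \pi is (first-order)
% matching for the parameter of interest \theta if its (1-\alpha) posterior
% quantile \theta^{1-\alpha}(\pi; X^{(n)}) has frequentist coverage
P_{\theta}\!\left( \theta \le \theta^{1-\alpha}\bigl(\pi; X^{(n)}\bigr) \right)
  \;=\; 1 - \alpha + O\!\left(n^{-1}\right).
```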

A New Model-Based Clustering Algorithm (새로운 모형기반 군집분석 알고리즘)

  • Park, Jeong-Su;Hwang, Hyeon-Sik
    • Proceedings of the Korean Statistical Society Conference
    • /
    • 2005.11a
    • /
    • pp.97-100
    • /
    • 2005
  • A new model-based clustering algorithm is proposed. The idea starts from the assumption that observations are realizations of Gaussian processes and so are correlated. With a special covariance structure, the posterior probability that an observation belongs to each cluster is computed using the ECM algorithm. A preliminary result of a small-scale simulation study is given for comparison with the k-means clustering algorithm.

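To make the membership step concrete, here is a minimal sketch of how posterior cluster-membership probabilities are computed in an ordinary Gaussian mixture (the E-step). It assumes independent observations and user-supplied weights, means, and covariances; the paper's correlated Gaussian-process covariance structure and the ECM details are not reproduced here, and the function name is hypothetical.

```python
# Minimal sketch: posterior cluster-membership probabilities in a Gaussian
# mixture, assuming independent observations (a simplification of the paper's
# correlated Gaussian-process setup).
import numpy as np
from scipy.stats import multivariate_normal

def membership_probabilities(X, weights, means, covs):
    """Return an (n, K) matrix whose (i, k) entry is P(cluster k | x_i)."""
    n, K = X.shape[0], len(weights)
    dens = np.empty((n, K))
    for k in range(K):
        # prior weight times Gaussian density of each observation under cluster k
        dens[:, k] = weights[k] * multivariate_normal.pdf(X, means[k], covs[k])
    return dens / dens.sum(axis=1, keepdims=True)  # normalize across clusters
```

In an EM- or ECM-style iteration these probabilities would be recomputed after each parameter update.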

Bayesian Variable Selection in Linear Regression Models with Inequality Constraints on the Coefficients (제한조건이 있는 선형회귀 모형에서의 베이지안 변수선택)

  • 오만숙
    • The Korean Journal of Applied Statistics
    • /
    • v.15 no.1
    • /
    • pp.73-84
    • /
    • 2002
  • Linear regression models with inequality constraints on the coefficients are frequently used in economic models due to sign or order constraints on the coefficients. In this paper, we propose a Bayesian approach to selecting significant explanatory variables in linear regression models with inequality constraints on the coefficients. Bayesian variable selection requires computation of the posterior probability of each candidate model. We propose a method which computes all the necessary posterior model probabilities simultaneously. Specifically, we obtain posterior samples from the most general model via the Gibbs sampling algorithm (Gelfand and Smith, 1990) and compute the posterior probabilities using the samples. A real example is given to illustrate the method.
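
As a rough illustration of reusing one set of posterior draws across constrained candidate models, the sketch below estimates the posterior probability that a given sign pattern on the coefficients holds, from Gibbs (or any other posterior) samples of the most general model. The function name and the constraint encoding are hypothetical, and this is not necessarily the estimator developed in the paper.

```python
# Illustrative only: Monte Carlo estimate of the posterior probability that the
# coefficients of the most general model satisfy a given set of sign constraints.
import numpy as np

def constraint_probability(beta_draws, constraints):
    """beta_draws: (S, p) array of posterior draws of the coefficients.
    constraints: list of (index, sign) pairs, with sign +1 or -1."""
    ok = np.ones(beta_draws.shape[0], dtype=bool)
    for j, sign in constraints:
        ok &= sign * beta_draws[:, j] > 0   # does each draw respect this constraint?
    return ok.mean()                        # proportion of draws satisfying all constraints
```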

On the Development of Probability Matching Priors for Non-regular Pareto Distribution

  • Lee, Woo Dong;Kang, Sang Gil;Cho, Jang Sik
    • Communications for Statistical Applications and Methods
    • /
    • v.10 no.2
    • /
    • pp.333-339
    • /
    • 2003
  • In this paper, we develop probability matching priors for the parameters of the non-regular Pareto distribution. We prove the propriety of the joint posterior distribution induced by the probability matching priors. Through a simulation study, we show that the proposed probability matching prior matches the coverage probabilities in a frequentist sense. A real data example is given.
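
As a toy version of the frequentist coverage check mentioned above, the sketch below simulates data from a Pareto distribution with known shape and unknown threshold (a simplified non-regular setting), uses the prior $\pi(\theta) \propto 1/\theta$ purely for illustration, and records how often the one-sided 95% credible bound covers the true threshold. This is not the paper's two-parameter setup or its matching prior.

```python
# Toy coverage check: Pareto(shape=alpha known, threshold=theta unknown),
# prior pi(theta) ~ 1/theta (illustration only). The posterior is proportional
# to theta**(n*alpha - 1) on (0, min(x)], so its q-quantile is min(x)*q**(1/(n*alpha)).
import numpy as np

rng = np.random.default_rng(0)
alpha, theta, n, q, reps = 2.0, 1.0, 20, 0.05, 20000
hits = 0
for _ in range(reps):
    x = theta * (1.0 + rng.pareto(alpha, size=n))   # classical Pareto(alpha, theta) draws
    lower = x.min() * q ** (1.0 / (n * alpha))      # lower end of the 95% one-sided interval
    hits += lower <= theta                          # theta <= min(x) always holds here
print("empirical coverage:", hits / reps)           # should be close to 0.95
```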

Bayesian multiple comparisons in Freund's bivariate exponential populations with type I censored data

  • Cho, Jang-Sik
    • Journal of the Korean Data and Information Science Society
    • /
    • v.21 no.3
    • /
    • pp.569-574
    • /
    • 2010
  • We consider a two-component system that follows Freund's bivariate exponential model. In this case, a Bayesian multiple comparisons procedure for failure rates is suggested in K Freund's bivariate exponential populations. Here we assume that the components enter the study at random over time and that the analysis is carried out at some prespecified time. We derive the fractional Bayes factor for all comparisons under noninformative priors for the parameters and calculate the posterior probabilities for all hypotheses. We then select the hypothesis with the highest posterior probability as the best model. Finally, we give a numerical example to illustrate our procedure.
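
For reference, O'Hagan's fractional Bayes factor and the conversion to hypothesis posterior probabilities take the generic form below. This is the standard definition with training fraction $b$, not the paper's specific computation for Freund's model, and the prior probabilities $p_k$ are left unspecified.

```latex
% Fractional marginal likelihood of hypothesis H_i with training fraction b, 0 < b < 1:
q_i(b, x) \;=\;
  \frac{\displaystyle\int \pi_i(\theta_i)\, L_i(\theta_i \mid x)\, d\theta_i}
       {\displaystyle\int \pi_i(\theta_i)\, L_i(\theta_i \mid x)^{b}\, d\theta_i},
\qquad
B^{F}_{ij} \;=\; \frac{q_i(b, x)}{q_j(b, x)} .

% Posterior probability of H_i given prior probabilities p_k (relative to a reference H_1):
P(H_i \mid x) \;=\; \frac{p_i\, B^{F}_{i1}}{\sum_{k} p_k\, B^{F}_{k1}} .
```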

Bayesian Method for the Multiple Test of an Autoregressive Parameter in a Stationary AR(1) Model (AR(1)모형에서 자기회귀계수의 다중검정을 위한 베이지안방법)

  • 김경숙;손영숙
    • The Korean Journal of Applied Statistics
    • /
    • v.16 no.1
    • /
    • pp.141-150
    • /
    • 2003
  • This paper presents a multiple testing method for an autoregressive parameter in a stationary AR(1) model using the usual Bayes factor. As prior distributions of the parameters in each model, a uniform prior and noninformative improper priors are assumed. Posterior probabilities obtained through the usual Bayes factors are used for model selection. Finally, to check whether these theoretical results are correct, simulated data and real data are analyzed.
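
The step from Bayes factors to posterior model probabilities can be sketched in a few lines, as below. The function and its arguments are hypothetical; equal prior model probabilities are assumed unless the user supplies others.

```python
# Minimal sketch: turn Bayes factors B_i0 (each model versus a common base
# model M_0, with B_00 = 1 included in the list) into posterior model
# probabilities. Equal prior probabilities are assumed by default.
import numpy as np

def posterior_model_probs(bayes_factors_vs_base, prior_probs=None):
    b = np.asarray(bayes_factors_vs_base, dtype=float)
    p = np.full_like(b, 1.0 / b.size) if prior_probs is None else np.asarray(prior_probs, float)
    w = p * b                    # unnormalized posterior weights
    return w / w.sum()           # posterior probabilities summing to one
```

For example, posterior_model_probs([1.0, 3.2, 0.4]) returns the posterior probabilities of the base model and two alternatives under equal prior probabilities.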

A Predictive Two-Group Multinormal Classification Rule Accounting for Model Uncertainty

  • Kim, Hea-Jung
    • Journal of the Korean Statistical Society
    • /
    • v.26 no.4
    • /
    • pp.477-491
    • /
    • 1997
  • A new predictive classification rule for assigning future cases into one of two multivariate normal populations (with an unknown normal mixture model) is considered. The development involves calculation of the posterior probability of each possible normal-mixture model via a default Bayesian test criterion, called the intrinsic Bayes factor, and suggests a predictive distribution for future cases to be classified that accounts for model uncertainty by weighting the effect of each model by its posterior probability. In this paper, our interest is focused on constructing a classification rule that takes care of uncertainty about the types of covariance matrices (homogeneity/heterogeneity) involved in the model. For the constructed rule, a Monte Carlo simulation study demonstrates routine application and notes benefits over the traditional predictive classification rule of Geisser (1982).

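A minimal sketch of the model-averaging idea described above: each candidate model's predictive class probabilities for a future case are weighted by that model's posterior probability, and the case is assigned to the class with the largest averaged probability. The array shapes and names are assumptions for illustration, not the paper's intrinsic-Bayes-factor machinery.

```python
# Minimal sketch: model-averaged classification of a single future case.
import numpy as np

def model_averaged_class_probs(class_probs_per_model, model_posteriors):
    """class_probs_per_model: (M, K) array, row m = P(class k | x, model m).
    model_posteriors: length-M posterior model probabilities (summing to 1)."""
    P = np.asarray(class_probs_per_model, dtype=float)
    w = np.asarray(model_posteriors, dtype=float)
    return w @ P                      # length-K model-averaged class probabilities

# Assign the case to the class with the highest averaged probability:
# predicted_class = np.argmax(model_averaged_class_probs(P, w))
```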

A Bayesian Variable Selection Method for Binary Response Probit Regression

  • Kim, Hea-Jung
    • Journal of the Korean Statistical Society
    • /
    • v.28 no.2
    • /
    • pp.167-182
    • /
    • 1999
  • This article is concerned with the selection of subsets of predictor variables to be included in building the binary response probit regression model. It is based on a Bayesian approach, intended to propose and develop a procedure that uses probabilistic considerations for selecting promising subsets. This procedure reformulates the probit regression setup in a hierarchical normal mixture model by introducing a set of hyperparameters that will be used to identify subset choices. The appropriate posterior probability of each subset of predictor variables is obtained through the Gibbs sampler, which samples indirectly from the multinomial posterior distribution on the set of possible subset choices. Thus, in this procedure, the most promising subset of predictors can be identified as the one with the highest posterior probability. To highlight the merit of this procedure, a couple of illustrative numerical examples are given.

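Since a Gibbs sampler over subset indicators visits subsets in proportion to their posterior probability, each subset's probability can be estimated by simple visit frequencies, as in the sketch below. The indicator encoding and function name are assumptions for illustration, not the paper's exact implementation.

```python
# Minimal sketch: estimate posterior probabilities of predictor subsets from
# Gibbs draws of 0/1 inclusion indicators (one indicator vector per sweep).
from collections import Counter

def subset_posterior_probs(gamma_draws):
    counts = Counter(map(tuple, gamma_draws))   # how often each subset was visited
    total = sum(counts.values())
    return {subset: c / total for subset, c in counts.items()}

# The most promising subset is then max(probs, key=probs.get).
```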

Independent Testing in Marshall and Olkin's Bivariate Exponential Model Using Fractional Bayes Factor Under Bivariate Type I Censorship

  • Cho, Kil-Ho;Cho, Jang-Sik;Choi, Seung-Bae
    • Journal of the Korean Data and Information Science Society
    • /
    • v.19 no.4
    • /
    • pp.1391-1396
    • /
    • 2008
  • In this paper, we consider a two-component system whose lifetimes follow Marshall and Olkin's bivariate exponential model with bivariate type I censored data. We propose a Bayesian independence test procedure for the above model using O'Hagan's fractional Bayes factor method based on improper prior distributions. We compute the fractional Bayes factor and the posterior probabilities for the hypotheses, and we select the hypothesis which has the largest posterior probability. Finally, a numerical example is given to illustrate our Bayesian testing procedure.


A Bayesian Method for Narrowing the Scope of Variable Selection in Binary Response Logistic Regression

  • Kim, Hea-Jung;Lee, Ae-Kyung
    • Journal of Korean Society for Quality Management
    • /
    • v.26 no.1
    • /
    • pp.143-160
    • /
    • 1998
  • This article is concerned with the selection of subsets of predictor variables to be included in building the binary response logistic regression model. It is based on a Bayesian approach, intended to propose and develop a procedure that uses probabilistic considerations for selecting promising subsets. This procedure reformulates the logistic regression setup in a hierarchical normal mixture model by introducing a set of hyperparameters that will be used to identify subset choices. This is done by using the fact that the cdf of the logistic distribution is approximately equivalent to that of the $t_{(8)}/0.634$ distribution. The appropriate posterior probability of each subset of predictor variables is obtained by the Gibbs sampler, which samples indirectly from the multinomial posterior distribution on the set of possible subset choices. Thus, in this procedure, the most promising subset of predictors can be identified as the one with the highest posterior probability. To highlight the merit of this procedure, a couple of illustrative numerical examples are given.

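The $t_{(8)}/0.634$ approximation cited in the last entry is easy to check numerically; the short script below compares the logistic cdf with the cdf of $T/0.634$ for $T \sim t_{(8)}$ over a grid and prints the largest discrepancy. It is only a sanity check of the cited fact, not part of the paper's procedure.

```python
# Compare the standard logistic cdf with the cdf of T/0.634 where T ~ t with 8 df.
import numpy as np
from scipy.stats import logistic, t

x = np.linspace(-10.0, 10.0, 2001)
# P(T/0.634 <= x) = P(T <= 0.634*x), i.e. the t_8 cdf evaluated at 0.634*x.
diff = np.abs(logistic.cdf(x) - t.cdf(0.634 * x, df=8))
print("maximum absolute cdf difference:", diff.max())
```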