• Title/Summary/Keyword: 깁스모형 (Gibbs model)

Search Results: 58

Bayesian Inference for Littlewood-Verrall Reliability Model

  • Choi, Ki-Heon;Choi, Hae-Ja
    • Journal of the Korean Data and Information Science Society, v.14 no.1, pp.1-9, 2003
  • In this paper we discuss Bayesian computation and model selection for the Littlewood-Verrall model using Gibbs sampling. A numerical example with simulated data is given.
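
The abstract only names the method, so the following Python sketch illustrates one way a Gibbs sampler for the Littlewood-Verrall model can be arranged, assuming the common parameterization t_i | λ_i ~ Exponential(λ_i), λ_i ~ Gamma(α, ψ(i)) with a linear ψ(i) = β0 + β1·i; the priors, proposal scales, and placeholder data are assumptions of this sketch, not the paper's choices.

```python
# Minimal Metropolis-within-Gibbs sketch for the Littlewood-Verrall model (illustrative only).
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(0)

# Placeholder inter-failure times; the paper uses its own simulated data set.
t = rng.exponential(scale=np.linspace(1.0, 5.0, 30))
n = len(t)
idx = np.arange(1, n + 1)

def log_hyper_post(alpha, b0, b1, lam):
    """log p(alpha, b0, b1 | lambda) up to a constant, under lambda_i ~ Gamma(alpha, rate=psi(i)),
    psi(i) = b0 + b1*i, and weak Exponential(0.01) priors on all three hyperparameters (assumed)."""
    if min(alpha, b0, b1) <= 0:
        return -np.inf
    psi = b0 + b1 * idx
    lp = np.sum(alpha * np.log(psi) - gammaln(alpha) + (alpha - 1.0) * np.log(lam) - psi * lam)
    return lp - 0.01 * (alpha + b0 + b1)

alpha, b0, b1 = 1.0, 1.0, 0.1
lam = 1.0 / t                                   # crude starting values for the failure rates
draws = []
for it in range(5000):
    # Conjugate Gibbs step: lambda_i | rest ~ Gamma(alpha + 1, rate = psi(i) + t_i).
    psi = b0 + b1 * idx
    lam = rng.gamma(alpha + 1.0, 1.0 / (psi + t))
    # Random-walk Metropolis steps (on the log scale) for alpha, b0, b1.
    for name in ("alpha", "b0", "b1"):
        cur = {"alpha": alpha, "b0": b0, "b1": b1}
        prop = dict(cur)
        prop[name] = cur[name] * np.exp(0.1 * rng.standard_normal())
        log_ratio = (log_hyper_post(prop["alpha"], prop["b0"], prop["b1"], lam)
                     - log_hyper_post(cur["alpha"], cur["b0"], cur["b1"], lam)
                     + np.log(prop[name]) - np.log(cur[name]))         # Jacobian of the log walk
        if np.log(rng.uniform()) < log_ratio:
            alpha, b0, b1 = prop["alpha"], prop["b0"], prop["b1"]
    draws.append((alpha, b0, b1))

print("posterior means of (alpha, b0, b1):", np.mean(draws[2500:], axis=0).round(3))
```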


The Comparison of Parameter Estimation for Nonhomogeneous Poisson Process Software Reliability Model (NHPP 소프트웨어 신뢰도 모형에 대한 모수 추정 비교)

  • Kim, Hee-Cheul;Lee, Sang-Sik;Song, Young-Jae
    • The KIPS Transactions:PartD, v.11D no.6, pp.1269-1276, 2004
  • Parameter estimation for the existing software reliability models Goel-Okumoto and Yamada-Ohba-Osaki is reviewed, and a Rayleigh model based on the Rayleigh distribution is studied. In this paper, we compare parameter estimation by the maximum likelihood estimator with Bayesian estimation based on Gibbs sampling in order to analyze the pattern of the estimators. For choosing an efficient model, model selection based on the sum of squared errors and the Braun statistic is employed. A numerical example is illustrated using real data. Superposition and mixture models are also noted as areas for future development.
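
As a concrete illustration of the maximum-likelihood side of this comparison, the sketch below fits the Goel-Okumoto mean value function m(t) = a(1 − e^(−bt)) by maximizing the NHPP log-likelihood; the failure times and starting values are invented placeholders, not the paper's data.

```python
# Minimal MLE sketch for the Goel-Okumoto NHPP: log L = sum(log lambda(t_i)) - m(T).
import numpy as np
from scipy.optimize import minimize

failure_times = np.array([9., 21., 32., 36., 43., 45., 50., 58., 63., 70.,
                          71., 77., 78., 87., 91., 92., 95., 98., 104., 105.])
T = 110.0                                      # end of the observation window

def neg_loglik(params):
    """Negative NHPP log-likelihood with intensity lambda(t) = a*b*exp(-b*t)."""
    log_a, log_b = params                      # optimise on the log scale to keep a, b > 0
    a, b = np.exp(log_a), np.exp(log_b)
    intensity = a * b * np.exp(-b * failure_times)
    return -(np.sum(np.log(intensity)) - a * (1.0 - np.exp(-b * T)))

res = minimize(neg_loglik, x0=[np.log(len(failure_times)), np.log(0.01)],
               method="Nelder-Mead")
a_hat, b_hat = np.exp(res.x)
print(f"a = {a_hat:.2f} (expected total faults), b = {b_hat:.4f} (detection rate)")
```

The same pattern applies to the Rayleigh model by swapping in its mean value function.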

A Bayesian zero-inflated negative binomial regression model based on Pólya-Gamma latent variables with an application to pharmaceutical data (폴랴-감마 잠재변수에 기반한 베이지안 영과잉 음이항 회귀모형: 약학 자료에의 응용)

  • Seo, Gi Tae;Hwang, Beom Seuk
    • The Korean Journal of Applied Statistics, v.35 no.2, pp.311-325, 2022
  • For count responses, excess zeros often occur in various research fields. A zero-inflated model is a common choice for modeling such count data. Bayesian inference for the zero-inflated model has long been recognized as a hard problem because the conditional posterior distributions are not available in closed form. Recently, however, Pillow and Scott (2012) and Polson et al. (2013) proposed a Pólya-Gamma data-augmentation strategy for logistic and negative binomial models, facilitating Bayesian inference for the zero-inflated model. We apply a Bayesian zero-inflated negative binomial regression model to longitudinal pharmaceutical data previously analyzed by Min and Agresti (2005). To facilitate posterior sampling for the longitudinal zero-inflated model, we use the Pólya-Gamma data-augmentation strategy.
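
To show the mechanics of the Pólya-Gamma augmentation the abstract relies on, here is a minimal sketch for plain Bayesian logistic regression (the paper's zero-inflated negative binomial model wraps this idea in additional layers). The PG(1, c) draws use a crude truncated-series approximation rather than an exact sampler, and the simulated data and prior are assumptions of this sketch.

```python
# Polya-Gamma augmentation (Polson et al., 2013) for Bayesian logistic regression.
import numpy as np

rng = np.random.default_rng(1)

def rpg_approx(c, K=200):
    """Approximate PG(1, c) draws via the truncated infinite-sum-of-gammas representation."""
    k = np.arange(1, K + 1)
    g = rng.gamma(shape=1.0, size=(len(c), K))
    denom = (k - 0.5) ** 2 + (c[:, None] / (2.0 * np.pi)) ** 2
    return (g / denom).sum(axis=1) / (2.0 * np.pi ** 2)

# Placeholder data: logistic responses with an intercept and two covariates.
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])
beta_true = np.array([-0.5, 1.0, -1.5])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

B_inv = np.eye(p) / 100.0                      # N(0, 100 I) prior on beta (an assumption)
kappa = y - 0.5
beta = np.zeros(p)
draws = []
for it in range(2000):
    omega = rpg_approx(X @ beta)                             # 1) omega_i | beta ~ PG(1, x_i' beta)
    V = np.linalg.inv(X.T @ (omega[:, None] * X) + B_inv)    # 2) beta | omega, y is Gaussian
    m = V @ (X.T @ kappa)
    beta = rng.multivariate_normal(m, V)
    draws.append(beta)

print(np.mean(draws[1000:], axis=0))                         # should be near beta_true
```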

The Bayesian Analysis for Software Reliability Models Based on NHPP (비동질적 포아송과정을 사용한 소프트웨어 신뢰 성장모형에 대한 베이지안 신뢰성 분석에 관한 연구)

  • Lee, Sang-Sik;Kim, Hee-Cheul;Kim, Yong-Jae
    • The KIPS Transactions:PartD, v.10D no.5, pp.805-812, 2003
  • This paper presents a stochastic model for the software failure phenomenon based on a nonhomogeneous Poisson process (NHPP) and performs Bayesian inference using prior information. The failure process is analyzed to develop a suitable mean value function for the NHPP, and expressions are given for several performance measures. Parametric inference for the Logarithmic Poisson, Crow, and Rayleigh models is discussed, together with Bayesian computation and model selection using the sum of squared errors. These models are applied to real software failure data, with Gibbs sampling and the Metropolis algorithm used for parameter inference. A numerical example based on Musa's T1 data is illustrated.
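
As one concrete example of the Metropolis machinery mentioned here, the sketch below runs a random-walk Metropolis sampler for the Crow (power-law) NHPP with mean value function m(t) = a·t^b; the priors, proposal scale, and failure times are placeholders, not the paper's setup.

```python
# Random-walk Metropolis sketch for the Crow (power-law) NHPP, m(t) = a * t**b.
import numpy as np

rng = np.random.default_rng(2)

failure_times = np.array([3., 8., 15., 24., 30., 41., 55., 62., 74., 83.,
                          97., 104., 120., 133., 145., 160., 178., 190., 205., 216.])
T = 220.0

def log_post(log_a, log_b):
    """Log posterior of (log a, log b): NHPP log-likelihood plus N(0, 10^2) priors (assumed)."""
    a, b = np.exp(log_a), np.exp(log_b)
    loglik = np.sum(np.log(a * b * failure_times ** (b - 1.0))) - a * T ** b
    logprior = -(log_a ** 2 + log_b ** 2) / (2.0 * 10.0 ** 2)
    return loglik + logprior

theta = np.array([np.log(0.1), np.log(1.0)])
samples = []
for it in range(10000):
    prop = theta + 0.1 * rng.standard_normal(2)              # symmetric random-walk proposal
    if np.log(rng.uniform()) < log_post(*prop) - log_post(*theta):
        theta = prop
    samples.append(np.exp(theta))                            # store (a, b) on the natural scale

a_hat, b_hat = np.mean(samples[5000:], axis=0)
print(f"posterior mean a = {a_hat:.3f}, b = {b_hat:.3f}")
```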

A Study on Bayesian Approach of Software Stochastic Reliability Superposition Model using General Order Statistics (일반 순서 통계량을 이용한 소프트웨어 신뢰확률 중첩모형에 관한 베이지안 접근에 관한 연구)

  • Lee, Byeong-Su;Kim, Hui-Cheol;Baek, Su-Gi;Jeong, Gwan-Hui;Yun, Ju-Yong
    • The Transactions of the Korea Information Processing Society, v.6 no.8, pp.2060-2071, 1999
  • A complicated software failure process is defined as the superposition of failure points from several component point processes. Because the likelihood function is difficult to compute, we consider a Gibbs sampler based on iterative sampling. For each observed failure epoch, we introduce latent variables that indicate which component of the superposition it belongs to. For model selection, we explore the posterior Bayes criterion and the sum of relative errors to compare simple models with the superposition model. A numerical example is given with an NHPP data set simulated by the thinning method of Lewis and Shedler [25], in which the Goel-Okumoto model and a Weibull model with general order statistics (GOS) are considered and parameter inference is studied. Using the posterior Bayes criterion and the sum of relative errors, the superposition model, as expected, performs best under diffuse priors.
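
The Lewis-Shedler thinning method used above to simulate the NHPP data can be sketched in a few lines; the Goel-Okumoto intensity and its parameter values below are illustrative stand-ins.

```python
# Lewis-Shedler thinning: simulate an NHPP by thinning a dominating homogeneous process.
import numpy as np

rng = np.random.default_rng(3)

def thin_nhpp(intensity, lam_max, T):
    """Simulate event times of an NHPP on (0, T] by thinning a homogeneous PP of rate lam_max."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)         # candidate point from the dominating HPP
        if t > T:
            return np.array(times)
        if rng.uniform() < intensity(t) / lam_max:  # accept with probability lambda(t)/lam_max
            times.append(t)

a, b = 30.0, 0.05
goel_okumoto_intensity = lambda t: a * b * np.exp(-b * t)   # lambda(t) <= a*b for all t >= 0
events = thin_nhpp(goel_okumoto_intensity, lam_max=a * b, T=100.0)
print(len(events), "simulated failures:", np.round(events, 1))
```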


Bayesian inference on multivariate asymmetric jump-diffusion models (다변량 비대칭 라플라스 점프확산 모형의 베이지안 추론)

  • Lee, Youngeun;Park, Taeyoung
    • The Korean Journal of Applied Statistics, v.29 no.1, pp.99-112, 2016
  • Asymmetric jump-diffusion models are effective for modeling the dynamic behavior of asset prices with abrupt, asymmetric upward and downward changes. However, estimation of their extension to the multivariate asymmetric jump-diffusion model has been hampered by the analytically intractable likelihood function. This article confronts the problem using a data-augmentation method and proposes a new Bayesian method for a multivariate asymmetric Laplace jump-diffusion model. Unlike previous models, the proposed model is rich enough to incorporate all possible correlated jumps as well as individual and common jumps. The proposed model and methodology are illustrated with a simulation study and applied to daily returns of the KOSPI, S&P500, and Nikkei225 indices from January 2005 to September 2015.
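
To make the model concrete, the sketch below simulates daily returns from a bivariate jump-diffusion whose jump sizes follow a multivariate asymmetric Laplace law, generated through its standard exponential mean-variance mixture representation; every numeric value is an illustrative assumption, not an estimate from the paper.

```python
# Simulate daily log-returns from a bivariate jump-diffusion with asymmetric Laplace jumps,
# using the mixture representation J = m*W + sqrt(W)*L*Z with W ~ Exp(1), Z ~ N(0, I).
import numpy as np

rng = np.random.default_rng(4)

d, n_days, dt = 2, 1000, 1.0 / 252
mu = np.array([0.05, 0.03])                  # annualised drifts
sigma = np.array([[0.04, 0.01],              # diffusion covariance (annualised)
                  [0.01, 0.03]])
jump_rate = 10.0                             # expected number of jumps per year
m = np.array([-0.02, -0.015])                # asymmetry (skew) of the jump sizes
jump_cov = np.array([[4e-4, 1e-4],
                     [1e-4, 3e-4]])
L_diff = np.linalg.cholesky(sigma)
L_jump = np.linalg.cholesky(jump_cov)

returns = np.zeros((n_days, d))
for day in range(n_days):
    # Diffusion part: mu*dt plus correlated Gaussian noise scaled by sqrt(dt).
    r = mu * dt + np.sqrt(dt) * L_diff @ rng.standard_normal(d)
    # Jump part: Poisson number of jumps, each with an asymmetric Laplace size.
    for _ in range(rng.poisson(jump_rate * dt)):
        w = rng.exponential(1.0)
        r += m * w + np.sqrt(w) * L_jump @ rng.standard_normal(d)
    returns[day] = r

print("sample correlation of the two return series:",
      np.corrcoef(returns.T)[0, 1].round(3))
```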

Bayesian analysis of finite mixture model with cluster-specific random effects (군집 특정 변량효과를 포함한 유한 혼합 모형의 베이지안 분석)

  • Lee, Hyejin;Kyung, Minjung
    • The Korean Journal of Applied Statistics, v.30 no.1, pp.57-68, 2017
  • Clustering algorithms attempt to partition a finite set of objects into a potentially predetermined number of nonempty subsets. Gibbs sampling of a normal mixture of linear mixed regressions with a Dirichlet prior distribution calculates posterior probabilities when the number of clusters is known. Our approach provides simultaneous partitioning and parameter estimation together with the computation of classification probabilities. A Monte Carlo study of curve estimation shows that the model is useful for function estimation. Examples are given to show how these models perform on real data.
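
A stripped-down view of the classification-probability and allocation step that drives such a sampler, shown here for a plain univariate normal mixture rather than the paper's mixture of linear mixed regressions; the weights carry a Dirichlet prior, and all numbers are illustrative.

```python
# One Gibbs allocation step of a finite normal mixture with a Dirichlet prior on the weights.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

y = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(5.0, 1.5, 40)])
K = 2
weights = np.array([0.5, 0.5])                              # current mixture weights
means, sds = np.array([0.0, 4.0]), np.array([1.0, 1.0])     # current component parameters
dirichlet_alpha = np.ones(K)                                 # Dirichlet(1, ..., 1) prior

# Classification probabilities P(z_i = k | y_i, theta) for every observation.
dens = weights * norm.pdf(y[:, None], loc=means, scale=sds)  # n x K matrix
probs = dens / dens.sum(axis=1, keepdims=True)

# One Gibbs allocation draw and the conjugate Dirichlet update of the weights.
z = np.array([rng.choice(K, p=p_i) for p_i in probs])
counts = np.bincount(z, minlength=K)
weights = rng.dirichlet(dirichlet_alpha + counts)

print("mean classification probability for component 1:", probs[:, 1].mean().round(3))
print("updated weights:", weights.round(3))
```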

The Bayesian Inference for Software Reliability Models Based on NHPP (NHPP에 기초한 소프트웨어 신뢰도 모형에 대한 베이지안 추론에 관한 연구)

  • Lee, Sang-Sik;Kim, Hui-Cheol;Song, Yeong-Jae
    • The KIPS Transactions:PartD, v.9D no.3, pp.389-398, 2002
  • Software reliability growth models are used in the testing stages of software development to model the error content and the time intervals between software failures. This paper presents a stochastic model for the software failure phenomenon based on a nonhomogeneous Poisson process (NHPP) and performs Bayesian inference using prior information. The failure process is analyzed to develop a suitable mean value function for the NHPP, and expressions are given for several performance measures. Actual software failure data are compared across several models on a constant reflecting the quality of testing. The performance measures and parametric inferences of the suggested models, based on the Rayleigh and Laplace distributions, are discussed. The results are applied to real software failure data and compared with the Goel model. Gibbs sampling is used for parameter point estimation and 95% credible intervals, and model selection is carried out using the sum of squared errors. A numerical example based on the NTDS data is illustrated.
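
The sum-of-squared-errors criterion used here for model selection can be computed directly from a fitted mean value function: compare m(t_i) with the observed cumulative failure count i. The sketch below does this for a Goel-Okumoto and a Rayleigh-type mean value function; the failure times and the "fitted" parameter values are placeholders standing in for posterior estimates.

```python
# Sum-of-squared-errors comparison of two candidate NHPP mean value functions.
import numpy as np

failure_times = np.array([9., 21., 32., 36., 43., 45., 50., 58., 63., 70.,
                          71., 77., 78., 87., 91., 92., 95., 98., 104., 105.])
observed_counts = np.arange(1, len(failure_times) + 1)

def sse(mean_value_fn):
    """SSE between the observed cumulative counts and the fitted mean value function."""
    return np.sum((observed_counts - mean_value_fn(failure_times)) ** 2)

# Candidate mean value functions with assumed (placeholder) fitted parameters.
goel_okumoto = lambda t, a=22.0, b=0.02: a * (1.0 - np.exp(-b * t))
rayleigh     = lambda t, a=21.0, b=45.0: a * (1.0 - np.exp(-t ** 2 / (2.0 * b ** 2)))

print("SSE Goel-Okumoto:", round(sse(goel_okumoto), 2))
print("SSE Rayleigh    :", round(sse(rayleigh), 2))
```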

Estimation of the Mixture of Normals of Saving Rate Using Gibbs Algorithm (Gibbs알고리즘을 이용한 저축률의 정규분포혼합 추정)

  • Yoon, Jong-In
    • Journal of Digital Convergence, v.13 no.10, pp.219-224, 2015
  • This research estimates a mixture of normals for the household saving rate in Korea. Our sample is the MDSS micro-data for 2014, and a Gibbs algorithm is used to estimate the mixture of normals. The evidence yields several results. First, the Gibbs algorithm works very well in estimating the mixture of normals. Second, the saving-rate data have at least two components, one with mean zero and the other with mean 29.4%, suggesting that households may be separated into a high-saving group and a low-saving group. Third, the mixture-of-normals analysis alone cannot explain this separation, and we find that income level and age cannot explain our results.
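
A minimal Gibbs sampler for a two-component normal mixture of the kind fitted to the saving rates might look as follows; the conjugate priors and the synthetic data (a spike near zero plus a higher-saving group) are assumptions standing in for the MDSS micro-data.

```python
# Gibbs sampler for a two-component normal mixture with conjugate priors (illustrative).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)

# Synthetic "saving rates": a spike near zero plus a higher-saving group.
y = np.concatenate([rng.normal(0.0, 3.0, 600), rng.normal(29.4, 12.0, 400)])
n, K = len(y), 2

# Assumed conjugate priors: weights ~ Dirichlet(1,1); means ~ N(0, 100^2); variances ~ Inv-Gamma(2, 50).
w = np.full(K, 1.0 / K)
mu = np.array([0.0, 20.0])
sig2 = np.array([25.0, 25.0])
a0, b0, m0, v0 = 2.0, 50.0, 0.0, 100.0 ** 2

keep_mu = []
for it in range(3000):
    # 1) Allocations z_i from their classification probabilities.
    dens = w * norm.pdf(y[:, None], loc=mu, scale=np.sqrt(sig2))
    probs = dens / dens.sum(axis=1, keepdims=True)
    z = (rng.uniform(size=n) > probs[:, 0]).astype(int)      # valid because K = 2
    # 2) Weights from the conjugate Dirichlet update.
    counts = np.bincount(z, minlength=K)
    w = rng.dirichlet(1.0 + counts)
    # 3) Means and variances from their conjugate conditionals, component by component.
    for k in range(K):
        yk = y[z == k]
        nk = len(yk)
        vk = 1.0 / (nk / sig2[k] + 1.0 / v0)
        mk = vk * (yk.sum() / sig2[k] + m0 / v0)
        mu[k] = rng.normal(mk, np.sqrt(vk))
        sig2[k] = 1.0 / rng.gamma(a0 + nk / 2.0, 1.0 / (b0 + 0.5 * ((yk - mu[k]) ** 2).sum()))
    keep_mu.append(mu.copy())

print("posterior mean of the component means:", np.mean(keep_mu[1500:], axis=0).round(2))
```

Label switching is ignored in this sketch; a real analysis would impose an ordering constraint on the means or relabel the draws.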

Introduction to the Indian Buffet Process: Theory and Applications (인도부페 프로세스의 소개: 이론과 응용)

  • Lee, Youngseon;Lee, Kyoungjae;Lee, Kwangmin;Lee, Jaeyong;Seo, Jinwook
    • The Korean Journal of Applied Statistics, v.28 no.2, pp.251-267, 2015
  • The Indian Buffet Process is a stochastic process on equivalence classes of binary matrices with finitely many rows and infinitely many columns. It can be imposed as the prior distribution on the binary matrix in an infinite feature model. We describe the derivation of the Indian Buffet Process from a finite feature model and briefly explain its relation to the beta process. Using a Gaussian linear model, we describe three algorithms, the Gibbs sampling algorithm, the stick-breaking algorithm, and a variational method, with an application to finding features in image data. We also illustrate the use of the Indian Buffet Process in various types of analyses such as dyadic data analysis, network data analysis, and independent component analysis.
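
The generative "restaurant" scheme behind the Indian Buffet Process, on which the Gibbs and stick-breaking algorithms build, can be simulated directly: customer i takes each previously sampled dish with probability m_k / i and then tries Poisson(alpha / i) new dishes. The concentration alpha and the number of customers below are arbitrary choices for illustration.

```python
# Simulate a binary feature matrix Z from the Indian Buffet Process generative scheme.
import numpy as np

rng = np.random.default_rng(7)

def sample_ibp(n_customers, alpha):
    """Return a binary feature matrix Z drawn from IBP(alpha)."""
    dishes = []                                    # running counts m_k for each dish seen so far
    rows = []
    for i in range(1, n_customers + 1):
        row = [1 if rng.uniform() < m_k / i else 0 for m_k in dishes]
        n_new = rng.poisson(alpha / i)             # brand-new dishes for this customer
        row += [1] * n_new
        dishes = [m_k + z for m_k, z in zip(dishes, row)] + [1] * n_new
        rows.append(row)
    K = len(dishes)
    Z = np.zeros((n_customers, K), dtype=int)
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z

Z = sample_ibp(n_customers=10, alpha=2.0)
print(Z)                                           # rows: customers/objects, columns: latent features
print("number of realised features:", Z.shape[1])
```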