• Title/Summary/Keyword: Bayes test

110 search results

Bayesian Method for the Multiple Test of an Autoregressive Parameter in Stationary AR(1) Model (AR(1)모형에서 자기회귀계수의 다중검정을 위한 베이지안방법)

  • 김경숙;손영숙
    • The Korean Journal of Applied Statistics, v.16 no.1, pp.141-150, 2003
  • This paper presents a multiple testing method for the autoregressive parameter in the stationary AR(1) model using the usual Bayes factor. A uniform prior and noninformative improper priors are assumed as the prior distributions of the parameters in each model. Posterior probabilities obtained through the usual Bayes factors are used for model selection. Finally, to check whether these theoretical results are correct, simulated data and real data are analyzed.
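
The model selection step described above rests on the standard identity relating Bayes factors, prior model probabilities, and posterior model probabilities. A minimal sketch (not the paper's derivation; the hypotheses and numbers are illustrative placeholders):

```python
import numpy as np

# Posterior model probabilities from Bayes factors taken against a common
# reference hypothesis, assuming equal prior probabilities unless supplied.
# The three hypotheses stand in for whatever partition of the AR(1)
# coefficient the test uses; B = 1 for the reference against itself.
def posterior_from_bayes_factors(bf_vs_reference, prior=None):
    bf = np.asarray(bf_vs_reference, dtype=float)
    prior = np.full(bf.size, 1.0 / bf.size) if prior is None else np.asarray(prior, dtype=float)
    weights = bf * prior
    return weights / weights.sum()

print(posterior_from_bayes_factors([1.0, 3.2, 0.4]))  # -> approx. [0.217, 0.696, 0.087]
```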

Independent Testing in Marshall and Olkin's Bivariate Exponential Model Using Fractional Bayes Factor Under Bivariate Type I Censorship

  • Cho, Kil-Ho;Cho, Jang-Sik;Choi, Seung-Bae
    • Journal of the Korean Data and Information Science Society, v.19 no.4, pp.1391-1396, 2008
  • In this paper, we consider a two-component system whose lifetimes follow Marshall and Olkin's bivariate exponential model with bivariate Type I censored data. We propose a Bayesian independence test procedure for this model using O'Hagan's fractional Bayes factor method based on improper prior distributions. We compute the fractional Bayes factor and the posterior probabilities of the hypotheses, and select the hypothesis with the largest posterior probability. Finally, a numerical example is given to illustrate the proposed Bayesian testing procedure.
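
For reference, O'Hagan's fractional Bayes factor, on which the abstract relies, has the general form below. This is the standard definition rather than anything specific to the Marshall-Olkin model, with $L_i$ the likelihood under hypothesis $H_i$, $\pi_i$ the (possibly improper) prior, and $b$ a fraction such as $m_0/n$:

$$
B^{F}_{10}(b)=\frac{\displaystyle\int \pi_1(\theta_1)\,L_1(\theta_1\mid x)\,d\theta_1\Big/\int \pi_1(\theta_1)\,L_1(\theta_1\mid x)^{b}\,d\theta_1}
{\displaystyle\int \pi_0(\theta_0)\,L_0(\theta_0\mid x)\,d\theta_0\Big/\int \pi_0(\theta_0)\,L_0(\theta_0\mid x)^{b}\,d\theta_0}.
$$

The fraction $b$ of the likelihood plays the role of a training sample, so the arbitrary constants in the improper priors cancel.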


Bayesian Test of Quasi-Independence in a Sparse Two-Way Contingency Table

  • Kwak, Sang-Gyu;Kim, Dal-Ho
    • Communications for Statistical Applications and Methods, v.19 no.3, pp.495-500, 2012
  • We consider a Bayesian test of independence in a two-way contingency table that has some zero cells. To do this, we take a three-stage hierarchical Bayesian model under each hypothesis. For the priors, we use Dirichlet densities to model the marginal and individual cell probabilities. Our method does not require complicated computation such as a Metropolis-Hastings algorithm to draw samples from each posterior density of the parameters; we draw samples using a Gibbs sampler with a grid method. For complicated posterior formulas, we apply Monte Carlo integration and the sampling importance resampling algorithm. We compare the values of the Bayes factor with the results of a chi-square test and the likelihood ratio test.
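
Of the computational tools named in the abstract, the sampling importance resampling (SIR) step is the easiest to show in isolation. A rough, self-contained sketch assuming NumPy and SciPy; the target density, proposal, and sample sizes are illustrative, not taken from the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def sir(log_target, sample_proposal, log_proposal, n_draw=10_000, n_keep=1_000):
    """Draw from a proposal, weight by target/proposal, resample with those weights."""
    draws = sample_proposal(n_draw)
    log_w = log_target(draws) - log_proposal(draws)
    w = np.exp(log_w - log_w.max())          # stabilise before normalising
    w /= w.sum()
    idx = rng.choice(n_draw, size=n_keep, replace=True, p=w)
    return draws[idx]                        # approximate sample from the target

# Illustration: recover a Beta(3, 5) target from a Uniform(0, 1) proposal.
sample = sir(stats.beta(3, 5).logpdf,
             lambda n: rng.random(n),
             lambda p: np.zeros_like(p))
print(sample.mean())                         # should be near 3 / (3 + 5) = 0.375
```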

Jensen's Alpha Estimation Models in Capital Asset Pricing Model

  • Phuoc, Le Tan
    • The Journal of Asian Finance, Economics and Business, v.5 no.3, pp.19-29, 2018
  • This research examined alternative estimation models for Jensen's alpha (α) in the Capital Asset Pricing Model, discussed by Treynor (1961), Sharpe (1964), and Lintner (1965), using the robust maximum-likelihood-type M-estimator (MM estimator) and the Bayes estimator with a conjugate prior. In the finance literature and in practice, alpha has usually been estimated by ordinary least squares (OLS) regression on monthly return data. A sample of 50 securities was randomly selected from the S&P 500 index, and their daily and monthly returns were collected over the last five years. This research showed that the robust MM estimator performed better than the OLS and Bayes estimators in terms of efficiency, while the Bayes estimator did not outperform the OLS estimator as expected. Interestingly, we also found that daily return data gave more accurate alpha estimates than monthly return data for all three estimators (MM, OLS, and Bayes). We also proposed an alternative market efficiency test with the hypothesis H0: α = 0 and were able to show that the S&P 500 index is efficient, but not perfectly so. More importantly, these findings were checked and validated with Jackknife resampling.
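
As background for the comparison above, the baseline OLS estimate of Jensen's alpha comes from regressing a security's excess returns on the market's excess returns. A minimal sketch with simulated placeholder returns (not the paper's data, and not its MM or Bayes estimators):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated daily excess returns standing in for one security and the market.
market_excess = rng.normal(0.0004, 0.010, 1250)
security_excess = 0.0001 + 1.2 * market_excess + rng.normal(0.0, 0.008, 1250)

# CAPM regression  R_i - R_f = alpha + beta * (R_m - R_f) + e  fitted by OLS.
X = np.column_stack([np.ones_like(market_excess), market_excess])
alpha_hat, beta_hat = np.linalg.lstsq(X, security_excess, rcond=None)[0]
print(alpha_hat, beta_hat)   # the efficiency test asks whether alpha differs from 0
```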

Bayes estimation of entropy of exponential distribution based on multiply Type II censored competing risks data

  • Lee, Kyeongjun;Cho, Youngseuk
    • Journal of the Korean Data and Information Science Society, v.26 no.6, pp.1573-1582, 2015
  • In lifetime data analysis, it is well known that the lifetimes of test items may not be recorded exactly. There are also situations in which the withdrawal of items prior to failure is prearranged in order to decrease the time or cost of the experiment. Moreover, more than one cause or risk factor may be present at the same time, so the analysis of censored competing risks data is needed. In this article, we derive the Bayes estimators of the entropy function under the exponential distribution with an unknown scale parameter based on multiply Type II censored competing risks data. The Bayes estimators under the squared error loss function (SELF), the precautionary loss function (PLF), and the DeGroot loss function (DLF) are provided, and Lindley's approximation method is used to compute them. We compare the proposed Bayes estimators in terms of mean squared error (MSE) for various multiply Type II censored competing risks data. Finally, a real data set is analyzed for illustrative purposes.
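
For orientation, the three loss functions mentioned lead to Bayes estimators of a generic quantity $\theta$ (here the entropy) with the following standard forms; these are general facts about the losses, not the paper's closed-form expressions for the exponential model:

$$
\hat{\theta}_{\mathrm{SELF}}=E(\theta\mid x),\qquad
\hat{\theta}_{\mathrm{PLF}}=\sqrt{E(\theta^{2}\mid x)},\qquad
\hat{\theta}_{\mathrm{DLF}}=\frac{E(\theta^{2}\mid x)}{E(\theta\mid x)},
$$

with the posterior expectations approximated by Lindley's method when they are not available in closed form.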

Estimations of the Parameters in a Two-component System Using Dependent Masked Data

  • Sarhan, Ammar M.
    • International Journal of Reliability and Applications, v.6 no.2, pp.117-133, 2005
  • Estimates of the parameters of a two-component system are derived from masked system life test data, where the probability of masking depends on the exact cause of system failure. Estimates of the reliability of the individual components at a specified mission time are also derived. Maximum likelihood and Bayes methods are used to obtain these estimators. The problem is illustrated with a series system consisting of two independent components, each of which has a Pareto-distributed lifetime. We also present numerical studies using simulation.
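
To make the data structure concrete, here is a hypothetical simulation of masked life-test data from a two-component series system with Pareto lifetimes, where the masking probability depends on the true cause of failure. The parameter values and masking probabilities are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def pareto_lifetimes(shape, scale, n):
    # Pareto with survival S(t) = (scale / t)^shape for t >= scale (inverse-CDF sampling)
    return scale * (1.0 - rng.random(n)) ** (-1.0 / shape)

n = 200
t1 = pareto_lifetimes(shape=2.0, scale=1.0, n=n)   # component 1 lifetimes
t2 = pareto_lifetimes(shape=1.5, scale=1.0, n=n)   # component 2 lifetimes
system_time = np.minimum(t1, t2)                   # series system fails at the first failure
cause = np.where(t1 < t2, 1, 2)                    # true cause of each system failure

# Masking probability depends on the true cause (dependent masking).
p_mask = np.where(cause == 1, 0.3, 0.5)
masked = rng.random(n) < p_mask
observed_cause = np.where(masked, 0, cause)        # 0 encodes the masked set {1, 2}
```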


Bayesian Testing for Independence in Bivariate Exponential Model

  • Cho, Jang-Sik
    • Journal of the Korean Data and Information Science Society, v.17 no.2, pp.521-527, 2006
  • In this paper, we consider Bayesian hypothesis testing for independence in the bivariate exponential model. For the testing problem, we use noninformative priors for the parameters, which are improper and defined only up to arbitrary constants, together with the recently proposed testing criterion called the fractional Bayes factor. Numerical results are given to illustrate the procedure.
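
Once the fractional Bayes factor is in hand, converting it to the posterior probabilities used for the decision is a one-line calculation. A small illustration assuming equal prior probabilities by default; the numbers are made up:

```python
def posterior_probabilities(b10, prior_h0=0.5):
    """Posterior probabilities of H0 (independence) and H1 from the Bayes factor B10."""
    prior_odds = (1.0 - prior_h0) / prior_h0
    posterior_odds = b10 * prior_odds            # odds of H1 against H0
    p1 = posterior_odds / (1.0 + posterior_odds)
    return 1.0 - p1, p1

print(posterior_probabilities(b10=2.5))          # -> roughly (0.29, 0.71)
```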


On Flexible Bayesian Test Criteria for Nested Point Null Hypotheses of Multiple Regression Coefficients

  • Jae-Hyun Kim;Hea-Jung Kim
    • Communications for Statistical Applications and Methods, v.3 no.3, pp.205-214, 1996
  • As flexible Bayesian test criteria for nested point null hypotheses of multiple regression coefficients, partial and overall Bayes factors are introduced under a class of intuitively meaningful priors. The criteria lead to a simple method for incorporating different prior beliefs on the subspaces that form a partition of the coefficient parameter space. A couple of tests are suggested based on these criteria, and it is shown that they enable pairwise comparisons of the hypotheses associated with the partitioned subspaces. Through a Monte Carlo simulation, the performance of the proposed tests is compared with that of the usual Bayesian test (based on the Bayes factor) in terms of their respective powers.
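
The pairwise comparisons mentioned above follow the usual Bayes-factor calculus for a partitioned parameter space: for any two hypotheses $H_i$ and $H_j$ in the partition (a general identity, not the paper's partial or overall criteria themselves),

$$
\frac{P(H_i\mid x)}{P(H_j\mid x)}
= B_{ij}\,\frac{P(H_i)}{P(H_j)},
\qquad
B_{ij}=\frac{\int_{\Theta_i} f(x\mid\theta)\,\pi_i(\theta)\,d\theta}{\int_{\Theta_j} f(x\mid\theta)\,\pi_j(\theta)\,d\theta}.
$$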


Parametric inference on step-stress accelerated life testing for the extension of exponential distribution under progressive type-II censoring

  • El-Dina, M.M. Mohie;Abu-Youssef, S.E.;Ali, Nahed S.A.;Abd El-Raheem, A.M.
    • Communications for Statistical Applications and Methods, v.23 no.4, pp.269-285, 2016
  • In this paper, a simple step-stress accelerated life test (ALT) under progressive Type-II censoring is considered. Progressive Type-II censoring and accelerated life testing are employed to shorten the test duration and lower test expenses. The cumulative exposure model is assumed when the lifetimes of the test units follow an extension of the exponential distribution. Maximum likelihood estimates (MLEs) and Bayes estimates (BEs) of the model parameters are obtained. In addition, a real data set is analyzed to illustrate the proposed procedures, and approximate, bootstrap, and credible confidence intervals (CIs) of the estimators are derived. Finally, the accuracy of the MLEs and BEs of the model parameters is investigated through simulation studies.
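
As background, the cumulative exposure model for a simple step-stress test with the stress changed at time $\tau$ takes the standard form below, where $F_k$ is the life distribution at the $k$-th stress level; the paper specializes this to the extension of the exponential distribution:

$$
G(t)=
\begin{cases}
F_1(t), & 0\le t<\tau,\\[4pt]
F_2(t-\tau+s), & t\ge\tau,
\end{cases}
\qquad s=F_2^{-1}\bigl(F_1(\tau)\bigr),
$$

so that units surviving to $\tau$ continue failing as if they had already spent the equivalent time $s$ at the higher stress.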

Word Sense Disambiguation based on Concept Learning with a focus on the Lowest Frequency Words (저빈도어를 고려한 개념학습 기반 의미 중의성 해소)

  • Kim Dong-Sung;Choe Jae-Woong
    • Language and Information
    • /
    • v.10 no.1
    • /
    • pp.21-46
    • /
    • 2006
  • This study proposes a Word Sense Disambiguation (WSD) algorithm based on concept learning, with special emphasis on statistically meaningful lowest-frequency words. Previous work on WSD typically makes use of collocation frequencies and their probabilities; such probability-based approaches tend to ignore the lowest-frequency words, which can still be meaningful in context. In this paper, we present an algorithm that extracts and uses these meaningful lowest-frequency words in WSD. The learning method is adapted from the Find-Specific algorithm of Mitchell (1997), in which the search proceeds from specific predefined hypothesis spaces to more general ones. In our model, this algorithm is used to find contexts with the most specific classifiers and then move to more general ones. We build up a small seed data set and apply it to a relatively large test data set. Following the algorithm in Yarowsky (1995), the classified test data are exhaustively added to the seed data, thus expanding it; however, this can introduce a great deal of noise into the seed data. We therefore introduce the maximum a posteriori hypothesis, based on the Bayes assumption, to validate whether the new seed data are noise. We use the Naive Bayes classifier and show that applying the Find-Specific algorithm improves the correctness of WSD.
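
To make the final step of the abstract concrete, here is a toy Naive Bayes sense classifier with the maximum a posteriori decision rule and add-one smoothing. The training pairs, smoothing choice, and function names are illustrative only, not the paper's seed data or concept-learning procedure:

```python
import math
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (context_words, sense) pairs."""
    sense_counts, word_counts, vocab = Counter(), defaultdict(Counter), set()
    for words, sense in examples:
        sense_counts[sense] += 1
        word_counts[sense].update(words)
        vocab.update(words)
    return sense_counts, word_counts, vocab

def classify(words, sense_counts, word_counts, vocab):
    total = sum(sense_counts.values())
    def log_posterior(sense):
        n = sum(word_counts[sense].values())
        return (math.log(sense_counts[sense] / total)
                + sum(math.log((word_counts[sense][w] + 1) / (n + len(vocab)))
                      for w in words))
    return max(sense_counts, key=log_posterior)   # maximum a posteriori sense

data = [(["river", "water"], "bank/river"), (["money", "loan"], "bank/finance")]
print(classify(["loan", "water", "money"], *train(data)))   # -> "bank/finance"
```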
