• Title/Summary/Keyword: Parametric Estimation Methods (모수적 추정방법)


A Bayesian Extreme Value Analysis of KOSPI Data (코스피 지수 자료의 베이지안 극단값 분석)

  • Yun, Seok-Hoon
    • The Korean Journal of Applied Statistics
    • /
    • v.24 no.5
    • /
    • pp.833-845
    • /
    • 2011
  • This paper conducts a statistical analysis of extreme values for both daily log-returns and daily negative log-returns, computed from KOSPI data spanning January 3, 1998 to August 31, 2011. The Poisson-GPD model is used as the statistical model for extreme values, and the maximum likelihood method is applied to estimate parameters and extreme quantiles. A Bayesian treatment of the Poisson-GPD model is also considered, assuming the usual noninformative prior distribution for the parameters, with the Markov chain Monte Carlo method applied to estimate parameters and extreme quantiles. Both the maximum likelihood and Bayesian analyses reach the same conclusion: the distribution of the log-returns has a shorter right tail than the normal distribution, whereas the distribution of the negative log-returns has a heavier right tail. An advantage of the Bayesian method in extreme value analysis is that it does not depend on the classical asymptotic properties of maximum likelihood estimators even when the regularity conditions are not satisfied, and that prediction can effectively reflect the uncertainty in both the parameters and a future observation.
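
The Poisson-GPD analysis above follows the standard threshold-exceedance recipe. The minimal sketch below, which is not the paper's code, fits a GPD to exceedances by maximum likelihood and converts the fit into an extreme quantile; the simulated stand-in for the KOSPI returns, the threshold choice, and the return period are all illustrative assumptions.

```python
# A minimal Poisson-GPD threshold-exceedance sketch (not the paper's code).
# The simulated `returns` series stands in for the KOSPI daily log-returns.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
returns = rng.standard_t(df=4, size=3000) * 0.01  # placeholder for real data

u = np.quantile(returns, 0.95)              # high threshold (assumed choice)
exceedances = returns[returns > u] - u

# MLE of the GPD shape (xi) and scale (sigma), location fixed at 0
xi, _, sigma = stats.genpareto.fit(exceedances, floc=0)

# Extreme quantile exceeded on average once every m observations (xi != 0):
#   x_m = u + (sigma / xi) * ((m * zeta_u)**xi - 1),  zeta_u = P(X > u)
zeta_u = exceedances.size / returns.size
m = 10 * 252                                # roughly the 10-year daily level
x_m = u + (sigma / xi) * ((m * zeta_u) ** xi - 1)
print(f"xi={xi:.3f}, sigma={sigma:.4f}, return level={x_m:.4f}")
```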

Sample Size Determination of Univariate and Bivariate Ordinal Outcomes by Nonparametric Wilcoxon Tests (단변량 및 이변량 순위변수의 비모수적 윌콕슨 검정법에 의한 표본수 결정방법)

  • Park, Hae-Gang;Song, Hae-Hiang
    • The Korean Journal of Applied Statistics
    • /
    • v.22 no.6
    • /
    • pp.1249-1263
    • /
    • 2009
  • The power function in sample size determination has to be characterized by an appropriate statistical test for the hypothesis of interest. Nonparametric tests are suitable for the analysis of ordinal data or frequency data with ordered categories, which appear frequently in the biomedical research literature. In this paper, we study sample size calculation methods for the Wilcoxon-Mann-Whitney test for one- and two-dimensional ordinal outcomes. While the sample size formula for the univariate outcome that is based on the variances of the test statistic under both the null and the alternative hypothesis performs well, it requires additional probability estimates that appear in the variance of the test statistic under the alternative hypothesis, and the values of these probabilities are generally unknown. We study the advantages and disadvantages of different sample size formulas with simulations. Sample sizes are calculated for the two-dimensional ordinal outcomes of efficacy and safety, for which the bivariate Wilcoxon-Mann-Whitney test is more appropriate than the multivariate parametric test.
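
For the univariate case, a formula that needs only the null variance of the statistic is Noether's (1987) approximation; the sketch below implements it as one plausible reading of the simpler alternative the abstract alludes to. The effect size p = P(X < Y), alpha, and power values are illustrative.

```python
# Noether's sample-size approximation for the Wilcoxon-Mann-Whitney test,
# which uses only the null variance of the statistic (a sketch, not the
# paper's bivariate method).
from scipy.stats import norm

def wmw_total_n(p, alpha=0.05, power=0.80, ratio=0.5):
    """Total N for a two-sided WMW test.

    p     : P(X < Y), the effect size on the probability scale
    ratio : fraction of subjects allocated to the first group
    """
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return (z_a + z_b) ** 2 / (12 * ratio * (1 - ratio) * (p - 0.5) ** 2)

print(round(wmw_total_n(p=0.65)))  # ~116 subjects in total for p = 0.65
```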

Simulation Study for Statistical Methods in Comparing Cure Rates between Two Groups (모의실험을 통한 두 처리군간 치료율 비교방법 연구)

  • 박미라;이재원;진서훈
    • The Korean Journal of Applied Statistics
    • /
    • v.17 no.2
    • /
    • pp.253-267
    • /
    • 2004
  • In some clinical trials, a significant fraction of patients are cured, and their original disease does not recur even after termination of treatment and prolonged follow-up. This situation occurs frequently in pediatric cancer trials, where therapeutic results are excellent. In such cases, interest is concentrated on the difference in cure rates rather than on other types of differences in the failure distributions. Various authors have investigated parametric and nonparametric methods for testing the difference in cure rates. In this study, we compare by simulation the power and size of one parametric test and five nonparametric tests over a range of alternatives, censoring rates, and cure rates. Our objectives are to determine whether any test is preferable on the basis of size and power in various situations, and to investigate the effect of model misspecification.
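
The kind of simulation the abstract describes can be set up as below. This is a hedged sketch, not the paper's design: a mixture cure model generates the data, the log-rank test stands in for the unspecified candidate tests, and all cure rates, censoring limits, and sample sizes are made-up values.

```python
# Empirical power of a test under a mixture cure model (illustrative only).
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)

def simulate_arm(n, cure_rate, hazard=1.0, cens_max=3.0):
    cured = rng.random(n) < cure_rate
    t = rng.exponential(1 / hazard, n)
    t[cured] = np.inf                        # cured patients never fail
    c = rng.uniform(0, cens_max, n)          # administrative censoring
    return np.minimum(t, c), t <= c          # observed time, event indicator

n_rep, n, alpha = 500, 100, 0.05
rejections = 0
for _ in range(n_rep):
    tA, eA = simulate_arm(n, cure_rate=0.3)
    tB, eB = simulate_arm(n, cure_rate=0.5)  # alternative: cure rates differ
    res = logrank_test(tA, tB, event_observed_A=eA, event_observed_B=eB)
    rejections += res.p_value < alpha
print(f"empirical power ~= {rejections / n_rep:.2f}")
```

Setting both cure rates equal turns the same loop into an empirical size check.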

Comparison of GEE Estimation Methods for Repeated Binary Data with Time-Varying Covariates on Different Missing Mechanisms (시간-종속적 공변량이 포함된 이분형 반복측정자료의 GEE를 이용한 분석에서 결측 체계에 따른 회귀계수 추정방법 비교)

  • Park, Boram;Jung, Inkyung
    • The Korean Journal of Applied Statistics
    • /
    • v.26 no.5
    • /
    • pp.697-712
    • /
    • 2013
  • When analyzing repeated binary data, the generalized estimating equations (GEE) approach produces consistent estimates of regression parameters even when an incorrect working correlation matrix is used. However, in finite samples, the coefficients of time-varying covariates change more across working correlation structures than those of time-invariant covariates. In addition, the GEE approach may give biased estimates under missing at random (MAR). Weighted estimating equations and multiple imputation methods have been proposed to reduce bias in parameter estimates under MAR. This article studies whether the two methods produce robust estimates across various working correlation structures for longitudinal binary data with time-varying covariates under different missing mechanisms. Through simulation, we observe that time-varying covariates show greater differences in parameter estimates across working correlation structures than time-invariant covariates. The multiple imputation method produces more robust estimates under any working correlation structure and smaller biases than the other two methods.
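
The working-correlation sensitivity the paper measures can be probed with standard GEE software. The sketch below, using simulated data and statsmodels rather than the paper's code, fits the same binary GEE under two working correlation structures and compares the coefficient of a time-varying covariate; the weighted estimating equations and multiple imputation steps are not shown, and no missingness is introduced.

```python
# Comparing GEE estimates across working correlation structures for
# repeated binary data with a time-varying covariate (simulated data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_subj, n_time = 200, 4
df = pd.DataFrame({
    "id":   np.repeat(np.arange(n_subj), n_time),
    "time": np.tile(np.arange(n_time), n_subj),
})
df["x_tv"] = rng.normal(size=len(df))        # time-varying covariate
eta = -0.5 + 0.8 * df["x_tv"].to_numpy()     # true model (assumed)
df["y"] = rng.binomial(1, 1 / (1 + np.exp(-eta)))

for cov in (sm.cov_struct.Independence(), sm.cov_struct.Exchangeable()):
    res = sm.GEE.from_formula("y ~ x_tv + time", groups="id", data=df,
                              family=sm.families.Binomial(),
                              cov_struct=cov).fit()
    print(type(cov).__name__, round(res.params["x_tv"], 3))
```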

The Comparative Study of Software Optimal Release Time of Finite NHPP Model Considering Log Linear Learning Factor (로그선형 학습요인을 이용한 유한고장 NHPP모형에 근거한 소프트웨어 최적방출시기 비교 연구)

  • Kim, Hee-Cheul;Shin, Hyun-Cheul
    • Convergence Security Journal
    • /
    • v.12 no.6
    • /
    • pp.3-10
    • /
    • 2012
  • In this paper, we study the decision problem of determining optimal release policies after testing a software system in the development phase and transferring it to the user. For correcting or modifying the software, a finite-failure non-homogeneous Poisson process model incorporating a learning factor is presented, and release policies are proposed for the log-linear type life distribution, which is used in the reliability area because of its flexible shape and scale parameters. We discuss optimal software release policies that minimize the total average software cost of development and maintenance under the constraint of satisfying a software reliability requirement. In a numerical example, the parameters are estimated by maximum likelihood from failure time data, and the optimal software release time is estimated.
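
The release-time decision above is the classic cost trade-off between debugging during testing and debugging in the field. The sketch below is not the paper's log-linear learning-factor model; it uses the Goel-Okumoto mean value function as a stand-in, with made-up cost and model parameters, to show how the optimal release time is found numerically.

```python
# Generic software-release cost minimization under an NHPP model
# (illustrative stand-in, not the paper's log-linear model).
import numpy as np
from scipy.optimize import minimize_scalar

a, b = 100.0, 0.05            # expected total faults, detection rate (assumed)
c1, c2, c3 = 1.0, 10.0, 0.5   # fix cost in test, fix cost in field, test cost/time

def m(t):                     # expected cumulative faults found by time t
    return a * (1 - np.exp(-b * t))

def total_cost(T):            # testing fixes + field fixes + testing-time cost
    return c1 * m(T) + c2 * (a - m(T)) + c3 * T

opt = minimize_scalar(total_cost, bounds=(0, 500), method="bounded")
print(f"optimal release time ~= {opt.x:.1f}, cost ~= {opt.fun:.1f}")
```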

The Study for NHPP Software Reliability Growth Model of Percentile Change-point (백분위수 변화점을 고려한 NHPP 소프트웨어 신뢰성장모형에 관한 연구)

  • Kim, Hee-Cheul;Shin, Hyun-Cheul
    • Convergence Security Journal
    • /
    • v.8 no.4
    • /
    • pp.115-120
    • /
    • 2008
  • Accurate predictions of software release times and estimation of the reliability and availability of a software product require quantification of a critical element of the software testing process: the change-point problem. In this paper, the exponential (Goel-Okumoto) model is reviewed and the percentile change-point problem is proposed, providing an efficient application for software reliability. The parameters are estimated by the maximum likelihood method together with the bisection method, and model selection based on the SSE statistic is employed to identify an efficient model. A numerical example of the percentile change-point problem is presented using NTDS data.
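
The estimation route the abstract names, maximum likelihood solved by bisection, can be illustrated for the plain Goel-Okumoto model; the change-point extension is not reproduced, and the failure times below are simulated rather than the NTDS data.

```python
# Goel-Okumoto MLE via bisection (a sketch; fake failure times, not NTDS).
import numpy as np
from scipy.optimize import bisect

times = np.sort(np.random.default_rng(3).exponential(50, size=25))
n, t_n = len(times), times[-1]

def score_b(b):
    # d/db of the G-O log-likelihood with a profiled out:
    #   a = n / (1 - exp(-b * t_n))
    return n / b - times.sum() - n * t_n * np.exp(-b * t_n) / (1 - np.exp(-b * t_n))

b_hat = bisect(score_b, 1e-6, 1.0)   # bracket chosen by inspection
a_hat = n / (1 - np.exp(-b_hat * t_n))
print(f"a_hat={a_hat:.1f}, b_hat={b_hat:.4f}")
```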


Construction of a Generic Component Reliability Database Considering Dependency among Data Sources (자료원 사이의 종속성을 고려한 일반기기 신뢰도 데이타베이스 구축)

  • 황미정;정원대;임태진
    • Proceedings of the Korean Nuclear Society Conference
    • /
    • 1997.05a
    • /
    • pp.527-533
    • /
    • 1997
  • We developed a Bayesian method that accounts for dependency among literature data sources and, based on it, constructed a generic component reliability database for nuclear power plants. MPRDP (Multi-Purpose Reliability Data Process) [1,2,3], a previously developed and used three-stage Bayesian data analysis code, differs from earlier reliability database codes in that it processes literature data in the second stage and then plant-specific data in the third stage. Previously, however, dependency among generic data was not considered, and multiple data sources derived from the same underlying source were all treated as independent. In this paper, dependency among the data is handled [5] using the ML-II (Type II Maximum Likelihood) method, a kind of parametric prior Bayesian approach. Taking solenoid-operated valves as an example, we show how the analysis results differ when dependency is treated, and we also estimate component reliabilities with MPRDP based on domestic plant-specific data for selected components.
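
The ML-II idea, choosing the prior by maximizing the marginal likelihood of the pooled data, can be sketched for Poisson failure counts with a gamma prior (the gamma-Poisson marginal is negative binomial). This illustrates only the generic Type II maximum likelihood step; MPRDP's actual treatment of dependency between sources is not reproduced, and all counts and exposures below are invented.

```python
# ML-II (Type II maximum likelihood) hyperparameter fit for Poisson
# failure counts with a Gamma(alpha, beta) prior on the failure rate.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

failures = np.array([2, 0, 5, 1, 3])             # counts per source (invented)
exposure = np.array([1.0, 0.5, 2.0, 1.2, 1.8])   # operating time (invented)

def neg_marginal_loglik(theta):
    alpha, beta = np.exp(theta)                  # keep hyperparameters positive
    # Poisson(x | lam * T) with lam ~ Gamma(alpha, rate beta)
    # -> negative binomial marginal likelihood
    ll = (gammaln(alpha + failures) - gammaln(alpha) - gammaln(failures + 1)
          + alpha * np.log(beta / (beta + exposure))
          + failures * np.log(exposure / (beta + exposure)))
    return -ll.sum()

res = minimize(neg_marginal_loglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
alpha_hat, beta_hat = np.exp(res.x)
print(f"ML-II prior: Gamma(alpha={alpha_hat:.2f}, beta={beta_hat:.2f})")
# Posterior mean rate for source i: (alpha_hat + x_i) / (beta_hat + T_i)
```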


A Study on Alternative Strategies for Obtaining Sensitive Information (민감한 정보를 얻기 위한 대체 전략에 관한 연구)

  • Hong, Gi-Hak;Lee, Gi-Seong;Son, Chang-Gyun
    • Proceedings of the Korean Statistical Society Conference
    • /
    • 2003.10a
    • /
    • pp.195-199
    • /
    • 2003
  • Hansen and Hurwitz (1946) proposed a method for handling nonresponse in mail surveys: the sample is divided into a response stratum and a nonresponse stratum according to the response outcome, and a random subsample of the nonresponse stratum is then contacted by face-to-face interview to obtain information on that stratum. In this study, we present a combined method for collecting data from a sensitive population that couples the Black-Box method, a direct-questioning approach, with the randomized response technique (RRT), an indirect-questioning approach, and we estimate the parameter using stratified two-phase sampling.
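
The Hansen-Hurwitz starting point can be made concrete in a few lines. The sketch below, with invented response and subsampling rates, combines the respondent stratum mean with the mean of a random follow-up subsample of nonrespondents.

```python
# Hansen-Hurwitz (1946) two-phase estimator for nonresponse (illustrative).
import numpy as np

rng = np.random.default_rng(4)
n = 1000                                   # initial mail-survey sample
respond = rng.random(n) < 0.6              # 60% answer the mail survey
y = rng.binomial(1, 0.3, n).astype(float)  # study variable (unknown in practice)

y1 = y[respond]                            # respondent stratum, fully observed
nonresp_idx = np.flatnonzero(~respond)
sub = rng.choice(nonresp_idx, size=len(nonresp_idx) // 3, replace=False)
y2_bar = y[sub].mean()                     # follow-up subsample mean

n1, n2 = len(y1), len(nonresp_idx)
y_hh = (n1 * y1.mean() + n2 * y2_bar) / n  # combine by stratum sample shares
print(f"estimate = {y_hh:.3f}, truth = {y.mean():.3f}")
```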


A Combined Randomized Response Technique Using Stratified Two-Phase Sampling (층화이중추출을 이용한 결합 확률화응답기법)

  • Hong, Gi-Hak
    • The Korean Journal of Applied Statistics
    • /
    • v.17 no.2
    • /
    • pp.303-310
    • /
    • 2004
  • We suggest a method to procure information from a sensitive population that combines a direct survey method, the Black-Box (BB) method, with an indirect one, the randomized response technique (RRT), together with a combined estimator that uses stratified double sampling to estimate the sensitive parameter. We compare the efficiency of our estimator with that of the Mangat and Singh model.
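
The RRT ingredient of such combined estimators can be illustrated with Warner's original device. This sketch simulates the randomized questioning and applies Warner's unbiased estimator; the paper's combined stratified double-sampling weights are not reproduced, and the design probability P and the true proportion are made up.

```python
# Warner's randomized response estimator (a sketch, not the combined model).
import numpy as np

rng = np.random.default_rng(5)
n, P, pi_true = 2000, 0.7, 0.25            # P = prob. the sensitive item is asked

sensitive = rng.random(n) < pi_true
asked_sensitive = rng.random(n) < P
# "yes" if (sensitive and asked the sensitive question) or
# (not sensitive and asked its complement)
yes = np.where(asked_sensitive, sensitive, ~sensitive)

lam_hat = yes.mean()                        # observed "yes" proportion
pi_hat = (lam_hat - (1 - P)) / (2 * P - 1)  # Warner's estimator (P != 1/2)
print(f"pi_hat = {pi_hat:.3f} (true {pi_true})")
```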

Application of GIS-based Probabilistic Empirical and Parametric Models for Landslide Susceptibility Analysis (산사태 취약성 분석을 위한 GIS 기반 확률론적 추정 모델과 모수적 모델의 적용)

  • Park, No-Wook;Chi, Kwang-Hoon;Chung, Chang-Jo F.;Kwon, Byung-Doo
    • Economic and Environmental Geology
    • /
    • v.38 no.1
    • /
    • pp.45-55
    • /
    • 2005
  • Traditional GIS-based probabilistic spatial data integration models for landslide susceptibility analysis have failed to provide theoretical backgrounds and effective methods for integrating different types of spatial data, such as categorical and continuous data. This paper applies two spatial data integration models, a non-parametric empirical estimation model and a parametric predictive discriminant analysis model, that can directly use the original continuous data within a likelihood ratio framework. Similarity rates and a prediction rate curve are computed to quantitatively compare the two models. To illustrate the proposed models, two case studies from the Jangheung and Boeun areas were carried out and analyzed. In the Jangheung case study, the two models showed similar prediction capabilities. In the Boeun area, on the other hand, the parametric predictive discriminant analysis model showed better prediction capability than the non-parametric empirical estimation model. In conclusion, the proposed models can effectively integrate continuous data for landslide susceptibility analysis; since each model has a distinctive way of representing continuous data, more case studies should be carried out to support these results.
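
The likelihood ratio framework for a continuous layer compares the distribution of the layer's values at landslide locations with its distribution elsewhere. The sketch below, with simulated slope angles and Gaussian KDE standing in for the paper's non-parametric empirical estimation, computes that ratio on a grid; all numbers are illustrative.

```python
# Likelihood ratio of a continuous predictor via kernel density estimates
# (simulated slope angles; KDE is a stand-in for the paper's estimator).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(6)
slope_landslide = rng.normal(30, 6, 200)    # degrees, at past landslide cells
slope_background = rng.normal(20, 8, 2000)  # degrees, everywhere else

f_ls = gaussian_kde(slope_landslide)        # density given "landslide"
f_bg = gaussian_kde(slope_background)       # density given "no landslide"

grid = np.linspace(0, 50, 6)
lr = f_ls(grid) / f_bg(grid)                # likelihood ratio per slope value
for s, r in zip(grid, lr):
    print(f"slope {s:4.1f} deg -> LR {r:5.2f}")
```

High-LR cells are the ones whose predictor values are relatively more common at past landslides, which is what the susceptibility map ranks.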