• Title/Summary/Keyword: Models, statistical


The Cusum of Squares Test for Variance Changes in Infinite Order Autoregressive Models

  • Park, Siyun; Lee, Sangyeol; Jeon, Jongwoo
    • Journal of the Korean Statistical Society / v.29 no.3 / pp.351-360 / 2000
  • This paper considers the problem of testing for a variance change in infinite-order autoregressive models. A cusum-of-squares test based on the residuals from a fitted AR(q) model is constructed analogously to the test statistic of Inclan and Tiao (1994), where q is a sequence of positive integers diverging to $\infty$. It is shown that, under regularity conditions, the limiting distribution of the test statistic is the supremum of a standard Brownian bridge. Simulation results are given to illustrate the performance of the test; a minimal numerical sketch of the statistic follows below.
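
As a rough illustration of the statistic described above, the sketch below fits an AR(q) model by least squares and computes an Inclan-Tiao-type cusum-of-squares statistic from the residuals. The fixed order q = 5, the simulated series, and the asymptotic 5% critical value 1.358 for the supremum of a Brownian bridge are illustrative choices, not taken from the paper.

```python
import numpy as np

def cusum_of_squares(x, q):
    """Sup-type cusum-of-squares statistic computed from AR(q) residuals."""
    n = len(x)
    # Fit AR(q) by ordinary least squares on lagged values.
    X = np.column_stack([x[q - j - 1:n - j - 1] for j in range(q)])
    beta, *_ = np.linalg.lstsq(X, x[q:], rcond=None)
    e2 = (x[q:] - X @ beta) ** 2
    m = len(e2)
    # D_k = (partial sum of squared residuals)/(total) - k/m, as in Inclan-Tiao.
    Dk = np.cumsum(e2) / e2.sum() - np.arange(1, m + 1) / m
    return np.sqrt(m / 2.0) * np.abs(Dk).max()

rng = np.random.default_rng(0)
# AR(1) series whose innovation standard deviation doubles halfway through.
e = np.concatenate([rng.normal(0, 1, 250), rng.normal(0, 2, 250)])
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.5 * x[t - 1] + e[t]

stat = cusum_of_squares(x, q=5)
print(stat, stat > 1.358)  # compare against the asymptotic 5% critical value
```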


An EM Algorithm for a Doubly Smoothed MLE in Normal Mixture Models

  • Seo, Byung-Tae
    • Communications for Statistical Applications and Methods / v.19 no.1 / pp.135-145 / 2012
  • It is well known that the maximum likelihood estimator (MLE) in normal mixture models with unequal variances does not fall in the interior of the parameter space. Recently, a doubly smoothed maximum likelihood estimator (DS-MLE) (Seo and Lindsay, 2010) was proposed as a general alternative to the ordinary MLE. Although this method gives a natural modification of the ordinary MLE, its computation is cumbersome due to intractable integrations. In this paper, we derive an EM algorithm for the DS-MLE under normal mixture models and propose a fast computational tool using a local quadratic approximation. The accuracy and speed of the proposed method are then demonstrated via numerical studies; a sketch of the setting appears below.
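
For context, here is a minimal EM recursion for a two-component normal mixture with unequal variances. This is the *ordinary* MLE iteration, shown only to illustrate the setting; the double smoothing of the DS-MLE and the paper's local quadratic approximation are not implemented here, and the initialization is an assumption.

```python
import numpy as np

def norm_pdf(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def em_normal_mixture(x, n_iter=200):
    # Crude initialization (an assumption, not from the paper).
    pi, mu1, mu2 = 0.5, x.min(), x.max()
    s1 = s2 = x.std()
    for _ in range(n_iter):
        # E-step: responsibility of component 1 for each observation.
        d1 = pi * norm_pdf(x, mu1, s1)
        d2 = (1 - pi) * norm_pdf(x, mu2, s2)
        w = d1 / (d1 + d2)
        # M-step: weighted updates; with unequal variances this iteration can
        # drive a component variance toward zero, the degeneracy DS-MLE repairs.
        pi = w.mean()
        mu1, mu2 = (w * x).sum() / w.sum(), ((1 - w) * x).sum() / (1 - w).sum()
        s1 = np.sqrt((w * (x - mu1) ** 2).sum() / w.sum())
        s2 = np.sqrt(((1 - w) * (x - mu2) ** 2).sum() / (1 - w).sum())
    return pi, (mu1, s1), (mu2, s2)

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 150), rng.normal(3, 0.5, 50)])
print(em_normal_mixture(x))
```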

Reject Inference of Incomplete Data Using a Normal Mixture Model

  • Song, Ju-Won
    • The Korean Journal of Applied Statistics / v.24 no.2 / pp.425-433 / 2011
  • Reject inference in credit scoring is a statistical approach to adjusting for the nonrandom sample bias due to rejected applicants. Function-estimation approaches rest on the assumption that rejected applicants need not be included in the estimation when the missing-data mechanism is missing at random. The density-estimation approach using mixture models, by contrast, requires that rejected applicants be included in the model. When mixture models are chosen for reject inference, the data are often assumed to follow a normal distribution. If the data include missing values, applying the normal mixture model only to fully observed cases may introduce a further sample bias. We extend reject inference via a multivariate normal mixture model to handle incomplete characteristic variables. A simulation study shows that including incomplete characteristic variables outperforms the function-estimation approaches; a toy sketch of the mixture-based idea follows below.
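
The sketch below illustrates reject inference as semi-supervised mixture fitting: accepted applicants carry observed good/bad labels, while rejected applicants enter the EM with latent labels. A one-dimensional score stands in for the paper's multivariate characteristic variables, and the paper's handling of missing covariates is not implemented; all data below are synthetic.

```python
import numpy as np

def norm_pdf(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def reject_inference_em(x_acc, y_acc, x_rej, n_iter=100):
    """Two-component normal mixture; labels of x_rej are latent."""
    x = np.concatenate([x_acc, x_rej])
    known = np.concatenate([y_acc.astype(float), np.full(len(x_rej), np.nan)])
    pi = y_acc.mean()
    mu0, mu1 = x_acc[y_acc == 0].mean(), x_acc[y_acc == 1].mean()
    s0 = s1 = x.std()
    for _ in range(n_iter):
        # E-step: posterior weight of the "bad" component for each applicant.
        d1 = pi * norm_pdf(x, mu1, s1)
        d0 = (1 - pi) * norm_pdf(x, mu0, s0)
        w = d1 / (d0 + d1)
        w[~np.isnan(known)] = known[~np.isnan(known)]  # observed labels stay fixed
        # M-step: weighted updates over accepted *and* rejected cases.
        pi = w.mean()
        mu1, mu0 = (w * x).sum() / w.sum(), ((1 - w) * x).sum() / (1 - w).sum()
        s1 = np.sqrt((w * (x - mu1) ** 2).sum() / w.sum())
        s0 = np.sqrt(((1 - w) * (x - mu0) ** 2).sum() / (1 - w).sum())
    return pi, (mu0, s0), (mu1, s1)

rng = np.random.default_rng(6)
x_acc = np.concatenate([rng.normal(0, 1, 300), rng.normal(2.5, 1, 60)])
y_acc = np.concatenate([np.zeros(300), np.ones(60)])  # 0 = good, 1 = bad
x_rej = rng.normal(2.0, 1.2, 200)                     # rejected: labels unknown
print(reject_inference_em(x_acc, y_acc, x_rej))
```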

Statistical Analysis and Comparison of Fatigue Curve Models

  • 서순근; 조유희
    • Journal of Korean Society for Quality Management / v.31 no.2 / pp.165-182 / 2003
  • Fatigue is considered the most important failure mode wherever optimal design or reliability prediction is required for machinery in aircraft, nuclear reactors, structural systems, and the like. Statistical analysis of fatigue life data is complicated by several features: a nonlinear relationship, heteroscedastic data, large scatter, censored observations (runouts), and the existence of a fatigue limit. To identify S-N curve models that better characterize fatigue strength, this research compares recently developed fatigue curve models across various fatigue data sets in terms of the residual mean square, the estimated fatigue limit, and related criteria; a small sketch of such a comparison follows below.
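
As a small, self-contained example of such a comparison, the sketch below fits two candidate S-N models by nonlinear least squares and reports their residual mean squares. The Basquin form, the fatigue-limit form, and the synthetic stress-life data are assumptions for illustration, not the paper's models or data; censoring of runouts is ignored.

```python
import numpy as np
from scipy.optimize import curve_fit

def basquin(S, a, b):
    # log10(N) = a + b * log10(S)
    return a + b * np.log10(S)

def with_fatigue_limit(S, a, b, E):
    # log10(N) = a + b * log10(S - E); E plays the role of a fatigue limit.
    return a + b * np.log10(np.clip(S - E, 1e-9, None))

# Invented stress amplitudes (MPa) and log10 lives for illustration.
S = np.array([340.0, 320.0, 300.0, 280.0, 260.0, 250.0, 240.0, 235.0])
logN = np.array([4.1, 4.5, 5.0, 5.6, 6.3, 6.8, 7.4, 7.9])

for model, p0 in [(basquin, (20.0, -8.0)), (with_fatigue_limit, (10.0, -3.0, 230.0))]:
    params, _ = curve_fit(model, S, logN, p0=p0, maxfev=10000)
    rms = np.mean((logN - model(S, *params)) ** 2)  # residual mean square
    print(model.__name__, np.round(params, 3), rms)
```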

Default Bayesian Inference of Regression Models with ARMA Errors under Exact Full Likelihoods

  • Son, Young-Sook
    • Journal of the Korean Statistical Society / v.33 no.2 / pp.169-189 / 2004
  • Under default priors, such as noninformative priors, Bayesian model determination and parameter estimation of regression models with stationary and invertible ARMA errors are developed under exact full likelihoods. The default Bayes factors, the fractional Bayes factor (FBF) of O'Hagan (1995) and the arithmetic intrinsic Bayes factor (AIBF) of Berger and Pericchi (1996a), are used as tools for Bayesian model selection. Bayesian estimates are obtained by running a Metropolis-Hastings subchain within the Gibbs sampler. Finally, numerical studies designed to check the performance of the theoretical results are presented; a loose sketch of the sampling scheme follows below.
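
As a loose illustration of the Metropolis-within-Gibbs scheme, the sketch below handles the special case of a regression with AR(1) errors under flat priors. The synthetic data, the AR(1) restriction, the conditional-likelihood shortcut, and the proposal scale are all assumptions; the paper's exact full likelihood and its FBF/AIBF model-selection computations are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = rng.normal(size=n)
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.6 * e[t - 1] + rng.normal(scale=0.5)
y = 1.0 + 2.0 * x + e

X = np.column_stack([np.ones(n), x])
beta, phi, sigma2 = np.zeros(2), 0.0, 1.0
draws = []
for it in range(3000):
    # Gibbs step for beta: GLS regression on the quasi-differenced data.
    Xs, ys = X[1:] - phi * X[:-1], y[1:] - phi * y[:-1]
    V = np.linalg.inv(Xs.T @ Xs / sigma2)
    beta = rng.multivariate_normal(V @ (Xs.T @ ys) / sigma2, V)
    # Gibbs step for sigma2: inverse-gamma draw from the residual sum of squares.
    r = ys - Xs @ beta
    sigma2 = (r @ r / 2) / rng.gamma(len(r) / 2)
    # Metropolis random-walk step for phi, constrained to stationarity.
    u = y - X @ beta
    def loglik(p):
        v = u[1:] - p * u[:-1]
        return -0.5 * (v @ v) / sigma2
    prop = phi + rng.normal(scale=0.1)
    if abs(prop) < 1 and np.log(rng.uniform()) < loglik(prop) - loglik(phi):
        phi = prop
    draws.append((beta[0], beta[1], phi, sigma2))

print(np.mean(draws[1000:], axis=0))  # posterior means after burn-in
```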

Forecast of Korea Defense Expenditures based on Time Series Models

  • Park, Kyung Ok; Jung, Hye-Young
    • Communications for Statistical Applications and Methods / v.22 no.1 / pp.31-40 / 2015
  • This study proposes a mathematical model that can forecast national defense expenditures. The ongoing European debt crisis weighs heavily on markets; consequently, government spending in many countries will be constrained. However, a forecasting model for military spending is acutely needed for South Korea, because security threats persist and estimating military spending at a reasonable level is closely related to economic growth. This study establishes two models: an autoregressive integrated moving average (ARIMA) model based on past military expenditures, and a transfer function model with gross domestic product (GDP), the exchange rate, and the consumer price index as input time series. The proposed models use defense spending data as of 2012 to create defense expenditure forecasts up to 2025; a minimal ARIMA sketch follows below.
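
The ARIMA leg of such a forecast can be sketched with statsmodels as below. The (1, 1, 1) order and the toy annual expenditure series are placeholders, not the paper's choices or data, and the transfer-function model with GDP, exchange-rate, and CPI inputs is not shown.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical annual expenditure series; the values are illustrative only.
years = pd.period_range("1980", "2012", freq="Y")
rng = np.random.default_rng(3)
trend = np.linspace(5.0, 33.0, len(years))
spend = pd.Series(trend * (1 + 0.02 * rng.standard_normal(len(years))), index=years)

# Fit an ARIMA(1, 1, 1) model and forecast 13 steps ahead (through 2025).
fit = ARIMA(spend, order=(1, 1, 1)).fit()
print(fit.forecast(steps=13))
```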

Statistical approach to a SHM benchmark problem

  • Casciati, Sara
    • Smart Structures and Systems / v.6 no.1 / pp.17-27 / 2010
  • The approach to damage detection and localization adopted in this paper is based on a statistical comparison of models built from response time histories collected at different stages during the structure's lifetime. Some of these time histories are known to have been recorded when the structural system was undamaged. The consistency of the models associated with two different undamaged stages is first established. By contrast, the method detects discrepancies between the models obtained from measurements of a damaged situation and of the undamaged reference situation. Damage detection and localization are pursued by comparing histograms of the SSE (sum of squared errors). The validity of the proposed approach is tested on the analytical benchmark problem developed by the ASCE Task Group on Structural Health Monitoring (SHM). The results of the benchmark studies are presented and the performance of the method is discussed; a minimal sketch of the SSE comparison follows below.
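
A minimal sketch of the underlying idea, assuming an AR model per response channel: fit the model to an undamaged reference record, then compare the distributions of squared one-step prediction errors on new records. The model order and the synthetic signals are placeholders; the paper's per-sensor localization and histogram-based comparison are only hinted at by the final summary statistics.

```python
import numpy as np

def ar_fit(x, p):
    """Least-squares AR(p) coefficients for a response record."""
    X = np.column_stack([x[p - j - 1:len(x) - j - 1] for j in range(p)])
    beta, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return beta

def squared_errors(x, beta):
    """Squared one-step prediction errors of a fitted AR model on a record."""
    p = len(beta)
    X = np.column_stack([x[p - j - 1:len(x) - j - 1] for j in range(p)])
    return (x[p:] - X @ beta) ** 2

rng = np.random.default_rng(4)
ref = np.convolve(rng.normal(size=2000), [1.0, 0.8, 0.3])[:2000]  # undamaged
new = np.convolve(rng.normal(size=2000), [1.0, 0.5, 0.1])[:2000]  # altered system
beta = ar_fit(ref, p=4)
e_ref, e_new = squared_errors(ref, beta), squared_errors(new, beta)
# A shift between the two SSE histograms signals a change in the underlying model.
print(e_ref.mean(), e_new.mean())
```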

Finding Significant Factors to Affect Cost Contingency on Construction Projects Using ANOVA Statistical Method -Focused on Transportation Construction Projects in the US-

  • Lhee, Sang Choon
    • Architectural Research / v.16 no.2 / pp.75-80 / 2014
  • Risks, uncertainties, and the associated cost overruns are critical problems for construction projects. Cost contingency is an important funding source for these unforeseen events and is included in the base estimate to help deliver financially successful projects. To predict contingency more accurately, many empirical models using regression analysis and artificial neural networks have been proposed and have shown their viability in minimizing prediction errors. However, categorical factors affecting contingency could not be considered in these empirical models, since such models handle only numerical factors. This paper identifies potential factors affecting contingency in transportation construction projects and evaluates the categorical ones using the one-way ANOVA method. Among the factors considered, including project work type, delivery method type, contract agreement type, bid award type, letting type, and geographical location, two factors, project work type and contract agreement type, were found to be statistically significant for allocating cost contingency; an illustrative ANOVA call follows below.
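
The core test can be reproduced in a few lines. The three project-work-type groups and the contingency percentages below are invented placeholders, not the paper's data.

```python
from scipy import stats

# Contingency as a percentage of the base estimate, grouped by a categorical
# factor (here, a made-up "project work type" with three levels).
bridge  = [7.1, 6.4, 8.0, 7.7, 6.9]
roadway = [4.2, 5.1, 4.8, 5.5, 4.4]
signal  = [3.0, 3.8, 2.9, 3.5, 3.3]

F, p = stats.f_oneway(bridge, roadway, signal)
print(F, p)  # a small p-value suggests the levels differ in mean contingency
```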

Modeling of Plasma Process Using Support Vector Machine

  • Kim, Min-Jae; Kim, Byung-Whan
    • Proceedings of the KIEE Conference / 2006.10c / pp.211-213 / 2006
  • In this study, a plasma etching process was modeled using a support vector machine (SVM). The data used in modeling were collected from the etching of silica thin films in an inductively coupled plasma. For training and testing the models, 9 and 6 experiments were used, respectively. The performance of the SVM was evaluated as a function of the SVR formulation (ε-SVR and ν-SVR) and of the kernel function (linear, polynomial, and radial basis function (RBF)). The model was optimized first over the formulation and then over the kernel. Five film characteristics were modeled with SVM, and the optimized models were compared to statistical regression models; the comparison revealed that the statistical regression models yielded better predictions than the SVM. A minimal scikit-learn sketch of this comparison follows below.
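
Under the stated 9/6 train-test split, the comparison can be sketched with scikit-learn as below. The three synthetic process inputs and the response are placeholders for the plasma data, and the regression baseline is an assumed second-order polynomial fit rather than the paper's specific statistical models.

```python
import numpy as np
from sklearn.svm import SVR, NuSVR
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(5)
X = rng.uniform(0, 1, size=(15, 3))                      # 3 process inputs
y = 2 * X[:, 0] - X[:, 1] ** 2 + 0.1 * rng.normal(size=15)
Xtr, Xte, ytr, yte = X[:9], X[9:], y[:9], y[9:]          # 9 train, 6 test

# Grid over the two SVR formulations and three kernel functions.
for name, make in [("eps-SVR", SVR), ("nu-SVR", NuSVR)]:
    for kernel in ["linear", "poly", "rbf"]:
        model = make(kernel=kernel).fit(Xtr, ytr)
        print(name, kernel, mean_squared_error(yte, model.predict(Xte)))

# Statistical regression baseline: second-order polynomial least squares.
poly = PolynomialFeatures(degree=2)
reg = LinearRegression().fit(poly.fit_transform(Xtr), ytr)
print("poly-reg", mean_squared_error(yte, reg.predict(poly.transform(Xte))))
```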


Discriminant Analysis of Binary Data with Multinomial Distribution by Using the Iterative Cross Entropy Minimization Estimation

  • Lee, Jung Jin
    • Communications for Statistical Applications and Methods / v.12 no.1 / pp.125-137 / 2005
  • Many discriminant analysis models for binary data have been used in real applications, but none of the classification models dominates in all varying circumstances(Asparoukhov & Krzanowski(2001)). Lee and Hwang (2003) proposed a new classification model by using multinomial distribution with the maximum entropy estimation method. The model showed some promising results in case of small number of variables, but its performance was not satisfactory for large number of variables. This paper explores to use the iterative cross entropy minimization estimation method in replace of the maximum entropy estimation. Simulation experiments show that this method can compete with other well known existing classification models.