• Title/Summary/Keyword: Statistical Modelling


A Note on the Efficiency Based Reliability Measures for Heterogeneous Populations

  • Cha, Ji-Hwan
    • Communications for Statistical Applications and Methods / v.18 no.2 / pp.201-211 / 2011
  • In many cases, real-world populations are composed of different subpopulations. Furthermore, in addition to heterogeneity in the lifetimes of items, there can also be heterogeneity in their efficiency or performance. In this case, the reliability measures should be defined differently. In this article, we consider a mixture of stochastically ordered subpopulations. Efficiency-based reliability measures are defined for the case where the performance of items differs across subpopulations. Discrete and continuous mixing models are studied. The concept of association between the lifetime and the performance of items in subpopulations is defined. It is shown that accounting for efficiency can change the shape of the mixture failure rate dramatically, especially when the lifetime and the performance of items in subpopulations are negatively associated. Furthermore, the proposed modelling method is applied to the case where the stress levels of the items' operating environments differ.
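
A minimal sketch of the mixture-failure-rate idea the abstract describes, using a two-point mixture of exponential subpopulations; the rates and weight below are illustrative choices, not the paper's model.

```python
import math

# Illustrative sketch (not the paper's model): failure rate of a two-point
# mixture of exponential subpopulations with rates lam1 < lam2 and weight p.
def mixture_failure_rate(t, p=0.5, lam1=0.5, lam2=2.0):
    surv = p * math.exp(-lam1 * t) + (1 - p) * math.exp(-lam2 * t)
    dens = p * lam1 * math.exp(-lam1 * t) + (1 - p) * lam2 * math.exp(-lam2 * t)
    return dens / surv  # h(t) = f_mix(t) / S_mix(t)

h0 = mixture_failure_rate(0.0)    # the weighted average of the rates, 1.25
h10 = mixture_failure_rate(10.0)  # approaches the strongest subpopulation's 0.5
```

Even in this toy case the mixture rate is decreasing although each subpopulation's rate is constant: weak items die out first, which is the shape-changing effect the paper studies.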

Performance Analysis of VaR and ES Based on Extreme Value Theory

  • Yeo, Sung-Chil
    • Communications for Statistical Applications and Methods / v.13 no.2 / pp.389-407 / 2006
  • Extreme value theory has been used widely in many areas of science and engineering to assess extreme events that are rare but have catastrophic consequences. Its potential has only recently been recognized in finance. In this paper, we provide an overview of extreme value theory for estimating and assessing value at risk and expected shortfall, which are methods for modelling and measuring extreme financial risks. We illustrate, through backtesting on empirical data, that the approach based on extreme value theory is very useful for estimating tail-related risk measures.
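
As a sketch of the peaks-over-threshold approach the abstract refers to, the standard formulas give VaR and ES once a generalized Pareto distribution has been fitted to losses above a threshold u; the shape xi, scale beta, and exceedance counts below are hypothetical values, not results from the paper.

```python
# Peaks-over-threshold VaR/ES sketch: assumes a generalized Pareto distribution
# with shape xi and scale beta has already been fitted to the n_exceed losses
# above threshold u, out of n observations. All parameter values are invented.
def pot_var_es(u, xi, beta, n, n_exceed, q):
    var_q = u + (beta / xi) * (((n / n_exceed) * (1 - q)) ** (-xi) - 1)
    es_q = var_q / (1 - xi) + (beta - xi * u) / (1 - xi)  # valid for xi < 1
    return var_q, es_q

var99, es99 = pot_var_es(u=2.0, xi=0.2, beta=0.6, n=1000, n_exceed=80, q=0.99)
```

Expected shortfall always exceeds VaR at the same level, since it averages the losses beyond the VaR quantile.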

Common Feature Analysis of Economic Time Series: An Overview and Recent Developments

  • Centoni, Marco;Cubadda, Gianluca
    • Communications for Statistical Applications and Methods / v.22 no.5 / pp.415-434 / 2015
  • In this paper we overview the literature on common features analysis of economic time series. Starting from the seminal contributions by Engle and Kozicki (1993) and Vahid and Engle (1993), we present and discuss the various notions that have been proposed to detect and model common cyclical features in macroeconometrics. In particular, we analyze in detail the link between common cyclical features and the reduced-rank regression model. We also illustrate similarities and differences between the common features methodology and other popular types of multivariate time series modelling. Finally, we discuss some recent developments in this area, such as the implications of common features for univariate time series models and the analysis of common autocorrelation in medium-to-large dimensional systems.

Nonparametric Bayesian methods: a gentle introduction and overview

  • MacEachern, Steven N.
    • Communications for Statistical Applications and Methods / v.23 no.6 / pp.445-466 / 2016
  • Nonparametric Bayesian methods have seen rapid and sustained growth over the past 25 years. We present a gentle introduction to the methods, motivating them through the twin perspectives of consistency and false consistency. We then step through the various constructions of the Dirichlet process, outline a number of the basic properties of this process and move on to the mixture of Dirichlet processes model, including a quick discussion of the computational methods used to fit the model. We touch on the main philosophies for nonparametric Bayesian data analysis and then reanalyze a famous data set. The reanalysis illustrates the concept of admissibility through a novel perturbation of the problem and data, showing the benefit of shrinkage estimation and the much greater benefit of nonparametric Bayesian modelling. We conclude with a too-brief survey of fancier nonparametric Bayesian methods.

A Study on Automatic Measurement of Pronunciation Accuracy of English Speech Produced by Korean Learners of English (한국인 영어 학습자의 발음 정확성 자동 측정방법에 대한 연구)

  • Yun, Weon-Hee;Chung, Hyun-Sung;Jang, Tae-Yeoub
    • Proceedings of the KSPS conference / 2005.11a / pp.17-20 / 2005
  • The purpose of this project is to develop a device that can automatically measure the pronunciation of English speech produced by Korean learners of English. Pronunciation proficiency will be measured largely in two areas: suprasegmental and segmental. In the suprasegmental area, intonation and word stress will be traced and compared with those of native speakers by statistical methods using tilt parameters. Phone durations are also examined to measure the naturalness of speakers' pronunciation; to this end, statistical duration modelling from a large speech database using CART will be considered. For segmental measurement of pronunciation, the acoustic probability of a phone, a byproduct of forced alignment, will be the basis for scoring the pronunciation accuracy of a phone. The final score will be fed back to learners to improve their pronunciation.
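
A minimal sketch of the segmental-scoring idea: a goodness-of-pronunciation-style score as the frame-averaged log acoustic probability of an aligned phone. The function name and the log-likelihood values below are invented for illustration; real values would come from the forced aligner.

```python
# GOP-style sketch (hypothetical, not the authors' exact formula): average the
# per-frame log acoustic probabilities of a phone segment from forced alignment.
def phone_score(frame_loglikes):
    return sum(frame_loglikes) / len(frame_loglikes)

# Invented frame log-likelihoods: a well-pronounced vs. a poorly pronounced phone.
good = phone_score([-1.2, -0.9, -1.1])   # higher (less negative) = closer fit
poor = phone_score([-4.0, -3.5, -4.2])
```

Normalizing by the number of frames keeps long phones from being penalized simply for their duration.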

  • PDF

Threshold-asymmetric volatility models for integer-valued time series

  • Kim, Deok Ryun;Yoon, Jae Eun;Hwang, Sun Young
    • Communications for Statistical Applications and Methods / v.26 no.3 / pp.295-304 / 2019
  • This article deals with threshold-asymmetric volatility models for over-dispersed and zero-inflated time series of count data. We introduce various threshold integer-valued autoregressive conditional heteroscedasticity (ARCH) models, incorporating over-dispersion and zero-inflation via conditional Poisson and negative binomial distributions. The EM algorithm is used to estimate parameters. Cholera data from Kolkata, India, from 2006 to 2011 are analyzed as a real application. To construct the threshold variable, both a time-varying local constant mean and the grand mean are adopted. The data application shows that the threshold model, as an asymmetric version, is useful in modelling the volatility of count time series.
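
A toy simulation can show the mechanism of a threshold integer-valued ARCH model: the autoregressive coefficient of the conditional Poisson mean switches regimes when the previous count exceeds a threshold. All parameter values below are illustrative assumptions, not estimates from the paper.

```python
import math
import random

# Toy threshold Poisson INARCH(1) sketch: lambda_t = omega + a * y_{t-1}, with
# the coefficient a switching when the previous count exceeds threshold r.
# Parameters are illustrative, not fitted values from the article.
def simulate_tingarch(n=200, omega=1.0, a_low=0.3, a_high=0.6, r=3, seed=1):
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's multiplication method; adequate for the small means used here.
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    y = [0]
    for _ in range(n - 1):
        a = a_high if y[-1] > r else a_low   # regime chosen by the threshold
        lam = omega + a * y[-1]              # conditional mean (= variance)
        y.append(poisson(lam))
    return y

series = simulate_tingarch()
```

Because the high-count regime uses a larger coefficient, bursts of large counts persist longer than lulls, which is the asymmetry the threshold model is designed to capture.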

Lightweight Self-consolidating Concrete with Expanded Shale Aggregates: Modelling and Optimization

  • Lotfy, Abdurrahmaan;Hossain, Khandaker M.A.;Lachemi, Mohamed
    • International Journal of Concrete Structures and Materials / v.9 no.2 / pp.185-206 / 2015
  • This paper presents statistical models developed to study the influence of key mix design parameters on the properties of lightweight self-consolidating concrete (LWSCC) with expanded shale (ESH) aggregates. Twenty LWSCC mixtures are designed and tested, and their responses (properties) are evaluated to analyze the influence of the mix design parameters and develop the models. The responses include slump flow diameter, V-funnel flow time, J-ring flow diameter, J-ring height difference, L-box ratio, filling capacity, sieve segregation, unit weight, and compressive strength. The developed models are valid for mixes with a water-to-binder ratio of 0.30-0.40, a high-range water-reducing admixture dosage of 0.3-1.2% (by total binder content), and a total binder content of 410-550 kg/m^3. The models are able to identify the influential mix design parameters and their interactions, which can be useful for reducing the test protocol needed for proportioning LWSCCs. Three industrial-class ESH-LWSCC mixtures are developed using the statistical models, and their performance is validated through test results with good agreement. The developed ESH-LWSCC mixtures satisfy the European EFNARC criteria for self-consolidating concrete.
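
The core of such response models is a regression of each measured property on the mix design parameters. As a minimal sketch of that idea (not the paper's actual models or data), the snippet below fits a first-order response surface y = b0 + b1*x1 + b2*x2 by ordinary least squares over a tiny invented factorial design.

```python
# Hypothetical response-surface sketch: ordinary least squares for
# y = b0 + b1*x1 + b2*x2 via the normal equations. The data points are
# invented for illustration, not the paper's 20 tested mixtures.
def fit_plane(points):
    # points: list of (x1, x2, y); build X'X and X'y, then solve by elimination.
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for x1, x2, y in points:
        row = [1.0, x1, x2]
        for i in range(3):
            b[i] += row[i] * y
            for j in range(3):
                A[i][j] += row[i] * row[j]
    # Gaussian elimination with partial pivoting.
    for c in range(3):
        piv = max(range(c, 3), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, 3):
            f = A[r][c] / A[c][c]
            for j in range(c, 3):
                A[r][j] -= f * A[c][j]
            b[r] -= f * b[c]
    beta = [0.0] * 3
    for r in (2, 1, 0):
        beta[r] = (b[r] - sum(A[r][j] * beta[j] for j in range(r + 1, 3))) / A[r][r]
    return beta

# Invented design points lying exactly on y = 2 + 3*x1 - 1*x2.
pts = [(0, 0, 2), (1, 0, 5), (0, 1, 1), (1, 1, 4)]
beta = fit_plane(pts)
```

The fitted coefficients directly quantify each parameter's influence on the response, which is how such models shrink the test protocol: untested mixes can be screened from the fitted surface.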

Statistical analysis issues for neuroimaging MEG data (뇌영상 MEG 데이터에 대한 통계적 분석 문제)

  • Kim, Jaehee
    • The Korean Journal of Applied Statistics / v.35 no.1 / pp.161-175 / 2022
  • Oscillatory magnetic fields produced in the brain by neuronal activity can be measured by sensors. Magnetoencephalography (MEG) is a non-invasive technique for recording such neuronal activity; its excellent temporal resolution and fair spatial resolution give information about the brain's functional activity. The high spatial resolution of MEG is likely to provide information on in-depth brain functioning and on the factors underlying changes in neuronal waves in some diseases, under either resting or task states. This review is a comprehensive report introducing statistical models for MEG data, including graphical network modelling. It is also worth noting that statisticians should play an important role in the brain science field.

Calibration of Stellar Parameters of 85 Peg System

  • Bach, Kiehunn;Kim, Yong-Cheol;Demarque, Pierre
    • Journal of Astronomy and Space Sciences / v.24 no.1 / pp.31-38 / 2007
  • We have investigated the evolutionary status of 85 Peg within the framework of standard evolutionary theory. 85 Peg has been known as a visual and spectroscopic binary system in the solar neighborhood. In spite of accurate information on the total mass (~1.5 M⊙) and the distance (~12 pc) from the HIPPARCOS parallax, the individual masses, and therefore the evolutionary status of the system, have remained undetermined. Moreover, the coupled uncertainties of chemical composition and age make matters worse in predicting the evolutionary status of the system. Nevertheless, we computed the various possible models for 85 Peg and then calibrated the stellar parameters by adjusting them to recent observational data. Our modelling computation includes recently updated input physics and stellar theory, such as opacity, the equation of state, and chemical diffusion. Through a statistical assessment, we derived a confident parameter set as the best solution, the one that minimized χ² within the observational error domain. Most notably, we found that 85 Peg is not a binary but a triple system with an unseen companion, 85 Peg Bb (~0.16 M⊙). The aims of the present paper are (1) to provide a complete modelling of the stellar system based on evolutionary theory, and (2) to constrain physical dimensions such as mass, metallicity, and age.
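
The statistical assessment described above amounts to a χ² grid search over candidate stellar models. The snippet below sketches that generic idea; the observables, uncertainties, and model grid are invented for illustration and are not the paper's values.

```python
# Generic chi-square calibration sketch: choose the parameter set whose model
# predictions minimize chi^2 = sum ((obs - model) / sigma)^2.
# All observables, sigmas, and grid entries below are hypothetical.
def chi_square(obs, model, sigma):
    return sum(((o - m) / s) ** 2 for o, m, s in zip(obs, model, sigma))

observed = [5750.0, 4.45]            # e.g. Teff [K] and log g (invented values)
sigmas = [80.0, 0.10]                # observational uncertainties (invented)
grid = {                             # (mass, age) labels -> predicted observables
    ("m=0.85", "age=8"): [5600.0, 4.50],
    ("m=0.88", "age=9"): [5740.0, 4.46],
    ("m=0.90", "age=10"): [5900.0, 4.40],
}
best = min(grid, key=lambda k: chi_square(observed, grid[k], sigmas))
```

Scaling each residual by its observational error weights precise observables more heavily, which is what confines the best-fit solution to the observational error domain.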