Title/Summary/Keyword: Probabilistic Statistics


Estimation for scale parameter of type-I extreme value distribution

  • Choi, Byungjin
    • Journal of the Korean Data and Information Science Society, v.26 no.2, pp.535-545, 2015
  • In a wide range of applications, including hydrology, the type-I extreme value distribution has been used extensively as a probabilistic model for analyzing extreme events. In this paper, we introduce methods for estimating the scale parameter of the type-I extreme value distribution. A simulation study is performed to compare the estimators in terms of mean squared error and bias, and the obtained results are provided.
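
As a rough illustration of the kind of comparison described above, the sketch below computes a moment-based estimator and the SciPy maximum likelihood estimator of the Gumbel (type-I extreme value) scale parameter on simulated samples and reports their bias and mean squared error. The sample size and true parameter values are arbitrary placeholders, and these are not necessarily the estimators studied in the paper.

```python
# Sketch: moment and ML estimators of the Gumbel (type-I extreme value) scale parameter.
# Illustrative only; the specific estimators compared in the paper may differ.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_loc, true_scale, n, reps = 0.0, 2.0, 50, 2000

mom_est, mle_est = [], []
for _ in range(reps):
    x = rng.gumbel(true_loc, true_scale, size=n)
    # Method of moments: Var(X) = (pi^2 / 6) * scale^2  =>  scale = sqrt(6) * sd / pi
    mom_est.append(np.sqrt(6.0) * x.std(ddof=1) / np.pi)
    # Maximum likelihood via SciPy's built-in fit
    _, scale_hat = stats.gumbel_r.fit(x)
    mle_est.append(scale_hat)

for name, est in [("moments", np.array(mom_est)), ("MLE", np.array(mle_est))]:
    bias = est.mean() - true_scale
    mse = np.mean((est - true_scale) ** 2)
    print(f"{name}: bias={bias:.4f}, MSE={mse:.4f}")
```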

Application of Logit Model in Qualitative Dependent Variables (로짓모형을 이용한 질적 종속변수의 분석)

  • Lee, Kil-Soon; Yu, Wann
    • Journal of Families and Better Life, v.10 no.1 s.19, pp.131-138, 1992
  • Regression analysis has become a standard statistical tool in the behavioral sciences. Because of its widespread popularity, regression has often been misused. Such is the case when the dependent variable is a qualitative measure rather than a continuous, interval measure. Regression estimates with a qualitative dependent variable do not meet the assumptions underlying regression and can lead to serious errors in standard statistical inference. The logit model, which is derived from probabilistic choice theory, is recommended as an alternative to the regression model for qualitative dependent variables; researchers can employ it to measure the relationship between independent variables and a qualitative dependent variable. Coefficients in the logit model are typically estimated by maximum likelihood estimation, in contrast to the ordinary regression model, which is estimated by least squares. Goodness of fit in the logit model is based on the likelihood ratio statistic, and the t-statistic is used for testing the null hypothesis.
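
As a minimal illustration of this workflow, the sketch below fits a binary logit model by maximum likelihood with statsmodels and reports the likelihood-ratio goodness-of-fit statistic and the per-coefficient test statistics; the data are simulated and all settings are placeholders, not the paper's application.

```python
# Sketch: binary logit fitted by maximum likelihood, with a likelihood-ratio
# goodness-of-fit statistic (simulated data, for illustration only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=(n, 2))
logits = 0.5 + 1.2 * x[:, 0] - 0.8 * x[:, 1]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))

X = sm.add_constant(x)                 # intercept + two predictors
res = sm.Logit(y, X).fit(disp=False)   # coefficients via maximum likelihood

print(res.params)               # estimated coefficients
print(res.llr, res.llr_pvalue)  # likelihood-ratio test against the intercept-only model
print(res.tvalues)              # per-coefficient test statistics (coef / std err)
```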


Model-based inverse regression for mixture data

  • Choi, Changhwan; Park, Chongsun
    • Communications for Statistical Applications and Methods, v.24 no.1, pp.97-113, 2017
  • This paper proposes a method for sufficient dimension reduction (SDR) of mixture data. We consider mixture data containing more than one component, each with a distinct central subspace. We adopt the approach of model-based sliced inverse regression (MSIR) for mixture data in a simple and intuitive manner, employing mixture probabilistic principal component analysis (MPPCA) to estimate each central subspace and to cluster the data points. Results from simulation studies and a real data set show that our method recovers the appropriate central subspaces well and is robust to the number of slices chosen. Discussions of root selection, estimation accuracy, and classification, together with initial-value issues of MPPCA and related simulation results, are also provided.
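
The MSIR/MPPCA machinery of the paper is too involved for a short snippet, so the sketch below implements only plain sliced inverse regression (SIR) for a single-component data set, the building block that the mixture approach extends. The function name `sir_directions` and the simulated example are illustrative assumptions.

```python
# Sketch: plain sliced inverse regression (SIR) for estimating a central subspace.
# Single-component building block only; the paper extends this to mixture data
# via model-based SIR with MPPCA.
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=1):
    n, p = X.shape
    # Standardize predictors
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    L = np.linalg.cholesky(np.linalg.inv(cov))
    Z = (X - mu) @ L
    # Slice on the response and accumulate the weighted covariance of slice means
    order = np.argsort(y)
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Leading eigenvectors of M, mapped back to the original predictor scale
    eigvals, eigvecs = np.linalg.eigh(M)
    beta = L @ eigvecs[:, ::-1][:, :n_dirs]
    return beta, eigvals[::-1]

# Illustrative use on simulated data: y depends on X only through X @ b
rng = np.random.default_rng(2)
X = rng.normal(size=(400, 5))
b = np.array([1.0, -1.0, 0.0, 0.0, 0.0])
xb = X @ b
y = xb + 0.25 * xb ** 3 + 0.1 * rng.normal(size=400)
beta_hat, _ = sir_directions(X, y, n_slices=8, n_dirs=1)
print(beta_hat / np.linalg.norm(beta_hat))   # should align with +/- b (normalized)
```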

Multinomial Kernel Logistic Regression via Bound Optimization Approach

  • Shim, Joo-Yong; Hong, Dug-Hun; Kim, Dal-Ho; Hwang, Chang-Ha
    • Communications for Statistical Applications and Methods, v.14 no.3, pp.507-516, 2007
  • Multinomial logistic regression is probably the most popular representative of probabilistic discriminative classifiers for multiclass classification problems. In this paper, a kernel variant of multinomial logistic regression is proposed by combining Newton's method with a bound optimization approach. This formulation allows us to apply highly efficient approximation methods that effectively overcome the conceptual and numerical problems of standard multiclass kernel classifiers. We also provide an approximate cross-validation (ACV) method for choosing the hyperparameters that affect the performance of the proposed approach. Experimental results are then presented to indicate the performance of the proposed procedure.
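
The Newton/bound-optimization solver itself is beyond a short snippet, but the sketch below shows the general shape of a kernelized multinomial logistic classifier: an RBF kernel matrix serves as the feature representation and scikit-learn's multinomial logistic regression is fit on it. The kernel width and regularization constant are arbitrary placeholders rather than hyperparameters chosen by the paper's ACV method.

```python
# Sketch: a kernelized multinomial logistic classifier built from an RBF kernel
# matrix plus scikit-learn's logistic regression. The paper's bound-optimization
# solver and ACV hyperparameter selection are not reproduced here.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

gamma, C = 0.5, 1.0                            # placeholder hyperparameters
K_tr = rbf_kernel(X_tr, X_tr, gamma=gamma)     # kernel matrix as features
K_te = rbf_kernel(X_te, X_tr, gamma=gamma)

clf = LogisticRegression(C=C, max_iter=1000).fit(K_tr, y_tr)
print("test accuracy:", clf.score(K_te, y_te))
```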

A Bayesian Method for Narrowing the Scope of Variable Selection in Binary Response Logistic Regression

  • Kim, Hea-Jung; Lee, Ae-Kyung
    • Journal of Korean Society for Quality Management, v.26 no.1, pp.143-160, 1998
  • This article is concerned with the selection of subsets of predictor variables to be included in building a binary response logistic regression model. It is based on a Bayesian approach, intended to propose and develop a procedure that uses probabilistic considerations for selecting promising subsets. This procedure reformulates the logistic regression setup as a hierarchical normal mixture model by introducing a set of hyperparameters that are used to identify subset choices. It exploits the fact that the cdf of the logistic distribution is approximately equivalent to that of the $t_{(8)}/0.634$ distribution. The appropriate posterior probability of each subset of predictor variables is obtained by the Gibbs sampler, which samples indirectly from the multinomial posterior distribution on the set of possible subset choices. Thus, in this procedure, the most promising subset of predictors can be identified as the one with the highest posterior probability. To highlight the merit of this procedure, a couple of illustrative numerical examples are given.
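
The t-approximation this procedure rests on can be checked numerically; the sketch below compares the standard logistic cdf with the cdf of a $t_{(8)}$ variable scaled by 1/0.634. The Gibbs sampler over subsets is not reproduced here.

```python
# Sketch: numerical check that the logistic cdf is close to the cdf of t_(8)/0.634,
# the approximation underlying the hierarchical normal-mixture reformulation.
import numpy as np
from scipy import stats

x = np.linspace(-8, 8, 2001)
logistic_cdf = stats.logistic.cdf(x)
t8_scaled_cdf = stats.t.cdf(0.634 * x, df=8)   # X ~ t_8 / 0.634  =>  P(X <= x) = T_8(0.634 x)

print("max |difference|:", np.max(np.abs(logistic_cdf - t8_scaled_cdf)))
```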


The Study on Using Spreadsheet in Probability and Statistics Area of High School (고등학교 확률 통계 영역에서 스프레드시트 활용에 대한 연구)

  • Lee, Jong-Hak
    • School Mathematics, v.13 no.3, pp.363-384, 2011
  • This study is based on the recognition that school mathematics education should reinforce the heuristic and constructive aspects related to discovering mathematical rules and understanding mathematical concepts from real-world situations, as well as the deductive and formal aspects that emphasize mathematical content precisely. One class of 11th-grade students of average ability from a city high school was chosen. They were given time to learn various functions of Excel in regular classes of the "Information Society and Computer" subject, and they had no difficulty using cells, mathematical functions, and statistical functions in the spreadsheet. The experiment was performed for six weeks, with two hours of classes per week. Considering the results of this research, teaching materials using spreadsheets play an important role in helping students experience probabilistic and statistical reasoning and construct mathematical thinking. This implies that teaching materials using spreadsheets provide students with an opportunity to interact with probabilistic and statistical situations by adopting technology that encourages them to observe and experience various aspects of the real world in authentic situations.


Asymptotic Test for Dimensionality in Sliced Inverse Regression (분할 역회귀모형에서 차원결정을 위한 점근검정법)

  • Park, Chang-Sun; Kwak, Jae-Guen
    • The Korean Journal of Applied Statistics, v.18 no.2, pp.381-393, 2005
  • As a promising technique for dimension reduction in regression analysis, sliced inverse regression (SIR) and an associated chi-square test for dimensionality were introduced by Li (1991). However, Li's test requires the assumption of normality for the predictors and has been found to depend heavily on the number of slices. We provide a unified asymptotic test for determining the dimensionality of the SIR model that is based on probabilistic principal component analysis and is free of the normality assumption on the predictors. Illustrative results with simulated and real examples are also provided.
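
For context, the sketch below implements the sequential chi-square test of Li (1991) that the paper improves upon: SIR eigenvalues are computed and the statistic based on the smallest ones is compared with a chi-square reference to choose the dimension. It assumes normal predictors, and the helper names and simulated data are illustrative; the paper's own asymptotic test is not reproduced.

```python
# Sketch: Li's (1991) sequential chi-square test for the SIR dimension, which assumes
# normal predictors; shown as the baseline that the paper's asymptotic test replaces.
import numpy as np
from scipy import stats

def sir_eigenvalues(X, y, n_slices=10):
    n, p = X.shape
    cov = np.cov(X, rowvar=False)
    L = np.linalg.cholesky(np.linalg.inv(cov))
    Z = (X - X.mean(axis=0)) @ L                  # standardized predictors
    M = np.zeros((p, p))
    for idx in np.array_split(np.argsort(y), n_slices):
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)      # weighted covariance of slice means
    return np.linalg.eigvalsh(M)[::-1]            # eigenvalues, largest first

def li_dimension_test(X, y, n_slices=10, alpha=0.05):
    n, p = X.shape
    lam = sir_eigenvalues(X, y, n_slices)
    for d in range(p):
        stat = n * lam[d:].sum()                  # test H0: dimension = d
        df = (p - d) * (n_slices - d - 1)
        if stat <= stats.chi2.ppf(1 - alpha, df):
            return d
    return p

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 6))
y = X[:, 0] + X[:, 1] + 0.2 * rng.normal(size=500)   # true dimension 1
print("estimated dimension:", li_dimension_test(X, y, n_slices=8))
```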

Reliability-based assessment of damaged concrete buildings

  • Sakka, Zafer I.; Assakkaf, Ibrahim A.; Qazweeni, Jamal S.
    • Structural Engineering and Mechanics, v.65 no.6, pp.751-760, 2018
  • Damage to concrete structures due to aging and other factors can be a serious and immense matter. Selecting the most viable and practical repair and strengthening techniques is a relatively difficult task with traditional methods of structural analysis, because the traditional methods used for assessing aging structures cannot fully account for the randomness in strength, loads, and cost. This paper presents a reliability-based methodology for assessing reinforced concrete members. The methodology is based on probabilistic analysis, using the statistics of the random variables in the performance-function equations. Principles of reliability updating are used in the assessment process, as new information is taken into account and combined with prior probabilistic models. The methodology yields a reliability index ${\beta}$ that can be used to assess a structural component by comparing its value with a standard value. In addition, it yields partial safety factor values that can be used for strengthening the R/C elements of an existing structure. The reliability indices and partial safety factors are computed using the first-order reliability method (FORM) and Monte Carlo simulation.
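
As a minimal sketch of the reliability quantities mentioned above, the snippet below evaluates the reliability index for a simple performance function g = R - S with normal resistance and load, where the closed form coincides with FORM, and compares the resulting failure probability with a crude Monte Carlo estimate. The means and standard deviations are placeholders, not values from the assessed buildings.

```python
# Sketch: reliability index beta for g = R - S with normal R (resistance) and S (load),
# via the closed form (exact, and equal to FORM for this linear normal case) and
# crude Monte Carlo. Parameter values are placeholders for illustration.
import numpy as np
from scipy.stats import norm

mu_R, sd_R = 40.0, 5.0   # resistance statistics
mu_S, sd_S = 25.0, 4.0   # load statistics

# Closed-form reliability index and failure probability
beta = (mu_R - mu_S) / np.hypot(sd_R, sd_S)
pf_exact = norm.cdf(-beta)

# Crude Monte Carlo estimate of P(g <= 0)
rng = np.random.default_rng(4)
n = 1_000_000
g = rng.normal(mu_R, sd_R, n) - rng.normal(mu_S, sd_S, n)
pf_mc = np.mean(g <= 0)

print(f"beta = {beta:.3f}, Pf(exact) = {pf_exact:.3e}, Pf(MC) = {pf_mc:.3e}")
```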

Probabilistic Prediction of Stability of Ship by Risk Based Approach

  • Long, Zhan-Jun; Lee, Seung-Keon; Lee, Sung-Jong; Jeong, Jae-Hun
    • Journal of Navigation and Port Research, v.33 no.4, pp.255-261, 2009
  • Prediction of the stability of ships is very complex in reality. In this paper, a risk-based approach is applied to predict the probability of capsize for a certified ship, which is affected by the forces of the sea, especially the wave loading. A safety assessment and a risk analysis process are also applied for the probabilistic prediction of ship stability. The probability of ships encountering different waves at sea is calculated from existing statistical data and risk-based models. Finally, the ship capsizing probability is calculated from a single-degree-of-freedom (SDF) rolling differential equation and the basin erosion theory of nonlinear dynamics. The calculation results show that the survival probabilities of a ship excited by the forces of the seas, especially in beam seas, can be predicted by the risk-based method.
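
A rough sketch of the single-degree-of-freedom rolling model mentioned above: the snippet integrates a nonlinear rolling equation under harmonic wave excitation and flags capsize when the roll angle exceeds the angle of vanishing stability. The damping, restoring, and forcing coefficients are invented placeholders, not the paper's values, and the basin-erosion analysis is not reproduced.

```python
# Sketch: capsize check for a single-degree-of-freedom nonlinear rolling equation
#   phi'' + 2*zeta*phi' + phi*(1 - (phi/phi_v)**2) = F*cos(omega*t)   (nondimensional)
# Capsize is flagged when |phi| exceeds the angle of vanishing stability phi_v.
# All coefficients are illustrative placeholders.
import numpy as np
from scipy.integrate import solve_ivp

zeta, phi_v = 0.05, 0.6          # damping ratio, angle of vanishing stability (rad)
omega, F = 0.9, 0.08             # wave frequency and forcing amplitude

def rolling(t, y):
    phi, dphi = y
    restoring = phi * (1.0 - (phi / phi_v) ** 2)          # softening restoring moment
    return [dphi, -2.0 * zeta * dphi - restoring + F * np.cos(omega * t)]

sol = solve_ivp(rolling, (0.0, 400.0), [0.0, 0.0], max_step=0.05)
capsized = np.any(np.abs(sol.y[0]) > phi_v)
print("capsized:", capsized, "| max roll (rad):", np.abs(sol.y[0]).max())
```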

Rapid seismic vulnerability assessment by new regression-based demand and collapse models for steel moment frames

  • Kia, M.; Banazadeh, M.; Bayat, M.
    • Earthquakes and Structures, v.14 no.3, pp.203-214, 2018
  • Predictive demand and collapse fragility functions are two essential components of probabilistic seismic demand analysis; they are commonly developed from statistics that require enormous, costly, and time-consuming data gathering. Although this approach might be justified for research purposes, it is not appealing for practical applications because of its computational cost. Thus, in this paper, Bayesian regression-based demand and collapse models are proposed to eliminate the need for time-consuming analyses. The demand model, developed in the form of a linear equation, predicts the overall maximum inter-story drift of low- to mid-rise regular steel moment resisting frames (SMRFs), while the collapse model, mathematically expressed by a lognormal cumulative distribution function, gives the collapse occurrence probability for a given spectral acceleration at the fundamental period of the structure. Next, as an application, the proposed demand and collapse functions are implemented in a seismic fragility analysis to develop fragility and, consequently, seismic demand curves for three example buildings. The accuracy provided by the proposed models, together with the reduction in computation, is compared with results obtained directly from incremental dynamic analysis, which is a computer-intensive procedure.
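
A small sketch of the two model forms named in the abstract: a log-linear demand model relating spectral acceleration to maximum inter-story drift, and a lognormal-cdf collapse fragility. The coefficients, median capacity, and dispersion below are invented placeholders, not values fitted in the paper.

```python
# Sketch: (1) a log-linear demand model ln(drift) = a + b*ln(Sa), and
# (2) a lognormal collapse fragility P(collapse | Sa) = Phi((ln Sa - ln theta) / beta).
# All coefficients below are invented placeholders, not fitted values from the paper.
import numpy as np
from scipy.stats import norm

a, b = -4.2, 1.0             # demand model: ln(max inter-story drift) = a + b * ln(Sa)
theta, beta = 2.0, 0.45      # median collapse capacity (g) and lognormal dispersion

def median_drift(sa):
    return np.exp(a + b * np.log(sa))

def p_collapse(sa):
    return norm.cdf((np.log(sa) - np.log(theta)) / beta)

for sa in [0.2, 0.5, 1.0, 2.0]:
    print(f"Sa = {sa:>4.1f} g | median drift = {median_drift(sa):.4f} | "
          f"P(collapse) = {p_collapse(sa):.3f}")
```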