• Title/Summary/Keyword: General Linear Models

A Bayesian Approach to Linear Calibration Design Problem

  • Kim, Sung-Chul
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.20 no.3
    • /
    • pp.105-122
    • /
    • 1995
  • Based on linear models, the inference about the true measurement x_f and the optimal designs x (n×1) for the calibration experiments are considered via Bayesian statistical decision analysis. The posterior distribution of x_f given the observation y_f (q×1) and the calibration experiment is obtained with normal priors for x_f and for the model parameters (α, β). This posterior distribution is not of any known form, which leads to the use of numerical integration or an approximation for the calculation of the overall expected loss. The general structure of the expected loss function is characterized in the form of a conjecture. A near-optimal design is obtained through an approximation of the conditional covariance matrix of the joint distribution of (x_f, y_f^T)^T. Numerical results for the univariate case are given to demonstrate the conjecture and to evaluate the approximation.
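
As a rough illustration of the numerical step the abstract describes, the following sketch approximates the posterior of an unknown measurement x_f in the univariate case by Monte Carlo integration over the posterior of (α, β); the data, priors, and the known-σ simplification are assumptions, not the paper's exact setup.

```python
import numpy as np

# Assumed univariate calibration model: y = alpha + beta * x + eps, eps ~ N(0, sigma^2),
# with sigma treated as known for simplicity.
rng = np.random.default_rng(0)

# Hypothetical calibration experiment (design points x, observed responses y).
x = np.linspace(0.0, 10.0, 11)
sigma = 0.5
y = 1.0 + 2.0 * x + rng.normal(0.0, sigma, size=x.size)

# Posterior of (alpha, beta) under a vague prior: N(b_hat, sigma^2 (X'X)^{-1}).
X = np.column_stack([np.ones_like(x), x])
XtX_inv = np.linalg.inv(X.T @ X)
b_hat = XtX_inv @ X.T @ y
draws = rng.multivariate_normal(b_hat, sigma**2 * XtX_inv, size=5000)

# New observation y_f whose true measurement x_f is unknown, with prior x_f ~ N(mu0, tau0^2).
y_f, mu0, tau0 = 11.0, 5.0, 3.0

# Unnormalized posterior p(x_f | y_f, data) on a grid:
# prior(x_f) * E_{alpha,beta|data}[ N(y_f | alpha + beta * x_f, sigma^2) ],
# with the expectation approximated by averaging over the posterior draws.
grid = np.linspace(mu0 - 4 * tau0, mu0 + 4 * tau0, 400)
prior = np.exp(-0.5 * ((grid - mu0) / tau0) ** 2)
pred_mean = draws[:, [0]] + draws[:, [1]] * grid          # shape (5000, 400)
lik = np.exp(-0.5 * ((y_f - pred_mean) / sigma) ** 2).mean(axis=0)
weights = prior * lik
weights /= weights.sum()                                   # normalize on the grid

print("approximate posterior mean of x_f:", (grid * weights).sum())
```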

Optimal Pipe Replacement Analysis with a New Pipe Break Prediction Model (새로운 파괴예측 모델을 이용한 상수도 관의 최적 교체)

  • Park, Suwan;Loganathan, G.V.
    • Journal of Korean Society of Water and Wastewater
    • /
    • v.16 no.6
    • /
    • pp.710-716
    • /
    • 2002
  • A General Pipe Break Prediction Model that incorporates linear and exponential models in its form is developed. The model can fit pipe break trends that are linear, exponential, or in between, by using a weighting factor. The weighting factor is adjusted to obtain the best model, i.e., the one that minimizes the sum of squared errors. The model essentially fits a best curve (or line) through the "cumulative number of pipe breaks" versus "break times since installation of a pipe" data points, and therefore avoids over-predicting the future number of pipe breaks compared to the conventional exponential model. The optimal replacement time equation is derived using the Threshold Break Rate equation of Loganathan et al. (2002).
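
One plausible reading of the weighted linear/exponential form is sketched below: the cumulative break count is modeled as a convex combination of a line and an exponential, with the weighting factor chosen by least squares. The functional form, data, and starting values are illustrative assumptions, not the paper's exact model.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical break record: t = years since installation, n = cumulative number of breaks.
t = np.array([2.0, 5.0, 8.0, 11.0, 14.0, 17.0, 20.0])
n = np.array([1, 2, 4, 6, 9, 13, 19], dtype=float)

def model(params, t):
    # Assumed form: a convex combination of a linear and an exponential trend,
    # with weighting factor w in [0, 1] (w = 0: purely linear, w = 1: purely exponential).
    a, b, c, d, w = params
    return (1.0 - w) * (a + b * t) + w * c * np.exp(d * t)

def residuals(params, t, n):
    return model(params, t) - n

fit = least_squares(
    residuals, x0=[0.0, 0.5, 1.0, 0.1, 0.5],
    bounds=([-np.inf, -np.inf, 0.0, 0.0, 0.0], [np.inf, np.inf, np.inf, 1.0, 1.0]),
    args=(t, n),
)
a, b, c, d, w = fit.x
print(f"weighting factor w = {w:.2f}, SSE = {np.sum(fit.fun**2):.3f}")
```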

NONNEGATIVE MINIMUM BIASED ESTIMATION IN VARIANCE COMPONENT MODELS

  • Lee, Jong-Hoo
    • East Asian mathematical journal
    • /
    • v.5 no.1
    • /
    • pp.95-110
    • /
    • 1989
  • In a general variance component model, nonnegative quadratic estimators of the components of variance are considered which are invariant with respect to mean value translation and have minimum bias (analogously to the estimation theory of mean value parameters). Here the minimum is taken over an appropriate cone of positive semidefinite matrices, after a reduction by invariance. Among these estimators, which always exist, the one of minimum norm is characterized. This characterization is achieved by systems of necessary and sufficient conditions and by a cone-restricted pseudoinverse. In models where the decomposing covariance matrices span a commutative quadratic subspace, a representation of the considered estimator is derived that merely requires solving an ordinary convex quadratic optimization problem. As an example, we present the two-way nested classification random model. An unbiased estimator is derived for the mean squared error of any unbiased or biased estimator that is expressible as a linear combination of independent sums of squares. Further, it is shown that, for the classical balanced variance component models, this estimator is the best invariant unbiased estimator of the variance of the ANOVA estimator and of the mean squared error of the nonnegative minimum biased estimator. As an example, the balanced two-way nested classification model with random effects is considered.
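
The following sketch illustrates nonnegativity-constrained variance component estimation for a balanced two-way nested random model via nonnegative least squares on the ANOVA mean squares; it only illustrates the nonnegativity constraint, not the paper's cone-restricted minimum-bias construction.

```python
import numpy as np
from scipy.optimize import nnls

# Balanced two-way nested random model y_ijk = mu + a_i + b_ij + e_ijk (illustration only).
rng = np.random.default_rng(1)
a, b, n = 5, 4, 3                       # levels of A, levels of B within A, replicates
sa2, sb2, se2 = 4.0, 2.0, 1.0           # true variance components (assumed)

A = rng.normal(0, np.sqrt(sa2), size=(a, 1, 1))
B = rng.normal(0, np.sqrt(sb2), size=(a, b, 1))
y = 10.0 + A + B + rng.normal(0, np.sqrt(se2), size=(a, b, n))

# ANOVA mean squares.
m_ij = y.mean(axis=2); m_i = y.mean(axis=(1, 2)); m = y.mean()
msa = b * n * np.sum((m_i - m) ** 2) / (a - 1)
msb = n * np.sum((m_ij - m_i[:, None]) ** 2) / (a * (b - 1))
mse = np.sum((y - m_ij[:, :, None]) ** 2) / (a * b * (n - 1))

# Expected mean squares: E[MSA] = se2 + n*sb2 + b*n*sa2, E[MSB(A)] = se2 + n*sb2, E[MSE] = se2.
C = np.array([[b * n, n, 1.0],
              [0.0,   n, 1.0],
              [0.0, 0.0, 1.0]])
theta, _ = nnls(C, np.array([msa, msb, mse]))   # nonnegative solution of C @ theta = mean squares
print("estimated (sigma_a^2, sigma_b^2, sigma_e^2):", np.round(theta, 3))
```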

Negative binomial loglinear mixed models with general random effects covariance matrix

  • Sung, Youkyung;Lee, Keunbaik
    • Communications for Statistical Applications and Methods
    • /
    • v.25 no.1
    • /
    • pp.61-70
    • /
    • 2018
  • Modeling of the random effects covariance matrix in generalized linear mixed models (GLMMs) is an issue in the analysis of longitudinal categorical data because the covariance matrix can be high-dimensional and its estimate must be positive definite. To satisfy these constraints, we consider the autoregressive and moving average Cholesky decomposition (ARMACD) to model the covariance matrix. The ARMACD provides a flexible decomposition of the covariance matrix into generalized autoregressive parameters, generalized moving average parameters, and innovation variances. In this paper, we analyze longitudinal count data with overdispersion using GLMMs. We propose negative binomial loglinear mixed models to analyze longitudinal count data and present modeling of the random effects covariance matrix using the ARMACD. Epilepsy data are analyzed using the proposed model.
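
A minimal sketch of the autoregressive part of such a Cholesky-type decomposition is given below: it extracts generalized autoregressive parameters and innovation variances from a covariance matrix. The moving-average part of the ARMACD and its embedding in a negative binomial loglinear mixed model are omitted.

```python
import numpy as np

def ar_cholesky_decomposition(sigma):
    """Modified Cholesky decomposition Sigma = L D L' with L unit lower triangular.

    The strictly lower-triangular entries of T = L^{-1}, with sign flipped, play the
    role of generalized autoregressive parameters, and diag(D) are the innovation
    variances; the moving-average extension of the full ARMACD is not sketched here.
    """
    C = np.linalg.cholesky(sigma)            # Sigma = C C'
    d = np.diag(C)
    L = C / d                                # unit lower triangular factor
    D = d ** 2                               # innovation variances
    T = np.linalg.inv(L)
    garp = -np.tril(T, k=-1)                 # generalized autoregressive parameters
    return garp, D

# Example with an AR(1)-like covariance matrix (illustrative values).
rho = 0.6
t = np.arange(4)
sigma = rho ** np.abs(t[:, None] - t[None, :])
garp, innov = ar_cholesky_decomposition(sigma)
print("generalized autoregressive parameters:\n", np.round(garp, 3))
print("innovation variances:", np.round(innov, 3))
```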

An Evaluation of the Streetscape According to the Change of Moving Speed -Through the Experiment of the Virtual Reality- (이동속도의 변화에 따른 가로경관의 평가 -Virtual Reality를 이용한 실험-)

  • 정재희
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.28 no.5
    • /
    • pp.15-25
    • /
    • 2000
  • The purpose of this paper is to examine the visual evaluation structure of the formal changes of a streetscape under different moving speeds, for two alternative control plans for building height and setback regulation. A virtual reality system is used as the experimental tool. Eighty-two experimental models are built considering the building height and setback regulations, based on the case of Midou-suji street in Osaka City, Japan, and ten typical models are selected through a pre-experiment. Since the changes of the landscape structure consist of the height and the setback of the building, four evaluation items are set: continuity, order, openness, and preference. Because the eighty-two landscape models are too many to apply in this experiment, the ten representative models are used. The mean difference test, discriminant analysis, and multiple linear regression were used for the statistical analysis. The results of this study are as follows: 1) A difference in the evaluation structure among the experimental models is found. 2) From the sketch analysis and interviews, a difference in the cognition structure by moving speed and by alternative is found. 3) From the discriminant and regression analyses, the evaluation of continuity is found to decrease as the moving speed changes from walking speed to driving speed. We suggest that further experiments be made with a variety of groups and models, so that more general and universal results can be obtained.

A Computational Efficient General Wheel-Rail Contact Detection Method

  • Pombo Joao;Ambrosio Jorge
    • Journal of Mechanical Science and Technology
    • /
    • v.19 no.spc1
    • /
    • pp.411-421
    • /
    • 2005
  • The development and implementation of an appropriate methodology for the accurate geometric description of track models is proposed in the framework of multibody dynamics; it includes the representation of the track spatial geometry and its irregularities. The wheel and rail surfaces are parameterized to represent any wheel and rail profiles obtained from direct measurements or design requirements. A fully generic methodology is proposed to determine, online during the dynamic simulation, the coordinates of the contact points, even for the most general three-dimensional motion of the wheelset with respect to the rails. This methodology is applied to study specific issues in railway dynamics such as the flange contact problem and lead and lag contact configurations. A formulation for the description of the normal contact forces that result from the wheel-rail interaction is also presented. The tangential creep forces and moments that develop in the wheel-rail contact area are evaluated using the Kalker linear theory, a heuristic force method, and the Polach formulation. The methodology is implemented in a general multibody code. The discussion is supported through the application of the methodology to the railway vehicle ML95, used by the Lisbon metro company.
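
A heavily simplified two-dimensional sketch of the contact-point search is given below: both profiles are parameterized by a lateral coordinate and the contact point is taken where the vertical gap is minimal. The profile functions are hypothetical stand-ins, and the paper's fully three-dimensional online formulation and creep-force models are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def wheel_profile(s, y_wheelset=0.0):
    # Coned wheel tread (1:20 conicity) shifted by the wheelset lateral displacement.
    return 0.4305 - 0.05 * (s - y_wheelset)

def rail_profile(s):
    # Rail head approximated by a circular arc of radius 0.3 m (illustrative geometry).
    return 0.43 - (0.3 - np.sqrt(np.maximum(0.3**2 - s**2, 0.0)))

def vertical_gap(s, y_wheelset):
    # Gap between wheel and rail surfaces at lateral coordinate s.
    return wheel_profile(s, y_wheelset) - rail_profile(s)

# Contact point = lateral position minimizing the vertical gap for a given wheelset shift.
res = minimize_scalar(vertical_gap, bounds=(-0.03, 0.03), args=(0.005,), method="bounded")
print(f"contact point at s = {res.x * 1000:.2f} mm, gap = {res.fun * 1000:.3f} mm")
```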

Parameter estimation for the imbalanced credit scoring data using AUC maximization (AUC 최적화를 이용한 낮은 부도율 자료의 모수추정)

  • Hong, C.S.;Won, C.H.
    • The Korean Journal of Applied Statistics
    • /
    • v.29 no.2
    • /
    • pp.309-319
    • /
    • 2016
  • For binary classification models, we consider a risk score that is a function of a linear score and estimate the coefficients of the linear score. There are two estimation methods: one obtains MLEs using logistic models and the other estimates the coefficients by maximizing the AUC. The AUC-based estimates are better than MLEs from logistic models in general situations where the logistic assumptions do not hold. This paper considers imbalanced data containing fewer observations in the default class than in the non-default class, as in credit assessment models; consequently, the AUC approach is applied to imbalanced data. Various logit link functions are used to generate the imbalanced data. It is found that the coefficients predicted by the AUC approach are equivalent to, or better than, those from logistic models for low-default-probability (imbalanced) data.
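
A minimal sketch of coefficient estimation by AUC maximization is given below, using a sigmoid-smoothed empirical AUC over default/non-default score pairs; the smoothing, optimizer, and data are assumptions rather than the paper's exact procedure.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(2)

# Hypothetical imbalanced credit data: few defaults (class 1), many non-defaults (class 0).
n_def, n_non = 30, 970
X_def = rng.normal(1.0, 1.0, size=(n_def, 3))
X_non = rng.normal(0.0, 1.0, size=(n_non, 3))

def smoothed_auc(beta, X1, X0, h=0.1):
    # Empirical AUC = P(score(default) > score(non-default)); the indicator
    # I(s1 > s0) is replaced by a sigmoid so the objective is differentiable.
    s1 = X1 @ beta
    s0 = X0 @ beta
    diff = s1[:, None] - s0[None, :]
    return expit(diff / h).mean()

res = minimize(lambda b: -smoothed_auc(b, X_def, X_non), x0=np.ones(3), method="Nelder-Mead")
beta_auc = res.x / np.linalg.norm(res.x)     # linear scores are scale-invariant, so normalize
print("AUC-maximizing coefficients (normalized):", np.round(beta_auc, 3))
```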

Parameter estimation of linear function using VUS and HUM maximization (VUS와 HUM 최적화를 이용한 선형함수의 모수추정)

  • Hong, Chong Sun;Won, Chi Hwan;Jeong, Dong Gil
    • Journal of the Korean Data and Information Science Society
    • /
    • v.26 no.6
    • /
    • pp.1305-1315
    • /
    • 2015
  • Consider a risk score which is a function of a linear score for classification models. The AUC optimization method can be applied to estimate the coefficients of the linear score, and the resulting estimates are known to be better than maximum likelihood estimators based on logistic models in general situations that do not fit the logistic assumptions. In this work, VUS and HUM approach methods are suggested by extending the AUC approach to more realistic discrimination and prediction settings. Simulation results are obtained for various threshold distributions and three kinds of link functions: logit, complementary log-log, and modified logit. It is found that coefficient estimates obtained by the VUS and HUM approaches for multi-category classification are equivalent to, or better than, those from logistic models with some link functions.
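
As a companion to the AUC sketch above, the following computes the empirical VUS of a score for three ordered classes, i.e., the proportion of correctly ordered triples; a smoothed maximization over coefficients and the HUM extension to more classes follow the same pattern and are omitted.

```python
import numpy as np

def empirical_vus(s1, s2, s3):
    """Volume under the ROC surface for three ordered classes.

    Estimated as the proportion of triples (one score from each class) that are
    correctly ordered, i.e. s1 < s2 < s3 (ties ignored for brevity).
    """
    lt_12 = s1[:, None] < s2[None, :]                 # shape (n1, n2)
    lt_23 = s2[:, None] < s3[None, :]                 # shape (n2, n3)
    correct = lt_12[:, :, None] & lt_23[None, :, :]   # shape (n1, n2, n3)
    return correct.mean()

# Hypothetical linear scores for three diagnostic categories.
rng = np.random.default_rng(3)
s1 = rng.normal(0.0, 1.0, 80)
s2 = rng.normal(1.0, 1.0, 60)
s3 = rng.normal(2.0, 1.0, 40)
print(f"empirical VUS = {empirical_vus(s1, s2, s3):.3f}")   # 1/6 under pure chance
```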

Adaptive Bilinear Lattice Filter(I)-Bilinear Lattice Structure (적응 쌍선형 격자필터(I) - 쌍선형 격자구조)

  • Heung Ki Baik
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.29B no.1
    • /
    • pp.26-33
    • /
    • 1992
  • This paper presents the lattice structure of the bilinear filter and the conversion equations from lattice parameters to direct-form parameters. Bilinear models are attractive for adaptive filtering applications because they can approximate a large class of nonlinear systems adequately, usually with considerable parsimony in the number of coefficients required. The lattice formulation transforms the nonlinear filtering problem into an equivalent multichannel linear filtering problem and then uses multichannel lattice filtering algorithms to solve it. The lattice filters perform a Gram-Schmidt orthogonalization of the input data and are easily extended to more general nonlinear output feedback structures.
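
A direct-form sketch of the bilinear model that the lattice structure represents is given below: the output combines past inputs, past outputs, and input-output cross products. The lattice parameterization and the conversion equations themselves are not reproduced, and the coefficients are illustrative.

```python
import numpy as np

def bilinear_filter(x, a, b, c):
    """Direct-form bilinear filter.

    y[n] = sum_i a[i] x[n-i] + sum_j b[j] y[n-1-j] + sum_{i,j} c[i,j] x[n-i] y[n-1-j]

    a: feedforward coefficients (length P), b: output-feedback coefficients (length Q),
    c: bilinear input-output cross-product coefficients, shape (P, Q).
    """
    N, P, Q = len(x), len(a), len(b)
    y = np.zeros(N)
    for n in range(N):
        xs = np.array([x[n - i] if n - i >= 0 else 0.0 for i in range(P)])
        ys = np.array([y[n - 1 - j] if n - 1 - j >= 0 else 0.0 for j in range(Q)])
        y[n] = a @ xs + b @ ys + xs @ c @ ys
    return y

# Example: a small hypothetical bilinear system driven by white noise.
rng = np.random.default_rng(4)
x = rng.normal(size=200)
a = np.array([1.0, 0.5])
b = np.array([0.3])
c = np.array([[0.1], [0.05]])            # c[i, j] couples x[n-i] with y[n-1-j]
y = bilinear_filter(x, a, b, c)
print("output variance:", round(float(np.var(y)), 3))
```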

Support vector machine for prediction of the compressive strength of no-slump concrete

  • Sobhani, J.;Khanzadi, M.;Movahedian, A.H.
    • Computers and Concrete
    • /
    • v.11 no.4
    • /
    • pp.337-350
    • /
    • 2013
  • The sensitivity of the compressive strength of no-slump concrete to its ingredient materials and proportions necessitates the use of robust models to guarantee both estimation and generalization. The problem of compressive strength prediction involves a high degree of complexity and uncertainty due to the variable nature of materials, workmanship quality, etc., and the use of chemical and mineral additives adds further complexity. Traditionally, this property of concrete has been predicted by conventional linear or nonlinear regression models. In general, these models offer lower accuracy and in most cases fail to meet the extrapolation and generalization requirements. Recently, artificial-intelligence-based robust systems have been successfully implemented in this area. In this regard, this paper investigates the use of an optimized support vector machine (SVM) to predict the compressive strength of no-slump concrete and compares it with an optimized neural network (ANN). The results show that, after the optimization process, both models are applicable for prediction purposes with similarly high estimation and generalization quality; however, optimization and modeling with the SVM are much faster than with ANN models.
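
A minimal sketch of an "optimized SVM" in this sense is given below: an RBF-kernel support vector regressor with grid-searched hyperparameters. The feature layout, synthetic data, and search grid are placeholders, not the paper's no-slump concrete dataset or tuning procedure.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical stand-in for the mix-design data: columns might represent cement content,
# water/cement ratio, aggregate contents, and additive dosages; the target is compressive
# strength. Replace with the real dataset to reproduce a study of this kind.
rng = np.random.default_rng(5)
X = rng.uniform(0.0, 1.0, size=(200, 6))
y = 30 + 25 * X[:, 0] - 20 * X[:, 1] + 5 * X[:, 2] + rng.normal(0.0, 2.0, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# RBF-kernel SVR with a grid search over C, epsilon, and gamma, mirroring the idea of an
# optimized SVM (the paper's exact search space and validation scheme are not given here).
pipe = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
grid = GridSearchCV(
    pipe,
    param_grid={
        "svr__C": [1, 10, 100],
        "svr__epsilon": [0.1, 0.5, 1.0],
        "svr__gamma": ["scale", 0.1, 1.0],
    },
    cv=5,
)
grid.fit(X_tr, y_tr)
print("best params:", grid.best_params_)
print("R^2 on held-out data:", round(grid.score(X_te, y_te), 3))
```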