• Title/Abstract/Keyword: generalized least squares

Search results: 158 items (processing time: 0.021 s)

THE (R,S)-SYMMETRIC SOLUTIONS TO THE LEAST-SQUARES PROBLEM OF MATRIX EQUATION AXB = C

  • Liang, Mao-Lin;Dai, Li-Fang;Wang, San-Fu
    • Journal of applied mathematics & informatics
    • /
    • Vol.27 No.5_6
    • /
    • pp.1061-1071
    • /
    • 2009
  • For real generalized reflexive matrices R, S, i.e., $R^T$ = R, $R^2$ = I, $S^T$ = S, $S^2$ = I, a real matrix X is said to be (R,S)-symmetric if RXS = X. In this paper, an iterative algorithm is proposed to solve the least-squares problem of the matrix equation AXB = C with (R,S)-symmetric X. Furthermore, the optimal approximation to a given matrix $X_0$ is also derived by this iterative algorithm. Finally, a numerical example and its convergence curve show that the method is feasible and efficient.
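Since {X : RXS = X} is a linear subspace (the map X ↦ RXS is a symmetric involution when R and S are symmetric involutions, so ½(X + RXS) is the orthogonal projection onto it), small instances of this least-squares problem can also be solved directly by vectorization. The sketch below is not the paper's iterative algorithm; it is a minimal direct solver, assuming dimensions small enough that forming Kronecker products is affordable:

```python
import numpy as np

def rs_symmetric_lstsq(A, B, C, R, S):
    """Least-squares solution of A X B = C over (R,S)-symmetric X
    (R X S = X) via vectorization: vec(A X B) = (B^T kron A) vec(X).
    Direct sketch for small problems, not the paper's iterative method."""
    n, p = A.shape[1], B.shape[0]
    K = np.kron(B.T, A)                      # acts on column-stacked vec(X)
    M = np.kron(S.T, R)                      # vec(R X S) = M vec(X)
    P = 0.5 * (np.eye(n * p) + M)            # orthogonal projector onto {R X S = X}
    w, V = np.linalg.eigh(P)
    Q = V[:, w > 0.5]                        # orthonormal basis of the subspace
    c, *_ = np.linalg.lstsq(K @ Q, C.flatten(order="F"), rcond=None)
    return (Q @ c).reshape((n, p), order="F")

# diagonal +/-1 matrices are the simplest generalized reflexive R and S
rng = np.random.default_rng(0)
R = np.diag([1.0, -1.0, 1.0])
S = np.diag([1.0, 1.0, -1.0])
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 4))
X_true = rng.standard_normal((3, 3))
X_true = 0.5 * (X_true + R @ X_true @ S)     # make the target (R,S)-symmetric
C = A @ X_true @ B
X_hat = rs_symmetric_lstsq(A, B, C, R, S)
```

For large matrices the Kronecker system is impractical, which is exactly why iterative algorithms such as the one proposed in the paper are of interest.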


Pitfalls in the Application of the COTE in a Linear Regression Model with Seasonal Data

  • Seuck Heun Song;YouSung Park
    • Communications for Statistical Applications and Methods
    • /
    • Vol.4 No.2
    • /
    • pp.353-358
    • /
    • 1997
  • When the disturbances in the linear regression model are generated by a seasonal autoregressive scheme, the Cochrane-Orcutt transformation estimator (COTE) is a well-known alternative to the Generalized Least Squares estimator (GLSE). In this paper it is analyzed in which situations the Ordinary Least Squares estimator (OLSE) is always better than the COTE under positive autocorrelation, in terms of efficiency, defined here as the ratio of the total variances.
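For concreteness, a minimal sketch of the seasonal Cochrane-Orcutt idea: estimate ρ from the OLS residuals at the seasonal lag s, quasi-difference the data, and re-run OLS. The lag s = 4 (quarterly data) and the simple two-stage form are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def cochrane_orcutt_seasonal(y, X, s=4):
    """Two-stage Cochrane-Orcutt transformation for a seasonal AR(s)
    disturbance u_t = rho * u_{t-s} + e_t (illustrative sketch)."""
    # stage 1: OLS, then estimate rho from residuals at the seasonal lag s
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ beta_ols
    rho = (u[s:] @ u[:-s]) / (u[:-s] @ u[:-s])
    # stage 2: OLS on the quasi-differenced (transformed) data
    y_star = y[s:] - rho * y[:-s]
    X_star = X[s:] - rho * X[:-s]
    beta_co, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
    return beta_co, rho

# simulated quarterly data with seasonally autocorrelated errors
rng = np.random.default_rng(1)
n, s, rho_true = 200, 4, 0.6
e = rng.standard_normal(n)
u = np.zeros(n)
for t in range(s, n):
    u[t] = rho_true * u[t - s] + e[t]
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + u
beta_co, rho_hat = cochrane_orcutt_seasonal(y, X, s)
```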


Variable selection for multiclassification by LS-SVM

  • Hwang, Hyung-Tae
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol.21 No.5
    • /
    • pp.959-965
    • /
    • 2010
  • In multiclassification it is often the case that some variables are not important, while some variables are more important than others. We propose a novel algorithm for selecting such relevant variables for multiclassification. This algorithm is based on the multiclass least squares support vector machine (LS-SVM) and uses the results of a one-vs-all multiclass LS-SVM. Experimental results are presented to indicate the performance of the proposed method.
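The one-vs-all scheme the abstract refers to can be illustrated with a linear regularized least squares classifier standing in for the kernel LS-SVM: fit one binary ±1 regression per class and predict the class with the largest score. All names and data below are hypothetical:

```python
import numpy as np

def one_vs_all_fit(X, y, n_classes, lam=1e-2):
    """One-vs-all classification by regularized least squares, a
    linear stand-in for the kernel LS-SVM used in the paper."""
    n, d = X.shape
    Xb = np.column_stack([X, np.ones(n)])          # add bias column
    W = np.zeros((n_classes, d + 1))
    for k in range(n_classes):
        t = np.where(y == k, 1.0, -1.0)            # class k vs rest coding
        A = Xb.T @ Xb + lam * np.eye(d + 1)
        W[k] = np.linalg.solve(A, Xb.T @ t)
    return W

def one_vs_all_predict(W, X):
    Xb = np.column_stack([X, np.ones(X.shape[0])])
    return np.argmax(Xb @ W.T, axis=1)             # class with largest score

# three well-separated Gaussian clusters
rng = np.random.default_rng(2)
centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
X = np.vstack([c + 0.3 * rng.standard_normal((30, 2)) for c in centers])
y = np.repeat([0, 1, 2], 30)
W = one_vs_all_fit(X, y, 3)
acc = np.mean(one_vs_all_predict(W, X) == y)
```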

Mixed effects least squares support vector machine for survival data analysis

  • Hwang, Changha;Shim, Jooyong
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol.23 No.4
    • /
    • pp.739-748
    • /
    • 2012
  • The least squares support vector machine (LS-SVM) is a statistical technique widely used for classification and nonlinear regression. In this paper we propose an LS-SVM applicable when survival data are observed within each group. To handle randomly right-censored data in a nonlinear regression model, the proposed model uses weights obtained from the Kaplan-Meier estimate of the censoring distribution, and it includes a random effect term to represent between-group variation. A generalized cross validation function is used to find the optimal values of the penalty constant and the kernel parameter, and a simulation study shows the superiority of the proposed method by comparing its performance with an LS-SVM that does not include a random effect term.
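The Kaplan-Meier-based weighting mentioned in the abstract is commonly implemented as inverse-probability-of-censoring weights, w_i = δ_i / Ĝ(t_i−), where Ĝ is the Kaplan-Meier estimate of the censoring survival function. The sketch below shows only that weight construction; the paper's model additionally includes the random effect term, which is omitted here, and its exact weighting may differ in detail:

```python
import numpy as np

def km_censoring_weights(t, delta):
    """Inverse-probability-of-censoring weights w_i = delta_i / G(t_i-),
    where G is the Kaplan-Meier estimate of the censoring survival
    function (a common construction; illustrative sketch only)."""
    order = np.argsort(t)
    t_s, d_s = t[order], delta[order]
    n = len(t)
    G = np.ones(n)
    surv = 1.0
    for i in range(n):
        G[i] = surv                        # G evaluated just before t_(i)
        if d_s[i] == 0:                    # a censoring "event" at t_(i)
            surv *= 1.0 - 1.0 / (n - i)
    w = np.zeros(n)
    w[order] = d_s / np.maximum(G, 1e-12)  # censored cases get weight zero
    return w

t = np.array([2.0, 3.0, 1.0, 4.0, 5.0])
delta = np.array([1, 0, 1, 1, 1])          # 0 = censored observation
w = km_censoring_weights(t, delta)
```

Note that observations after a censoring time receive weight larger than one, compensating for the censored case that was dropped.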

Diagnostic Study of Problems under Asymptotically Generalized Least Squares Estimation of Physical Health Model

  • Kim, Jung-Hee
    • Journal of Korean Academy of Nursing
    • /
    • Vol.29 No.5
    • /
    • pp.1030-1041
    • /
    • 1999
  • This study examined problems noticed under the Asymptotically Generalized Least Squares (AGLS) estimator in evaluating a structural model of physical health. The problems were highly correlated parameter estimates and high standard errors of some parameter estimates. Separate analyses of the endogenous part of the model and of the metric of a latent factor revealed a highly skewed and kurtotic measurement indicator as the focal point of the manifested problems. Since the sample sizes are far below those needed to produce adequate AGLS estimates under the given modeling conditions, the adequacy of the Maximum Likelihood (ML) estimator is further examined with robust statistics and the bootstrap method. These methods demonstrated that the ML estimates were unbiased and that statistical decisions based upon the ML standard errors remained almost the same. Suggestions are made for future studies adopting the structural equation modeling technique, in terms of selecting a reference indicator and adopting statistics corrected for non-normality.


LS-SVM for large data sets

  • Park, Hongrak;Hwang, Hyungtae;Kim, Byungju
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol.27 No.2
    • /
    • pp.549-557
    • /
    • 2016
  • In this paper we propose a multiclassification method for large data sets that ensembles least squares support vector machines (LS-SVM) using principal components instead of the raw input vector. We use the revised one-vs-all method for multiclassification, a voting scheme based on combining several binary classifications. The revised one-vs-all method is performed using the hat matrix of the LS-SVM ensemble, which is obtained by ensembling LS-SVMs trained on random samples drawn from the whole large training data set. The leave-one-out cross validation (CV) function is used to find the optimal values of the hyper-parameters which affect the performance of the multiclass LS-SVM ensemble. We present the generalized cross validation function to reduce the computational burden of the leave-one-out CV function. Experimental results on real data sets illustrate the performance of the proposed multiclass LS-SVM ensemble.
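A rough sketch of the overall pipeline, with a linear regularized least squares classifier standing in for each LS-SVM member and plain score averaging standing in for the paper's hat-matrix-based voting (both simplifying assumptions):

```python
import numpy as np

def pca_project(X, k):
    """Project onto the first k principal components; these, rather than
    the raw input vectors, feed each ensemble member."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def ridge_ova_scores(X, y, Xq, n_classes, lam=1e-2):
    """One-vs-all regularized least squares scores for query points Xq,
    a simple stand-in for one LS-SVM ensemble member."""
    Xb = np.column_stack([X, np.ones(len(X))])
    Qb = np.column_stack([Xq, np.ones(len(Xq))])
    S = np.zeros((len(Xq), n_classes))
    for k in range(n_classes):
        t = np.where(y == k, 1.0, -1.0)
        w = np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ t)
        S[:, k] = Qb @ w
    return S

rng = np.random.default_rng(3)
centers = np.array([[0, 0, 0, 0], [6, 0, 0, 0], [0, 6, 0, 0]], float)
X = np.vstack([c + 0.5 * rng.standard_normal((60, 4)) for c in centers])
y = np.repeat([0, 1, 2], 60)
Z = pca_project(X, 2)
# ensemble: average the scores of members trained on random subsamples
scores = np.zeros((len(Z), 3))
for _ in range(5):
    idx = rng.choice(len(Z), size=90, replace=False)
    scores += ridge_ova_scores(Z[idx], y[idx], Z, 3)
acc = np.mean(np.argmax(scores, axis=1) == y)
```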

Estimation and variable selection in censored regression model with smoothly clipped absolute deviation penalty

  • Shim, Jooyong;Bae, Jongsig;Seok, Kyungha
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol.27 No.6
    • /
    • pp.1653-1660
    • /
    • 2016
  • The smoothly clipped absolute deviation (SCAD) penalty is known to satisfy the desirable properties of penalty functions, such as unbiasedness, sparsity and continuity. In this paper, we deal with regression function estimation and variable selection based on the SCAD-penalized censored regression model. We use the local linear approximation and the iteratively reweighted least squares algorithm to optimize the SCAD-penalized log likelihood function. The proposed method provides an efficient way of carrying out variable selection and regression function estimation. The generalized cross validation function is presented for model selection. Applications of the proposed method are illustrated through simulated and real examples.
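The local linear approximation mentioned above replaces the SCAD penalty by a weighted L1 term whose weight is the penalty derivative p'_λ(|β_j^old|). A sketch of that derivative (Fan and Li's form, with the conventional a = 3.7); note it is constant near zero (sparsity), tapers off, and vanishes for large coefficients (unbiasedness):

```python
import numpy as np

def scad_deriv(theta, lam, a=3.7):
    """Derivative of the SCAD penalty, used as the per-coefficient weight
    in the local linear approximation: p'(|beta_old|) * |beta|."""
    theta = np.abs(theta)
    small = theta <= lam
    out = np.zeros_like(theta, dtype=float)
    out[small] = lam                                        # L1-like near zero
    out[~small] = np.maximum(a * lam - theta[~small], 0.0) / (a - 1.0)
    return out                                              # zero beyond a*lam

lam = 1.0
d = scad_deriv(np.array([0.5, 2.0, 5.0]), lam)
```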

Censored varying coefficient regression model using Buckley-James method

  • Shim, Jooyong;Seok, Kyungha
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol.28 No.5
    • /
    • pp.1167-1177
    • /
    • 2017
  • The censored regression model using the pseudo-response variable proposed by Buckley and James has been one of the most well-known approaches. Recently, the varying coefficient regression model has received a great deal of attention as an important modeling tool. In this paper we propose a censored varying coefficient regression model using the Buckley-James method, to handle situations where the regression coefficients of the model are not constant but change as the smoothing variables change. By using the formulation of the least squares support vector machine (LS-SVM), the coefficient estimators of the proposed model can be obtained from simple linear equations, and a generalized cross validation function can be easily derived. We evaluate the proposed method and demonstrate its adequacy through simulated and real data sets.
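The Buckley-James pseudo-response replaces each censored observation by an estimate of E[Y | Y > y_i], which can be read off the Kaplan-Meier estimator. The sketch below shows a single imputation step on the responses themselves; the full method iterates this construction on regression residuals, which is omitted here:

```python
import numpy as np

def km_survival(t, delta):
    """Kaplan-Meier survival estimate evaluated at each sorted time."""
    order = np.argsort(t)
    t_s, d_s = t[order], delta[order]
    n = len(t)
    S = np.ones(n)
    surv = 1.0
    for i in range(n):
        if d_s[i] == 1:                    # an observed event at t_(i)
            surv *= 1.0 - 1.0 / (n - i)
        S[i] = surv
    return t_s, S

def bj_pseudo_response(y, delta):
    """Buckley-James style pseudo-response: observed values are kept,
    censored values are replaced by the KM-estimated conditional mean
    E[Y | Y > y_i] (one-step sketch of the construction)."""
    t_s, S = km_survival(y, delta)
    S_prev = np.concatenate([[1.0], S[:-1]])
    mass = S_prev - S                      # KM mass; zero at censored points
    y_star = y.astype(float)
    for i in range(len(y)):
        if delta[i] == 0:
            beyond = t_s > y[i]
            denom = mass[beyond].sum()
            if denom > 0:
                y_star[i] = (t_s[beyond] * mass[beyond]).sum() / denom
    return y_star

y_star = bj_pseudo_response(np.array([1.0, 2.0, 3.0, 4.0]),
                            np.array([1, 0, 1, 1]))
```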

Partitioning likelihood method in the analysis of non-monotone missing data

  • Kim Jae-Kwang
    • The Korean Statistical Society: Conference Proceedings
    • /
    • Proceedings of the 2004 Conference of the Korean Statistical Society
    • /
    • pp.1-8
    • /
    • 2004
  • We address the problem of parameter estimation in multivariate distributions under ignorable non-monotone missing data. The factoring likelihood method for monotone missing data, termed by Rubin (1974), is extended to the more general case of non-monotone missing data. The proposed method is algebraically equivalent to the Newton-Raphson method for the observed likelihood, but avoids the burden of computing the first and second partial derivatives of the observed likelihood. Instead, the maximum likelihood estimates and their information matrices for each partition of the data set are computed separately and combined naturally using the generalized least squares method. A numerical example is also presented to illustrate the method.
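The combination step described above is a precision-weighted average: given partition-wise estimates θ̂_k with information matrices I_k, the GLS combination is θ̂ = (Σ_k I_k)⁻¹ Σ_k I_k θ̂_k. A minimal sketch with made-up numbers:

```python
import numpy as np

def gls_combine(estimates, infos):
    """Combine partition-wise estimates by generalized least squares:
    theta = (sum I_k)^{-1} sum I_k theta_k, i.e. each partition's
    estimate is weighted by its information matrix."""
    I_tot = sum(infos)
    b = sum(I @ th for I, th in zip(infos, estimates))
    return np.linalg.solve(I_tot, b)

# two partitions of a bivariate parameter with different precisions
th1 = np.array([1.0, 2.0])
th2 = np.array([3.0, 2.0])
I1 = np.diag([4.0, 1.0])   # partition 1 is precise in coordinate 0
I2 = np.diag([1.0, 4.0])   # partition 2 is precise in coordinate 1
theta = gls_combine([th1, th2], [I1, I2])
```

The combined estimate leans toward whichever partition carries more information in each coordinate.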


Variable selection in L1 penalized censored regression

  • Hwang, Chang-Ha;Kim, Mal-Suk;Shi, Joo-Yong
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol.22 No.5
    • /
    • pp.951-959
    • /
    • 2011
  • The proposed method is based on a penalized censored regression model with an L1 penalty. We use the iteratively reweighted least squares procedure to optimize the L1-penalized log likelihood function of the censored regression model. It provides efficient computation of the regression parameters, including variable selection, and leads to the generalized cross validation function for model selection. Numerical results are presented to indicate the performance of the proposed method.
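The iteratively reweighted least squares device for an L1 penalty can be sketched on a plain uncensored linear model (the censored-likelihood details of the paper are omitted): the local quadratic approximation |β_j| ≈ β_j² / (2|β_j^old|) turns each step into a ridge-type solve with coefficient-dependent weights:

```python
import numpy as np

def l1_irls(X, y, lam, steps=200, eps=1e-8):
    """Iteratively reweighted least squares for L1-penalized regression
    via the local quadratic approximation: each step solves a weighted
    ridge system with weights lam / |beta_old|.  Illustrative sketch."""
    n, d = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]        # start from OLS
    for _ in range(steps):
        W = np.diag(lam / (np.abs(beta) + eps))        # penalty weights
        beta = np.linalg.solve(X.T @ X + n * W, X.T @ y)
    beta[np.abs(beta) < 1e-4] = 0.0                    # snap tiny values to zero
    return beta

rng = np.random.default_rng(4)
n, d = 100, 5
X = rng.standard_normal((n, d))
beta_true = np.array([2.0, 0.0, 0.0, -1.5, 0.0])
y = X @ beta_true + 0.1 * rng.standard_normal(n)
beta = l1_irls(X, y, lam=0.05)
```

The reweighting drives the coefficients of irrelevant variables toward zero, which is how this scheme performs variable selection.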