• Title/Summary/Keyword: Least-squares Regression

Test of the Hypothesis based on Nonlinear Regression Quantiles Estimators

  • Choi, Seung-Hoe
    • Journal of the Korean Data and Information Science Society / v.14 no.2 / pp.153-165 / 2003
  • This paper considers the likelihood ratio test statistic based on nonlinear regression quantiles estimators for testing hypotheses about the regression parameter $\theta_0$, and derives the asymptotic distribution of the proposed test statistic under the null hypothesis and under a sequence of local alternative hypotheses. The paper also investigates the asymptotic relative efficiency of the proposed test with respect to tests based on the least squares or the least absolute deviation estimators, and gives some examples to illustrate the application of the main result.

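The following Python sketch illustrates the ingredients of such a test: a nonlinear quantile regression fitted by minimizing the check loss, and an LR-type statistic comparing restricted and unrestricted fits. The exponential model, the data, and the omitted sparsity rescaling are illustrative assumptions, not the paper's construction.

```python
# Sketch: nonlinear quantile regression by check-loss minimization, plus
# an LR-type statistic.  Model, data, and calibration are illustrative;
# a proper test would rescale by the sparsity function s(tau).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.linspace(0.1, 5.0, 200)
y = 2.0 * np.exp(0.5 * x) + rng.standard_normal(200)
tau = 0.5  # quantile level

def check_loss(r):
    return np.sum(r * (tau - (r < 0.0)))

def loss_full(theta):                      # model y = a * exp(b * x)
    a, b = theta
    return check_loss(y - a * np.exp(b * x))

def loss_null(theta, b0=0.5):              # H0: b = b0
    return check_loss(y - theta[0] * np.exp(b0 * x))

fit_full = minimize(loss_full, x0=[1.0, 0.3], method="Nelder-Mead")
fit_null = minimize(loss_null, x0=[1.0], method="Nelder-Mead")

# Raw LR-type statistic: difference of minimized check losses.
print("LR-type statistic:", 2.0 * (fit_null.fun - fit_full.fun))
```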

A New Deletion Criterion of Principal Components Regression with Orientations of the Parameters

  • Lee, Won-Woo
    • Journal of the Korean Statistical Society / v.16 no.2 / pp.55-70 / 1987
  • Principal components regression is one of the substitutes for the least squares method when multicollinearity exists in the multiple linear regression model. It is observed graphically that the performance of principal components regression depends strongly on the values of the parameters. Accordingly, a new deletion criterion which determines the proper principal components to be deleted from the analysis is developed, and its usefulness is checked by simulations.

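For reference, a minimal principal components regression sketch in Python. Keeping the first k components by eigenvalue order is the textbook deletion rule; the paper's orientation-based criterion would replace that step and is not reproduced here.

```python
# Sketch: principal components regression (PCR) on a collinear design.
# Component selection here is by eigenvalue order (n_components=2); the
# paper's orientation-based deletion criterion would substitute for it.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 100
x1 = rng.standard_normal(n)
X = np.column_stack([x1,
                     x1 + 0.01 * rng.standard_normal(n),   # nearly collinear
                     rng.standard_normal(n)])
y = X @ np.array([1.0, 1.0, -2.0]) + rng.standard_normal(n)

pcr = make_pipeline(StandardScaler(), PCA(n_components=2), LinearRegression())
pcr.fit(X, y)
print("R^2 on training data:", pcr.score(X, y))
```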

Pitfalls in the Application of the COTE in a Linear Regression Model with Seasonal Data

  • Seuck Heun Song;YouSung Park
    • Communications for Statistical Applications and Methods / v.4 no.2 / pp.353-358 / 1997
  • When the disturbances in the linear regression model are generated by a seasonal autoregressive scheme, the Cochrane-Orcutt transformation estimator (COTE) is a well-known alternative to the generalized least squares estimator (GLSE). This paper analyzes the situations in which the ordinary least squares estimator (OLSE) is always better than the COTE under positive autocorrelation, in terms of efficiency, defined here as the ratio of the total variances.

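A minimal sketch of the estimators being compared, assuming a seasonal AR(1) disturbance u_t = rho*u_{t-s} + e_t with illustrative data and a single (non-iterated) Cochrane-Orcutt step:

```python
# Sketch: OLS vs. one Cochrane-Orcutt step for seasonal AR disturbances.
# The data, season length s = 4, and the single step are assumptions.
import numpy as np

rng = np.random.default_rng(2)
n, s, rho_true = 120, 4, 0.6
x = rng.standard_normal(n)
u = np.zeros(n)
for t in range(s, n):
    u[t] = rho_true * u[t - s] + rng.standard_normal()
y = 1.0 + 2.0 * x + u

X = np.column_stack([np.ones(n), x])
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_ols

# Estimate rho from the seasonal lag of the OLS residuals.
rho_hat = resid[s:] @ resid[:-s] / (resid[:-s] @ resid[:-s])

# Quasi-difference at lag s and re-run least squares (the COTE step).
y_star = y[s:] - rho_hat * y[:-s]
X_star = X[s:] - rho_hat * X[:-s]
beta_cote, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
print("OLS:", beta_ols, " COTE:", beta_cote)
```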

A Robust Estimation Procedure for the Linear Regression Model

  • Kim, Bu-Yong
    • Journal of the Korean Statistical Society / v.16 no.2 / pp.80-91 / 1987
  • Minimum $L_1$ norm estimation is a robust procedure in the sense that it leads to an estimator with greater statistical efficiency than the least squares estimator in the presence of outliers, and the $L_1$ norm estimator has some desirable statistical properties. In this paper a new computational procedure for $L_1$ norm estimation is proposed which combines the idea of the reweighted least squares method with the linear programming approach. A modification of the projective transformation method is employed to solve the linear programming problem instead of the simplex method. It is proved that the proposed algorithm terminates in a finite number of iterations.

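The reweighted least squares idea mentioned in the abstract can be sketched as follows; the paper's actual algorithm solves the equivalent linear program with a projective transformation, which this sketch does not attempt.

```python
# Sketch: L1 (least absolute deviations) regression by iteratively
# reweighted least squares, with weights ~ 1 / |residual|.
import numpy as np

def lad_irls(X, y, n_iter=50, eps=1e-8):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]    # start from OLS
    for _ in range(n_iter):
        r = y - X @ beta
        w = 1.0 / np.maximum(np.abs(r), eps)       # downweight large residuals
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta

rng = np.random.default_rng(3)
X = np.column_stack([np.ones(50), rng.standard_normal(50)])
y = X @ np.array([1.0, 2.0]) + rng.standard_normal(50)
y[:3] += 20.0                                       # a few gross outliers
print("LAD fit:", lad_irls(X, y))
```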

Determination of Design Width for Medium Streams in the Han River Basin (한강유역의 중소하천에 대한 계획하폭 산정)

  • Jeon, Se-Jin;An, Tae-Jin;Park, Jeong-Eung
    • Journal of Korea Water Resources Association / v.31 no.6 / pp.675-684 / 1998
  • This paper presents empirical formulas for determining the design width of medium streams in the Han river basin. The design flood, watershed area, and channel slope of 216 medium rivers in the Han river basin are collected. The design-width formulas are then determined by 1) the least squares (LS) method, 2) the least median of squares (LMS) method, and 3) the reweighted least squares method based on the LMS (RLS). Six types of formulas are considered in order to determine an acceptable type for medium streams in the Han river basin. The root mean squared error (RMSE), the absolute mean error (AME), and the mean error (ME) are computed to test the formulas derived by the three regression methods. It is found that the equation relating stream width to the watershed area and the channel slope is acceptable for determining the design width of medium streams in the Han river basin. It is expected that the equations proposed by this study will be used as an index for determining the design width of medium streams in the Han river basin.

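A hedged sketch of the least squares step, assuming a power-law form B = a*A^b*S^c fitted on logs with synthetic data; the symbols B, A, S and all coefficients are illustrative, and the LMS and RLS fits in the paper require a robust solver not shown here.

```python
# Sketch: fitting a power-law width formula B = a * A^b * S^c by OLS on
# logs, then scoring with RMSE.  Data, units, and coefficients are made up.
import numpy as np

rng = np.random.default_rng(4)
A = rng.uniform(10, 500, 216)             # watershed area (illustrative)
S = rng.uniform(1e-4, 1e-2, 216)          # channel slope
B = 8.0 * A**0.5 * S**-0.1 * np.exp(0.1 * rng.standard_normal(216))

X = np.column_stack([np.ones_like(A), np.log(A), np.log(S)])
coef, *_ = np.linalg.lstsq(X, np.log(B), rcond=None)
a, b, c = np.exp(coef[0]), coef[1], coef[2]

pred = a * A**b * S**c
rmse = np.sqrt(np.mean((B - pred) ** 2))
print(f"B = {a:.2f} * A^{b:.2f} * S^{c:.2f},  RMSE = {rmse:.2f}")
```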

On a Robust Subset Selection Procedure for the Slopes of Regression Equations

  • Song, Moon-Sup;Oh, Chang-Hyuck
    • Journal of the Korean Statistical Society / v.10 / pp.105-121 / 1981
  • The problem of selecting a subset containing the largest of several slope parameters of regression equations is considered. The proposed selection procedure is based on weighted median estimators for the regression parameters and the median of rescaled absolute residuals for the scale parameters. These estimators are compared with the classical least squares estimators by a simulation study. A Monte Carlo comparison is also made between the new procedure based on the weighted median estimators and the procedure based on the least squares estimators. The results show that the proposed procedure is quite robust with respect to the heaviness of distribution tails.

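A sketch of a weighted median slope estimator in this spirit: the weighted median of pairwise slopes, weighted by the x-spacing. The exact estimator and the subset-selection rule used in the paper may differ.

```python
# Sketch: weighted median of pairwise slopes (y_j - y_i)/(x_j - x_i),
# weighted by |x_j - x_i|, as a robust slope estimate.
import numpy as np

def weighted_median(values, weights):
    order = np.argsort(values)
    values, weights = values[order], weights[order]
    cum = np.cumsum(weights)
    return values[np.searchsorted(cum, 0.5 * cum[-1])]

def wm_slope(x, y):
    i, j = np.triu_indices(len(x), k=1)
    dx, dy = x[j] - x[i], y[j] - y[i]
    keep = dx != 0
    return weighted_median(dy[keep] / dx[keep], np.abs(dx[keep]))

rng = np.random.default_rng(5)
x = rng.standard_normal(60)
y = 3.0 * x + rng.standard_t(df=2, size=60)        # heavy-tailed errors
print("weighted median slope:", wm_slope(x, y))
```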

Deep LS-SVM for regression

  • Hwang, Changha;Shim, Jooyong
    • Journal of the Korean Data and Information Science Society / v.27 no.3 / pp.827-833 / 2016
  • In this paper, we propose a deep least squares support vector machine (LS-SVM) for regression problems, which consists of an input layer and a hidden layer. In the hidden layer, LS-SVMs are trained with the original input variables and perturbed responses. For the final output, the main LS-SVM is trained with the outputs of the hidden-layer LS-SVMs as input variables and the original responses. In contrast to the multilayer neural network (MNN), the LS-SVMs in the deep LS-SVM are trained to minimize a penalized objective function. Thus, the learning dynamics of the deep LS-SVM are entirely different from those of the MNN, in which all weights and biases are trained to minimize one final error function. Compared to MNN approaches, the deep LS-SVM does not make use of any combination weights, but trains all LS-SVMs in the architecture. Experimental results on real datasets illustrate that the deep LS-SVM significantly outperforms state-of-the-art machine learning methods on regression problems.
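
A compact sketch of the building block and the two-layer wiring described above; the RBF kernel, hyperparameters, and the number of hidden LS-SVMs are illustrative assumptions.

```python
# Sketch: an LS-SVM regressor solved via its standard linear system,
# then a two-layer "deep" wiring: hidden LS-SVMs on perturbed responses,
# a main LS-SVM on their outputs.  Kernel and hyperparameters assumed.
import numpy as np

def rbf(X1, X2, sigma=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma   # dual system
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    b, alpha = sol[0], sol[1:]
    return lambda Xnew: rbf(Xnew, X, sigma) @ alpha + b

rng = np.random.default_rng(6)
X = rng.uniform(-3, 3, (80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(80)

# Hidden layer: LS-SVMs trained on perturbed responses.
hidden = [lssvm_fit(X, y + 0.1 * rng.standard_normal(80)) for _ in range(3)]
H = np.column_stack([f(X) for f in hidden])
main = lssvm_fit(H, y)                      # main LS-SVM on hidden outputs
print("train MSE:", np.mean((main(H) - y) ** 2))
```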

Robust varying coefficient model using L1 regularization

  • Hwang, Changha;Bae, Jongsik;Shim, Jooyong
    • Journal of the Korean Data and Information Science Society / v.27 no.4 / pp.1059-1066 / 2016
  • In this paper we propose a robust version of varying coefficient models, based on regularized regression with L1 regularization. We use the iteratively reweighted least squares procedure to solve the L1-regularized objective function of the varying coefficient model in locally weighted regression form. It provides efficient computation of the coefficient function estimates and variable selection for a given value of the smoothing variable. We present the generalized cross-validation function and an Akaike-information-type criterion for model selection. Applications of the proposed model are illustrated through artificial examples and a real example of predicting the effect of the input variables and the smoothing variable on the output.
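
The computational core can be sketched as follows, under assumptions: an L1-penalized, kernel-weighted linear fit at one point of the smoothing variable, solved by iteratively reweighting a ridge step. Bandwidth, penalty, and data are illustrative.

```python
# Sketch: IRLS for an L1-penalized locally weighted fit at u0.  Each
# iteration majorizes lam * |beta_j| by an adaptive ridge term.
import numpy as np

def l1_local_fit(X, y, u, u0, bandwidth=0.5, lam=0.1, n_iter=30, eps=1e-6):
    k = np.exp(-0.5 * ((u - u0) / bandwidth) ** 2)     # local kernel weights
    w = np.sqrt(k)
    beta = np.linalg.lstsq(X * w[:, None], w * y, rcond=None)[0]  # WLS start
    for _ in range(n_iter):
        D = np.diag(lam / np.maximum(np.abs(beta), eps))   # adaptive ridge
        beta = np.linalg.solve(X.T @ (k[:, None] * X) + D, X.T @ (k * y))
    return beta

rng = np.random.default_rng(7)
n = 150
u = rng.uniform(0, 1, n)                     # smoothing variable
X = rng.standard_normal((n, 3))
beta_true = np.column_stack([np.sin(2 * np.pi * u), np.zeros(n), u])
y = (X * beta_true).sum(axis=1) + 0.1 * rng.standard_normal(n)
print("beta(0.5):", l1_local_fit(X, y, u, u0=0.5))
```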

Sparse Kernel Regression using IRWLS Procedure

  • Park, Hye-Jung
    • Journal of the Korean Data and Information Science Society / v.18 no.3 / pp.735-744 / 2007
  • The support vector machine (SVM) is capable of providing a more complete description of the linear and nonlinear relationships among random variables. In this paper we propose a sparse kernel regression (SKR) to overcome a weak point of the SVM, namely the steep growth in the number of support vectors as the number of training data increases. The iteratively reweighted least squares (IRWLS) procedure is used to solve the optimization problem of SKR with a Laplacian prior. Furthermore, the generalized cross-validation (GCV) function is introduced to select the hyperparameters which affect the performance of SKR. Experimental results are then presented which illustrate the performance of the proposed procedure.

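A sketch of the IRWLS idea for a Laplacian (L1) prior, with assumed kernel and hyperparameters; each iteration replaces the L1 penalty by an adaptive ridge term, driving most coefficients toward zero.

```python
# Sketch: sparse kernel regression via IRWLS with an L1 prior on the
# kernel coefficients.  Kernel, lambda, and threshold are assumptions.
import numpy as np

def skr_fit(X, y, lam=0.5, sigma=1.0, n_iter=40, eps=1e-6):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))
    a = np.linalg.solve(K + lam * np.eye(len(y)), y)    # ridge start
    for _ in range(n_iter):
        D = np.diag(lam / np.maximum(np.abs(a), eps))   # L1 majorization
        a = np.linalg.solve(K.T @ K + D, K.T @ y)
    return a

rng = np.random.default_rng(8)
X = rng.uniform(-3, 3, (60, 1))
y = np.sinc(X[:, 0]) + 0.05 * rng.standard_normal(60)
a = skr_fit(X, y)
print("nonzero coefficients:", np.sum(np.abs(a) > 1e-3), "of", len(a))
```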

Support Vector Quantile Regression with Weighted Quadratic Loss Function

  • Shim, Joo-Yong;Hwang, Chang-Ha
    • Communications for Statistical Applications and Methods / v.17 no.2 / pp.183-191 / 2010
  • Support vector quantile regression (SVQR) is capable of providing a more complete description of the linear and nonlinear relationships among random variables. In this paper we propose an iteratively reweighted least squares (IRWLS) procedure to solve the problem of SVQR with a weighted quadratic loss function. Furthermore, we introduce the generalized approximate cross-validation function to select the hyperparameters which affect the performance of SVQR. Experimental results are then presented which illustrate the performance of the IRWLS procedure for SVQR.
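
A sketch of an IRWLS loop of this kind, with the check loss replaced by a weighted quadratic surrogate at each iteration; the kernel, C, tau, and the exact weighting scheme are assumptions and may differ from the paper's.

```python
# Sketch: kernel quantile regression via IRWLS.  The asymmetric check
# loss rho_tau(r) is majorized by w * r^2 with w = c(r) / (2|r|), where
# c(r) = tau for r >= 0 and 1 - tau otherwise.
import numpy as np

def svqr_fit(X, y, tau=0.7, C=10.0, sigma=1.0, n_iter=40, eps=1e-6):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))
    a = np.linalg.solve(K + np.eye(len(y)) / C, y)      # ridge start
    for _ in range(n_iter):
        r = y - K @ a
        c = np.where(r >= 0, tau, 1.0 - tau)            # asymmetric weights
        w = c / np.maximum(2.0 * np.abs(r), eps)        # quadratic surrogate
        a = np.linalg.solve(K.T @ (w[:, None] * K) + K / (2 * C),
                            K.T @ (w * y))
    return K @ a                                         # fitted quantile curve

rng = np.random.default_rng(9)
X = rng.uniform(0, 4, (100, 1))
y = np.sin(X[:, 0]) + 0.3 * rng.standard_normal(100)
f = svqr_fit(X, y, tau=0.7)
print("share of points below fitted 0.7-quantile:", np.mean(y <= f))
```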