• Title/Abstract/Keywords: Ridge regression

Search results: 118 items (processing time: 0.027 s)

The Use of Ridge Regression for Yield Prediction Models with Multicollinearity Problems

  • 신만용
    • Journal of Korean Society of Forest Science / Vol. 79, No. 3 / pp. 260-268 / 1990
  • To obtain a more accurate estimation equation when a yield prediction model suffers from multicollinearity, two kinds of ridge estimators were compared with ordinary least squares (OLS) estimates. The ridge estimators used in this study were based on Mallows's (1973) Cp-like statistic and Allen's (1974) PRESS-like statistic. The predictive ability of the three estimators was evaluated using the yield model developed by Matney et al. (1988). The data consisted of 522 plots from loblolly pine experimental stands in the southern United States. Both ridge estimators predicted yield better than the OLS estimates, and the ridge estimator based on Mallows's statistic performed best. Ridge estimators can therefore be recommended as a replacement for OLS estimates when the independent variables of a yield prediction model exhibit multicollinearity.

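The contrast the abstract draws between OLS and ridge under multicollinearity can be seen in a few lines. This is a minimal sketch on synthetic data (not the paper's 522 loblolly pine plots), with an arbitrary illustrative ridge parameter k rather than the Mallows- or Allen-based choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two nearly collinear predictors (synthetic stand-in for correlated
# stand variables; not the paper's forestry data).
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)           # almost a copy of x1
X = np.column_stack([x1, x2])
y = x1 + x2 + rng.normal(scale=0.5, size=n)   # true coefficients (1, 1)

# OLS: solve (X'X) b = X'y -- unstable when X'X is near-singular
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Ridge: solve (X'X + kI) b = X'y -- k > 0 is an illustrative value
k = 1.0
beta_ridge = np.linalg.solve(X.T @ X + k * np.eye(2), X.T @ y)

print("OLS  :", beta_ols)
print("ridge:", beta_ridge)
```

The individual OLS coefficients can swing wildly (only their sum is well determined), while the ridge coefficients stay near the true values.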

Speed-up of the Matrix Computation on the Ridge Regression

  • Lee, Woochan;Kim, Moonseong;Park, Jaeyoung
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 10 / pp. 3482-3497 / 2021
  • Artificial intelligence has emerged as the core of the 4th industrial revolution, making large-scale data processing, such as big data technology and rapid data analysis, inevitable. The most fundamental and universal data interpretation technique is regression analysis, which is also a basis of machine learning. Ridge regression is a regression technique that decreases sensitivity to unusual or outlying observations. The most time-consuming part of the underlying matrix computation, however, is the inversion of a matrix, and as the matrix grows, solving the system becomes a major challenge. In this paper, a new algorithm is introduced that speeds up the calculation of the ridge regression estimator through series expansion and computation recycling, without computing an inverse matrix or using other factorization methods. In addition, the performance of the proposed algorithm was compared with that of the existing algorithm over a range of matrix sizes. Overall, the proposed algorithm demonstrated excellent speed-up with good accuracy.
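The paper's series-expansion algorithm is not reproduced here; as a generic illustration of the same goal, the sketch below solves the ridge normal equations without forming an inverse, using Richardson iteration (matrix-vector products only) and comparing against a direct solve:

```python
import numpy as np

rng = np.random.default_rng(1)

# Ridge normal equations: (X'X + kI) beta = X'y.
n, p = 500, 20
X = rng.normal(size=(n, p))
y = rng.normal(size=n)
k = 10.0

A = X.T @ X + k * np.eye(p)
b = X.T @ y

# Richardson iteration: an inverse-free solver; the step size must be
# below 2 / lambda_max(A) for convergence.
alpha = 1.0 / np.linalg.norm(A, 2)
beta = np.zeros(p)
for _ in range(2000):
    beta = beta + alpha * (b - A @ beta)   # only matrix-vector products

beta_direct = np.linalg.solve(A, b)
print("max deviation from direct solve:", np.max(np.abs(beta - beta_direct)))
```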

An Explicit Solution for Multivariate Ridge Regression

  • Shin, Min-Woong;Park, Sung H.
    • Journal of the Korean Statistical Society / Vol. 11, No. 1 / pp. 59-68 / 1982
  • We propose that, in order to control the inflation and general instability associated with the least squares estimates, we can use the ridge estimator $$\hat{B}^* = (X'X+kI)^{-1}X'Y, \quad k > 0$$ for the regression coefficients B in multivariate regression. Our hope is that by accepting some bias, we can achieve a larger reduction in variance. We show that such a k always exists, and we derive a formula for obtaining k in multivariate ridge regression.

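The multivariate form of the estimator quoted above applies the same shrinkage to every response column at once. A minimal sketch with synthetic data and a fixed illustrative k (not the k derived in the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# Multivariate ridge: Y has q response columns; B_hat = (X'X + kI)^{-1} X'Y.
n, p, q = 100, 5, 3
X = rng.normal(size=(n, p))
B_true = rng.normal(size=(p, q))
Y = X @ B_true + 0.1 * rng.normal(size=(n, q))

k = 0.5                                   # illustrative value only
B_hat = np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ Y)
print("estimated coefficient matrix shape:", B_hat.shape)
```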

Kernel Ridge Regression with Randomly Right Censored Data

  • Shim, Joo-Yong;Seok, Kyung-Ha
    • Communications for Statistical Applications and Methods / Vol. 15, No. 2 / pp. 205-211 / 2008
  • This paper deals with the estimation of kernel ridge regression when the responses are subject to random right censoring. An iteratively reweighted least squares (IRWLS) procedure is employed to treat the censored observations. The hyperparameters of the model, which affect the performance of the proposed procedure, are selected by a generalized cross-validation (GCV) function. Experimental results are presented that indicate the performance of the proposed procedure.
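The closed-form kernel ridge fit and a GCV-based choice of the regularization parameter can be sketched for ordinary (uncensored) responses; the paper's IRWLS handling of censoring is not reproduced, and the RBF kernel and its width are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Uncensored toy data (the IRWLS step for censored responses is omitted).
n = 80
x = np.sort(rng.uniform(0, 2 * np.pi, n))[:, None]
y = np.sin(x[:, 0]) + 0.2 * rng.normal(size=n)
K = rbf_kernel(x, x)

def gcv(lam):
    H = K @ np.linalg.inv(K + lam * np.eye(n))      # smoother matrix
    resid = y - H @ y
    return (resid @ resid / n) / (1.0 - np.trace(H) / n) ** 2

lams = 10.0 ** np.arange(-4, 1)                     # candidate grid
best = min(lams, key=gcv)                           # minimize GCV score
alpha = np.linalg.solve(K + best * np.eye(n), y)
fit = K @ alpha
print("lambda chosen by GCV:", best)
```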

On Predicting with Kernel Ridge Regression

  • Hwang, Chang-Ha
    • Journal of the Korean Data and Information Science Society / Vol. 14, No. 1 / pp. 103-111 / 2003
  • Kernel machines are widely used in real-world regression tasks. Kernel ridge regression (KRR) and support vector machines (SVM) are typical kernel machines. Here, we focus on two types of KRR: inductive KRR and transductive KRR. In this paper, we study how differently they behave in the interpolation and extrapolation regions. Furthermore, we study a prediction interval estimation method for KRR. This turns out to be a reliable and practical measure of the prediction interval and is essential in real-world tasks.

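The interpolation/extrapolation distinction for inductive KRR can be illustrated directly. This sketch uses toy 1-D data; the RBF kernel, its width gamma, and the regularizer lam are illustrative choices, and transductive KRR (which also uses the test inputs during training) is not shown:

```python
import numpy as np

rng = np.random.default_rng(4)

def rbf(A, B, gamma=30.0):
    return np.exp(-gamma * (A[:, None] - B[None, :]) ** 2)

# Inductive KRR: fit on the training inputs only, then evaluate the
# fitted function anywhere via k(x_new, X) @ alpha.
x = np.linspace(0.0, 1.0, 40)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=40)

lam = 1e-2
alpha = np.linalg.solve(rbf(x, x) + lam * np.eye(40), y)

def predict(x_new):
    return rbf(np.atleast_1d(x_new), x) @ alpha

inside = predict(0.25)[0]   # interpolation: x = 0.25 lies inside the training range
outside = predict(2.0)[0]   # extrapolation: all kernel values are ~0 this far out
print(inside, outside)
```

Inside the training range the prediction tracks sin(2*pi*0.25) = 1, while far outside it the RBF kernel decays and the prediction collapses toward zero, which is exactly why the two regions behave so differently.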

Selecting a Machine Learning Model Based on Natural Language Processing for Shanghanlun Diagnostic System Classification

  • 김영남
    • 대한상한금궤의학회지 / Vol. 14, No. 1 / pp. 41-50 / 2022
  • Objective : The purpose of this study is to explore the most suitable machine learning model algorithm for Shanghanlun diagnostic system classification using natural language processing (NLP). Methods : A total of 201 data items were collected from 『Shanghanlun』 and 『Clinical Shanghanlun』; 'Taeyangbyeong-gyeolhyung' and 'Eumyangyeokchahunobokbyeong' were excluded to prevent oversampling or undersampling. Data were preprocessed using a Twitter Korean tokenizer and trained with logistic regression, ridge regression, lasso regression, naive Bayes classifier, decision tree, and random forest algorithms. The accuracies of the models were compared. Results : Ridge regression and the naive Bayes classifier showed an accuracy of 0.843, logistic regression and random forest showed an accuracy of 0.804, decision tree showed an accuracy of 0.745, and lasso regression showed an accuracy of 0.608. Conclusions : Ridge regression and the naive Bayes classifier are suitable NLP machine learning models for Shanghanlun diagnostic system classification.

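Using ridge regression as a text classifier, as in the comparison above, amounts to regressing one-hot class indicators on token counts and predicting by argmax. A minimal sketch; the random bag-of-words features below are stand-ins, not the paper's tokenized Shanghanlun passages:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy corpus: 3 classes, each favoring a different slice of a
# 30-word vocabulary (synthetic stand-in for tokenized text).
n, vocab, classes = 120, 30, 3
labels = rng.integers(0, classes, n)
X = rng.poisson(1.0, size=(n, vocab)).astype(float)
for c in range(classes):
    X[labels == c, c * 10:(c + 1) * 10] += rng.poisson(3.0, size=(np.sum(labels == c), 10))

Y = np.eye(classes)[labels]                      # one-hot targets
k = 1.0                                          # illustrative ridge parameter
W = np.linalg.solve(X.T @ X + k * np.eye(vocab), X.T @ Y)
pred = np.argmax(X @ W, axis=1)                  # predicted class = largest score
acc = np.mean(pred == labels)
print("training accuracy:", acc)
```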

Illumination Robust Face Recognition using Ridge Regressive Bilinear Models

  • 신동수;김대진;방승양
    • Journal of KIISE: Software and Applications / Vol. 34, No. 1 / pp. 70-78 / 2007
  • Face recognition systems are strongly affected by illumination changes, because the intra-person variation caused by illumination can exceed the inter-person variation. To address this problem, we propose a method that separates the illumination factor and the identity factor using a symmetric bilinear model. The translation step that extracts the illumination and identity factors from a bilinear model requires the iterative computation of inverse matrices, which may fail to converge depending on the input data. To alleviate this problem, we propose a ridge regressive bilinear model that combines a ridge regression model with the bilinear model. The proposed model stabilizes the bilinear model by appropriately reducing the variance of the illumination and identity factors, and improves recognition performance by allowing more high-dimensional factor information to be used for recognition. Experimental results confirm that the proposed ridge regressive bilinear model outperforms the plain bilinear model, the eigenface method, and the Quotient Image method.
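The stabilizing role of the ridge term in the bilinear translation step can be sketched as ridge-regularized alternating least squares: with one factor fixed, solving for the other is a linear problem, and the lambda*I term keeps each solve well-conditioned. The dimensions, tensor W, and restart scheme below are toy assumptions, not the paper's face-image model:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy bilinear setup: x = W x2 a x3 b for an identity factor a and an
# illumination factor b.
d, I, J = 50, 4, 3
W = rng.normal(size=(d, I, J))          # fixed bilinear interaction tensor
a_true, b_true = rng.normal(size=I), rng.normal(size=J)
x = np.einsum('dij,i,j->d', W, a_true, b_true)
lam = 1e-2                              # ridge term stabilizing each solve

def fit(seed, iters=200):
    r = np.random.default_rng(seed)
    a, b = r.normal(size=I), r.normal(size=J)
    for _ in range(iters):
        Mb = np.einsum('dij,j->di', W, b)           # x ~ Mb @ a with b fixed
        a = np.linalg.solve(Mb.T @ Mb + lam * np.eye(I), Mb.T @ x)
        Ma = np.einsum('dij,i->dj', W, a)           # x ~ Ma @ b with a fixed
        b = np.linalg.solve(Ma.T @ Ma + lam * np.eye(J), Ma.T @ x)
    return a, b

def recon(ab):
    return np.einsum('dij,i,j->d', W, ab[0], ab[1])

# a few random restarts guard against poor local minima
a_hat, b_hat = min((fit(s) for s in range(5)),
                   key=lambda ab: np.sum((x - recon(ab)) ** 2))
x_hat = recon((a_hat, b_hat))
print("relative reconstruction error:", np.mean((x - x_hat) ** 2) / np.mean(x ** 2))
```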

Calibration of the Ridge Regression Model with the Genetic Algorithm: Study on the Regional Flood Frequency Analysis

  • 성기원
    • Journal of Korea Water Resources Association / Vol. 31, No. 1 / pp. 59-69 / 1998
  • For regional flood frequency analysis, a regression model using basin geomorphological characteristics as independent variables was calibrated. When these independent variables are correlated, ridge regression is often used; it is known to be a suitable method for overcoming the multicollinearity problem. Optimizing the ridge regression model requires minimizing a cost function that includes a tuning parameter. In this study, a genetic algorithm, a stochastic search method that mimics genetic inheritance and evolution in nature, was used for this optimization. Calibrating the regional analysis model with the genetic algorithm yielded stable parameter weights.

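The GA-based calibration idea can be illustrated on a toy ridge problem. Here the cost function is validation MSE rather than the paper's regional-analysis cost, the data are synthetic rather than basin characteristics, and the GA operators are a minimal elitist scheme:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic collinear regression problem with a train/validation split.
n, p = 60, 6
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=n)       # induce collinearity
beta = rng.normal(size=p)
y = X @ beta + 0.5 * rng.normal(size=n)
Xtr, ytr, Xva, yva = X[:40], y[:40], X[40:], y[40:]

def cost(k):
    """Validation MSE of the ridge fit with parameter k."""
    b = np.linalg.solve(Xtr.T @ Xtr + k * np.eye(p), Xtr.T @ ytr)
    return np.mean((yva - Xva @ b) ** 2)

pop = rng.uniform(0.0, 5.0, size=20)                # individuals = candidate k values
for _ in range(30):
    fitness = np.array([cost(k) for k in pop])
    parents = pop[np.argsort(fitness)][:10]         # selection: keep the best half
    children = 0.5 * (parents + rng.permutation(parents))    # crossover: averaging
    children = np.abs(children + 0.1 * rng.normal(size=10))  # mutation, k kept >= 0
    pop = np.concatenate([parents, children])

k_best = pop[np.argmin([cost(k) for k in pop])]
print("calibrated ridge parameter:", k_best)
```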

Estimation of error variance in nonparametric regression under a finite sample using ridge regression

  • Park, Chun-Gun
    • Journal of the Korean Data and Information Science Society / Vol. 22, No. 6 / pp. 1223-1232 / 2011
  • Tong and Wang's estimator (2005) is a new approach that estimates the error variance by least squares, such that a simple linear regression is asymptotically derived from Rice's lag estimator (1984). Their estimator depends heavily on the choice of regressor and weights in small samples. In this article, we propose a new approach via a local quadratic approximation to set the regressors in the small-sample case. We estimate the error variance as the intercept of a ridge regression, because the regressors suffer from multicollinearity. A small simulation study shows that our approach performs better than some existing methods in small samples and comparably in large ones. More research is required for unequally spaced points.
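The intercept-as-variance idea can be sketched in the spirit of the Tong-Wang construction (not their exact estimator): the lag-k difference statistic s_k is roughly sigma^2 plus a term growing with k, so the intercept of a fit of s_k on powers of the lag estimates the error variance, and the highly correlated powers motivate the ridge penalty. The lag range, regressor powers, and penalty below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(8)

# Simulated nonparametric regression data with known error variance.
n = 500
x = np.linspace(0.0, 1.0, n)
sigma = 0.3
y = np.sin(2 * np.pi * x) + sigma * rng.normal(size=n)

# Lag-difference statistics: E[s_k] ~ sigma^2 + (smooth term in k).
m = 10
lags = np.arange(1, m + 1)
s = np.array([np.mean((y[k:] - y[:-k]) ** 2) / 2.0 for k in lags])

# Design: intercept plus two collinear powers of the (scaled) lag.
D = np.column_stack([np.ones(m), (lags / m) ** 2, (lags / m) ** 3])
lam = 1e-2
P = np.eye(3)
P[0, 0] = 0.0                          # leave the intercept unpenalized
coef = np.linalg.solve(D.T @ D + lam * P, D.T @ s)
sigma2_hat = coef[0]                   # intercept = error variance estimate
print("estimated error variance:", sigma2_hat, "(true:", sigma ** 2, ")")
```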

Study on the ensemble methods with kernel ridge regression

  • Kim, Sun-Hwa;Cho, Dae-Hyeon;Seok, Kyung-Ha
    • Journal of the Korean Data and Information Science Society / Vol. 23, No. 2 / pp. 375-383 / 2012
  • The purpose of ensemble methods is to increase prediction accuracy by combining many classifiers. Recent studies have shown that random forests and forward stagewise regression achieve good accuracy in classification problems. However, they suffer large prediction errors near separation boundaries because they use decision trees as base learners. In this study, we use kernel ridge regression instead of decision trees in random forests and boosting. The usefulness of the proposed ensemble methods is shown by simulation results on the prostate cancer and Boston housing data.
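The bagging side of this idea, with kernel ridge regression as the base learner, can be sketched as follows: fit one KRR model per bootstrap resample and average the predictions. The data, kernel width, regularizer, and number of rounds are illustrative, and the boosting variant is not shown:

```python
import numpy as np

rng = np.random.default_rng(9)

def rbf(A, B, gamma=20.0):
    return np.exp(-gamma * (A[:, None] - B[None, :]) ** 2)

# Toy regression data and a test grid.
n = 100
x = rng.uniform(0, 1, n)
y = np.sin(4 * x) + 0.2 * rng.normal(size=n)
x_test = np.linspace(0, 1, 50)

lam = 1e-2
preds = []
for _ in range(25):
    idx = rng.integers(0, n, n)                       # bootstrap resample
    xb, yb = x[idx], y[idx]
    # KRR base learner fit on the resample (lam*I also absorbs the
    # duplicate rows a bootstrap sample creates in the kernel matrix)
    alpha = np.linalg.solve(rbf(xb, xb) + lam * np.eye(n), yb)
    preds.append(rbf(x_test, xb) @ alpha)

ensemble = np.mean(preds, axis=0)                     # average the base predictions
mse = np.mean((ensemble - np.sin(4 * x_test)) ** 2)
print("ensemble MSE vs true function:", mse)
```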