• Title/Abstract/Keyword: LS-SVM

Search results: 51 items (processing time: 0.034 seconds)

Deep LS-SVM for regression

  • Hwang, Changha;Shim, Jooyong
    • Journal of the Korean Data and Information Science Society / Vol. 27, No. 3 / pp. 827-833 / 2016
  • In this paper, we propose a deep least squares support vector machine (LS-SVM) for regression problems, which consists of the input layer and the hidden layer. In the hidden layer, LS-SVMs are trained with the original input variables and the perturbed responses. For the final output, the main LS-SVM is trained with the outputs from the LS-SVMs of the hidden layer as input variables and the original responses. In contrast to the multilayer neural network (MNN), the LS-SVMs in the deep LS-SVM are each trained to minimize a penalized objective function. Thus, the learning dynamics of the deep LS-SVM are entirely different from those of the MNN, in which all weights and biases are trained to minimize one final error function. Compared to MNN approaches, the deep LS-SVM does not make use of any combination weights, but trains all LS-SVMs in the architecture. Experimental results from real datasets illustrate that the deep LS-SVM significantly outperforms state-of-the-art machine learning methods on regression problems.
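
The abstract describes the architecture concretely enough for a rough sketch: hidden-layer LS-SVMs are fit on the original inputs with perturbed responses, and a main LS-SVM is then fit on their outputs. Below is a minimal NumPy sketch of that idea; the RBF kernel, the Gaussian perturbation scale, the number of hidden machines and all hyperparameter values are assumptions for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, s=1.0):
    """RBF kernel matrix between rows of A and rows of B."""
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d / (2 * s ** 2))

def lssvm_fit(X, y, gamma=10.0, s=1.0):
    """Solve the LS-SVM regression system  [[0, 1'], [1, K + I/gamma]] [b; a] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, s) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.r_[0.0, y])
    return sol[0], sol[1:]                      # bias b, dual coefficients alpha

def lssvm_predict(Xq, X, b, a, s=1.0):
    return rbf(Xq, X, s) @ a + b

# Toy regression data (assumption: the paper uses real benchmark data sets instead).
X = rng.uniform(-3, 3, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.1, 200)

# Hidden layer: several LS-SVMs trained on the original inputs with perturbed responses.
n_hidden = 5
hidden = []
for _ in range(n_hidden):
    y_pert = y + rng.normal(0, 0.1, len(y))     # perturbation scale is an assumption
    hidden.append(lssvm_fit(X, y_pert))

# Outputs of the hidden LS-SVMs become the input variables of the main (output) LS-SVM.
H = np.column_stack([lssvm_predict(X, X, b, a) for b, a in hidden])
b_out, a_out = lssvm_fit(H, y)

# Prediction for new points: pass them through the hidden layer, then the main LS-SVM.
Xq = rng.uniform(-3, 3, size=(5, 2))
Hq = np.column_stack([lssvm_predict(Xq, X, b, a) for b, a in hidden])
print(lssvm_predict(Hq, H, b_out, a_out))
```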

Weighted Support Vector Machines for Heteroscedastic Regression

  • Park, Hye-Jung;Hwang, Chang-Ha
    • Journal of the Korean Data and Information Science Society / Vol. 17, No. 2 / pp. 467-474 / 2006
  • In this paper we present a weighted support vector machine (SVM) and a weighted least squares support vector machine (LS-SVM) for prediction in the heteroscedastic regression model. By adding weights to the standard SVM and LS-SVM, a better fit can be achieved when the errors are heteroscedastic. In the numerical studies, we illustrate the prediction performance of the proposed procedure by comparing it with the procedure that combines the standard SVM and LS-SVM with the wild bootstrap for prediction.
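
In the standard LS-SVM dual system the ridge term is I/gamma; a weighted LS-SVM replaces it with a diagonal matrix of per-observation penalties, so observations with larger error variance contribute less to the fit. The sketch below illustrates this, assuming an RBF kernel and a simple two-step scheme (unweighted fit, then reweighting by a rough local variance estimate); the authors' actual weighting rule is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(A, B, s=1.0):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d / (2 * s ** 2))

def lssvm_fit(X, y, gamma=10.0, s=1.0, w=None):
    """LS-SVM regression; w are optional per-observation weights (weighted LS-SVM)."""
    n = len(y)
    w = np.ones(n) if w is None else w
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    # Weighted version: the ridge term becomes diag(1 / (gamma * w_i)).
    A[1:, 1:] = rbf(X, X, s) + np.diag(1.0 / (gamma * w))
    sol = np.linalg.solve(A, np.r_[0.0, y])
    return sol[0], sol[1:]

def lssvm_predict(Xq, X, b, a, s=1.0):
    return rbf(Xq, X, s) @ a + b

# Heteroscedastic toy data: the noise variance grows with x.
x = np.sort(rng.uniform(0, 4, 200))
y = np.sin(x) + rng.normal(0, 0.05 + 0.2 * x)
X = x[:, None]

# Step 1: unweighted fit to get residuals.
b0, a0 = lssvm_fit(X, y)
res = y - lssvm_predict(X, X, b0, a0)

# Step 2: refit with weights inversely proportional to a rough local variance estimate
# (assumption: a simple smoothed squared-residual estimate stands in for the paper's scheme).
var_hat = np.convolve(res ** 2, np.ones(15) / 15, mode="same")
b1, a1 = lssvm_fit(X, y, w=1.0 / np.maximum(var_hat, 1e-6))
print(lssvm_predict(np.array([[1.0], [3.5]]), X, b1, a1))
```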

LS-SVM for large data sets

  • Park, Hongrak;Hwang, Hyungtae;Kim, Byungju
    • Journal of the Korean Data and Information Science Society / Vol. 27, No. 2 / pp. 549-557 / 2016
  • In this paper we propose a multiclassification method for large data sets that ensembles least squares support vector machines (LS-SVM) trained on principal components instead of the raw input vector. We use the revised one-vs-all method for multiclassification, a voting scheme based on combining several binary classifications. The revised one-vs-all method is performed by using the hat matrix of the LS-SVM ensemble, which is obtained by ensembling LS-SVMs trained on random samples from the whole large training data. The leave-one-out cross validation (CV) function is used to select the optimal values of the hyperparameters that affect the performance of the multiclass LS-SVM ensemble. We present a generalized cross validation function to reduce the computational burden of the leave-one-out CV function. Experimental results from real data sets are then obtained to illustrate the performance of the proposed multiclass LS-SVM ensemble.
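
A minimal sketch of the overall recipe (principal components as inputs, LS-SVMs trained on random subsamples, one-vs-all voting over the aggregated outputs) is given below. It treats each one-vs-all machine as an LS-SVM regression on ±1 targets and averages decision values over the ensemble members; the hat-matrix formulation and the generalized CV function of the paper are not reproduced, and the data, number of components, subsample size and hyperparameters are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf(A, B, s=1.0):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d / (2 * s ** 2))

def lssvm_fit(X, y, gamma=10.0, s=1.0):
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, s) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.r_[0.0, y])
    return sol[0], sol[1:]

def lssvm_predict(Xq, X, b, a, s=1.0):
    return rbf(Xq, X, s) @ a + b

# Toy 3-class data in 10 dimensions standing in for a "large" data set.
n, p, k = 1500, 10, 3
labels = rng.integers(0, k, n)
X = rng.normal(size=(n, p)) + 3.0 * np.eye(k)[labels, :].repeat(p // k + 1, axis=1)[:, :p]

# Principal components replace the raw inputs (the number of components is an assumption).
Xc = X - X.mean(0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:4].T

# Ensemble: each member is a set of one-vs-all LS-SVMs trained on a random subsample.
members = []
for _ in range(10):
    idx = rng.choice(n, size=200, replace=False)
    per_class = [lssvm_fit(Z[idx], np.where(labels[idx] == c, 1.0, -1.0)) for c in range(k)]
    members.append((idx, per_class))

def predict(Zq):
    # Average the one-vs-all decision values over the ensemble, then vote by argmax.
    scores = np.zeros((len(Zq), k))
    for idx, per_class in members:
        for c, (b, a) in enumerate(per_class):
            scores[:, c] += lssvm_predict(Zq, Z[idx], b, a)
    return scores.argmax(axis=1)

print((predict(Z[:200]) == labels[:200]).mean())   # training-sample accuracy of the ensemble
```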

Prediction of unmeasured mode shapes and structural damage detection using least squares support vector machine

  • Kourehli, Seyed Sina
    • Structural Monitoring and Maintenance / Vol. 5, No. 3 / pp. 379-390 / 2018
  • In this paper, a novel and effective damage diagnosis algorithm is proposed to detect and estimate damage using a two-stage least squares support vector machine (LS-SVM) and a limited number of sensors attached to the structure. In the first stage, LS-SVM1 is used to predict the unmeasured mode shape data from the limited measured modal data, and in the second stage, LS-SVM2 is used to predict the damage location and severity using the complete modal data from the first-stage LS-SVM1. The presented method is applied to a three-story irregular frame and a cantilever plate. To investigate the effects of noise and modeling errors, two uncertainty levels have been considered. Moreover, the performance of the proposed method has been verified using experimental modal data of a mass-stiffness system. The obtained damage identification results show the suitable performance of the proposed damage identification method under different uncertainty levels.
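
The two-stage idea can be sketched as two chained LS-SVM regressions: stage one maps the measured mode-shape components to the unmeasured ones, and stage two maps the completed mode shape to a damage quantity. The code below is such a sketch on synthetic stand-in data; the actual modal features, damage parameterization and hyperparameters of the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

def rbf(A, B, s=1.0):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d / (2 * s ** 2))

def lssvm_fit(X, y, gamma=10.0, s=1.0):
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, s) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.r_[0.0, y])
    return sol[0], sol[1:]

def lssvm_predict(Xq, X, b, a, s=1.0):
    return rbf(Xq, X, s) @ a + b

# Synthetic stand-in for a modal data set: 6 mode-shape components per training case,
# of which only the first 3 are "measured", plus a scalar damage severity per case.
n_cases = 300
modes = rng.normal(size=(n_cases, 6))
severity = np.abs(modes).mean(axis=1) + rng.normal(0, 0.05, n_cases)
measured, unmeasured = modes[:, :3], modes[:, 3:]

# Stage 1 (LS-SVM1): one LS-SVM per unmeasured component, driven by the measured ones.
stage1 = [lssvm_fit(measured, unmeasured[:, j]) for j in range(unmeasured.shape[1])]

# Stage 2 (LS-SVM2): damage severity from the complete (measured + predicted) mode shape.
completed = np.column_stack(
    [measured] + [lssvm_predict(measured, measured, b, a)[:, None] for b, a in stage1])
b2, a2 = lssvm_fit(completed, severity)

# Diagnosis of a new case with only the measured components available.
new_measured = rng.normal(size=(1, 3))
new_completed = np.column_stack(
    [new_measured] + [lssvm_predict(new_measured, measured, b, a)[:, None] for b, a in stage1])
print(lssvm_predict(new_completed, completed, b2, a2))
```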

Software Reliability Assessment with Fuzzy Least Squares Support Vector Machine Regression

  • Hwang, Chang-Ha;Hong, Dug-Hun;Kim, Jang-Han
    • Journal of the Korean Institute of Intelligent Systems / Vol. 13, No. 4 / pp. 486-490 / 2003
  • Software quality models can predict the risk of faults in software early enough for cost-effective prevention of problems. This paper introduces a least squares support vector machine (LS-SVM) as a fuzzy regression method for predicting fault ranges in software under development. This LS-SVM deals with fuzzy data with crisp inputs and fuzzy output. Predicting the exact number of bugs in software is often not necessary. This LS-SVM can predict the interval into which the number of faults of the program at each session falls, with a certain possibility. A case study on a software reliability problem is used to illustrate the usefulness of this LS-SVM.
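
One simple way to realize crisp-input/fuzzy-output regression with LS-SVM is to model the center and the spread of a symmetric triangular fuzzy response with two separate machines and report the resulting interval. The sketch below does exactly that on synthetic stand-in fault data; it is an illustrative assumption, not the fuzzy LS-SVM formulation used in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def rbf(A, B, s=5.0):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d / (2 * s ** 2))

def lssvm_fit(X, y, gamma=10.0, s=5.0):
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, s) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.r_[0.0, y])
    return sol[0], sol[1:]

def lssvm_predict(Xq, X, b, a, s=5.0):
    return rbf(Xq, X, s) @ a + b

# Stand-in fault data: for each test session, a fuzzy fault count given as (center, spread),
# i.e. a symmetric triangular fuzzy number, with the session index as the crisp input.
sessions = np.arange(1, 31, dtype=float)[:, None]
center = 50 * np.exp(-0.1 * sessions[:, 0]) + rng.normal(0, 1.0, 30)   # fault counts
spread = 2.0 + 0.1 * center                                            # uncertainty per count

# One LS-SVM for the centers and one for the spreads of the fuzzy output.
bc, ac = lssvm_fit(sessions, center)
bs, as_ = lssvm_fit(sessions, spread)

# Predicted fuzzy fault count for a future session, reported as an interval
# (center +/- spread, the support of the predicted triangular fuzzy number).
q = np.array([[35.0]])
c_hat = lssvm_predict(q, sessions, bc, ac)[0]
s_hat = lssvm_predict(q, sessions, bs, as_)[0]
print(f"predicted fault count in [{c_hat - s_hat:.1f}, {c_hat + s_hat:.1f}]")
```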

Training for Huge Data Sets with On-line Pruning Regression by LS-SVM

  • Kim, Dae-Hak;Shim, Joo-Yong;Oh, Kwang-Sik
    • The Korean Statistical Society: Proceedings of the Conference / Proceedings of the 2003 Fall Conference / pp. 137-141 / 2003
  • LS-SVM (least squares support vector machine) is a widely applicable and useful machine learning technique for classification and regression analysis. LS-SVM can be a good substitute for statistical methods, but computational difficulties remain in inverting the matrix for huge data sets. In the modern information society, we can easily obtain huge data sets in on-line or batch mode. For these kinds of huge data sets, we suggest an on-line pruning regression method based on LS-SVM. With a relatively small number of pruned support vectors, we can achieve almost the same performance as regression with the full data set.
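
A minimal sketch of on-line pruning regression: data arrive in chunks, a bounded working set of support vectors is maintained, and after each refit the vectors with the smallest |alpha| are pruned away, so only small linear systems ever need to be solved. The chunk thinning, the pruning budget and the hyperparameters below are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(5)

def rbf(A, B, s=1.0):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d / (2 * s ** 2))

def lssvm_fit(X, y, gamma=10.0, s=1.0):
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, s) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.r_[0.0, y])
    return sol[0], sol[1:]

def lssvm_predict(Xq, X, b, a, s=1.0):
    return rbf(Xq, X, s) @ a + b

budget = 150                      # maximum number of retained support vectors
Xsv = np.empty((0, 1)); ysv = np.empty(0)

# Data arrive in chunks (on-line / batch mode); only the pruned support-vector set is kept,
# so no matrix larger than (budget + kept chunk) ever has to be inverted.
for _ in range(20):
    x_new = rng.uniform(-3, 3, size=(500, 1))
    y_new = np.sinc(x_new[:, 0]) + rng.normal(0, 0.1, 500)
    # A random thinning of each incoming chunk keeps the working problem small (an assumption).
    keep = rng.choice(500, size=100, replace=False)
    Xsv = np.vstack([Xsv, x_new[keep]]); ysv = np.r_[ysv, y_new[keep]]
    b, a = lssvm_fit(Xsv, ysv)
    if len(ysv) > budget:
        # Prune the support vectors with the smallest |alpha| (the classical pruning rule).
        top = np.argsort(np.abs(a))[-budget:]
        Xsv, ysv = Xsv[top], ysv[top]

b, a = lssvm_fit(Xsv, ysv)
print(len(ysv), lssvm_predict(np.array([[0.0], [1.5]]), Xsv, b, a))
```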

Evaluation of soil-concrete interface shear strength based on LS-SVM

  • Zhang, Chunshun;Ji, Jian;Gui, Yilin;Kodikara, Jayantha;Yang, Sheng-Qi;He, Lei
    • Geomechanics and Engineering / Vol. 11, No. 3 / pp. 361-372 / 2016
  • The soil-concrete interface shear strength, although it has been extensively studied, is still difficult to predict because it depends on many factors such as normal stress, surface roughness, particle size, moisture content, the dilation angle of the soil, etc. In this study, a well-known rigorous statistical learning approach, namely the least squares support vector machine (LS-SVM), realized on a ubiquitous spreadsheet platform, is first used to estimate the soil-structure interface shear strength. Instead of studying the complicated mechanism, LS-SVM makes it possible to explore the link between the fundamental factors and the interface shear strength via a sophisticated statistical approach. As a preliminary investigation, the authors study expansive soils, which are found extensively in most countries. To reduce the complexity, three major influential factors, namely the initial moisture content, initial dry density and normal stress of the soil, are taken into account in developing the LS-SVM models for the soil-concrete interface shear strength. The results predicted by LS-SVM show reasonably good agreement with experimental data from direct shear tests.
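
As a rough illustration of the modelling setup (three standardized inputs, one LS-SVM regression for the interface shear strength), the sketch below fits an RBF LS-SVM to synthetic stand-in shear-test records; the data, units and hyperparameters are assumptions and not the paper's direct shear test results.

```python
import numpy as np

rng = np.random.default_rng(6)

def rbf(A, B, s=1.0):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d / (2 * s ** 2))

def lssvm_fit(X, y, gamma=10.0, s=1.0):
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, s) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.r_[0.0, y])
    return sol[0], sol[1:]

def lssvm_predict(Xq, X, b, a, s=1.0):
    return rbf(Xq, X, s) @ a + b

# Synthetic stand-in for direct shear test records:
# columns are initial moisture content (%), initial dry density (g/cm^3), normal stress (kPa).
X = np.column_stack([rng.uniform(10, 30, 80),
                     rng.uniform(1.4, 1.8, 80),
                     rng.uniform(50, 400, 80)])
tau = 5 + 0.4 * X[:, 2] * np.tan(np.radians(25)) - 0.3 * X[:, 0] + rng.normal(0, 2, 80)

# Standardise the three inputs so a single RBF bandwidth is meaningful.
mu, sd = X.mean(0), X.std(0)
Z = (X - mu) / sd
b, a = lssvm_fit(Z, tau, gamma=50.0, s=1.5)

# Predicted interface shear strength for a new (moisture, density, normal stress) condition.
q = (np.array([[20.0, 1.6, 200.0]]) - mu) / sd
print(lssvm_predict(q, Z, b, a, s=1.5))
```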

Confidence Interval Estimation Using SV in LS-SVM

  • Seok, Kyung-Ha
    • Journal of the Korean Data and Information Science Society / Vol. 14, No. 3 / pp. 451-459 / 2003
  • This paper suggests a method to estimate confidence intervals using the SVs (support vectors) in LS-SVM (least squares support vector machine). The proposed method relies on the fact that the values of the Hessian matrix obtained from the full data set and from the SVs do not differ significantly. Since the suggested method uses only the SVs, a part of the full data, it saves computing time and memory space. A simulation study justifies the proposed method.
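
The sketch below illustrates the general mechanics only: fit LS-SVM on the full data, keep the observations with the largest |alpha| as the SVs, refit on the SVs, and form a normal-approximation pointwise interval from the resulting linear smoother. The SV selection rule and the interval construction are assumptions for illustration, not the paper's derivation.

```python
import numpy as np

rng = np.random.default_rng(7)

def rbf(A, B, s=1.0):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d / (2 * s ** 2))

def lssvm_system(X, gamma=10.0, s=1.0):
    n = len(X)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, s) + np.eye(n) / gamma
    return A

def lssvm_fit(X, y, gamma=10.0, s=1.0):
    sol = np.linalg.solve(lssvm_system(X, gamma, s), np.r_[0.0, y])
    return sol[0], sol[1:]

# Toy data and a full-data fit, used only to identify the support vectors.
x = np.sort(rng.uniform(-3, 3, 400)); X = x[:, None]
y = np.sin(x) + rng.normal(0, 0.2, 400)
_, alpha_full = lssvm_fit(X, y)

# "SVs": the observations with the largest |alpha| in the full fit (selection rule is an assumption).
sv = np.argsort(np.abs(alpha_full))[-80:]
Xs, ys = X[sv], y[sv]

# Refit on the SVs only; as a linear smoother, f(x) = l(x)'y, so a normal-approximation
# pointwise interval is f(x) +/- 1.96 * sigma * ||l(x)||.
A = lssvm_system(Xs)
Ainv = np.linalg.inv(A)

def smoother_weights(Xq):
    return np.hstack([np.ones((len(Xq), 1)), rbf(Xq, Xs)]) @ Ainv[:, 1:]

fit_sv = smoother_weights(Xs) @ ys
sigma = np.std(ys - fit_sv)
Xq = np.array([[0.0], [1.0], [2.0]])
L = smoother_weights(Xq)
f = L @ ys
half = 1.96 * sigma * np.sqrt((L ** 2).sum(axis=1))
print(np.column_stack([f - half, f, f + half]))   # lower bound, fit, upper bound
```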

Variable selection for multiclassification by LS-SVM

  • Hwang, Hyung-Tae
    • Journal of the Korean Data and Information Science Society / Vol. 21, No. 5 / pp. 959-965 / 2010
  • For multiclassification, it is often the case that some variables are not important, while some variables are more important than others. We propose a novel algorithm for selecting such relevant variables for multiclassification. This algorithm is based on the multiclass least squares support vector machine (LS-SVM) and uses the results of the multiclass LS-SVM with the one-vs-all method. Experimental results are then presented which indicate the performance of the proposed method.
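
As a rough illustration of variable relevance scoring with a one-vs-all multiclass LS-SVM, the sketch below ranks variables by the accuracy lost on held-out data when each variable is removed. This leave-one-variable-out criterion is a stand-in assumption; the paper's actual selection criterion is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(8)

def rbf(A, B, s=1.0):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d / (2 * s ** 2))

def lssvm_fit(X, y, gamma=10.0, s=1.0):
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, s) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.r_[0.0, y])
    return sol[0], sol[1:]

def one_vs_all_accuracy(Xtr, ytr, Xte, yte, k):
    # One-vs-all multiclass LS-SVM: one machine per class, vote by the largest output.
    machines = [lssvm_fit(Xtr, np.where(ytr == c, 1.0, -1.0)) for c in range(k)]
    scores = np.column_stack([rbf(Xte, Xtr) @ a + b for b, a in machines])
    return (scores.argmax(axis=1) == yte).mean()

# Toy 3-class data where only the first two of six variables are informative.
n, k = 300, 3
labels = rng.integers(0, k, n)
X = rng.normal(size=(n, 6))
X[:, 0] += 2.0 * labels
X[:, 1] -= 1.5 * labels
tr, te = np.arange(0, 200), np.arange(200, n)

# Score each variable by the accuracy lost when it is left out (a simple stand-in criterion).
base = one_vs_all_accuracy(X[tr], labels[tr], X[te], labels[te], k)
drop = []
for j in range(X.shape[1]):
    Xj = np.delete(X, j, axis=1)
    drop.append(base - one_vs_all_accuracy(Xj[tr], labels[tr], Xj[te], labels[te], k))
print(np.argsort(drop)[::-1])   # variables ranked from most to least relevant
```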

Multiclass LS-SVM ensemble for large data

  • Hwang, Hyungtae
    • Journal of the Korean Data and Information Science Society / Vol. 26, No. 6 / pp. 1557-1563 / 2015
  • Multiclass classification is typically performed using a voting scheme based on combining binary classifications. In this paper we propose a multiclass classification method for large data, which can be regarded as a revised one-vs-all method. The multiclass classification is performed by using the hat matrix of a least squares support vector machine (LS-SVM) ensemble, which is obtained by aggregating individual LS-SVMs trained on subsets of the whole large data set. A cross validation function is defined to select the optimal values of the hyperparameters that affect the performance of the proposed multiclass LS-SVM. We derive the generalized cross validation function to reduce the computational burden of the cross validation function. Experimental results are then presented which indicate the performance of the proposed method.
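
The hat-matrix view makes the computational saving concrete: for a linear smoother with fitted values H y, generalized cross validation needs only one fit per hyperparameter setting instead of n leave-one-out fits. The sketch below computes the LS-SVM hat matrix and evaluates a standard GCV score over a small grid for one-vs-all ±1 targets; the ensemble aggregation and the exact GCV form of the paper are not reproduced, and the toy data and grid are assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

def rbf(A, B, s=1.0):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d / (2 * s ** 2))

def lssvm_hat_matrix(X, gamma, s):
    """Hat (smoother) matrix H, so that the LS-SVM fitted values are H @ y."""
    n = len(X)
    K = rbf(X, X, s)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    return (np.hstack([np.ones((n, 1)), K]) @ np.linalg.inv(A))[:, 1:]

def gcv(y, H):
    # Generalized cross validation: one fit per hyperparameter value instead of n LOO fits.
    n = len(y)
    resid = y - H @ y
    return n * (resid @ resid) / (n - np.trace(H)) ** 2

# One-vs-all +/-1 targets for a toy 3-class problem (stand-in data, not the paper's data sets);
# GCV is evaluated here for a single ensemble member only.
n, k = 400, 3
labels = rng.integers(0, k, n)
X = rng.normal(size=(n, 4)) + 2.0 * np.eye(k)[labels, :3].repeat(2, axis=1)[:, :4]
Y = np.where(labels[:, None] == np.arange(k), 1.0, -1.0)

def total_gcv(gs):
    H = lssvm_hat_matrix(X, *gs)                 # H does not depend on the targets
    return sum(gcv(Y[:, c], H) for c in range(k))

grid = [(g, s) for g in (1.0, 10.0, 100.0) for s in (0.5, 1.0, 2.0)]
print("selected (gamma, sigma) by GCV:", min(grid, key=total_gcv))
```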