• Title/Summary/Keyword: support vector regression machine

Search results: 386

Hybrid Learning Algorithm for Improving Performance of Regression Support Vector Machine (회귀용 Support Vector Machine의 성능개선을 위한 조합형 학습알고리즘)

  • Jo, Yong-Hyeon; Park, Chang-Hwan; Park, Yong-Su
    • The KIPS Transactions: PartB, v.8B no.5, pp.477-484, 2001
  • This paper proposes a hybrid learning algorithm that combines momentum and the kernel-adatron algorithm to improve the performance of support vector regression machines. Momentum is used for fast convergence by restraining oscillation while the solution approaches the optimum, and the kernel-adatron algorithm is used for its ability to work in nonlinear feature spaces and its simple implementation. The proposed algorithm is applied to one- and two-dimensional nonlinear function regression problems. Simulation results show that it achieves better learning speed and regression performance than the quadratic programming and kernel-adatron algorithms.
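The abstract names the two ingredients but not the exact update rules, so the following is only a rough sketch of the idea: projected gradient ascent with a momentum term on the standard ε-insensitive SVR dual, dropping the equality constraint as kernel-adatron-style solvers typically do. All function names and parameter values are illustrative.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    """Gaussian (RBF) kernel matrix between two sample sets."""
    d2 = (np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-gamma * d2)

def svr_dual_momentum(K, y, C=1.0, eps=0.1, lr=1e-3, beta=0.9, iters=2000):
    """Projected gradient ascent with momentum on the eps-insensitive SVR dual.

    Maximizes  y^T(a - a*) - eps * sum(a + a*) - 0.5 (a - a*)^T K (a - a*)
    subject to 0 <= a, a* <= C; the equality (bias) constraint is ignored,
    as in kernel-adatron-style solvers that absorb the bias into the kernel.
    """
    n = len(y)
    a, a_star = np.zeros(n), np.zeros(n)
    v, v_star = np.zeros(n), np.zeros(n)           # momentum buffers
    for _ in range(iters):
        Kd = K @ (a - a_star)
        grad_a, grad_a_star = y - Kd - eps, Kd - y - eps
        v      = beta * v      + lr * grad_a        # momentum damps oscillation
        v_star = beta * v_star + lr * grad_a_star
        a      = np.clip(a + v, 0.0, C)             # project back into the box
        a_star = np.clip(a_star + v_star, 0.0, C)
    return a - a_star                               # combined coefficients

# Usage on a one-dimensional toy regression problem.
X = np.linspace(-3, 3, 60)[:, None]
y = np.sinc(X).ravel() + 0.05 * np.random.randn(60)
coef = svr_dual_momentum(rbf_kernel(X, X), y)
y_hat = rbf_kernel(X, X) @ coef                     # in-sample prediction
```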

Semi-supervised regression based on support vector machine

  • Seok, Kyungha
    • Journal of the Korean Data and Information Science Society, v.25 no.2, pp.447-454, 2014
  • In many practical machine learning and data mining applications, unlabeled training examples are readily available, but labeled ones are fairly expensive to obtain. Semi-supervised learning algorithms have therefore attracted much attention. However, previous research has mainly focused on classification problems. In this paper, a semi-supervised regression method based on the support vector regression (SVR) formulation is proposed. The estimator is easily obtained via the dual formulation of the optimization problem. Experimental results with simulated and real data suggest superior performance of the proposed method compared with standard SVR.
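The paper's estimator comes from a dual formulation that the abstract does not spell out; as a loose stand-in, the sketch below shows a generic self-training baseline for semi-supervised regression built on scikit-learn's SVR, with pseudo-labelled points down-weighted. The helper name and the 0.3 weight are assumptions.

```python
import numpy as np
from sklearn.svm import SVR

def self_training_svr(X_lab, y_lab, X_unlab, unlab_weight=0.3, **svr_kw):
    """Generic self-training baseline (not the paper's dual-based estimator).

    1. Fit SVR on the labelled data only.
    2. Pseudo-label the unlabelled inputs with that model.
    3. Refit on labelled + pseudo-labelled data, down-weighting the latter.
    """
    base = SVR(**svr_kw).fit(X_lab, y_lab)
    y_pseudo = base.predict(X_unlab)

    X_all = np.vstack([X_lab, X_unlab])
    y_all = np.concatenate([y_lab, y_pseudo])
    w_all = np.concatenate([np.ones(len(y_lab)),
                            unlab_weight * np.ones(len(y_pseudo))])
    return SVR(**svr_kw).fit(X_all, y_all, sample_weight=w_all)

# Usage: 20 labelled and 200 unlabelled points drawn from the same curve.
rng = np.random.default_rng(0)
X_lab = rng.uniform(-3, 3, (20, 1))
y_lab = np.sin(X_lab).ravel() + 0.1 * rng.standard_normal(20)
X_unlab = rng.uniform(-3, 3, (200, 1))
model = self_training_svr(X_lab, y_lab, X_unlab, C=10.0, gamma=0.5)
```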

Semisupervised support vector quantile regression

  • Seok, Kyungha
    • Journal of the Korean Data and Information Science Society, v.26 no.2, pp.517-524, 2015
  • Unlabeled examples are easier and less expensive to obtain than labeled examples. In this paper, a semisupervised approach is used to exploit such examples in an effort to enhance the predictive performance of nonlinear quantile regression. We propose a semisupervised quantile regression method named semisupervised support vector quantile regression (S2SVQR), which is based on the support vector machine. A generalized approximate cross validation method is used to choose the hyperparameters that affect the estimator's performance. The experimental results confirm the successful performance of the proposed S2SVQR.
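Support vector quantile regression fits the check (pinball) loss rather than the ε-insensitive loss. The sketch below defines that loss and uses it on a hold-out split to compare candidate hyperparameters, as a simple stand-in for the GACV criterion the paper actually uses; the linear QuantileRegressor is only a placeholder for the kernel-based S2SVQR estimator.

```python
import numpy as np
from sklearn.linear_model import QuantileRegressor
from sklearn.model_selection import train_test_split

def pinball_loss(y_true, y_pred, tau):
    """Check (pinball) loss averaged over the sample."""
    r = y_true - y_pred
    return np.mean(np.where(r >= 0, tau * r, (tau - 1.0) * r))

# Toy heteroscedastic data; tau is the quantile level being estimated.
rng = np.random.default_rng(1)
X = rng.uniform(0, 5, (300, 1))
y = np.sin(X).ravel() + (0.2 + 0.1 * X.ravel()) * rng.standard_normal(300)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=1)

tau, best = 0.9, None
for alpha in [1e-3, 1e-2, 1e-1, 1.0]:       # candidate regularization strengths
    model = QuantileRegressor(quantile=tau, alpha=alpha, solver="highs")
    model.fit(X_tr, y_tr)
    score = pinball_loss(y_val, model.predict(X_val), tau)
    if best is None or score < best[0]:
        best = (score, alpha)
print("selected alpha:", best[1])
```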

A Differential Evolution based Support Vector Clustering (차분진화 기반의 Support Vector Clustering)

  • Jun, Sung-Hae
    • Journal of the Korean Institute of Intelligent Systems, v.17 no.5, pp.679-683, 2007
  • Vapnik's statistical learning theory includes the support vector machine (SVM), support vector regression (SVR), and support vector clustering (SVC) for classification, regression, and clustering, respectively. Among these, SVC is a good clustering algorithm that uses support vectors based on a Gaussian kernel function. However, like SVM and SVR, SVC requires the kernel parameters and the regularization constant to be determined optimally. In general, these parameters have been set by researchers' expertise or by grid search, which demands heavy computing time. In this paper, we propose a differential evolution based SVC (DESVC), which incorporates differential evolution into SVC for efficient selection of the kernel parameters and the regularization constant. To verify the improved performance of DESVC, we conduct experiments using data sets from the UCI machine learning repository and from simulation.
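As a rough illustration of the selection idea only: scipy's differential_evolution can search a kernel parameter and a regularization-type constant jointly, scored here by the fraction of points that a one-class SVM (the sphere-fitting core of SVC) leaves outside its enclosing surface. The objective, bounds, and data set are placeholders, not the paper's DESVC criterion.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import load_iris
from sklearn.svm import OneClassSVM

X = load_iris().data        # stand-in for a UCI data set

def objective(params):
    """Fraction of points left outside the learned enclosing surface.
    This proxy criterion is an assumption, not the paper's DESVC objective."""
    log_gamma, nu = params
    model = OneClassSVM(kernel="rbf", gamma=10.0 ** log_gamma, nu=nu).fit(X)
    return np.mean(model.predict(X) == -1)

# Differential evolution searches log10(gamma) and nu jointly.
result = differential_evolution(objective,
                                bounds=[(-3.0, 1.0), (0.05, 0.5)],
                                maxiter=20, popsize=10, seed=0)
gamma_opt, nu_opt = 10.0 ** result.x[0], result.x[1]
print("selected gamma, nu:", gamma_opt, nu_opt)
```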

Estimation of Software Reliability with Immune Algorithm and Support Vector Regression (면역 알고리즘 기반의 서포트 벡터 회귀를 이용한 소프트웨어 신뢰도 추정)

  • Kwon, Ki-Tae; Lee, Joon-Kil
    • Journal of Information Technology Services, v.8 no.4, pp.129-140, 2009
  • Accurate estimation of software reliability is important for successful development in software engineering. Until recently, models using regression analysis based on statistical algorithms and machine learning methods have been used. This paper estimates software reliability using support vector regression, a machine learning technique. It also finds the best set of optimized parameters by applying an immune algorithm, varying the number of generations, memory cells, and alleles. The proposed IA-SVR model outperforms several recent results reported in the literature.
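The abstract names the tuned quantities (generations, memory cells, alleles) but not the operators, so the following is a deliberately simplified clonal-selection-style search over SVR hyperparameters scored by cross-validated error; it illustrates the IA-SVR idea rather than reproducing the paper's algorithm.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

def fitness(log_params, X, y):
    """Negative cross-validated MSE for a log10-scaled (C, gamma, epsilon) candidate."""
    C, gamma, eps = 10.0 ** log_params
    model = SVR(C=C, gamma=gamma, epsilon=eps)
    return cross_val_score(model, X, y, cv=3,
                           scoring="neg_mean_squared_error").mean()

def clonal_selection(X, y, generations=15, memory_cells=8, clones=4,
                     mutation=0.3, seed=0):
    """Simplified clonal selection: clone every memory cell, mutate the clones,
    and keep the fittest antibodies as the next memory pool."""
    rng = np.random.default_rng(seed)
    low, high = np.array([-1.0, -3.0, -3.0]), np.array([3.0, 1.0, 0.0])
    pop = rng.uniform(low, high, (memory_cells, 3))     # log10(C, gamma, epsilon)
    for _ in range(generations):
        offspring = np.repeat(pop, clones, axis=0)
        offspring += mutation * rng.standard_normal(offspring.shape)
        candidates = np.clip(np.vstack([pop, offspring]), low, high)
        fit = np.array([fitness(p, X, y) for p in candidates])
        pop = candidates[np.argsort(fit)[::-1][:memory_cells]]
    return 10.0 ** pop[0]                               # best (C, gamma, epsilon)

# Usage on toy data standing in for software failure-history features.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, (80, 3))
y = np.log1p(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(80)
C, gamma, eps = clonal_selection(X, y)
```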

An Application of Support Vector Machines to Personal Credit Scoring: Focusing on Financial Institutions in China (Support Vector Machines을 이용한 개인신용평가 : 중국 금융기관을 중심으로)

  • Ding, Xuan-Ze; Lee, Young-Chan
    • Journal of Industrial Convergence, v.16 no.4, pp.33-46, 2018
  • Personal credit scoring is an effective tool that helps banks make profitable lending decisions. Recently, many classification algorithms and models have been applied to personal credit scoring. Credit scoring techniques are usually divided into statistical and non-statistical methods. Statistical methods include linear regression, discriminant analysis, logistic regression, and decision trees; non-statistical methods include linear programming, neural networks, genetic algorithms, and support vector machines. However, no consistent conclusion has been reached about which method is best for building credit scoring models. In this paper, we compare the performance of the most common scoring techniques, namely logistic regression, neural networks, and support vector machines, using personal credit data from a financial institution in China. Specifically, we build the three models, classify the customers, and compare the results. According to the results, the support vector machine performs better than logistic regression and the neural network.
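The three models compared in the paper map directly onto standard scikit-learn estimators. The harness below runs that comparison on synthetic placeholder data (the institution's credit data is not public); the preprocessing and settings are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Placeholder data standing in for the (non-public) personal credit data.
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.8, 0.2],
                           random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                                    random_state=0),
    "support vector machine": SVC(kernel="rbf", C=1.0, gamma="scale"),
}

for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), clf)   # scale features for NN/SVM
    acc = cross_val_score(pipe, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f}")
```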

Weighted Support Vector Machines for Heteroscedastic Regression

  • Park, Hye-Jung; Hwang, Chang-Ha
    • Journal of the Korean Data and Information Science Society, v.17 no.2, pp.467-474, 2006
  • In this paper we present a weighted support vector machine (SVM) and a weighted least squares support vector machine (LS-SVM) for prediction in the heteroscedastic regression model. By adding weights to the standard SVM and LS-SVM, better fitting ability can be achieved when the errors are heteroscedastic. In numerical studies, we illustrate the prediction performance of the proposed procedure by comparing it with a procedure that combines the standard SVM and LS-SVM with the wild bootstrap.
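The abstract does not give the weighting scheme, so the sketch below uses one common recipe: fit a standard SVR, estimate the error variance as a function of the input from the squared residuals, and refit with inverse-variance sample weights. The variance model and the weight normalization are assumptions.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = np.sort(rng.uniform(0, 4, (200, 1)), axis=0)
y = np.sin(2 * X).ravel() + (0.05 + 0.2 * X.ravel()) * rng.standard_normal(200)

# Step 1: ordinary SVR fit and its residuals.
base = SVR(C=10.0, gamma=1.0).fit(X, y)
resid2 = (y - base.predict(X)) ** 2

# Step 2: smooth the squared residuals to get a rough variance function.
var_model = SVR(C=10.0, gamma=1.0).fit(X, np.log(resid2 + 1e-8))
var_hat = np.exp(var_model.predict(X))

# Step 3: refit with inverse-variance sample weights (the "weighted SVM" idea).
weights = 1.0 / (var_hat + 1e-8)
weights *= len(weights) / weights.sum()          # normalize to mean weight 1
weighted = SVR(C=10.0, gamma=1.0).fit(X, y, sample_weight=weights)
```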

Support Vector Median Regression

  • Hwang, Chang-Ha
    • Journal of the Korean Data and Information Science Society, v.14 no.1, pp.67-74, 2003
  • Median regression analysis has robustness properties that make it an attractive alternative to regression based on the mean. The support vector machine (SVM) is widely used in real-world regression tasks. In this paper, we propose a new SV median regression method based on the check function, illustrate how the proposed SVM performs, and compare it with the SVM based on the absolute deviation loss function.
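One way to see the comparison made here: the check function at τ = 0.5 is exactly half the absolute deviation, so SV median regression and absolute-deviation SVR target the same conditional median and differ only in how the loss enters the optimization problem.

```python
import numpy as np

def check_loss(r, tau):
    """Check (pinball) loss: tau*r for r >= 0 and (tau - 1)*r otherwise."""
    return np.where(r >= 0, tau * r, (tau - 1.0) * r)

# At tau = 0.5 the check loss equals half the absolute deviation.
r = np.linspace(-2.0, 2.0, 9)
assert np.allclose(check_loss(r, 0.5), 0.5 * np.abs(r))
```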

Asymmetric least squares regression estimation using weighted least squares support vector machine

  • Hwang, Chang-Ha
    • Journal of the Korean Data and Information Science Society, v.22 no.5, pp.999-1005, 2011
  • This paper proposes a weighted least squares support vector machine for asymmetric least squares regression. The method achieves nonlinear prediction power while making no assumptions about the underlying probability distribution. A cross validation function is introduced to choose the optimal hyperparameters in the procedure. Experimental results are then presented which indicate the performance of the proposed model.
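Asymmetric least squares (expectile) regression penalizes positive and negative residuals with different weights, which suggests an iteratively reweighted least-squares scheme. The sketch below uses scikit-learn's KernelRidge as a stand-in for the weighted LS-SVM (it omits the bias term); the stopping rule and parameter values are assumptions.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def expectile_kernel_fit(X, y, tau=0.8, alpha=0.1, gamma=1.0, iters=30):
    """Asymmetric least squares via iteratively reweighted kernel ridge:
    residuals above the fit get weight tau, those below get 1 - tau."""
    w = np.ones(len(y))
    model = None
    for _ in range(iters):
        model = KernelRidge(alpha=alpha, kernel="rbf", gamma=gamma)
        model.fit(X, y, sample_weight=w)
        resid = y - model.predict(X)
        w_new = np.where(resid >= 0, tau, 1.0 - tau)
        if np.array_equal(w_new, w):        # weights stable -> converged
            break
        w = w_new
    return model

# Usage: estimate the 0.8-expectile of a noisy curve.
rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, (150, 1))
y = np.sin(X).ravel() + 0.3 * rng.standard_normal(150)
model = expectile_kernel_fit(X, y, tau=0.8)
```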

Least-Squares Support Vector Machine for Regression Model with Crisp Inputs-Gaussian Fuzzy Output

  • Hwang, Chang-Ha
    • Journal of the Korean Data and Information Science Society, v.15 no.2, pp.507-513, 2004
  • The least-squares support vector machine (LS-SVM) has been very successful in pattern recognition and function estimation problems for crisp data. In this paper, we propose an LS-SVM approach to evaluating a fuzzy regression model with multiple crisp inputs and a Gaussian fuzzy output. The proposed algorithm is a model-free method in the sense that we do not need to assume the underlying model function. Experimental results are then presented which indicate the performance of the algorithm.
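The abstract does not give the optimization problem, so the following is only a schematic stand-in: summarize the Gaussian fuzzy output by its center and spread and fit one kernel model to each, so that every crisp input maps to a predicted Gaussian fuzzy number. KernelRidge replaces a true LS-SVM here, and all names and values are assumptions.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def fit_fuzzy_output_model(X, centers, spreads, alpha=0.1, gamma=1.0):
    """Fit one kernel model for the Gaussian fuzzy output's center and one for
    its spread (modelled on the log scale so predicted spreads stay positive)."""
    center_model = KernelRidge(alpha=alpha, kernel="rbf", gamma=gamma).fit(X, centers)
    spread_model = KernelRidge(alpha=alpha, kernel="rbf", gamma=gamma).fit(
        X, np.log(spreads))
    return center_model, spread_model

def predict_fuzzy_output(models, X_new):
    """Return (predicted centers, predicted spreads) for new crisp inputs."""
    center_model, spread_model = models
    return center_model.predict(X_new), np.exp(spread_model.predict(X_new))

# Usage with a toy fuzzy-output data set: each observation is (center, spread).
rng = np.random.default_rng(4)
X = rng.uniform(0, 5, (100, 2))
centers = X @ np.array([1.5, -0.5]) + 0.2 * rng.standard_normal(100)
spreads = 0.3 + 0.1 * X[:, 0] + 0.05 * rng.standard_normal(100)
models = fit_fuzzy_output_model(X, centers, spreads)
c_hat, s_hat = predict_fuzzy_output(models, X[:5])
```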
