• Title/Summary/Keyword: support vector regression.

Improvement of Support Vector Clustering using Evolutionary Programming and Bootstrap

  • Jun, Sung-Hae
    • International Journal of Fuzzy Logic and Intelligent Systems / v.8 no.3 / pp.196-201 / 2008
  • Statistical learning theory provides three analytical tools: the support vector machine, support vector regression, and support vector clustering, for classification, regression, and clustering respectively. In general they perform well because they are constructed by convex optimization. However, the methods share a problem: the parameters of the kernel function and the regularization constant are determined subjectively, by the art of the researcher, and the results of the learning machines depend on the selected parameters. In this paper, we propose an efficient method for the objective determination of the parameters of support vector clustering, the clustering method of statistical learning theory. Using an evolutionary algorithm and the bootstrap method, we select the kernel function parameters and the regularization constant objectively. To verify the improved performance of the proposed method, we compare it with established learning algorithms using data sets from the UCI Machine Learning Repository and synthetic data.
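The selection scheme the abstract describes, an evolutionary search over the kernel width and regularization constant scored by bootstrap resampling, can be outlined as follows. This is a minimal sketch: the fitness criterion, parameter ranges, and mutation settings are illustrative assumptions, not the ones used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_fitness(params, data, n_boot=20):
    """Score a (kernel width q, regularization C) pair by averaging a
    criterion over bootstrap resamples.  The criterion here is purely a
    placeholder with a well-defined optimum -- it is NOT the clustering
    measure from the paper."""
    q, C = params
    scores = []
    for _ in range(n_boot):
        sample = data[rng.integers(0, len(data), len(data))]
        scores.append(-((q - sample.std()) ** 2 + 0.01 * C))
    return float(np.mean(scores))

def evolutionary_search(data, pop_size=10, generations=30):
    """(mu + lambda) evolutionary programming over (q, C) pairs."""
    pop = rng.uniform([0.1, 0.1], [5.0, 10.0], size=(pop_size, 2))
    for _ in range(generations):
        # Gaussian mutation, clipped so both parameters stay positive
        children = np.clip(pop + rng.normal(0.0, 0.1, pop.shape), 0.01, None)
        union = np.vstack([pop, children])
        fitness = np.array([bootstrap_fitness(p, data) for p in union])
        pop = union[np.argsort(fitness)[-pop_size:]]   # keep the fittest half
    return pop[-1]   # best (q, C) found
```

Averaging the fitness over resamples is what makes the selected parameters depend on the data distribution rather than on a single sample, which is the point of combining the bootstrap with the evolutionary search.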

Multioutput LS-SVR based residual MCUSUM control chart for autocorrelated process

  • Hwang, Changha
    • Journal of the Korean Data and Information Science Society / v.27 no.2 / pp.523-530 / 2016
  • Most classical control charts assume that processes are serially independent, and autocorrelation among variables makes them unreliable. To address this issue, a variety of statistical approaches have been employed to estimate the serial structure of the process. In this paper, we propose a multioutput least squares support vector regression and apply it to construct a residual multivariate cumulative sum (MCUSUM) control chart for detecting changes in the process mean vector. Numerical studies demonstrate that the proposed control chart provides more satisfactory results in detecting small shifts in the process mean vector.
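For background, the least squares SVR underlying the chart replaces the ε-insensitive loss with a squared loss, so training reduces to solving a single linear system instead of a quadratic program. A minimal single-output sketch, in which the RBF kernel and all parameter values are assumptions for illustration:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix between row-sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def lssvr_fit(X, y, C=100.0, gamma=1.0):
    """Solve the LS-SVR dual system  [[0, 1'], [1, K + I/C]] [b; a] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, gamma) + np.eye(n) / C
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                      # bias b, dual weights alpha

def lssvr_predict(Xnew, X, b, alpha, gamma=1.0):
    return rbf_kernel(Xnew, X, gamma) @ alpha + b
```

A multioutput version along the lines of the abstract could reuse the same coefficient matrix with one right-hand side per output column, since only `y` changes between outputs.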

Partially linear support vector orthogonal quantile regression with measurement errors

  • Hwang, Changha
    • Journal of the Korean Data and Information Science Society / v.26 no.1 / pp.209-216 / 2015
  • Quantile regression models with covariate measurement errors have received a great deal of attention in both the theoretical and the applied statistical literature, and much effort has been devoted to developing effective estimation methods for such models. In this paper we propose a partially linear support vector orthogonal quantile regression model in the presence of covariate measurement errors. We also provide a generalized approximate cross-validation method for choosing the hyperparameters and the ratios of the error variances, which affect the performance of the proposed model. The proposed model is evaluated through simulations.

Semiparametric Kernel Fisher Discriminant Approach for Regression Problems

  • Park, Joo-Young;Cho, Won-Hee;Kim, Young-Il
    • International Journal of Fuzzy Logic and Intelligent Systems / v.3 no.2 / pp.227-232 / 2003
  • Recently, support vector learning has attracted an enormous amount of interest in the areas of function approximation, pattern classification, and novelty detection. One of the main reasons for the success of support vector machines (SVMs) seems to be the availability of global and sparse solutions. Among the approaches sharing the same reasons for success and exhibiting similarly good performance is the kernel Fisher discriminant (KFD) approach. In this paper, we consider the problem of function approximation utilizing both predetermined basis functions and the KFD approach for regression. After reviewing support vector regression, the semiparametric approach for including predetermined basis functions, and KFD regression, this paper presents an extension of the conventional KFD approach for regression that can utilize predetermined basis functions. The applicability of the presented method is illustrated via a regression example.

Fuzzy c-Regression Using Weighted LS-SVM

  • Hwang, Chang-Ha
    • Proceedings of the Korean Data and Information Science Society Conference / 2005.10a / pp.161-169 / 2005
  • In this paper we propose a fuzzy c-regression model based on the weighted least squares support vector machine (LS-SVM), which can detect outliers in the switching regression model while simultaneously yielding the estimates of the outputs together with a fuzzy c-partition of the data. It can be applied to nonlinear regression in which the regression function has no explicit form. We illustrate the new algorithm with examples which indicate how it can be used to detect outliers and fit mixed data to nonlinear regression models.

An Outlier Data Analysis using Support Vector Regression

  • Jun, Sung-Hae
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.6 / pp.876-880 / 2008
  • Outliers are observations which are much larger or smaller than most observations in a given data set, and they can arise from a variety of sources. The result of an analysis may depend heavily on them. In general, we perform data analysis after removing outliers. But in data mining applications such as fraud detection and intrusion detection, outliers are included in the training data because they carry crucial information. In regression, simple and multiple regression models need outliers eliminated from the training data, using standardized and studentized residuals, to construct a good model. In this paper, we use support vector regression (SVR), based on statistical learning theory, to analyze data with outliers in regression. We verify the improved performance of our work by experiments using synthetic data sets.
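The robustness SVR offers here comes from the ε-insensitive loss, which grows only linearly in the residual and so bounds the influence of any single observation. A small illustration using scikit-learn; the data, outlier placement, and parameter values are invented for the demo:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = np.linspace(0.0, 10.0, 60)[:, None]
y = 0.5 * X.ravel() + rng.normal(0.0, 0.1, 60)   # true slope is 0.5
y[-6:] += 8.0                                    # clump of large outliers at one end

ols = LinearRegression().fit(X, y)
svr = SVR(kernel="linear", C=1.0, epsilon=0.1).fit(X, y)

# OLS squares residuals, so the outliers drag its slope upward;
# the epsilon-insensitive loss grows only linearly, so the SVR slope moves far less
print("OLS slope:", ols.coef_[0], "SVR slope:", svr.coef_[0, 0])
```

This is the behavior the abstract exploits: the fit is not destroyed by leaving the outliers in the training data, which matters in applications like fraud detection where they must be kept.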

Improvement of rotor flux estimation performance of induction motor using Support Vector Machine $\epsilon$-insensitive Regression Method

  • Han, Dong-Chang;Baek, Un-Jae;Kim, Seong-Rak;Park, Ju-Hyeon;Lee, Seok-Gyu;Park, Jeong-Il
    • Proceedings of the KIEE Conference / 2003.11b / pp.43-46 / 2003
  • In this paper, a novel rotor flux estimation method for an induction motor using the support vector machine (SVM) is presented. Two well-known flux models, with respect to voltage and current, are necessary to estimate the rotor flux of an induction motor. The SVM algorithm is based on statistical learning theory, and training the SVM leads to a quadratic programming (QP) problem. The proposed SVM rotor flux estimator guarantees improved performance in the transient and steady states in spite of parameter variation. The validity and usefulness of the proposed algorithm are thoroughly verified through numerical simulation.

A concise overview of principal support vector machines and its generalization

  • Jungmin Shin;Seung Jun Shin
    • Communications for Statistical Applications and Methods / v.31 no.2 / pp.235-246 / 2024
  • In high-dimensional data analysis, sufficient dimension reduction (SDR) has been considered as an attractive tool for reducing the dimensionality of predictors while preserving regression information. The principal support vector machine (PSVM) (Li et al., 2011) offers a unified approach for both linear and nonlinear SDR. This article comprehensively explores a variety of SDR methods based on the PSVM, which we call principal machines (PM) for SDR. The PM achieves SDR by solving a sequence of convex optimizations akin to popular supervised learning methods, such as the support vector machine, logistic regression, and quantile regression, to name a few. This makes the PM straightforward to handle and extend in both theoretical and computational aspects, as we will see throughout this article.

Estimating Fuzzy Regression with Crisp Input-Output Using Quadratic Loss Support Vector Machine

  • Hwang, Chang-Ha;Hong, Dug-Hun;Lee, Sang-Bock
    • Proceedings of the Korean Data and Information Science Society Conference / 2004.10a / pp.53-59 / 2004
  • The support vector machine (SVM) approach to regression can be found in the information science literature. The SVM implements the regularization technique, which was introduced as a way of controlling the smoothness properties of the regression function. In this paper, we propose a new estimation method based on the quadratic loss SVM for Tanaka's linear fuzzy regression model, and furthermore propose an estimation method for nonlinear fuzzy regression. This is a very attractive approach to evaluating a nonlinear fuzzy model with crisp input and output data.

Support vector quantile regression for autoregressive data

  • Hwang, Hyungtae
    • Journal of the Korean Data and Information Science Society / v.25 no.6 / pp.1539-1547 / 2014
  • In this paper we apply the autoregressive process to nonlinear quantile regression in order to infer nonlinear quantile regression models for autocorrelated data. We propose a kernel method for autoregressive data which estimates the nonlinear quantile regression function by kernel machines. Artificial and real examples are provided to indicate the usefulness of the proposed method for estimating the quantile regression function in the presence of autocorrelation in the data.