• Title/Summary/Keyword: svm


Improving SVM Classification by Constructing Ensemble (앙상블 구성을 이용한 SVM 분류성능의 향상)

  • 제홍모;방승양
    • Journal of KIISE: Software and Applications / v.30 no.3_4 / pp.251-258 / 2003
  • A support vector machine (SVM) is expected to provide good generalization performance, but the actual performance of an implemented SVM often falls far short of the theoretically expected level. This is largely because the implementation relies on approximated algorithms, owing to the high time and space complexity. To address this limitation, we propose ensembles of SVMs built with Bagging (bootstrap aggregating) and Boosting. In the Bagging stage, each individual SVM is trained independently on training samples chosen randomly via a bootstrap technique. In the Boosting stage, each individual SVM is trained on training samples chosen according to a probability distribution; the distribution is updated from the errors of the previously trained classifiers, and the process is iterated. After the training stage, the individual SVMs are aggregated into a collective decision in several ways, such as majority voting, LSE (least squares estimation)-based weighting, and double-layer hierarchical combining. Simulation results on IRIS data classification, handwritten digit recognition, and face detection show that the proposed SVM ensembles greatly outperform a single SVM in terms of classification accuracy.
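
A minimal sketch of the general approach (not the authors' implementation): bagging and boosting ensembles whose base learners are SVMs, aggregated by majority voting, using scikit-learn (≥ 1.2 assumed for the `estimator` parameter) on the Iris data mentioned in the abstract.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Bagging: each SVM is fit on a bootstrap sample of the training set,
# and the ensemble prediction is a majority vote.
bagged_svm = BaggingClassifier(estimator=SVC(kernel="rbf", gamma="scale"),
                               n_estimators=10, random_state=0)

# Boosting: sample weights are shifted toward previously misclassified
# examples after each round (probability=True keeps this portable across
# scikit-learn's AdaBoost variants).
boosted_svm = AdaBoostClassifier(estimator=SVC(kernel="rbf", gamma="scale",
                                               probability=True),
                                 n_estimators=10, random_state=0)

for name, model in [("bagged SVM", bagged_svm), ("boosted SVM", boosted_svm)]:
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", round(model.score(X_te, y_te), 3))
```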

Target Classification Algorithm Using Complex-valued Support Vector Machine (복소수 SVM을 이용한 목표물 식별 알고리즘)

  • Kang, Youn Joung;Lee, Jaeil;Bae, Jinho;Lee, Chong Hyun
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.4 / pp.182-188 / 2013
  • In this paper, we propose a complex-valued support vector machine (SVM) classifier that processes the complex-valued signal measured by a pulse Doppler radar (PDR) to distinguish moving targets from the background. SVM is widely applied in the field of pattern recognition, but the features used for classification are almost always real-valued. The proposed complex-valued SVM can classify moving targets using the real-valued data, the imaginary-valued data, and the cross-information between them. To design the complex-valued SVM, we introduce slack variables for the real and imaginary axes and apply the KKT (Karush-Kuhn-Tucker) conditions to complex data. We also adopt a radial basis function (RBF) kernel that uses the distance between complex values. To evaluate its performance, complex-valued data from a PDR were classified with a real-valued SVM and with the proposed complex-valued SVM. The complex-valued SVM improved classification performance over the real-valued SVM by 8% and 10% for dog and human targets, respectively.
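
A rough sketch of the kernel idea only (not the paper's full formulation with complex slack variables and KKT conditions): an RBF kernel evaluated on the distance between complex-valued feature vectors, passed to scikit-learn's SVC as a precomputed Gram matrix. The synthetic echoes below are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def complex_rbf_gram(A, B, gamma=0.5):
    """K(a, b) = exp(-gamma * sum_k |a_k - b_k|^2) for complex vectors a, b."""
    diff = A[:, None, :] - B[None, :, :]
    d2 = np.sum(np.abs(diff) ** 2, axis=-1)   # squared complex modulus per pair
    return np.exp(-gamma * d2)

# Hypothetical complex-valued echoes: the two classes differ in amplitude/phase.
X0 = rng.normal(0, 1, (50, 4)) + 1j * rng.normal(0, 1, (50, 4))
X1 = rng.normal(2, 1, (50, 4)) + 1j * rng.normal(-2, 1, (50, 4))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="precomputed")
clf.fit(complex_rbf_gram(X, X), y)

# Prediction uses the Gram matrix between new samples and the training samples.
print("training accuracy:", clf.score(complex_rbf_gram(X, X), y))
```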

Multi-Class SVM+MTL for the Prediction of Corporate Credit Rating with Structured Data

  • Ren, Gang;Hong, Taeho;Park, YoungKi
    • Asia Pacific Journal of Information Systems / v.25 no.3 / pp.579-596 / 2015
  • Many studies have focused on the prediction of corporate credit rating using various data mining techniques. One of the most frequently used algorithms is support vector machines (SVM), and recently, novel techniques such as SVM+ and SVM+MTL have emerged. This paper intends to show the applicability of such new techniques to multi-class classification and corporate credit rating and to compare them with conventional SVM regarding prediction performance. We solve multi-class SVM+ and SVM+MTL problems by constructing several binary classifiers. Furthermore, to demonstrate the robustness and outstanding performance of the SVM+MTL algorithm over other techniques, we utilized four typical multi-class processing methods in our experiments. The results show that SVM+MTL outperforms both conventional SVM and the novel SVM+ in predicting corporate credit rating. This study contributes to the literature by showing the applicability of new techniques such as SVM+ and SVM+MTL and the outperformance of SVM+MTL over conventional techniques. Thus, this study enriches the set of techniques available for addressing multi-class problems such as corporate credit rating prediction.
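
A minimal sketch of the multi-class decomposition step only, i.e. building several binary SVM classifiers via one-vs-rest and one-vs-one; the SVM+ / SVM+MTL privileged-information terms from the paper are not reproduced, and the synthetic "rating" data is an illustrative assumption.

```python
from sklearn.datasets import make_classification
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

# Hypothetical structured data: 20 financial features, 4 rating classes.
X, y = make_classification(n_samples=400, n_features=20, n_informative=10,
                           n_classes=4, random_state=0)

# Each strategy reduces the multi-class problem to a set of binary SVMs.
ovr = OneVsRestClassifier(SVC(kernel="rbf", gamma="scale")).fit(X, y)
ovo = OneVsOneClassifier(SVC(kernel="rbf", gamma="scale")).fit(X, y)

print("one-vs-rest accuracy:", round(ovr.score(X, y), 3))
print("one-vs-one  accuracy:", round(ovo.score(X, y), 3))
```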

Speaker Verification Using SVM Kernel with GMM-Supervector Based on the Mahalanobis Distance (Mahalanobis 거리측정 방법 기반의 GMM-Supervector SVM 커널을 이용한 화자인증 방법)

  • Kim, Hyoung-Gook;Shin, Dong
    • The Journal of the Acoustical Society of Korea / v.29 no.3 / pp.216-221 / 2010
  • In this paper, we propose a speaker verification method using a support vector machine (SVM) kernel with a Gaussian mixture model (GMM) supervector based on the Mahalanobis distance. The proposed GMM-supervector SVM kernel method combines the GMM with the SVM. The GMM supervectors are generated from the GMM parameters of the target speaker's and other speakers' utterances. The speaker verification threshold over the GMM supervectors is decided by an SVM kernel based on the Mahalanobis distance, which improves speaker verification accuracy. Experimental results for text-independent speaker verification with 20 speakers demonstrate the performance of the proposed method compared to GMM, SVM, the GMM-supervector SVM kernel based on the Kullback-Leibler (KL) divergence, and the GMM-supervector SVM kernel based on the Bhattacharyya distance.
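
A rough sketch under simplifying assumptions (synthetic "utterances", no MAP adaptation, diagonal covariances): GMM mean supervectors compared through a Mahalanobis-style inner product weighted by the background model's inverse variances, then classified with an SVM on a precomputed kernel.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_comp, n_dim = 4, 3

def supervector(frames, ubm):
    """Re-estimate component means on one utterance and concatenate them."""
    gmm = GaussianMixture(n_components=n_comp, covariance_type="diag",
                          means_init=ubm.means_, max_iter=10, random_state=0)
    gmm.fit(frames)
    return gmm.means_.ravel()

# Hypothetical frames: two "speakers" with shifted feature distributions.
utts, labels = [], []
for spk, shift in [(0, 0.0), (1, 1.5)]:
    for _ in range(15):
        utts.append(rng.normal(shift, 1.0, (200, n_dim)))
        labels.append(spk)

ubm = GaussianMixture(n_components=n_comp, covariance_type="diag",
                      random_state=0).fit(np.vstack(utts))
S = np.array([supervector(u, ubm) for u in utts])

# Mahalanobis-style inner product: weight each mean dimension by the
# background model's component weight and inverse variance.
W = np.repeat(ubm.weights_, n_dim) / ubm.covariances_.ravel()
K = (S * W) @ S.T

clf = SVC(kernel="precomputed").fit(K, labels)
print("training accuracy:", clf.score(K, labels))
```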

An Experimental Study on Text Categorization using an SVM Classifier (SVM 분류기를 이용한 문서 범주화 연구)

  • 정영미;임혜영
    • Journal of the Korean Society for Information Management / v.17 no.4 / pp.229-248 / 2000
  • Among several learning algorithms for text categorization, SVM (Support Vector Machines) has been proved to outperform other classifiers. This study evaluates the categorization ability of an SVM classifier using the ModApte split of the Reuters-21578 dataset. First, an experiment is performed to test a few feature weighting schemes that will be used in the categorization tasks. Second, the categorization performances of the linear SVM and the non-linear SVM are compared. Finally, the binary SVM classifier is expanded into a multi-class classifier and their performances are comparatively evaluated.
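
A minimal sketch of the experimental setup described above (tf-idf feature weighting, linear vs. non-linear SVM); the 20 Newsgroups corpus is used as a stand-in because the Reuters-21578 ModApte split is not bundled with scikit-learn.

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC, LinearSVC

cats = ["sci.space", "rec.autos", "comp.graphics"]
train = fetch_20newsgroups(subset="train", categories=cats)
test = fetch_20newsgroups(subset="test", categories=cats)

# tf-idf weighting feeds either a linear SVM or an RBF (non-linear) SVM.
linear_svm = make_pipeline(TfidfVectorizer(), LinearSVC())
rbf_svm = make_pipeline(TfidfVectorizer(), SVC(kernel="rbf", gamma="scale"))

for name, model in [("linear SVM", linear_svm), ("RBF SVM", rbf_svm)]:
    model.fit(train.data, train.target)
    print(name, "accuracy:", round(model.score(test.data, test.target), 3))
```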


A Hybrid SVM Classifier for Imbalanced Data Sets (불균형 데이터 집합의 분류를 위한 하이브리드 SVM 모델)

  • Lee, Jae Sik;Kwon, Jong Gu
    • Journal of Intelligence and Information Systems / v.19 no.2 / pp.125-140 / 2013
  • We call a data set in which the number of records belonging to a certain class far outnumbers the number of records belonging to the other class an 'imbalanced data set'. Most classification techniques perform poorly on imbalanced data sets. When we evaluate the performance of a classification technique, we need to measure not only 'accuracy' but also 'sensitivity' and 'specificity'. In a customer churn prediction problem, 'retention' records account for the majority class and 'churn' records account for the minority class. Sensitivity measures the proportion of actual retentions that are correctly identified as such, and specificity measures the proportion of churns that are correctly identified as such. The poor performance of classification techniques on imbalanced data sets is due to the low value of specificity. Many previous studies on imbalanced data sets employed an 'oversampling' technique, in which members of the minority class are sampled more heavily than those of the majority class in order to make a relatively balanced data set. When a classification model is constructed using this oversampled balanced data set, specificity can be improved but sensitivity will be decreased. In this research, we developed a hybrid model of support vector machine (SVM), artificial neural network (ANN) and decision tree that improves specificity while maintaining sensitivity. We named this hybrid model the 'hybrid SVM model'. The construction and prediction process of our hybrid SVM model is as follows. By oversampling from the original imbalanced data set, a balanced data set is prepared. SVM_I and ANN_I models are constructed using the imbalanced data set, and an SVM_B model is constructed using the balanced data set. The SVM_I model is superior in sensitivity and the SVM_B model is superior in specificity. For a record on which both the SVM_I and SVM_B models make the same prediction, that prediction becomes the final solution. If they make different predictions, the final solution is determined by discrimination rules obtained from the ANN and a decision tree: for such records, a decision tree model is constructed using the ANN_I output value as input and the actual retention or churn as target. We obtained the following two discrimination rules: 'IF ANN_I output value < 0.285, THEN Final Solution = Retention' and 'IF ANN_I output value ≥ 0.285, THEN Final Solution = Churn.' The threshold 0.285 is the value optimized for the data used in this research; the result we present is the structure or framework of our hybrid SVM model, not a specific threshold value, so the threshold in the above discrimination rules can be changed to any value depending on the data. In order to evaluate the performance of our hybrid SVM model, we used the 'churn' data set in the UCI Machine Learning Repository, which consists of 85% retention customers and 15% churn customers. The accuracy of the hybrid SVM model is 91.08%, which is better than that of the SVM_I or SVM_B model. The points worth noticing here are its sensitivity, 95.02%, and specificity, 69.24%: the sensitivity of the SVM_I model is 94.65%, and the specificity of the SVM_B model is 67.00%. Therefore, the hybrid SVM model developed in this research improves the specificity of the SVM_B model while maintaining the sensitivity of the SVM_I model.
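
A minimal sketch of the combining logic described above: SVM_I is trained on the imbalanced set, SVM_B on a naively oversampled balanced set, and disagreements are resolved by thresholding the churn probability of an ANN trained on the imbalanced set. The synthetic data, the oversampling scheme, and reusing the paper's 0.285 threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.utils import resample

# Imbalanced data: class 1 ("churn") is the minority.
X, y = make_classification(n_samples=2000, weights=[0.85, 0.15], random_state=0)

# Naive oversampling of the minority class to build the balanced set.
Xm, ym = X[y == 1], y[y == 1]
Xo, yo = resample(Xm, ym, n_samples=int((y == 0).sum()), random_state=0)
X_bal = np.vstack([X[y == 0], Xo])
y_bal = np.concatenate([y[y == 0], yo])

svm_i = SVC().fit(X, y)            # SVM_I: trained on the imbalanced set
svm_b = SVC().fit(X_bal, y_bal)    # SVM_B: trained on the balanced set
ann_i = MLPClassifier(max_iter=500, random_state=0).fit(X, y)  # ANN_I

def hybrid_predict(X_new, threshold=0.285):
    p_i, p_b = svm_i.predict(X_new), svm_b.predict(X_new)
    # Churn probability from the ANN, used only where the two SVMs disagree.
    p_ann = ann_i.predict_proba(X_new)[:, 1]
    return np.where(p_i == p_b, p_i, (p_ann >= threshold).astype(int))

print("hybrid training accuracy:", round((hybrid_predict(X) == y).mean(), 3))
```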

Predicting Defect-Prone Software Module Using GA-SVM (GA-SVM을 이용한 결함 경향이 있는 소프트웨어 모듈 예측)

  • Kim, Young-Ok;Kwon, Ki-Tae
    • KIPS Transactions on Software and Data Engineering / v.2 no.1 / pp.1-6 / 2013
  • For predicting defect-prone modules in software, an SVM classifier showed good performance in previous research. However, the SVM parameters must be chosen differently for each kernel, and the algorithm must be run repeatedly to obtain results for each changed parameter. Therefore, we search for these parameters using a genetic algorithm and compare the results with classification by the backpropagation algorithm. As a result, the GA-SVM model performs better.
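
A toy sketch of the idea (not the paper's GA): a small genetic algorithm searches over log-scaled C and gamma for an RBF SVM, with cross-validated accuracy as the fitness. The population size, mutation scale, and synthetic data are arbitrary assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

def fitness(ind):
    C, gamma = 10 ** ind[0], 10 ** ind[1]   # genes are log10(C) and log10(gamma)
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

# Population of (log10 C, log10 gamma) pairs.
pop = rng.uniform([-2, -4], [3, 1], size=(12, 2))
for generation in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-6:]]                 # selection: keep best half
    idx = rng.integers(0, 6, (6, 2))
    children = parents[idx].mean(axis=1) + rng.normal(0, 0.3, (6, 2))  # crossover + mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best C=%.3g, gamma=%.3g" % (10 ** best[0], 10 ** best[1]))
```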

PoMEN based Latent One-Class SVM (PoMEN 기반의 Latent One-Class SVM)

  • Lee, Changki
    • Annual Conference on Human and Language Technology / 2012.10a / pp.8-11 / 2012
  • A one-class SVM extracts the region in which the data lie and represents this region with support vectors; data outside the represented region are regarded as outliers. In this paper, we assume that each data point has a hidden variable (or topic), and to reflect this we propose a Latent One-class SVM based on PoMEN. Experimental results show that the Latent One-class SVM outperforms the One-class SVM over most of the operating range, and that it is especially effective when high precision is required.
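
A minimal sketch of the plain one-class SVM baseline referred to above (the PoMEN-based latent extension proposed in the paper is not reproduced); the synthetic inliers and outliers are illustrative.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(0, 1, (200, 2))              # in-distribution data only
X_test = np.vstack([rng.normal(0, 1, (20, 2)),    # inliers
                    rng.uniform(-6, 6, (20, 2))]) # likely outliers

# nu bounds the fraction of training points treated as outliers.
oc_svm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_train)
pred = oc_svm.predict(X_test)   # +1 = inside the learned region, -1 = outlier
print("flagged as outliers:", int((pred == -1).sum()), "of", len(X_test))
```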


Fast Training of Structured SVM Using Fixed-Threshold Sequential Minimal Optimization

  • Lee, Chang-Ki;Jang, Myung-Gil
    • ETRI Journal / v.31 no.2 / pp.121-128 / 2009
  • In this paper, we describe a fixed-threshold sequential minimal optimization (FSMO) for structured SVM problems. FSMO is conceptually simple, easy to implement, and faster than the standard support vector machine (SVM) training algorithms for structured SVM problems. Because FSMO uses the fact that the formulation of structured SVM has no bias (that is, the threshold b is fixed at zero), FSMO breaks down the quadratic programming (QP) problems of structured SVM into a series of smallest QP problems, each involving only one variable. By involving only one variable, FSMO is advantageous in that each QP sub-problem does not need subset selection. For the various test sets, FSMO is as accurate as an existing structured SVM implementation (SVM-Struct) but is much faster on large data sets. The training time of FSMO empirically scales between O(n) and O(n^1.2), while SVM-Struct scales between O(n^1.5) and O(n^1.8).
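
A toy sketch of the key idea behind FSMO rather than the structured-output formulation itself: with the threshold b fixed at zero, the dual has no equality constraint, so it can be optimized one variable at a time with a closed-form clipped update, shown here for a plain binary SVM with a linear kernel on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
C = 1.0

n = len(y)
K = X @ X.T                      # linear kernel Gram matrix
alpha = np.zeros(n)

for epoch in range(20):
    for i in range(n):
        # Without a bias term there is no sum constraint on alpha, so the
        # sub-problem in alpha_i alone has this closed-form clipped solution.
        f_i = np.sum(alpha * y * K[:, i])
        alpha[i] = np.clip(alpha[i] + (1 - y[i] * f_i) / K[i, i], 0.0, C)

w = (alpha * y) @ X              # recover the primal weight vector (b = 0)
print("training accuracy:", np.mean(np.sign(X @ w) == y))
```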


SVM based Stock Price Forecasting Using Financial Statements (SVM 기반의 재무 정보를 이용한 주가 예측)

  • Heo, Junyoung;Yang, Jin Yong
    • KIISE Transactions on Computing Practices / v.21 no.3 / pp.167-172 / 2015
  • Machine learning is a technique for training computers to perform classification or forecasting. Among its various types, the support vector machine (SVM) is a fast and reliable machine learning method. In this paper, we evaluate how well an SVM based on financial statements can predict stock prices, following a fundamental-analysis approach that infers the stock price from a company's intrinsic value. Corporate financial statements were used as the input to the SVM, and the rise or drop of the stock was predicted from its output. The SVM results were compared with the forecasts of experts as well as with other machine learning methods such as ANN, decision tree, and AdaBoost. The SVM showed good predictive power while requiring less execution time than the other machine learning schemes.
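
A minimal sketch of the setup described above, with synthetic features standing in for real financial-statement ratios: an SVM predicts whether the stock rises or drops, compared against ANN, decision tree, and AdaBoost baselines.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Hypothetical features, e.g. debt ratio, ROE, operating margin; y: 1 = rise, 0 = drop.
X, y = make_classification(n_samples=500, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale")),
    "ANN": make_pipeline(StandardScaler(), MLPClassifier(max_iter=500, random_state=0)),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", round(model.score(X_te, y_te), 3))
```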