• Title/Abstract/Keyword: Two-Class Support Vector Machine

Search results: 42

Empirical Choice of the Shape Parameter for Robust Support Vector Machines

  • Pak, Ro-Jin
    • Communications for Statistical Applications and Methods / Vol. 15, No. 4 / pp.543-549 / 2008
  • Inspired by the use of a robust loss function in support vector machine regression to control training error, and by the idea of robust template matching with M-estimators, Chen (2004) applied M-estimator techniques to Gaussian radial basis functions and formed a new class of robust kernels for support vector machines. We are especially interested in the shape of Huber's M-estimator in this context and propose a way to find the shape parameter of Huber's M-estimating function. For simplicity, only the two-class classification problem is considered.
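
As a rough illustration of the idea, the sketch below plugs a Huber-style penalty into a Gaussian RBF kernel and trains a two-class SVM with it in scikit-learn. The exact kernel form, the shape parameter value, and the toy data are assumptions for illustration, not Chen (2004)'s or the paper's formulation, and the resulting kernel is not guaranteed to be positive semi-definite.

```python
import numpy as np
from sklearn.metrics.pairwise import euclidean_distances
from sklearn.svm import SVC


def huber_rho(t, c=1.345):
    """Huber's M-estimating function with shape parameter c."""
    return np.where(np.abs(t) <= c, 0.5 * t ** 2, c * np.abs(t) - 0.5 * c ** 2)


def robust_rbf_kernel(X, Y, sigma=1.0, c=1.345):
    """RBF-style kernel with the squared distance replaced by a Huber penalty."""
    d = euclidean_distances(X, Y) / sigma
    return np.exp(-huber_rho(d, c=c))


# Two-class toy problem: two Gaussian blobs plus one gross outlier.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2)), [[30, -30]]])
y = np.array([0] * 50 + [1] * 50 + [0])

clf = SVC(C=1.0, kernel=lambda A, B: robust_rbf_kernel(A, B, sigma=1.5, c=1.345))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```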

The Use of MSVM and HMM for Sentence Alignment

  • Fattah, Mohamed Abdel
    • Journal of Information Processing Systems / Vol. 8, No. 2 / pp.301-314 / 2012
  • In this paper, two new approaches to aligning English-Arabic sentences in bilingual parallel corpora, based on the Multi-Class Support Vector Machine (MSVM) and Hidden Markov Model (HMM) classifiers, are presented. A feature vector is extracted from the text pair under consideration; this vector contains text features such as length, punctuation score, and cognate score values. A set of manually prepared training data was used to train the Multi-Class Support Vector Machine and the Hidden Markov Model, and another set was used for testing. The MSVM and HMM results outperform those of the length-based approach. Moreover, these new approaches are valid for any language pair and are quite flexible, since the feature vector may contain fewer, more, or different features than the ones used in the current research, such as a lexical matching feature or Hanzi characters in Japanese-Chinese texts.
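
A minimal sketch of the feature-vector idea follows, assuming hypothetical feature definitions (length ratio, punctuation difference, and a crude word-overlap stand-in for the cognate score) and made-up alignment labels; the paper's actual features, labels, and training data are not reproduced here.

```python
from sklearn.svm import SVC


def pair_features(src, tgt):
    """Describe a candidate sentence pair by simple, illustrative scores."""
    length_ratio = len(src) / max(len(tgt), 1)
    punct_diff = sum(ch in ".,!?;:" for ch in src) - sum(ch in ".,!?;:" for ch in tgt)
    cognate_score = len(set(src.lower().split()) & set(tgt.lower().split()))
    return [length_ratio, punct_diff, cognate_score]


# Hypothetical manually aligned pairs labelled by alignment type (e.g. "1-1", "2-1").
train_pairs = [("A short sentence.", "جملة قصيرة."),
               ("Two sentences here. Really two.", "جملتان هنا.")]
train_labels = ["1-1", "2-1"]

X = [pair_features(s, t) for s, t in train_pairs]
clf = SVC(kernel="rbf").fit(X, train_labels)  # multi-class SVM (one-vs-one internally)
print(clf.predict([pair_features("Another sentence.", "جملة أخرى.")]))
```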

A Study on Comparison of Lung Cancer Prediction Using Ensemble Machine Learning

  • NAM, Yu-Jin;SHIN, Won-Ji
    • 한국인공지능학회지 / Vol. 7, No. 2 / pp.19-24 / 2019
  • Lung cancer is a chronic disease that ranks fourth in cancer incidence in Korea, accounting for 11 percent of all cancer cases. To address this, the usefulness and application of the Clinical Decision Support System (CDSS), which utilizes machine learning, are being actively studied. This study therefore reviews existing work on artificial intelligence techniques that can be used to detect lung cancer and examines the applicability of machine learning to lung cancer detection through comparison and analysis using Microsoft's Azure ML. The results show the different predictions yielded by three algorithms: the Two-Class Support Vector Machine (SVM), Two-Class Decision Jungle, and Multiclass Decision Jungle. This study is limited by the size of the data used for machine learning: although the Kaggle dataset is the most suitable one available for this study, the small number of records presumably limits how fully the models can learn from it. If subsequent research, with the cooperation of the relevant agencies, compares and analyzes a wider range of algorithms than those used here, a more accurate screening model for lung cancer could be created.
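
The comparison described above was run in Azure ML; the sketch below is a rough scikit-learn analogue on synthetic tabular data, using a random forest as a stand-in for the Decision Jungle modules (which have no scikit-learn counterpart). The dataset, features, and model settings are assumptions for illustration only.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for a small, imbalanced lung-cancer table.
X, y = make_classification(n_samples=500, n_features=10, weights=[0.8, 0.2], random_state=0)

models = {
    "Two-class SVM": SVC(kernel="rbf"),
    "Decision-jungle stand-in (random forest)": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```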

머신러닝 기반 한국 청소년의 자살 생각 예측 모델 (Machine Learning-based Predictive Model of Suicidal Thoughts among Korean Adolescents)

  • YeaJu JIN;HyunKi KIM
    • Journal of Korea Artificial Intelligence Association / Vol. 1, No. 1 / pp.1-6 / 2023
  • This study developed models using decision forest, support vector machine, and logistic regression methods to predict and prevent suicidal ideation among Korean adolescents. The study sample consisted of 51,407 individuals after removing missing data from the raw data of the 18th (2022) Youth Health Behavior Survey conducted by the Korea Centers for Disease Control and Prevention. Analysis was performed using the MS Azure program with Two-Class Decision Forest, Two-Class Support Vector Machine, and Two-Class Logistic Regression. The results of the study showed that the decision forest model achieved an accuracy of 84.8% and an F1-score of 36.7%. The support vector machine model achieved an accuracy of 86.3% and an F1-score of 24.5%. The logistic regression model achieved an accuracy of 87.2% and an F1-score of 40.1%. Applying the logistic regression model with SMOTE to address data imbalance resulted in an accuracy of 81.7% and an F1-score of 57.7%. Although the accuracy slightly decreased, the recall, precision, and F1-score improved, demonstrating excellent performance. These findings have significant implications for the development of prediction models for suicidal ideation among Korean adolescents and can contribute to the prevention and improvement of youth suicide.
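
A minimal sketch of the imbalance-handling step reported above: oversample the minority class with SMOTE (from the third-party imbalanced-learn package), fit a two-class logistic regression, and report accuracy, precision, recall, and F1. Synthetic data with a roughly similar positive rate stands in for the survey data, so the numbers it prints will not match the paper's.

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the survey data (minority class ~13%).
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.87, 0.13], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the training set only, then fit the two-class logistic regression.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
clf = LogisticRegression(max_iter=1000).fit(X_res, y_res)
pred = clf.predict(X_te)

for name, metric in [("accuracy", accuracy_score), ("precision", precision_score),
                     ("recall", recall_score), ("F1", f1_score)]:
    print(name, round(metric(y_te, pred), 3))
```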

Support Vector Machine Learning for Region-Based Image Retrieval with Relevance Feedback

  • Kim, Deok-Hwan;Song, Jae-Won;Lee, Ju-Hong;Choi, Bum-Ghi
    • ETRI Journal / Vol. 29, No. 5 / pp.700-702 / 2007
  • We present a relevance feedback approach based on multi-class support vector machine (SVM) learning and cluster merging, which can significantly improve the retrieval performance of region-based image retrieval. Semantically relevant images may exhibit various visual characteristics and may be scattered across several classes in the feature space because of the semantic gap between low-level features and the high-level semantics in the user's mind. To find these semantic classes through relevance feedback, the proposed method reduces the burden of completely re-clustering the classes at each iteration and classifies multiple classes. Experimental results show that the proposed method is more effective and efficient than the two-class SVM and multi-class relevance feedback methods.
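
A minimal sketch of the cluster-then-classify idea on toy 2-D features: relevant images from a feedback round are grouped with k-means into a few semantic clusters, a multi-class SVM separates those clusters from the irrelevant images, and database images are ranked by their probability of not being irrelevant. The feature dimensionality, cluster count, and ranking rule are illustrative assumptions, and the paper's cluster-merging step is not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy feedback round: relevant images fall into two visual groups, irrelevant ones elsewhere.
relevant = np.vstack([rng.normal([0, 0], 1.0, (30, 2)), rng.normal([6, 6], 1.0, (30, 2))])
irrelevant = rng.normal([12, 0], 1.0, (60, 2))

# Label relevant images by semantic cluster (1..k); irrelevant images get label 0.
k = 2
rel_labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(relevant) + 1
X = np.vstack([relevant, irrelevant])
y = np.concatenate([rel_labels, np.zeros(len(irrelevant), dtype=int)])

svm = SVC(kernel="rbf", probability=True).fit(X, y)

# Rank unseen database images by 1 - P(irrelevant); classes_ is sorted, so column 0 is class 0.
database = rng.uniform(-2, 14, size=(10, 2))
relevance = 1.0 - svm.predict_proba(database)[:, 0]
print(np.argsort(-relevance))
```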


불균형 데이터 학습을 위한 지지벡터기계 알고리즘 (Support Vector Machine Algorithm for Imbalanced Data Learning)

  • 김광성;황두성
    • 한국컴퓨터정보학회논문지 / Vol. 15, No. 7 / pp.11-17 / 2010
  • This paper proposes an improved SMO learning algorithm that solves the quadratic optimization problem for class-imbalanced learning. The SMO algorithm is well suited to implementing the optimization problem of a support vector machine in which different regularization values are assigned to the classes, and the proposed algorithm repeats a learning step that solves for the current values of two Lagrange multipliers selected from different classes. The proposed learning algorithm was tested on UCI benchmark problems, and its generalization performance was compared with that of the standard SMO algorithm using the g-mean measure, which reflects the imbalanced class distribution. The experimental results show that, compared with SMO, the proposed algorithm improves the prediction rate for the minority class and shortens training time.
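
The modified SMO solver itself is not available in scikit-learn, but the per-class regularization it optimizes can be sketched with SVC's class_weight option (effectively a different C per class), with the g-mean computed via imbalanced-learn. The synthetic data and the weight value below are assumptions for illustration.

```python
from imblearn.metrics import geometric_mean_score
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic imbalanced two-class problem (minority class ~5%).
X, y = make_classification(n_samples=2000, n_features=10, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

plain = SVC(C=1.0).fit(X_tr, y_tr)                                     # one C for both classes
weighted = SVC(C=1.0, class_weight={0: 1.0, 1: 10.0}).fit(X_tr, y_tr)  # larger C for the minority class

for name, model in [("equal C", plain), ("per-class C", weighted)]:
    print(name, "g-mean:", round(geometric_mean_score(y_te, model.predict(X_te)), 3))
```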

Medical Image Retrieval based on Multi-class SVM and Correlated Categories Vector

  • Park, Ki-Hee;Ko, Byoung-Chul;Nam, Jae-Yeal
    • 한국통신학회논문지 / Vol. 34, No. 8C / pp.772-781 / 2009
  • This paper proposes a novel algorithm for the efficient classification and retrieval of medical images. After color and edge features are extracted from medical images, the two feature vectors are applied to a multi-class Support Vector Machine to produce membership vectors. The two membership vectors are then combined into an ensemble feature vector. In addition, to reduce the search time, a Correlated Categories Vector is proposed for similarity matching. The experimental results show that the proposed system improves retrieval performance compared to other methods.
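
A minimal sketch of the membership-vector combination step: two multi-class SVMs, one per feature modality, produce per-class probability (membership) vectors that are concatenated into a single ensemble feature vector per image. The random "color" and "edge" matrices are placeholders, and the Correlated Categories Vector matching step is not reproduced.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_images, n_classes = 300, 5
color_feats = rng.normal(size=(n_images, 32))   # placeholder color-feature matrix
edge_feats = rng.normal(size=(n_images, 16))    # placeholder edge-feature matrix
labels = rng.integers(0, n_classes, size=n_images)

# One multi-class SVM per modality, each yielding a per-class membership vector.
color_svm = SVC(kernel="rbf", probability=True).fit(color_feats, labels)
edge_svm = SVC(kernel="rbf", probability=True).fit(edge_feats, labels)

ensemble = np.hstack([color_svm.predict_proba(color_feats),
                      edge_svm.predict_proba(edge_feats)])
print(ensemble.shape)  # (300, 10): two 5-class membership vectors concatenated per image
```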

음성인식기 구현을 위한 SVM과 독립성분분석 기법의 적용 (Adoption of Support Vector Machine and Independent Component Analysis for Implementation of Speech Recognizer)

  • 박정원;김평환;김창근;허강인
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2003년도 하계종합학술대회 논문집 Ⅳ / pp.2164-2167 / 2003
  • In this paper, we propose an effective speech recognizer through recognition experiments on three feature parameters (PCA, ICA, and MFCC) using an SVM (Support Vector Machine) classifier. In general, SVM is a classification method that separates two classes by finding an arbitrary nonlinear boundary in the vector space, and it achieves high classification performance even with a small number of training samples. We compare the recognition results for each feature parameter and propose the ICA feature as the most effective one.
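
A minimal sketch of such a comparison, assuming generic feature vectors in place of the speech data: the same inputs are projected with PCA and with ICA, and an SVM is trained on each projection. MFCC extraction and the actual recognition corpus are omitted, and the component counts are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA, FastICA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in for frame-level speech feature vectors.
X, y = make_classification(n_samples=400, n_features=39, n_informative=10, random_state=0)

for name, reducer in [("PCA", PCA(n_components=12)),
                      ("ICA", FastICA(n_components=12, max_iter=1000, random_state=0))]:
    pipe = make_pipeline(reducer, SVC(kernel="rbf"))
    score = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name} + SVM accuracy: {score:.3f}")
```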


Support Vector Machines을 이용한 다중 클래스 문제 해결 (Solving Multi-class Problem using Support Vector Machines)

  • 고재필
    • 한국정보과학회논문지:소프트웨어및응용 / Vol. 32, No. 12 / pp.1260-1270 / 2005
  • Recently, the Support Vector Machine (SVM) has attracted attention as a representative kernel-machine learner in the machine learning field. Based on statistical learning theory, the SVM shows excellent generalization performance and has been applied to a variety of pattern recognition problems. However, since the SVM is a binary classifier, it cannot be applied directly to general multi-class problems. One-Per-Class and All-Pairs are the representative schemes for applying the SVM to face recognition, a multi-class problem. Both schemes belong to a general framework called output coding, which decomposes a multi-class problem into several binary-class problems and then combines their results to make the final decision. This paper describes the output coding methodology as a way to extend the binary SVM to a multi-class classifier. We also propose new output coding methods based on ECOC (Error-Correcting Output Codes), the representative theoretical foundation of output coding, and, through face recognition experiments, compare and analyze the characteristics of output coding methods when the SVM is used as the base classifier.
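
The output-coding schemes mentioned above map directly onto scikit-learn wrappers, sketched below on the bundled digits data as a stand-in for the face-recognition images: One-Per-Class corresponds to one-vs-rest, All-Pairs to one-vs-one, and ECOC to a code-matrix classifier. The code matrix here is random rather than one of the paper's proposed codes.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier, OutputCodeClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)  # 10-class stand-in for the face database
base = SVC(kernel="rbf", gamma="scale")

schemes = {
    "One-Per-Class (one-vs-rest)": OneVsRestClassifier(base),
    "All-Pairs (one-vs-one)": OneVsOneClassifier(base),
    "ECOC (random code matrix)": OutputCodeClassifier(base, code_size=2.0, random_state=0),
}
for name, clf in schemes.items():
    acc = cross_val_score(clf, X, y, cv=3).mean()
    print(f"{name}: {acc:.3f}")
```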

가우시안 기반 Hyper-Rectangle 생성을 이용한 효율적 단일 분류기 (An Efficient One Class Classifier Using Gaussian-based Hyper-Rectangle Generation)

  • 김도균;최진영;고정한
    • 산업경영시스템학회지 / Vol. 41, No. 2 / pp.56-64 / 2018
  • In recent years, imbalanced data has become one of the most important and frequent issues for quality control in industrial fields. For example, defect rates have been drastically reduced thanks to highly developed technology and quality management, so only a few defective samples can be obtained from a production process. Quality classification must therefore be performed under the condition that one class (the defective dataset) is far smaller than the other class (the good dataset). However, traditional multi-class classification methods are not appropriate for such an imbalanced dataset, since they classify data based on differences between one class and the others that can hardly be found in imbalanced datasets. One-class classification, which thoroughly learns the patterns of the target class, is more suitable for an imbalanced dataset because it focuses only on data in the target class. Several one-class classification methods, such as the one-class support vector machine, neural networks, and decision trees, have been suggested so far. The one-class support vector machine and neural network can guarantee good classification rates, and the decision tree can provide a set of rules that can be clearly interpreted. However, the classifiers obtained from the former two methods consist of complex mathematical functions and cannot be easily understood by users, and in the case of the decision tree, the criterion for rule generation is ambiguous. As an alternative, a one-class classifier using hyper-rectangles was previously proposed, which performs precise classification compared to other methods and also generates rules that users can clearly understand. In this paper, we suggest an approach that improves on the limitations of those previous one-class classification algorithms. Specifically, the suggested approach produces an improved one-class classifier using hyper-rectangles generated with a Gaussian function. The performance of the suggested algorithm is verified by a numerical experiment that uses several datasets from the UCI machine learning repository.
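
To make the contrast concrete, the sketch below compares a standard one-class SVM with a toy hyper-rectangle whose per-feature bounds are set at the mean plus or minus z standard deviations of the target class. This toy rule only illustrates rectangle-shaped, interpretable one-class decision regions; it is not the algorithm proposed in the paper, and the data and quantile choice are assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
target = rng.normal(loc=0.0, scale=1.0, size=(500, 3))    # only "good" products are observed
outliers = rng.uniform(low=-6.0, high=6.0, size=(50, 3))   # unseen defectives

# Baseline: one-class SVM trained on the target class (+1 = target, -1 = outlier).
oc_svm = OneClassSVM(kernel="rbf", nu=0.05).fit(target)
print("one-class SVM flags:", (oc_svm.predict(outliers) == -1).mean())

# Toy Gaussian-based hyper-rectangle: accept points within mean +/- z*std on every axis.
z = 2.58  # roughly the 99% two-sided Gaussian quantile (illustrative choice)
lower = target.mean(axis=0) - z * target.std(axis=0)
upper = target.mean(axis=0) + z * target.std(axis=0)
inside = np.all((outliers >= lower) & (outliers <= upper), axis=1)
print("hyper-rectangle flags:", (~inside).mean())
```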