• Title/Abstract/Keyword: selection of features

Search results: 904 items

Feature Selection for Multi-Class Support Vector Machines Using an Impurity Measure of Classification Trees: An Application to the Credit Rating of S&P 500 Companies

  • Hong, Tae-Ho; Park, Ji-Young
    • Asia Pacific Journal of Information Systems, Vol. 21, No. 2, pp. 43-58, 2011
  • Support vector machines (SVMs), a machine learning technique, have been applied not only to binary classification problems such as bankruptcy prediction but also to multi-class problems such as corporate credit rating. Although SVMs classify and predict successfully in many dichotomous and multi-class problems, their performance can easily fall below that of the best alternative model depending on how the predictors are selected. To overcome this weakness, this study proposes a feature selection approach for multi-class SVMs that uses the impurity measures of classification trees. Input features were selected with the C4.5 and CART algorithms, as well as the stepwise method of discriminant analysis, a well-known feature selection method. A multi-class SVM model for credit rating was built with the selected features, and experimental results are presented for data on S&P 500 companies.
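The abstract gives the pipeline only at a high level. Below is a minimal Python sketch of the general idea, using scikit-learn's CART impurity importances (scikit-learn has no C4.5) and the Wine toy dataset as a stand-in for the credit-rating data; the mean-importance threshold and the RBF SVM settings are illustrative assumptions, not the paper's setup.

```python
# Sketch only: select features by CART impurity importance, then train a multi-class SVM.
# Dataset, threshold, and hyperparameters are illustrative assumptions, not the paper's setup.
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = load_wine(return_X_y=True)          # stand-in for the credit-rating data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# CART provides Gini-impurity-based importances; keep features above the mean importance.
tree = DecisionTreeClassifier(criterion="gini", random_state=0).fit(X_tr, y_tr)
keep = tree.feature_importances_ > tree.feature_importances_.mean()

scaler = StandardScaler().fit(X_tr[:, keep])
svm = SVC(kernel="rbf", C=1.0)             # libsvm handles multi-class via one-vs-one
svm.fit(scaler.transform(X_tr[:, keep]), y_tr)
print("selected features:", keep.sum(), "accuracy:",
      accuracy_score(y_te, svm.predict(scaler.transform(X_te[:, keep]))))
```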

주성분 분석 로딩 벡터 기반 비지도 변수 선택 기법 (Unsupervised Feature Selection Method Based on Principal Component Loading Vectors)

  • 박영준; 김성범
    • 대한산업공학회지, Vol. 40, No. 3, pp. 275-282, 2014
  • One of the most widely used methods for dimensionality reduction is principal component analysis (PCA). However, the reduced dimensions obtained from PCA do not provide a clear interpretation with respect to the original features because they are linear combinations of a large number of original features. This interpretation problem can be overcome by feature selection approaches that identify the best subset of the given features. In this study, we propose an unsupervised feature selection method based on the geometrical information of the PCA loading vectors. Experimental results from a simulation study demonstrate the efficiency and usefulness of the proposed method.
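As a rough illustration of loading-vector-based selection, the sketch below scores each original feature by the norm of its loadings on the retained principal components and keeps the top-scoring ones. This is a simplified proxy for the paper's geometric criterion; the dataset, the 90% variance cut-off, and k = 5 are assumptions.

```python
# Sketch only: rank features by the size of their PCA loadings on the leading components.
# This is a simplified proxy for the paper's geometric criterion, not its exact method.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X, _ = load_wine(return_X_y=True)            # any unlabeled numeric matrix works
Xs = StandardScaler().fit_transform(X)

pca = PCA(n_components=0.9).fit(Xs)          # keep components explaining 90% of the variance
# loadings: rows = components, columns = original features
loadings = pca.components_ * np.sqrt(pca.explained_variance_)[:, None]

# Score each feature by the Euclidean norm of its loading vector across the kept components.
scores = np.linalg.norm(loadings, axis=0)
k = 5                                        # illustrative number of features to keep
selected = np.argsort(scores)[::-1][:k]
print("selected feature indices:", sorted(selected.tolist()))
```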

의미 기반 유전 알고리즘을 사용한 특징 선택 (Semantic-based Genetic Algorithm for Feature Selection)

  • 김정호; 인주호; 채수환
    • 인터넷정보학회논문지, Vol. 13, No. 4, pp. 1-10, 2012
  • This paper proposes an optimal, semantics-aware feature selection method for the preprocessing step of document classification. Feature selection removes unnecessary features and extracts the features needed for classification, and it plays a very important role in the classification task. As the selection technique we use LSA (Latent Semantic Analysis), which selects features by capturing their meaning; however, because basic LSA is not specialized for classification, we use a supervised LSA that has been adapted through supervised learning to suit classification. From the features selected by supervised LSA, a genetic algorithm, an optimization technique, is then used to extract a further optimized feature set. Finally, the documents to be classified are represented with the extracted features and classified with an SVM (Support Vector Machine) classifier. We assumed that considering semantics through supervised LSA and finding an optimal feature set through the genetic algorithm would yield high classification performance and efficiency. Classification experiments on Internet news articles confirmed high classification performance with a small number of features.
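A compact sketch of the overall pipeline is given below: plain LSA via TruncatedSVD (the supervised-LSA variant is not reproduced), a tiny genetic algorithm over binary masks of the LSA dimensions, and a linear SVM as the wrapper evaluator. The corpus, population size, and GA operators are illustrative assumptions.

```python
# Sketch only: LSA (TruncatedSVD) followed by a small genetic algorithm that searches
# binary masks over the LSA dimensions and scores them with a linear SVM.
# The supervised-LSA variant in the paper is not reproduced here; this uses plain LSA.
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
data = fetch_20newsgroups(subset="train", categories=["sci.space", "rec.autos"])
X = TfidfVectorizer(max_features=2000).fit_transform(data.data)
Z = TruncatedSVD(n_components=50, random_state=0).fit_transform(X)   # LSA features
y = data.target

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(LinearSVC(dual=False), Z[:, mask.astype(bool)], y, cv=3).mean()

# Tiny GA: keep the fittest half, then uniform crossover and bit-flip mutation.
pop = rng.integers(0, 2, size=(20, Z.shape[1]))
for gen in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]
    children = []
    while len(children) < 10:
        a, b = parents[rng.integers(10, size=2)]
        child = np.where(rng.random(Z.shape[1]) < 0.5, a, b)   # uniform crossover
        flip = rng.random(Z.shape[1]) < 0.02                   # mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected LSA dimensions:", int(best.sum()), "CV accuracy:", round(fitness(best), 3))
```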

객체검출을 위한 빠르고 효율적인 Haar-Like 피쳐 선택 알고리즘 (A Fast and Efficient Haar-Like Feature Selection Algorithm for Object Detection)

  • 정병우; 박기영; 황선영
    • 한국통신학회논문지, Vol. 38A, No. 6, pp. 486-491, 2013
  • This paper proposes a fast and efficient Haar-like feature selection algorithm for training the classifiers used in object detection. Conventional AdaBoost-based Haar-like feature selection considers only each feature's error on the training samples, so features that are similar in shape or redundant are frequently selected. The proposed algorithm computes feature similarity from the features' shapes and the distance between them, and removes from the feature set those features that are highly similar to already selected ones, so that feature selection becomes fast and efficient. Using the FERET face database, classifiers trained with the proposed algorithm were compared with classifiers trained with the conventional algorithm. Experimental results show that the classifier trained with the proposed feature selection method outperformed the classifier trained with the conventional method, and when both were trained to the same performance, the number of features in the classifier decreased by 20%.
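The sketch below illustrates the similarity-pruning idea: candidate Haar-like features are taken in order of training error, and a candidate is skipped if it has the same type as, and large window overlap with, an already selected feature. The IoU-based similarity rule and the toy candidates are stand-ins for the paper's shape-and-distance measure.

```python
# Sketch only: greedy pruning of Haar-like candidate features by a simple similarity rule
# (same feature type and large window overlap). The similarity measure in the paper
# combines shape and distance; the IoU-based rule below is an illustrative stand-in.
from dataclasses import dataclass

@dataclass
class HaarFeature:
    ftype: str      # e.g. "two_horizontal", "two_vertical", "three_horizontal"
    x: int
    y: int
    w: int
    h: int
    error: float    # weighted training error from the boosting round (assumed given)

def iou(a: HaarFeature, b: HaarFeature) -> float:
    ix = max(0, min(a.x + a.w, b.x + b.w) - max(a.x, b.x))
    iy = max(0, min(a.y + a.h, b.y + b.h) - max(a.y, b.y))
    inter = ix * iy
    union = a.w * a.h + b.w * b.h - inter
    return inter / union if union else 0.0

def select_features(candidates, max_features, sim_threshold=0.6):
    """Pick low-error features while skipping ones too similar to already chosen features."""
    chosen = []
    for f in sorted(candidates, key=lambda f: f.error):
        if any(f.ftype == g.ftype and iou(f, g) > sim_threshold for g in chosen):
            continue
        chosen.append(f)
        if len(chosen) == max_features:
            break
    return chosen

# Toy usage with made-up candidates:
cands = [HaarFeature("two_horizontal", 2, 2, 8, 4, 0.12),
         HaarFeature("two_horizontal", 3, 2, 8, 4, 0.13),   # near-duplicate of the first
         HaarFeature("two_vertical",   10, 6, 4, 8, 0.15)]
print([f.ftype for f in select_features(cands, max_features=2)])
```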

Feature selection in the semivarying coefficient LS-SVR

  • Hwang, Changha; Shim, Jooyong
    • Journal of the Korean Data and Information Science Society, Vol. 28, No. 2, pp. 461-471, 2017
  • In this paper we propose a feature selection method that identifies important features in the semivarying coefficient model. One important issue in the semivarying coefficient model is how to estimate the parametric and nonparametric components. Another is how to identify the important features in the varying and the constant effects. We propose a feature selection method that addresses this issue using the generalized cross validation functions of the varying coefficient least squares support vector regression (LS-SVR) and the linear LS-SVR. Numerical studies indicate that the proposed method is quite effective in identifying the important features of both the varying and the constant effects in the semivarying coefficient model.
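As a loose illustration, the sketch below runs backward feature elimination scored by a generalized cross validation (GCV) function, with kernel ridge regression standing in for the varying-coefficient LS-SVR; the semivarying split into varying and constant effects, and the paper's actual GCV functions, are not reproduced.

```python
# Sketch only: backward elimination of features scored by GCV of a kernel ridge regressor,
# used here as a stand-in for the varying-coefficient LS-SVR. Data and settings are synthetic.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def gcv(X, y, lam=1.0, gamma=1.0):
    """GCV score of kernel ridge on features X: n * RSS / (n - trace(H))**2."""
    n = len(y)
    K = rbf_kernel(X, gamma=gamma)
    H = K @ np.linalg.inv(K + lam * np.eye(n))      # smoother ("hat") matrix
    resid = y - H @ y
    return n * float(resid @ resid) / (n - np.trace(H)) ** 2

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 6))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=120)   # features 0 and 1 matter

features = list(range(X.shape[1]))
while len(features) > 1:
    scores = {j: gcv(X[:, [k for k in features if k != j]], y) for j in features}
    worst = min(scores, key=scores.get)             # dropping `worst` lowers GCV the most
    if scores[worst] >= gcv(X[:, features], y):     # stop if no removal improves GCV
        break
    features.remove(worst)
print("retained features:", features)
```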

A Feature Selection Technique based on Distributional Differences

  • Kim, Sung-Dong
    • Journal of Information Processing Systems, Vol. 2, No. 1, pp. 23-27, 2006
  • This paper presents a feature selection technique based on distributional differences for efficient machine learning. The initial training data consist of records with many features and a target value. We classify the records into positive and negative data based on the target value, divide the range of each feature's values into 10 intervals, and calculate the distribution over the intervals separately for the positive and the negative data. We then select the features, and the intervals of those features, for which the distributional differences exceed a certain threshold. Using the selected intervals and features, we obtain reduced training data. In the experiments, we show that the reduced training data can cut the training time of a neural network by about 40%, and that the trained functions also yield more profit in simulated stock trading.
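A direct sketch of the described procedure follows: split the data by target value, histogram each feature over 10 intervals for each class, and keep features whose interval-wise distributional difference exceeds a threshold. The synthetic data and the 0.15 threshold are illustrative assumptions.

```python
# Sketch only: score each feature by how differently its values distribute over 10 intervals
# for positive vs. negative examples, then keep features whose maximum interval difference
# exceeds a threshold. The bin count follows the abstract; the data and threshold are made up.
import numpy as np

def distributional_difference(x, y, n_bins=10):
    """Max absolute difference between the binned value distributions of the two classes."""
    edges = np.histogram_bin_edges(x, bins=n_bins)
    pos, _ = np.histogram(x[y == 1], bins=edges)
    neg, _ = np.histogram(x[y == 0], bins=edges)
    pos = pos / max(pos.sum(), 1)
    neg = neg / max(neg.sum(), 1)
    return np.abs(pos - neg).max()

rng = np.random.default_rng(0)
n = 500
y = rng.integers(0, 2, n)
X = np.column_stack([
    rng.normal(loc=y, scale=1.0),   # informative feature: mean shifted by class
    rng.normal(size=n),             # noise feature
    rng.normal(size=n),             # noise feature
])

threshold = 0.15                    # illustrative cut-off
scores = [distributional_difference(X[:, j], y) for j in range(X.shape[1])]
selected = [j for j, s in enumerate(scores) if s > threshold]
print("scores:", np.round(scores, 3), "selected:", selected)
```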

속성추출을 이용한 협동적 추천시스템의 성능 향상 (Performance Improvement of a Collaborative Recommendation System using Feature Selection)

  • 유상종; 권영식
    • 산업공학, Vol. 19, No. 1, pp. 70-77, 2006
  • One of the problems in developing a collaborative recommendation system is scalability. To alleviate the scalability problem efficiently and enhance the performance of the recommendation system, we propose a new recommendation system that uses feature selection. In our experiments, as the number of users and products increases, the proposed system using about a third of all features shows performance comparable to using all features in terms of precision, recall, and number of computations.

Diagnosis of Alzheimer's Disease using Combined Feature Selection Method

  • Faisal, Fazal Ur Rehman; Khatri, Uttam; Kwon, Goo-Rak
    • 한국멀티미디어학회논문지, Vol. 24, No. 5, pp. 667-675, 2021
  • Treatments for the symptoms of Alzheimer's disease (AD) are available, and much research is under way on early diagnosis. In this regard, several classification techniques based on T1-weighted images have been proposed to distinguish among AD, mild cognitive impairment (MCI), and healthy control (HC) patients. In this paper, we likewise apply traditional machine learning (ML) approaches to diagnose AD, and we present an improved feature selection method that reduces the model complexity that becomes an issue when using ML approaches. In the presented work, a combination of subcortical and cortical features from structural magnetic resonance imaging (sMRI) of 308 ADNI subjects is used to diagnose AD. Three binary classification experiments were performed: AD vs. eMCI, AD vs. lMCI, and AD vs. HC. The proposed feature selection method combines principal component analysis and recursive feature elimination to reduce the dimensionality and select the best features simultaneously. Experiments on the dataset demonstrated that SVM is best suited for AD vs. lMCI, AD vs. HC, and AD vs. eMCI classification, with accuracies of 95.83%, 97.83%, and 97.87%, respectively.
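The sketch below wires PCA and recursive feature elimination (RFE) in front of a linear SVM, in the spirit of the combined selection described above; the breast cancer toy dataset and all hyperparameters are stand-ins for the paper's sMRI features and ADNI protocol.

```python
# Sketch only: a PCA + recursive feature elimination (RFE) + SVM pipeline on a stand-in
# dataset. The paper's sMRI cortical/subcortical features and ADNI splits are not used here.
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)     # binary problem, as in AD vs. HC

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=20)),                                # reduce dimensionality
    ("rfe", RFE(SVC(kernel="linear"), n_features_to_select=10)),  # then pick components
    ("clf", SVC(kernel="linear", C=1.0)),
])
print("CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean().round(3))
```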

Neighborhood 러프집합 모델을 활용한 유방 종양의 진단적 특징 선택 (A Diagnostic Feature Subset Selection of Breast Tumor Based on Neighborhood Rough Set Model)

  • 손창식; 최락현; 강원석; 이종하
    • 한국산업정보학회논문지, Vol. 21, No. 6, pp. 13-21, 2016
  • Feature selection, one of the most important issues in data mining and machine learning, is the process of finding the features in the original data that yield the best classification performance. This paper proposes a feature selection method based on the neighborhood rough set model, which is grounded in information granularity. The effectiveness of the proposed method was evaluated on the problem of selecting, from 298 features extracted from 5,252 breast ultrasound images, the features useful for diagnosing breast tumors. The experiments identified 19 diagnostic features, with an average classification accuracy of 97.6%.
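The sketch below implements a common form of neighborhood rough set forward selection: features are added greedily while the dependency degree (the fraction of samples whose delta-neighborhood is pure in class label) keeps increasing. The dataset, delta = 0.15, and the stopping rule are assumptions, not the paper's exact granulation.

```python
# Sketch only: greedy forward selection driven by the neighborhood rough set dependency degree.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import MinMaxScaler
from scipy.spatial.distance import cdist

def dependency(X_sub, y, delta=0.15):
    """Fraction of samples whose neighbors (distance <= delta) all share the sample's class."""
    D = cdist(X_sub, X_sub)
    pure = 0
    for i in range(len(y)):
        nbrs = y[D[i] <= delta]
        pure += int(np.all(nbrs == y[i]))
    return pure / len(y)

X, y = load_breast_cancer(return_X_y=True)      # stand-in for the ultrasound features
X = MinMaxScaler().fit_transform(X[:200])       # a small subset keeps the sketch fast
y = y[:200]

selected, remaining, best = [], list(range(X.shape[1])), 0.0
while remaining:
    gains = {j: dependency(X[:, selected + [j]], y) for j in remaining}
    j_star = max(gains, key=gains.get)
    if gains[j_star] <= best + 1e-6:            # stop when dependency no longer increases
        break
    best = gains[j_star]
    selected.append(j_star)
    remaining.remove(j_star)
print("selected features:", selected, "dependency:", round(best, 3))
```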

Microblog User Geolocation by Extracting Local Words Based on Word Clustering and Wrapper Feature Selection

  • Tian, Hechan; Liu, Fenlin; Luo, Xiangyang; Zhang, Fan; Qiao, Yaqiong
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 14, No. 10, pp. 3972-3988, 2020
  • Existing methods rely on statistical features to extract local words for microblog user geolocation, so many non-local words appear among the extracted words, which lowers geolocation accuracy. Considering both the statistical and the semantic features of local words, this paper proposes a microblog user geolocation method that extracts local words based on word clustering and wrapper feature selection. First, ordinary words without positional indications are filtered out based on statistical features. Second, a word clustering algorithm based on word vectors is proposed: the remaining words are clustered by the distance between their word vectors, so that semantically similar words are grouped together. Next, a wrapper feature selection algorithm based on sequential backward subset search is proposed, and the cluster subset with the best geolocation effect is selected; the words in this subset are extracted as local words. Finally, a Naive Bayes classifier is trained on the local words to geolocate the microblog user. The proposed method is validated on two different types of microblog data, Twitter and Weibo. The results show that it outperforms two typical existing methods based on statistical features in terms of accuracy, precision, recall, and F1-score.
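The sketch below mirrors the described pipeline with synthetic placeholders: k-means clustering of word vectors, a sequential backward wrapper search over clusters scored by a multinomial Naive Bayes "geolocation" classifier, and selection of the best-scoring cluster subset. All data, dimensions, and counts are made up for illustration.

```python
# Sketch only: cluster word vectors with k-means, then run a sequential backward wrapper
# search over the clusters, scoring each cluster subset with a multinomial Naive Bayes
# geolocation classifier. Data, vectors, and cluster count are synthetic placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB

rng = np.random.default_rng(0)
n_users, n_words, n_clusters = 300, 200, 8
word_vecs = rng.normal(size=(n_words, 50))            # stand-in for trained word embeddings
counts = rng.poisson(1.0, size=(n_users, n_words))    # user-by-word count matrix
cities = rng.integers(0, 4, size=n_users)             # city label per user

clusters = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(word_vecs)

def score(active_clusters):
    cols = np.isin(clusters, list(active_clusters))
    if not cols.any():
        return 0.0
    return cross_val_score(MultinomialNB(), counts[:, cols], cities, cv=3).mean()

# Sequential backward search: drop the cluster whose removal helps most (or hurts least),
# keeping the best subset seen so far.
active = set(range(n_clusters))
best_subset, best_score = set(active), score(active)
while len(active) > 1:
    drop = max(active, key=lambda c: score(active - {c}))
    active = active - {drop}
    s = score(active)
    if s >= best_score:
        best_subset, best_score = set(active), s
print("kept clusters:", sorted(best_subset), "CV accuracy:", round(best_score, 3))
```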