• Title/Abstract/Keyword: feature subset selection

Search results: 85 items (processing time: 0.02 s)

단변량 분석과 LVF 알고리즘을 결합한 하이브리드 속성선정 방법 (A Hybrid Feature Selection Method using Univariate Analysis and LVF Algorithm)

  • 이재식;정미경
    • 지능정보연구 / Vol. 14, No. 4 / pp.179-200 / 2008
  • In this study, we developed a feature selection method for case-based reasoning that improves both efficiency and effectiveness. The method integrates the previously developed univariate analysis approach with the LVF algorithm. First, using the selection effect, one of the univariate analysis measures, a portion of the features judged to have strong predictive power is screened from the full feature set. All possible subsets of these features are then generated, and the LVF algorithm evaluates the inconsistency rate of each subset to select the final feature subset. We measured the performance of the proposed method on datasets from the UCI repository and compared it with existing techniques; the proposed method selected a satisfactory number of features while improving the hit rate, showing that it is superior in terms of both efficiency and effectiveness.

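
The LVF inconsistency criterion evaluated in the final step above can be sketched as follows; this is a minimal illustration of the inconsistency rate, not the authors' implementation, and all names and data are illustrative:

```python
from collections import Counter, defaultdict

def inconsistency_rate(rows, labels, subset):
    """LVF-style inconsistency rate: rows that agree on every selected
    feature but carry different class labels count as inconsistent."""
    groups = defaultdict(list)
    for row, label in zip(rows, labels):
        groups[tuple(row[i] for i in subset)].append(label)
    inconsistent = sum(len(g) - max(Counter(g).values())
                       for g in groups.values())
    return inconsistent / len(rows)

rows = [(0, 1), (0, 0), (1, 0), (1, 1)]
labels = ["a", "b", "b", "a"]
print(inconsistency_rate(rows, labels, [0]))     # feature 0 alone is ambiguous
print(inconsistency_rate(rows, labels, [0, 1]))  # both features together are consistent
```

A subset whose inconsistency rate stays below a chosen threshold is accepted; among acceptable subsets, the smallest one is preferred.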

Comparison of Feature Selection Processes for Image Retrieval Applications

  • Choi, Young-Mee;Choo, Moon-Won
    • 한국멀티미디어학회논문지 / Vol. 14, No. 12 / pp.1544-1548 / 2011
  • The process of choosing a subset of the original features, so-called feature selection, is considered a crucial preprocessing step for image processing applications. Large pools of techniques have already been developed in the machine learning and data mining fields. In this paper, two settings, classification without and with feature selection, are investigated to compare their predictive effectiveness. Color co-occurrence features are used to define the image features. The standard Sequential Forward Selection algorithm is used for feature selection, identifying relevant features and the redundancy among them. Four color spaces, RGB, YCbCr, HSV, and Gaussian, are considered for computing the color co-occurrence features. A gray-level image feature is also considered for performance comparison. The experimental results are presented.
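
The Sequential Forward Selection procedure used above can be sketched as a greedy loop; the scoring function here is a toy stand-in for a real classifier evaluation:

```python
def sequential_forward_selection(n_features, score, k):
    """Greedy SFS: start from the empty set and repeatedly add the
    single feature that most improves the score of the current subset."""
    selected, remaining = [], list(range(n_features))
    while len(selected) < k and remaining:
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# toy score: rewards features 0 and 2, mildly penalizes the rest
ideal = {0, 2}
score = lambda s: len(ideal & set(s)) - 0.1 * len(set(s) - ideal)
print(sequential_forward_selection(4, score, 2))
```

In practice the score would be the classification accuracy of a model trained on the candidate subset.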

단백체 스펙트럼 데이터의 분류를 위한 랜덤 포리스트 기반 특성 선택 알고리즘 (Feature Selection for Classification of Mass Spectrometric Proteomic Data Using Random Forest)

  • 온승엽;지승도;한미영
    • 한국시뮬레이션학회논문지 / Vol. 22, No. 4 / pp.139-147 / 2013
  • This paper proposes a new feature selection method for the classification analysis of mass spectrometric proteomic data. The method consists of (i) a preprocessing step that effectively removes redundant features with high mutual correlation, and (ii) a search step that finds the optimal feature subset using a tournament strategy. We applied the proposed method to publicly available blood proteomic data used in actual cancer diagnosis and demonstrated that it achieves superior performance and a better balance of specificity and sensitivity than other widely used methods.
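
The correlation-based preprocessing of step (i) above can be sketched as follows; the threshold and data are illustrative, and the tournament search of step (ii) is omitted:

```python
def drop_correlated(columns, threshold=0.9):
    """Keep a feature column only if its absolute Pearson correlation
    with every already-kept column stays below the threshold."""
    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy) if sx and sy else 0.0
    kept = []
    for i, col in enumerate(columns):
        if all(abs(pearson(col, columns[j])) < threshold for j in kept):
            kept.append(i)
    return kept

cols = [[1, 2, 3, 4],   # feature 0
        [2, 4, 6, 8],   # feature 1: perfectly correlated with feature 0
        [4, 1, 3, 2]]   # feature 2: weakly correlated with feature 0
print(drop_correlated(cols))
```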

Effective Multi-label Feature Selection based on Large Offspring Set created by Enhanced Evolutionary Search Process

  • Lim, Hyunki;Seo, Wangduk;Lee, Jaesung
    • 한국컴퓨터정보학회논문지 / Vol. 23, No. 9 / pp.7-13 / 2018
  • Recent advances in data-gathering techniques have improved the capacity for collecting information, enabling learning between gathered data patterns and application sub-tasks. A pattern can be associated with multiple labels, demanding multi-label learning capability; multi-label feature selection has therefore drawn significant attention because it can improve multi-label learning accuracy. However, existing evolutionary multi-label feature selection methods suffer from an ineffective search process. In this study, we propose an evolutionary search process for the multi-label feature selection problem. The proposed method creates a large set of offspring, i.e., new feature subsets, and then retains only the most promising ones. Experimental results demonstrate that the proposed method identifies feature subsets giving good multi-label classification accuracy much faster than conventional methods.
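
The large-offspring strategy described above can be sketched as follows; the bit-flip mutation, population sizes, and Hamming-style toy fitness are all assumptions for illustration, not the paper's operators:

```python
import random

def evolve_subset(n_features, fitness, pop_size=8, n_offspring=64,
                  generations=30, seed=0):
    """Each generation builds a large pool of offspring by mutating the
    current population, then retains only the fittest subsets (elitist:
    the pool includes the parents, so the best subset never degrades)."""
    rng = random.Random(seed)

    def mutate(mask):
        child = list(mask)
        i = rng.randrange(n_features)
        child[i] = 1 - child[i]          # flip one inclusion bit
        return tuple(child)

    pop = [tuple(rng.randint(0, 1) for _ in range(n_features))
           for _ in range(pop_size)]
    for _ in range(generations):
        pool = set(pop)
        for _ in range(n_offspring):
            pool.add(mutate(rng.choice(pop)))
        pop = sorted(pool, key=fitness, reverse=True)[:pop_size]
    return pop[0]

target = (1, 0, 1, 0, 1)                 # toy optimal feature mask
fitness = lambda m: sum(a == b for a, b in zip(m, target))
print(evolve_subset(5, fitness))
```

With a real multi-label classifier, the fitness would score each candidate subset's classification accuracy.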

Feature Selection Based on Bi-objective Differential Evolution

  • Das, Sunanda;Chang, Chi-Chang;Das, Asit Kumar;Ghosh, Arka
    • Journal of Computing Science and Engineering / Vol. 11, No. 4 / pp.130-141 / 2017
  • Feature selection is one of the most challenging problems in pattern recognition and data mining. In this paper, a feature selection algorithm based on an improved version of binary differential evolution is proposed. The method simultaneously optimizes two feature selection criteria, namely the set approximation accuracy of rough set theory and a score derived from relational algebra, in order to select the most relevant feature subset from the entire feature set. The superiority of the proposed method over other state-of-the-art methods is confirmed by experimental results conducted on seven publicly available benchmark datasets with different characteristics, such as a low number of objects with a high number of features and a high number of objects with a low number of features.
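
One binary differential-evolution move of the kind used above can be sketched via an XOR-based donor vector; this simplified trial-vector builder is an illustration, not the authors' operator, and in the full algorithm a trial replaces its parent only when it improves the combined bi-objective score:

```python
import random

def make_trial(target, a, b, c, cr, rng):
    """One binary-DE move: the donor flips base vector a wherever the
    two difference vectors b and c disagree (a XOR (b XOR c)); binomial
    crossover then mixes donor and target with crossover rate cr."""
    n = len(target)
    donor = [a[d] ^ (b[d] ^ c[d]) for d in range(n)]
    must = rng.randrange(n)  # guarantee at least one donor gene survives
    return [donor[d] if d == must or rng.random() < cr else target[d]
            for d in range(n)]

target = [0, 0, 0, 0]
a, b, c = [1, 0, 1, 0], [1, 1, 0, 0], [1, 0, 0, 0]
# with cr = 1.0 the trial equals the donor vector
print(make_trial(target, a, b, c, 1.0, random.Random(0)))
```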

Feature Selection Using Submodular Approach for Financial Big Data

  • Attigeri, Girija;Manohara Pai, M.M.;Pai, Radhika M.
    • Journal of Information Processing Systems / Vol. 15, No. 6 / pp.1306-1325 / 2019
  • As the world moves toward digitization, data is generated from various sources at an ever faster rate. It has grown humongous and is termed big data. The financial sector is one domain that needs to leverage this big data to identify financial risks, fraudulent activities, and so on. Designing predictive models for such financial big data is imperative for maintaining the health of a country's economy. Financial data has many features, such as transaction history, repayment data, purchase data, investment data, and so on. The main problem in predictive modeling is finding the right subset of representative features from which the predictive model can be constructed for a particular task. This paper proposes a correlation-based method using submodular optimization for selecting the optimum number of features, thereby reducing the dimensions of the data for faster and better prediction. The key proposition is that the optimal feature subset should contain features that correlate highly with the class label but not with each other. Experiments are conducted to understand the effect of the various subsets on different classification algorithms for loan data. The IBM Bluemix BigData platform is used for experimentation along with the Spark notebook. The results indicate that the proposed approach achieves considerable accuracy with optimal subsets in significantly less execution time. The algorithm is also compared with existing feature selection and extraction algorithms.
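
The proposition above (high correlation with the class label, low correlation within the subset) can be sketched as a greedy relevance-minus-redundancy selection; this is an illustrative reading of the criterion, not the paper's submodular formulation, and the data are invented:

```python
def greedy_select(columns, label, k):
    """Greedily add the feature with the best score, where the score is
    |corr with label| minus the mean |corr| with features already chosen."""
    def corr(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy) if sx and sy else 0.0

    selected, candidates = [], list(range(len(columns)))
    while len(selected) < k and candidates:
        def gain(f):
            relevance = abs(corr(columns[f], label))
            redundancy = (sum(abs(corr(columns[f], columns[s]))
                              for s in selected) / len(selected)
                          if selected else 0.0)
            return relevance - redundancy
        best = max(candidates, key=gain)
        selected.append(best)
        candidates.remove(best)
    return selected

label = [1, 2, 3, 4]
cols = [[1, 2, 3, 5],   # strongly relevant
        [1, 2, 3, 6],   # relevant, but nearly a copy of feature 0
        [4, 1, 3, 2]]   # weaker, but complementary to feature 0
print(greedy_select(cols, label, 2))
```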

고차원 범주형 자료를 위한 비지도 연관성 기반 범주형 변수 선택 방법 (Association-based Unsupervised Feature Selection for High-dimensional Categorical Data)

  • 이창기;정욱
    • 품질경영학회지 / Vol. 47, No. 3 / pp.537-552 / 2019
  • Purpose: The development of information technology makes it easy to utilize high-dimensional categorical data. The purpose of this study is to propose a novel method for selecting the proper categorical variables in such data. Methods: The proposed feature selection method consists of three steps: (1) The first step defines the goodness-to-pick measure. In this paper, a categorical variable is relevant if it has relationships with the other variables; accordingly, the goodness-to-pick measure calculates the normalized conditional entropy with respect to the other variables. (2) The second step finds the relevant feature subset within the original variable set, deciding whether each variable is relevant or not. (3) The third step eliminates redundant variables from the relevant feature subset. Results: Our experiments showed that the proposed method generally yielded better classification performance than no feature selection on high-dimensional categorical data, especially as the number of irrelevant categorical variables increases. Moreover, as the number of irrelevant categorical variables with imbalanced categorical values grows, the accuracy gap between the proposed method and the compared methods widens. Conclusion: The experiments confirmed that the proposed method consistently produces high classification accuracy on high-dimensional categorical data, so it is promising for high-dimensional settings.
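
The goodness-to-pick measure of step (1) above, a normalized conditional entropy, can be sketched as follows; the exact normalization used in the paper may differ:

```python
from collections import Counter
from math import log2

def entropy(values):
    """Shannon entropy of a list of categorical values, in bits."""
    n = len(values)
    return -sum((c / n) * log2(c / n) for c in Counter(values).values())

def normalized_conditional_entropy(x, y):
    """H(X|Y) / H(X): 0 when Y fully determines X, 1 when Y carries
    no information about X.  Uses H(X|Y) = H(X, Y) - H(Y)."""
    hx = entropy(x)
    if hx == 0:
        return 0.0
    return (entropy(list(zip(x, y))) - entropy(y)) / hx

x = ["a", "a", "b", "b"]
print(normalized_conditional_entropy(x, ["u", "u", "v", "v"]))  # determined
print(normalized_conditional_entropy(x, ["u", "v", "u", "v"]))  # independent
```

A variable whose normalized conditional entropy given some other variable is low is related to that variable and hence judged relevant.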

개인사업자 부도율 예측 모델에서 신용정보 특성 선택 방법 (The Credit Information Feature Selection Method in Default Rate Prediction Model for Individual Businesses)

  • 홍동숙;백한종;신현준
    • 한국시뮬레이션학회논문지 / Vol. 30, No. 1 / pp.75-85 / 2021
  • This paper presents a deep-neural-network prediction model that processes and analyzes the corporate and personal credit information of individual business owners as input features, as a new approach to predicting their default rate more accurately. In modeling research across many fields, feature selection techniques have been actively studied as a way to improve performance, particularly in prediction models with many features. In this paper, after statistical validation of the input variables, macroeconomic indicators (macro variables) and credit information (micro variables), a further credit-information feature selection step identifies a feature set that improves prediction performance. The proposed credit-information feature selection method is an iterative, hybrid approach combining a filter method based on statistical tests with multiple wrappers: sub-models are built, the important variables of the best-performing model are extracted to form subsets, and the final feature set is determined by analyzing the prediction performance of these subsets and their combined set.
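
The last step of the procedure above, comparing the wrapper-derived subsets against their combination, can be sketched as follows; the scoring function stands in for the sub-models' prediction-performance analysis, and the feature names are invented for illustration:

```python
def choose_final_features(candidate_subsets, score):
    """Score each wrapper-derived subset and the union of all of them,
    and return whichever feature set scores best."""
    pool = [frozenset(s) for s in candidate_subsets]
    pool.append(frozenset().union(*pool))  # the combined set
    return max(pool, key=score)

subsets = [{"income", "loan_count"}, {"loan_count", "delinquency"}]
# toy score: here larger sets win, so the combined set is returned
print(choose_final_features(subsets, score=len))
```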

Performance Evaluation of a Feature-Importance-based Feature Selection Method for Time Series Prediction

  • Hyun, Ahn
    • Journal of information and communication convergence engineering / Vol. 21, No. 1 / pp.82-89 / 2023
  • Various machine-learning models can yield high predictive power on massive time series. However, such models are prone to instability in computational cost because of the high dimensionality of the feature space and non-optimized hyperparameter settings. Considering the risk that training a model on a high-dimensional feature set can be time-consuming, we evaluate a feature-importance-based feature selection method to derive a tradeoff between predictive power and computational cost for time series prediction. We used two machine-learning techniques to generate prediction models from a retail sales dataset for the performance evaluation. First, we ranked the features using impurity-based and Local Interpretable Model-agnostic Explanations (LIME)-based feature importance measures on the prediction models. Then, the recursive feature elimination method was applied to remove unimportant features sequentially. Consequently, we obtained a subset of features that reduces model training time while preserving acceptable model performance.
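
The recursive feature elimination step described above can be sketched as follows; the fixed importance table is a stand-in for the impurity- or LIME-based importances that a real model would re-estimate on each reduced subset, and the feature names are illustrative:

```python
def recursive_feature_elimination(features, importance, n_keep):
    """Repeatedly drop the least-important remaining feature until
    only n_keep features are left; `importance` re-scores the
    surviving subset on every round."""
    remaining = list(features)
    while len(remaining) > n_keep:
        scores = importance(remaining)
        worst = min(range(len(remaining)), key=lambda i: scores[i])
        remaining.pop(worst)
    return remaining

# illustrative importances for a retail sales dataset
base = {"price": 0.9, "promotion": 0.6, "weekday": 0.3, "store_id": 0.05}
importance = lambda feats: [base[f] for f in feats]
print(recursive_feature_elimination(list(base), importance, 2))
```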

Hybrid Feature Selection Using Genetic Algorithm and Information Theory

  • Cho, Jae Hoon;Lee, Dae-Jong;Park, Jin-Il;Chun, Myung-Geun
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 13, No. 1 / pp.73-82 / 2013
  • In pattern classification, feature selection is an important factor in classifier performance. In particular, when classifying data with a large number of features or variables, the accuracy and computational time of the classifier can be improved by using a relevant feature subset to remove irrelevant, redundant, or noisy data. The proposed method consists of two parts: a wrapper part with an improved genetic algorithm (GA) using a new reproduction method, and a filter part using mutual information. We also considered feature selection methods based on mutual information (MI) to reduce computational complexity. Experimental results show that this method achieves better performance on pattern recognition problems than other conventional methods.
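
The mutual-information filter part described above can be sketched as follows; the GA wrapper part is omitted, and the data are illustrative:

```python
from collections import Counter
from math import log2

def mutual_information(x, y):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), estimated from value counts."""
    def h(values):
        n = len(values)
        return -sum((c / n) * log2(c / n) for c in Counter(values).values())
    return h(x) + h(y) - h(list(zip(x, y)))

def mi_filter(columns, label, k):
    """Filter stage: keep the k features sharing the most mutual
    information with the class label as candidates for the GA wrapper."""
    ranked = sorted(range(len(columns)),
                    key=lambda i: mutual_information(columns[i], label),
                    reverse=True)
    return sorted(ranked[:k])

label = [0, 0, 1, 1]
cols = [[0, 0, 1, 1],   # informative
        [0, 1, 0, 1],   # independent of the label
        [1, 1, 0, 0]]   # informative (inverted)
print(mi_filter(cols, label, 2))
```

The GA wrapper would then search the subsets of these surviving candidates, which keeps its search space small.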