• Title/Abstract/Keyword: Feature Selection Methods

Search results: 318

AutoFe-Sel: A Meta-learning based methodology for Recommending Feature Subset Selection Algorithms

  • Irfan Khan;Xianchao Zhang;Ramesh Kumar Ayyasam;Rahman Ali
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 17, No. 7 / pp.1773-1793 / 2023
  • Automated machine learning, often referred to as "AutoML," is the process of automating the time-consuming and iterative procedures associated with building machine learning models. There have been significant contributions in this area across several stages of a data-mining task, including model selection, hyper-parameter optimization, and preprocessing method selection. Among them, preprocessing method selection is a relatively new and fast-growing research area. The current work focuses on the recommendation of preprocessing methods, i.e., feature subset selection (FSS) algorithms. One limitation of existing studies on FSS algorithm recommendation is the use of a single learner for meta-modeling, which restricts meta-modeling capability. Moreover, the meta-modeling in existing studies is typically based on a single group of data characterization measures (DCMs), even though there are several complementary DCM groups whose combination would leverage their diversity and improve meta-modeling. This study addresses these limitations by proposing an architecture for preprocessing method selection, AutoFE-Sel, that uses ensemble learning for meta-modeling. To evaluate the proposed method, we performed an extensive experimental evaluation involving 8 FSS algorithms, 3 groups of DCMs, and 125 datasets. Results show that the proposed method outperforms three baseline methods. The proposed architecture can also be easily extended to other preprocessing selection tasks, e.g., noise-filter selection and imbalance-handling method selection.
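
The general meta-learning recipe described above can be sketched as follows: characterize each training dataset with data characterization measures, record which FSS algorithm performed best offline, and fit an ensemble meta-model that recommends an algorithm for a new dataset. The meta-features, the random-forest meta-learner, and all names below are illustrative assumptions, not the authors' AutoFE-Sel implementation.

```python
# Illustrative meta-learning loop: characterize datasets with simple meta-features
# and train an ensemble meta-model that recommends an FSS algorithm per dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def meta_features(X, y):
    """A tiny stand-in for data characterization measures (DCMs)."""
    n, d = X.shape
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    class_entropy = -(p * np.log2(p)).sum()
    mean_abs_corr = float(np.nan_to_num(np.abs(np.corrcoef(X, rowvar=False))).mean())
    return [n, d, d / n, class_entropy, mean_abs_corr]

# meta_X: one row of meta-features per training dataset
# meta_y: label of the FSS algorithm that performed best on that dataset
# (both come from an offline evaluation of the candidate FSS algorithms)
def fit_meta_model(meta_X, meta_y):
    return RandomForestClassifier(n_estimators=300, random_state=0).fit(meta_X, meta_y)

def recommend(meta_model, X_new, y_new):
    # returns the FSS algorithm label predicted for the new dataset
    return meta_model.predict([meta_features(X_new, y_new)])[0]
```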

퍼지 k-Nearest Neighbors 와 Reconstruction Error 기반 Lazy Classifier 설계 (Design of Lazy Classifier based on Fuzzy k-Nearest Neighbors and Reconstruction Error)

  • 노석범;안태천
    • 한국지능시스템학회논문지 / Vol. 20, No. 1 / pp.101-108 / 2010
  • This paper proposes the design of a lazy classifier based on fuzzy k-NN and feature selection driven by reconstruction error, where the reconstruction error is the evaluation index of locally linear reconstruction. When a new input is given, fuzzy k-NN defines the local region in which the local classifier is valid and assigns weights to the data patterns contained in that region. After the local region and the weights have been defined, feature selection is performed to reduce the dimensionality of the feature space. Once several features that perform well in terms of reconstruction error have been selected, a polynomial-type classifier is determined by the weighted least-squares method. The experimental results are compared with those of existing classifiers: standard neural networks, support vector machines, linear discriminant analysis, and C4.5 trees.
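
A heavily simplified sketch of the lazy scheme described above, assuming fuzzy k-NN memberships of the form 1/d^(2/(m-1)) and a degree-1 local model fitted by weighted least squares; the reconstruction-error-based feature selection step is omitted, and k and m are illustrative.

```python
# For each query: fuzzy k-NN memberships weight the local patterns, then a local
# linear model is fitted by weighted least squares on those patterns only.
import numpy as np

def fuzzy_knn_weights(X_train, x_query, k=15, m=2.0):
    d = np.linalg.norm(X_train - x_query, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] ** (2.0 / (m - 1.0)) + 1e-12)   # fuzzy membership weights
    return idx, w / w.sum()

def lazy_predict(X_train, y_train, x_query, k=15):
    idx, w = fuzzy_knn_weights(X_train, x_query, k)
    # degree-1 polynomial model; rows scaled by sqrt(w) gives weighted least squares
    A = np.hstack([np.ones((len(idx), 1)), X_train[idx]])
    Wh = np.diag(np.sqrt(w))
    coef, *_ = np.linalg.lstsq(Wh @ A, Wh @ y_train[idx], rcond=None)
    # for binary 0/1 labels, threshold the returned value at 0.5 to get a class
    return np.r_[1.0, x_query] @ coef
```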

A Novel Technique for Detection of Repacked Android Application Using Constant Key Point Selection Based Hashing and Limited Binary Pattern Texture Feature Extraction

  • MA Rahim Khan;Manoj Kumar Jain
    • International Journal of Computer Science & Network Security / Vol. 23, No. 9 / pp.141-149 / 2023
  • Repacked mobile apps constitute about 78% of all Android malware and greatly affect the Android ecosystem. Although many methods exist for repacked app detection, most of them suffer from performance issues. In this manuscript, a novel method using Constant Key Point Selection and Limited Binary Pattern (CKPS:LBP) feature-extraction-based hashing is proposed for identifying repacked Android applications through visual similarity, which is a notable characteristic of repacked applications. The experimental results show that the proposed method can effectively detect visually similar apps, even under two-fold content manipulation. In the experimental analysis, the proposed CKPS:LBP method detected 1354 similar applications from a repository of 95124 applications, and the decision on whether an app is repacked was returned within a computation time of 0.91 seconds. The overall efficiency of the proposed algorithm is 41% higher than the average of the other methods, the time complexity is reduced by 31%, and the hash collision probability is 41% better than the average of the other state-of-the-art methods.
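
The paper's CKPS:LBP pipeline is not reproduced here; the following is only a generic sketch of visual-similarity matching with LBP texture histograms and coarse hashing, assuming scikit-image is available and that app images (e.g., icons or screenshots) are supplied as grayscale arrays.

```python
# Generic stand-in: compute a local binary pattern histogram per app image and
# hash a quantized version of it so near-identical images land in the same bucket.
import hashlib
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_image, P=8, R=1.0):
    lbp = local_binary_pattern(gray_image, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def coarse_hash(hist, levels=16):
    # quantize so small pixel-level changes do not alter the hash
    q = np.floor(hist * levels).astype(int)
    return hashlib.sha1(q.tobytes()).hexdigest()

def looks_repacked(candidate_image, known_hashes):
    # known_hashes: set of coarse hashes of official app images
    return coarse_hash(lbp_histogram(candidate_image)) in known_hashes
```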

강인한 특징 변수 선별과 신경망을 이용한 장면 전환점 검출 기법 (Robust Feature Selection and Shot Change Detection Method Using the Neural Networks)

  • 홍승범;홍교영
    • 한국멀티미디어학회논문지 / Vol. 7, No. 7 / pp.877-885 / 2004
  • This paper proposes an improved shot change detection method based on the selection of robust features and a neural network. Existing shot change detection methods mainly use a single feature and a fixed threshold between adjacent frames. At a shot change in a video sequence, however, the content of adjacent frames, such as color, shape, background, and texture, changes simultaneously, so shot changes are detected more effectively with mutually complementary robust features than with a single feature. To select robust features, this paper uses CART (classification and regression tree), a representative data mining technique, and a backpropagation neural network is used to set the decision threshold over the resulting multidimensional features. The feature selection performance of the proposed method is evaluated against PCA (principal component analysis), a representative feature extraction technique, and the experimental results confirm that the proposed method outperforms PCA.
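
A minimal scikit-learn sketch of the pipeline described above: a CART tree ranks candidate inter-frame difference features, the top-ranked ones are kept, and a backpropagation network replaces a fixed threshold; the tree depth, hidden-layer size, and feature layout are assumptions.

```python
# X: rows of inter-frame difference features (color, shape, background, texture
# differences between adjacent frames); y: 1 = shot change, 0 = no change.
from sklearn.tree import DecisionTreeClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

cart_selector = SelectFromModel(DecisionTreeClassifier(max_depth=5, random_state=0))
detector = make_pipeline(cart_selector,
                         MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000))
# detector.fit(X_train, y_train)
# detector.predict(X_new_frame_pairs)
```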


Intelligent System for the Prediction of Heart Diseases Using Machine Learning Algorithms with a New Mixed Feature Creation (MFC) Technique

  • Rawia Elarabi;Abdelrahman Elsharif Karrar;Murtada El-mukashfi El-taher
    • International Journal of Computer Science & Network Security / Vol. 23, No. 5 / pp.148-162 / 2023
  • Classification systems can significantly assist the medical sector by allowing precise and quick diagnosis of diseases, saving time for both doctors and patients. Machine learning algorithms offer a possible way to identify risk variables: non-surgical technologies such as machine learning are trustworthy and effective in categorizing healthy and heart-disease patients, and they save time and effort. The goal of this study is to create a medical intelligent decision support system based on machine learning for the diagnosis of heart disease. We use a mixed feature creation (MFC) technique to generate new features from the UCI Cleveland Cardiology dataset. We select the most suitable features using the Least Absolute Shrinkage and Selection Operator (LASSO), Recursive Feature Elimination with Random Forest feature selection (RFE-RF), and the best features of both LASSO and RFE-RF (BLR). Cross-validation and grid search are used to optimize the parameters of each estimator, and performance metrics, including classification accuracy, specificity, sensitivity, precision, and F1-score, along with execution time and RMSE, are reported independently for each classification model for comparison. The proposed work finds the best potential outcome across all available prediction models and improves the system's performance, allowing physicians to diagnose heart patients more accurately.
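
A hedged sketch of the two feature selectors named in the abstract (LASSO and RFE with a random forest) wrapped in a grid-searched pipeline; the parameter grid, the number of retained features, and the logistic-regression classifier are illustrative choices, not the paper's exact setup.

```python
# Two candidate feature selectors combined with cross-validated grid search.
from sklearn.feature_selection import SelectFromModel, RFE
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

lasso_select = SelectFromModel(LassoCV(cv=5, random_state=0))
rfe_rf_select = RFE(RandomForestClassifier(n_estimators=200, random_state=0),
                    n_features_to_select=10)

pipe = Pipeline([("scale", StandardScaler()),
                 ("select", lasso_select),          # swap in rfe_rf_select to compare
                 ("clf", LogisticRegression(max_iter=1000))])

grid = GridSearchCV(pipe, {"clf__C": [0.1, 1.0, 10.0]}, cv=5, scoring="accuracy")
# grid.fit(X, y); print(grid.best_score_, grid.best_params_)
```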

속성선택방법과 워드임베딩 및 BOW (Bag-of-Words)를 결합한 오피니언 마이닝 성과에 관한 연구 (Investigating Opinion Mining Performance by Combining Feature Selection Methods with Word Embedding and BOW (Bag-of-Words))

  • 어균선;이건창
    • 디지털융복합연구 / Vol. 17, No. 2 / pp.163-170 / 2019
  • Over the past decade, the growth of the web has led to an explosion of data. In data mining, separating meaningless data from large volumes of data and extracting valuable data is a critical step. This study proposes an opinion mining model for sentiment analysis that combines text representation and feature selection methods. The representation methods used are bag-of-words (BOW) and word embedding to vector (Word2vec), and the feature selection methods are correlation-based feature selection and information gain. The classifiers used are logistic regression, neural network, naive Bayesian network, random forest, random subspace, and stacking. The empirical results show that on the electronics and kitchen datasets, logistic regression and stacking with BOW and information gain feature selection perform best, while on the laptop and restaurant datasets, random forest with Word2vec and information gain feature selection is the best-performing combination. These results indicate that the proposed combinations can improve the performance of opinion mining models.
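
One of the best-performing combinations reported above (BOW representation, information-gain feature selection, logistic regression) can be sketched with scikit-learn as follows; mutual information stands in for the information-gain criterion, and k and min_df are illustrative.

```python
# BOW + information-gain-style feature selection + logistic regression.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

opinion_clf = Pipeline([
    ("bow", CountVectorizer(min_df=2)),                  # bag-of-words representation
    ("ig", SelectKBest(mutual_info_classif, k=1000)),    # keep the most informative terms
    ("lr", LogisticRegression(max_iter=1000)),
])
# opinion_clf.fit(review_texts, sentiment_labels)
# opinion_clf.predict(["battery life is great but the screen scratches easily"])
```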

A gradient boosting regression based approach for energy consumption prediction in buildings

  • Bataineh, Ali S. Al
    • Advances in Energy Research / Vol. 6, No. 2 / pp.91-101 / 2019
  • This paper proposes an efficient data-driven approach to building models for predicting energy consumption in buildings. The data used in this research were collected by installing humidity and temperature sensors at different locations in a building; weather data from a nearby weather station were also included in the dataset to study the impact of weather conditions on energy consumption. One of the main emphases of this research is to make feature selection independent of domain knowledge. To extract useful features from the data, two approaches are tested: feature selection through principal component analysis, and relative-importance-based feature selection in the original domain. The regression model used is gradient boosting regression, and its optimal parameters are chosen through a two-stage coarse-fine search. To evaluate the model, performance metrics such as the r2-score and root mean squared error are used. The results show that the best performance is achieved when relative-importance-based feature selection is combined with the gradient boosting regressor, and the proposed technique outperforms the support vector machine and neural-network-based approaches tested on the same dataset.
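
A sketch of the described approach under stated assumptions: relative importance from a first gradient-boosting fit selects features in the original domain, and a coarse grid is followed by a finer grid around the best coarse setting; the grids and the median importance threshold are illustrative.

```python
# Importance-based feature selection + gradient boosting with a coarse-then-fine search.
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

pipe = Pipeline([
    ("select", SelectFromModel(GradientBoostingRegressor(random_state=0),
                               threshold="median")),   # keep features above median importance
    ("gbr", GradientBoostingRegressor(random_state=0)),
])

coarse_grid = {"gbr__n_estimators": [100, 300, 1000],
               "gbr__learning_rate": [0.01, 0.1, 0.3]}
coarse = GridSearchCV(pipe, coarse_grid, cv=5, scoring="r2")
# coarse.fit(X, y)                  # X, y: sensor/weather features and energy target
# lr = coarse.best_params_["gbr__learning_rate"]
# fine = GridSearchCV(pipe, {"gbr__learning_rate": [0.5 * lr, lr, 1.5 * lr],
#                            "gbr__n_estimators": [coarse.best_params_["gbr__n_estimators"]]},
#                     cv=5, scoring="r2").fit(X, y)
```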

Classifying Articles in Chinese Wikipedia with Fine-Grained Named Entity Types

  • Zhou, Jie;Li, Bicheng;Tang, Yongwang
    • Journal of Computing Science and Engineering / Vol. 8, No. 3 / pp.137-148 / 2014
  • Named entity classification of Wikipedia articles is a fundamental research area that can be used to automatically build large-scale corpora for named entity recognition or to support other entity-processing tasks, such as entity linking. This paper describes a method for classifying named entities in Chinese Wikipedia with fine-grained types. We considered multi-faceted information in Chinese Wikipedia to construct four feature sets, designed a feature selection method for each feature set, and fused the different features into a vector space using different strategies. Experimental results show that the explored feature sets and their combination effectively improve the performance of named entity classification.
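
The per-feature-set selection and fusion strategy described above can be sketched generically with scikit-learn; the facet names (abstract_text, infobox_text), the selectors, and the classifier are assumptions, since the paper's Wikipedia-derived feature sets are not reproduced here.

```python
# A separate selector is fitted per facet of the article, and the selected features
# are concatenated into one vector space before classification.
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import LinearSVC

fused = ColumnTransformer([
    ("abstract", Pipeline([("vec", TfidfVectorizer()),
                           ("sel", SelectKBest(chi2, k=2000))]), "abstract_text"),
    ("infobox",  Pipeline([("vec", TfidfVectorizer()),
                           ("sel", SelectKBest(chi2, k=500))]), "infobox_text"),
])
entity_clf = Pipeline([("features", fused), ("clf", LinearSVC())])
# entity_clf.fit(articles_df, fine_grained_type_labels)   # articles_df: one column per facet
```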

Classification of Cognitive States from fMRI data using Fisher Discriminant Ratio and Regions of Interest

  • Do, Luu Ngoc;Yang, Hyung Jeong
    • International Journal of Contents / Vol. 8, No. 4 / pp.56-63 / 2012
  • In recent decades, the analysis of human brain activity has made considerable progress through the functional Magnetic Resonance Imaging (fMRI) technique. fMRI data provide a sequence of three-dimensional images of brain activity that can be used to detect instantaneous cognitive states with machine learning methods. In this paper, we propose a new approach for distinguishing cognitive states such as "observing a picture" versus "reading a sentence" and "reading an affirmative sentence" versus "reading a negative sentence". Since fMRI data are high-dimensional (about 100,000 features per sample), extremely sparse, and noisy, feature selection is a very important step for increasing classification accuracy and reducing processing time. We used the Fisher Discriminant Ratio to select the most discriminative features from selected Regions of Interest (ROIs). The experimental results showed that our approach achieved the best performance compared to other feature extraction methods, with average accuracies of approximately 95.83% for the first study and 99.5% for the second.
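
The Fisher Discriminant Ratio used above has a direct two-class form, FDR_j = (mu_1j - mu_2j)^2 / (sigma_1j^2 + sigma_2j^2), computed per voxel/feature; a minimal numpy sketch follows (ROI masking omitted, k illustrative).

```python
# Rank features by Fisher Discriminant Ratio and keep the top k.
import numpy as np

def fisher_discriminant_ratio(X, y):
    X1, X2 = X[y == 0], X[y == 1]
    num = (X1.mean(axis=0) - X2.mean(axis=0)) ** 2
    den = X1.var(axis=0) + X2.var(axis=0) + 1e-12   # avoid division by zero
    return num / den

def select_top_k(X, y, k=300):
    scores = fisher_discriminant_ratio(X, y)
    top = np.argsort(scores)[::-1][:k]
    return top, X[:, top]
```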

SVM 기반 유전 알고리즘을 이용한 컴파일러 분석 프레임워크 : 특징 및 모델 선택 민감성 (Compiler Analysis Framework Using SVM-Based Genetic Algorithm : Feature and Model Selection Sensitivity)

  • 황철훈;신건윤;김동욱;한명묵
    • 정보보호학회논문지 / Vol. 30, No. 4 / pp.537-544 / 2020
  • As malware techniques advance, detection-evasion methods such as mutation and obfuscation are becoming increasingly sophisticated. Detecting unknown malware is therefore important, and malware author identification, which detects unknown malware by identifying the author of previously distributed malware, is being studied. This paper extracts compiler information, a key piece of evidence for binary-based author identification, and examines how sensitive classification efficiency is to feature selection, probabilistic versus non-probabilistic models, and optimization. In the experiments, feature selection based on information gain and the support vector machine, a non-probabilistic model, showed the highest efficiency. Feature selection and model optimization through the proposed framework yielded high classification accuracy, with up to a 48% reduction in features and roughly 51 times faster execution. This study confirms how sensitive classification efficiency is to the choice of feature selection and model optimization methods.
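
The best-performing combination reported above (information-gain feature selection with an SVM) can be sketched as follows; mutual information stands in for information gain, and a plain grid search stands in for the paper's genetic-algorithm optimization, so this is an illustrative baseline rather than the proposed framework.

```python
# Information-gain-style selection + RBF SVM, tuned with a simple grid search.
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

compiler_clf = Pipeline([
    ("scale", StandardScaler()),
    ("ig", SelectKBest(mutual_info_classif, k=200)),   # keep the most informative binary features
    ("svm", SVC(kernel="rbf")),
])
search = GridSearchCV(compiler_clf,
                      {"svm__C": [1, 10, 100], "svm__gamma": ["scale", 0.01, 0.001]},
                      cv=5)
# search.fit(binary_feature_matrix, compiler_labels)
```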