• Title/Abstract/Keywords: Local feature selection

Search results: 59 items (processing time: 0.028 seconds)

New Feature Selection Method for Text Categorization

  • Wang, Xingfeng;Kim, Hee-Cheol
    • Journal of information and communication convergence engineering / Vol. 15, No. 1 / pp.53-61 / 2017
  • The preferred feature selection methods for text classification are filter-based. In a common filter-based feature selection scheme, unique scores are assigned to features, and the features are then sorted according to their scores. The last step is to add the top-N features to the feature set. In this paper, we propose an improved global feature selection scheme in which the last step is modified to obtain a more representative feature set. The proposed method aims to improve the classification performance of global feature selection methods by creating a feature set that represents all classes almost equally. For this purpose, a local feature selection method is used to label features according to their discriminative power on the classes, and these labels are used while producing the feature sets. Experimental results obtained using the well-known 20 Newsgroups and Reuters-21578 datasets with the k-nearest neighbor algorithm and a support vector machine indicate that the proposed method improves the classification performance in terms of the widely known $F_1$ metric.
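A minimal sketch of the class-balancing idea described in the abstract, assuming per-class filter scores (e.g., one-vs-rest chi-square) are already available; the function name, the round-robin policy, and the tie-breaking are illustrative, not the authors' exact procedure:

```python
import numpy as np

def balanced_top_n(per_class_scores, n_select):
    """Round-robin selection of n_select features.

    per_class_scores : (n_features, n_classes) array of local (per-class)
        filter scores, e.g. one-vs-rest chi-square. Each feature is labeled
        with the class on which it is most discriminative, and features are
        then drawn class by class so the final set represents all classes
        almost equally.
    """
    best_class = per_class_scores.argmax(axis=1)       # local label per feature
    best_score = per_class_scores.max(axis=1)          # ranking key
    n_classes = per_class_scores.shape[1]

    # One ranked pool per class, best-scoring features first.
    pools = []
    for c in range(n_classes):
        idx = np.flatnonzero(best_class == c)
        pools.append(list(idx[np.argsort(-best_score[idx])]))

    selected, c = [], 0
    while len(selected) < n_select and any(pools):
        if pools[c]:                                   # take this class's next best
            selected.append(pools[c].pop(0))
        c = (c + 1) % n_classes                        # then move to the next class
    return selected
```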

퍼지 k-Nearest Neighbors 와 Reconstruction Error 기반 Lazy Classifier 설계 (Design of Lazy Classifier based on Fuzzy k-Nearest Neighbors and Reconstruction Error)

  • 노석범;안태천
    • 한국지능시스템학회논문지 / Vol. 20, No. 1 / pp.101-108 / 2010
  • In this paper, we propose the design of a lazy classifier that uses fuzzy k-NN and feature selection based on the reconstruction error. The reconstruction error is an evaluation index for locally linear reconstruction. When a new input is given, fuzzy k-NN defines the local region in which the local classifier is valid and assigns weights to the data patterns included in that region. After the local region and the weights have been defined, feature selection is performed to reduce the dimensionality of the feature space. Once several features with good performance in terms of the reconstruction error have been selected, a polynomial-type classifier is determined by the weighted least squares method. The experimental results are compared with those of existing classifiers: standard neural networks, support vector machines, linear discriminant analysis, and C4.5 trees.
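A rough Python sketch of the two ingredients named in the abstract: a fuzzy k-NN weighting of a local region and an LLE-style reconstruction error for scoring candidate feature subsets. The distance weighting, the regularization constant, and the function names are assumptions rather than the paper's exact formulation:

```python
import numpy as np

def fuzzy_knn_weights(X_train, x_query, k=10, m=2.0):
    """Fuzzy k-NN style weights for the k training patterns nearest to
    x_query (fuzzifier m); the weighted patterns define the local region
    in which the lazy classifier is built."""
    d = np.linalg.norm(X_train - x_query, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / np.maximum(d[idx], 1e-12) ** (2.0 / (m - 1.0))
    return idx, w / w.sum()

def reconstruction_error(X_local, feature_subset, k=5):
    """Locally linear (LLE-style) reconstruction error of each pattern from
    its k neighbors, computed on a candidate feature subset; subsets with a
    low error are preferred during feature selection."""
    Z = X_local[:, feature_subset]
    err = 0.0
    for i in range(len(Z)):
        d = np.linalg.norm(Z - Z[i], axis=1)
        nb = np.argsort(d)[1:k + 1]                 # skip the point itself
        G = Z[nb] - Z[i]
        C = G @ G.T + 1e-6 * np.eye(k)              # regularized local Gram matrix
        w = np.linalg.solve(C, np.ones(k))
        w /= w.sum()                                # reconstruction weights
        err += np.linalg.norm(Z[i] - w @ Z[nb]) ** 2
    return err / len(Z)
```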

Identification of Chinese Event Types Based on Local Feature Selection and Explicit Positive & Negative Feature Combination

  • Tan, Hongye;Zhao, Tiejun;Wang, Haochang;Hong, Wan-Pyo
    • Journal of information and communication convergence engineering / Vol. 5, No. 3 / pp.233-238 / 2007
  • This paper proposes an approach to identifying Chinese event types that combines a good feature selection policy with a Maximum Entropy (ME) model. The approach not only effectively alleviates the problem that the classifier performs poorly on small and difficult types, but also improves the overall performance. Experiments on the ACE2005 corpus show satisfactory performance, with an 83.5% macro-average F-measure. The main characteristics and ideas of the approach are: (1) An optimal feature set is built for each type through local feature selection, which ensures the performance of each type. (2) Positive and negative features are explicitly discriminated and combined by using one-sided metrics, which exploits the advantages of both kinds of features. (3) Wrapper methods are used to search for new features and evaluate various feature subsets to obtain the optimal feature subset.
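A toy illustration of separating positive and negative features with a signed, one-sided score; the simple probability difference used here only mimics one-sided metrics in spirit and is not the metric used in the paper:

```python
import numpy as np

def one_sided_scores(X, y, target):
    """Signed per-feature scores for one event type (one-vs-rest).

    X : (n_samples, n_features) binary feature-occurrence matrix
    y : (n_samples,) event-type labels
    A positive score marks a feature indicative of `target` (a positive
    feature); a negative score marks a feature indicative of the other
    types (a negative feature).
    """
    pos = y == target
    p_pos = X[pos].mean(axis=0)       # P(feature | target type)
    p_neg = X[~pos].mean(axis=0)      # P(feature | other types)
    return p_pos - p_neg

# Positive and negative feature lists can then be built separately, e.g.
# positive = np.argsort(-scores)[:k] and negative = np.argsort(scores)[:k].
```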

Microblog User Geolocation by Extracting Local Words Based on Word Clustering and Wrapper Feature Selection

  • Tian, Hechan;Liu, Fenlin;Luo, Xiangyang;Zhang, Fan;Qiao, Yaqiong
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14, No. 10 / pp.3972-3988 / 2020
  • Existing methods for microblog user geolocation typically rely on statistical features to extract local words, so many non-local words remain among the extracted words, which lowers geolocation accuracy. Considering both the statistical and the semantic features of local words, this paper proposes a microblog user geolocation method that extracts local words based on word clustering and wrapper feature selection. First, ordinary words without positional indications are filtered out based on statistical features. Second, a word clustering algorithm based on word vectors is proposed, in which the remaining words are clustered according to the distance between their semantically meaningful word vectors. Next, a wrapper feature selection algorithm based on sequential backward subset search is proposed, and the cluster subset with the best geolocation performance is selected; the words in the selected cluster subset are extracted as local words. Finally, a Naive Bayes classifier is trained on the local words to geolocate the microblog user. The proposed method is validated on two different types of microblog data, Twitter and Weibo. The results show that it outperforms two typical existing methods based on statistical features in terms of accuracy, precision, recall, and F1-score.
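A hedged sketch of the wrapper step: sequential backward search over word clusters with a Naive Bayes geolocation model scored by cross-validation. The cluster representation, the scoring function, and the stopping rule are illustrative choices:

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import cross_val_score

def backward_cluster_selection(X, y, clusters, min_clusters=1, cv=3):
    """Sequential backward wrapper search over word clusters.

    X        : (n_users, n_words) word-count matrix
    y        : (n_users,) location labels
    clusters : list of lists of column indices, one list per word cluster
    At each step the cluster whose removal yields the best cross-validated
    geolocation accuracy is dropped; the best subset seen is returned.
    """
    def score(kept):
        cols = np.concatenate([clusters[c] for c in kept])
        return cross_val_score(MultinomialNB(), X[:, cols], y, cv=cv).mean()

    kept = list(range(len(clusters)))
    best_kept, best_score = list(kept), score(kept)
    while len(kept) > min_clusters:
        trials = [(score([c for c in kept if c != drop]), drop) for drop in kept]
        s, drop = max(trials)
        kept = [c for c in kept if c != drop]
        if s > best_score:
            best_score, best_kept = s, list(kept)
    return best_kept
```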

Feature Selection via Embedded Learning Based on Tangent Space Alignment for Microarray Data

  • Ye, Xiucai;Sakurai, Tetsuya
    • Journal of Computing Science and Engineering / Vol. 11, No. 4 / pp.121-129 / 2017
  • Feature selection has been widely established as an efficient technique for microarray data analysis. It aims to search for the most important feature/gene subset of a given dataset according to its relevance to the current target. Unsupervised feature selection is considered challenging because of the lack of label information. In this paper, we propose a novel method for unsupervised feature selection, which incorporates embedded learning and $l_{2,1}$-norm sparse regression into one framework to select genes in microarray data analysis. Local tangent space alignment is applied during embedded learning to preserve the local data structure. The $l_{2,1}$-norm sparse regression acts as a constraint that helps learn the gene weights correlatively, so that the proposed method selects the informative genes that better capture the interesting natural classes of samples. We provide an effective algorithm to solve the optimization problem in our method. Finally, to validate the efficacy of the proposed method, we evaluate it on real microarray gene expression datasets. The experimental results demonstrate that the proposed method achieves quite promising performance.
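A small sketch of the $l_{2,1}$-norm regularized regression at the core of such methods, solved with the standard iteratively reweighted scheme. The embedding Y is assumed to be given (in the paper it comes from local tangent space alignment), and the solver and ranking rule here are generic, not the authors' exact algorithm:

```python
import numpy as np

def l21_norm(W):
    """l2,1-norm: the sum of the l2 norms of the rows of W. Genes whose
    rows shrink toward zero can be discarded."""
    return np.linalg.norm(W, axis=1).sum()

def rank_genes(X, Y, lam=1.0, n_iter=50):
    """Iteratively reweighted solver for
        min_W ||X W - Y||_F^2 + lam * ||W||_{2,1},
    where Y is a low-dimensional sample embedding. Genes are ranked by the
    row norms of W (larger norm = more informative)."""
    n_features = X.shape[1]
    W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)
    for _ in range(n_iter):
        d = 1.0 / (2.0 * np.maximum(np.linalg.norm(W, axis=1), 1e-8))
        W = np.linalg.solve(X.T @ X + lam * np.diag(d), X.T @ Y)
    return np.argsort(-np.linalg.norm(W, axis=1))
```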

얼굴인식을 위한 판별분석에 기반한 복합특징 벡터 구성 방법 (Construction of Composite Feature Vector Based on Discriminant Analysis for Face Recognition)

  • 최상일
    • 한국멀티미디어학회논문지 / Vol. 18, No. 7 / pp.834-842 / 2015
  • We propose a method to construct a composite feature vector based on discriminant analysis for face recognition. For this, we first extract holistic and local features from the whole face image and from local images, which consist of discriminant pixels, by using a discriminant feature extraction method. In order to utilize the advantages of both holistic and local features, we evaluate the amount of discriminative information in each feature and then construct a composite feature vector using only the features that contain a large amount of discriminative information. The experimental results on the FERET, CMU-PIE, and Yale B databases show that the proposed composite feature vector improves face recognition performance.
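A minimal sketch of composing a feature vector from holistic and local features by keeping only the most discriminative ones; the Fisher score and the keep ratio used here are illustrative stand-ins for the paper's discriminant-analysis-based measure:

```python
import numpy as np

def fisher_scores(F, y):
    """Discriminative information of each feature, measured here by a
    Fisher score (between-class over within-class variance)."""
    classes = np.unique(y)
    mean_all = F.mean(axis=0)
    between = sum((y == c).sum() * (F[y == c].mean(axis=0) - mean_all) ** 2
                  for c in classes)
    within = sum(((F[y == c] - F[y == c].mean(axis=0)) ** 2).sum(axis=0)
                 for c in classes)
    return between / np.maximum(within, 1e-12)

def composite_vector(holistic, local, y, keep_ratio=0.5):
    """Concatenate holistic and local features, then keep only the features
    that carry the most discriminative information."""
    F = np.hstack([holistic, local])
    scores = fisher_scores(F, y)
    n_keep = int(keep_ratio * F.shape[1])
    return F[:, np.argsort(-scores)[:n_keep]]
```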

Performance Evaluation of a Feature-Importance-based Feature Selection Method for Time Series Prediction

  • Hyun, Ahn
    • Journal of information and communication convergence engineering / Vol. 21, No. 1 / pp.82-89 / 2023
  • Various machine-learning models can yield high predictive power for massive time series in time series prediction. However, these models are prone to instability in terms of computational cost because of the high dimensionality of the feature space and non-optimized hyperparameter settings. Considering the potential risk that model training with a high-dimensional feature set can be time-consuming, we evaluate a feature-importance-based feature selection method to derive a tradeoff between predictive power and computational cost for time series prediction. We used two machine-learning techniques for performance evaluation to generate prediction models from a retail sales dataset. First, we ranked the features using impurity-based and Local Interpretable Model-agnostic Explanations (LIME)-based feature importance measures in the prediction models. Then, the recursive feature elimination method was applied to eliminate unimportant features sequentially. Consequently, we obtained a subset of features that could lead to reduced model training time while preserving acceptable model performance.
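A short sketch of the impurity-driven part of such a pipeline using scikit-learn's recursive feature elimination; the estimator, the number of retained features, and the omission of the LIME-based ranking are simplifications of this sketch:

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE

def select_features(X, y, n_keep=10):
    """Recursive feature elimination driven by impurity-based feature
    importances from a random forest; the least important features are
    dropped one at a time until n_keep remain."""
    estimator = RandomForestRegressor(n_estimators=100, random_state=0)
    rfe = RFE(estimator, n_features_to_select=n_keep, step=1)
    rfe.fit(X, y)
    return rfe.support_          # boolean mask of the retained features
```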

특징 선택을 위한 혼합형 유전 알고리즘과 분류 성능 비교 (Hybrid Genetic Algorithms for Feature Selection and Classification Performance Comparisons)

  • 오일석;이진선;문병로
    • 한국정보과학회논문지:소프트웨어및응용 / Vol. 31, No. 8 / pp.1113-1120 / 2004
  • This paper proposes a new hybrid genetic algorithm for feature selection. Local operations that fine-tune the search were devised and inserted into the genetic algorithm. A parameter that controls the fine-tuning strength of these operations was introduced, and its effect was measured. Experiments on various standard datasets confirmed that the proposed hybrid genetic algorithm is superior to a simple genetic algorithm and to sequential search algorithms.
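A compact sketch of a hybrid (memetic) genetic algorithm of the kind described in the abstract: a plain GA over boolean feature masks with a local fine-tuning operation applied to each offspring, where `strength` stands in for the fine-tuning intensity parameter. The wrapper fitness and the operators are illustrative choices, not the paper's exact design:

```python
import random
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def fitness(mask, X, y):
    """Wrapper fitness: cross-validated accuracy of a 1-NN classifier on
    the selected features."""
    if not mask.any():
        return 0.0
    return cross_val_score(KNeighborsClassifier(1), X[:, mask], y, cv=3).mean()

def local_fine_tune(mask, X, y, strength=3):
    """Local operation: try `strength` single-bit flips and keep each flip
    that improves the fitness."""
    best = fitness(mask, X, y)
    for j in random.sample(range(len(mask)), strength):
        trial = mask.copy()
        trial[j] = not trial[j]
        f = fitness(trial, X, y)
        if f > best:
            mask, best = trial, f
    return mask

def hybrid_ga(X, y, pop=20, gens=30, strength=3):
    """Plain GA (uniform crossover + mutation) over boolean feature masks,
    with the local fine-tuning operation applied to every offspring."""
    d = X.shape[1]
    population = [np.random.rand(d) < 0.5 for _ in range(pop)]
    for _ in range(gens):
        ranked = sorted(population, key=lambda m: fitness(m, X, y), reverse=True)
        parents, children = ranked[:pop // 2], []
        while len(parents) + len(children) < pop:
            a, b = random.sample(parents, 2)
            child = np.where(np.random.rand(d) < 0.5, a, b)   # uniform crossover
            child ^= np.random.rand(d) < 1.0 / d              # bit-flip mutation
            children.append(local_fine_tune(child, X, y, strength))
        population = parents + children
    return max(population, key=lambda m: fitness(m, X, y))
```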

특징 선택을 위한 문제 공간과 알고리즘 동작 분석 (Analysis of Problem Spaces and Algorithm Behaviors for Feature Selection)

  • 이진선;오일석
    • 한국정보과학회논문지:소프트웨어및응용 / Vol. 33, No. 6 / pp.574-579 / 2006
  • To find good solutions, feature selection algorithms must explore a huge problem space broadly and efficiently. In this paper, we sought insight into the fitness landscape of the problem space and improved the search capability of the algorithms. The solution space is examined through statistics on local maxima and minima. In addition, the behaviors of existing algorithms are analyzed and their solutions are improved.

Combined Features with Global and Local Features for Gas Classification

  • Choi, Sang-Il
    • 한국컴퓨터정보학회논문지 / Vol. 21, No. 9 / pp.11-18 / 2016
  • In this paper, we propose a gas classification method using combined features for an electronic nose system that performs well even when some loss occurs while measuring data samples. We first divide the entire measurement of a data sample into three local sections, namely the stabilization, exposure, and purge sections, and local features are then extracted from each section. Based on discriminant analysis, the amount of discriminative information in each local feature is measured. Subsequently, the local features that have a large amount of discriminative information are chosen to compose the combined features together with the global features extracted from the entire measurement section of the data sample. The experimental results show that the combined features obtained by the proposed method give better classification performance on a variety of volatile organic compound data than the other feature types, especially when there is data loss.
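A brief sketch of the local/global feature construction described above, assuming a single sensor measurement stored as a 1-D NumPy array; the section boundaries, the per-section statistics, and the mask of retained local features are illustrative:

```python
import numpy as np

def section_features(signal, bounds=(0.2, 0.8)):
    """Split one sensor measurement into three local sections
    (stabilization, exposure, purge) at illustrative boundaries, and extract
    simple local features per section plus global features from the whole
    measurement."""
    n = len(signal)
    cuts = [0, int(bounds[0] * n), int(bounds[1] * n), n]
    sections = [signal[cuts[i]:cuts[i + 1]] for i in range(3)]
    local = [f(s) for s in sections for f in (np.mean, np.std, np.ptp)]
    global_ = [signal.max(), signal.argmax() / n, signal.sum()]
    return np.array(local), np.array(global_)

def combined_vector(signal, keep_local):
    """Combined feature vector: all global features plus only those local
    features judged discriminative (keep_local is a boolean mask obtained,
    e.g., from a discriminant analysis of training data)."""
    local, global_ = section_features(signal)
    return np.concatenate([global_, local[keep_local]])
```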