• Title/Summary/Keyword: Feature selection technique

Development of a feature selection technique on users' false beliefs (사용자의 False belief를 이용한 새로운 기능 선택방식에 대한 연구)

  • Lee, Jangsun; Choi, Gyunghyun; Kim, Jieun; Ryu, Hokyoung
    • Journal of the HCI Society of Korea, v.9 no.2, pp.33-40, 2014
  • Selecting the appropriate features that a product or service should provide has long been a critical decision-making problem for designers. However, existing feature selection methods have prominent limitations in capturing how users actually perceive those features. For example, selecting features based on users' preferences without analyzing their mental models can lead to the 'feature creep' phenomenon. In this study, we propose the 'false belief technique', which can detect the mental model that users form about a product or service after being provided with new features. This technique can help designers determine which features should be included in new product development.

Gait-Based Gender Classification Using a Correlation-Based Feature Selection Technique

  • Beom Kwon
    • Journal of the Korea Society of Computer and Information, v.29 no.3, pp.55-66, 2024
  • Gender classification techniques have received a lot of attention from researchers because they can be used in various fields such as forensics, surveillance systems, and demographic studies. Since previous studies have shown that there are distinctive differences between male and female gait, various techniques have been proposed to classify gender from three-dimensional (3-D) gait data. However, some of the gait features extracted from 3-D gait data using existing techniques are similar to or redundant with each other, or do not help in gender classification. In this study, we propose a method to select features that are useful for gender classification using a correlation-based feature selection technique. To demonstrate its effectiveness, we compare the performance of gender classification models before and after applying the proposed feature selection technique on a publicly available 3-D gait dataset. Eight machine learning algorithms applicable to binary classification problems were used in the experiments. The experimental results show that the proposed technique can reduce the number of features by 22, from 82 to 60, while maintaining gender classification performance.
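
A minimal sketch of the correlation-based idea, assuming a Hall-style CFS merit with Pearson correlation as the measure; the paper's exact scoring, its 3-D gait features, and the eight classifiers are not reproduced here:

```python
# Sketch of correlation-based feature selection (CFS): prefer subsets whose
# features correlate with the class but not with each other.
import numpy as np

def cfs_merit(X, y, subset):
    """CFS merit: k*r_cf / sqrt(k + k(k-1)*r_ff)."""
    k = len(subset)
    r_cf = np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in subset])
    if k == 1:
        r_ff = 0.0
    else:
        r_ff = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1])
                        for i, a in enumerate(subset) for b in subset[i + 1:]])
    return (k * r_cf) / np.sqrt(k + k * (k - 1) * r_ff)

def cfs_forward(X, y):
    """Greedy forward search: add the feature that most improves the merit."""
    remaining, selected, best = list(range(X.shape[1])), [], -np.inf
    while remaining:
        score, j = max((cfs_merit(X, y, selected + [j]), j) for j in remaining)
        if score <= best:          # stop when no feature improves the subset
            break
        best, selected = score, selected + [j]
        remaining.remove(j)
    return selected

# Toy usage: 100 samples, 6 features, binary gender label.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 100)
X = rng.normal(size=(100, 6))
X[:, 0] += y                                      # informative feature
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=100)   # redundant near-copy
print(cfs_forward(X, y))   # keeps one of {0, 1}, drops the redundant twin
```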

A Feature Selection Technique based on Distributional Differences

  • Kim, Sung-Dong
    • Journal of Information Processing Systems, v.2 no.1, pp.23-27, 2006
  • This paper presents a feature selection technique based on distributional differences for efficient machine learning. The initial training data consist of records with many features and a target value. We classify the records into positive and negative data based on the target value, divide the range of each feature's values into 10 intervals, and calculate the distribution over the intervals within the positive and the negative data separately. We then select the features, and the intervals of those features, whose distributional differences exceed a given threshold. Using the selected intervals and features, we obtain reduced training data. In the experiments, we show that the reduced training data can cut the training time of a neural network by about 40%, and that the trained functions yield higher profit in simulated stock trading as well.
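
A minimal sketch of the distribution-difference filter the abstract describes, assuming the "difference" is the summed absolute gap between the per-class interval histograms; the threshold value is illustrative:

```python
# Bin each feature into 10 intervals, histogram positive and negative samples
# separately, and keep features whose class distributions differ enough.
import numpy as np

def distributional_difference(x, y, n_bins=10):
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    pos, _ = np.histogram(x[y == 1], bins=edges)
    neg, _ = np.histogram(x[y == 0], bins=edges)
    pos = pos / max(pos.sum(), 1)   # normalize to per-class proportions
    neg = neg / max(neg.sum(), 1)
    return np.abs(pos - neg).sum()

def select_features(X, y, threshold=0.5, n_bins=10):
    return [j for j in range(X.shape[1])
            if distributional_difference(X[:, j], y, n_bins) > threshold]

# Toy usage: feature 0 separates the classes, feature 1 does not.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 500)
X = np.c_[rng.normal(loc=2 * y), rng.normal(size=500)]
print(select_features(X, y))  # -> [0]
```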

Deep Learning Method for Identification and Selection of Relevant Features

  • Vejendla Lakshman
    • International Journal of Computer Science & Network Security, v.24 no.5, pp.212-216, 2024
  • Feature selection has become a central topic of investigation, particularly in bioinformatics, where it has numerous applications. Deep learning is a powerful tool for selecting features, yet not all algorithms are on an equal footing when it comes to selecting relevant features. Indeed, many methods have been proposed for selecting multiple features using deep learning techniques. Thanks to deep learning, neural networks have enjoyed an enormous resurgence over the past few years. However, neural networks are black-box models, and few efforts have been made to examine their underlying behavior. In this work, a new algorithm for performing feature selection with deep learning networks is introduced. To evaluate the results, we construct regression and classification problems that allow us to compare each algorithm on several fronts: performance, computation time, and limitations. The results obtained are truly encouraging, since the proposed method achieves its goal by outperforming random forests in every case. The results show that the proposed method exhibits better performance than traditional methods.
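
The abstract does not spell out the algorithm, so the following is only a generic illustration of one common neural feature-selection idea, not the paper's method: train a small network and rank input features by the magnitude of their first-layer weights. The scikit-learn classes and the toy data are assumptions.

```python
# Rank features by the L2 norm of their first-layer weights in a trained MLP.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
y = rng.integers(0, 2, 400)
X = rng.normal(size=(400, 8))
X[:, 3] += 2.0 * y                      # only feature 3 is informative

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X, y)

# coefs_[0] has shape (n_features, n_hidden); take the norm of each input row.
importance = np.linalg.norm(net.coefs_[0], axis=1)
print(np.argsort(importance)[::-1][:3])  # feature 3 should rank near the top
```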

Sparse and low-rank feature selection for multi-label learning

  • Lim, Hyunki
    • Journal of the Korea Society of Computer and Information, v.26 no.7, pp.1-7, 2021
  • In this paper, we propose a feature selection technique for multi-label classification. Many existing feature selection techniques select features by calculating the relation between features and labels, for example with a mutual information measure. However, since mutual information requires a joint probability, it is difficult to compute over a real feature set of any size; such methods can evaluate only a few features at a time and achieve only local optimization. To move away from this local optimization problem, we propose a feature selection technique that constructs a low-rank space within the entire given feature space and selects features with sparsity. To this end, we design a regression-based objective function using the nuclear norm and propose a gradient descent algorithm to solve its optimization problem. In multi-label classification experiments on four datasets with three performance measures, the proposed method showed better performance than existing feature selection techniques. In addition, the experiments showed that its performance changes little as the parameter values of the objective function change.
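
A minimal sketch of a regression-based objective with a nuclear-norm penalty, min_W ||XW - Y||_F^2 + lam*||W||_*, solved by proximal gradient descent (singular-value soft-thresholding). The paper's objective likely also carries an explicit sparsity term; here the row norms of W are used as a proxy ranking, and lam, the step size, and the toy data are illustrative:

```python
# Proximal gradient for nuclear-norm-regularized multi-label regression.
import numpy as np

def svt(W, tau):
    """Proximal operator of the nuclear norm: soft-threshold singular values."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def nuclear_norm_regression(X, Y, lam=5.0, n_iter=500):
    step = 1.0 / (2 * np.linalg.norm(X, 2) ** 2)   # 1/L for the smooth part
    W = np.zeros((X.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        grad = 2 * X.T @ (X @ W - Y)               # gradient of the squared loss
        W = svt(W - step * grad, step * lam)       # proximal (low-rank) step
    return W

# Toy usage: 5 labels driven by the first 3 of 20 features.
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 20))
Y = X[:, :3] @ rng.normal(size=(3, 5))
W = nuclear_norm_regression(X, Y)
row_norms = np.linalg.norm(W, axis=1)              # per-feature importance
print(np.argsort(row_norms)[::-1][:3])             # the 3 informative features
```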

A Hybrid Feature Selection Method using Univariate Analysis and LVF Algorithm (단변량 분석과 LVF 알고리즘을 결합한 하이브리드 속성선정 방법)

  • Lee, Jae-Sik; Jeong, Mi-Kyoung
    • Journal of Intelligence and Information Systems, v.14 no.4, pp.179-200, 2008
  • We develop a feature selection method that can improve both the efficiency and the effectiveness of a classification technique. In this research, we employ case-based reasoning as the classification technique. This research integrates two existing feature selection methods, univariate analysis and the LVF algorithm. First, we sift out predictive features from the whole set of features using univariate analysis. Then, we generate all possible subsets of these predictive features and measure the inconsistency rate of each subset using the LVF algorithm. Finally, the subset having the lowest inconsistency rate is selected as the best subset of features. We measure the performance of our feature selection method on data from the UCI Machine Learning Repository and compare it with that of existing methods. The number of selected features and the accuracy of our method show that improvements in both efficiency and effectiveness are achieved.
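
A minimal sketch of the LVF-style inconsistency rate used to score candidate subsets: samples that agree on the selected features but disagree on the class contribute to the inconsistency count. Discrete (or pre-discretized) features are assumed, as LVF requires; the univariate filtering step and the toy data are illustrative:

```python
# Score all subsets of the pre-filtered features by inconsistency rate.
from collections import Counter, defaultdict
import itertools

def inconsistency_rate(rows, labels, subset):
    groups = defaultdict(list)
    for row, label in zip(rows, labels):
        groups[tuple(row[j] for j in subset)].append(label)
    # Each pattern contributes (group size - majority class count).
    incon = sum(len(g) - max(Counter(g).values()) for g in groups.values())
    return incon / len(rows)

def best_subset(rows, labels, candidate_features):
    """Try all subsets of the features kept by the univariate filter,
    preferring lower inconsistency, then fewer features."""
    best = None
    for k in range(1, len(candidate_features) + 1):
        for subset in itertools.combinations(candidate_features, k):
            rate = inconsistency_rate(rows, labels, subset)
            if best is None or rate < best[0]:
                best = (rate, subset)
    return best

rows = [(0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1), (0, 0, 0)]
labels = ['a', 'a', 'b', 'b', 'a']
print(best_subset(rows, labels, [0, 1, 2]))  # feature 0 alone is consistent
```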

Diagnosis of Alzheimer's Disease using Wrapper Feature Selection Method

  • Vyshnavi Ramineni; Goo-Rak Kwon
    • Smart Media Journal, v.12 no.3, pp.30-37, 2023
  • Alzheimer's disease (AD) can at present only be slowed rather than cured, so early diagnosis is essential and research is still ongoing. Accordingly, several machine learning classification models using T1-weighted images have been proposed to identify AD. In this paper, we consider improved feature selection, reducing complexity by using wrapper techniques and a Restricted Boltzmann Machine (RBM). This work used the subcortical and cortical features of 278 subjects from the ADNI sMRI dataset to identify AD. Multi-class classification is used in the experiment, i.e., AD, EMCI, LMCI, and HC. The proposed feature selection consists of forward feature selection, backward feature selection, and combined PCA & RBM. Forward and backward feature selection are both iterative: forward selection starts with no features, while backward selection starts with all features included. PCA is used to reduce the dimensionality, and the RBM is used to select the best features without interpreting them. We compared the three models against PCA in our analysis. The experiments show that combined PCA & RBM and backward feature selection give the best accuracy with the random forest (RF) classifier, i.e., 88.65% and 88.56%, respectively.
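
A minimal sketch of the wrapper idea (forward vs. backward selection around a random-forest classifier) using scikit-learn's SequentialFeatureSelector; the ADNI subcortical/cortical features and the four-class setting are not reproduced, so a binary toy problem stands in:

```python
# Forward and backward wrapper selection around the same classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector

X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)
rf = RandomForestClassifier(n_estimators=50, random_state=0)

for direction in ("forward", "backward"):
    # Forward: start empty and add features; backward: start full and remove.
    sfs = SequentialFeatureSelector(rf, n_features_to_select=5,
                                    direction=direction, cv=3)
    sfs.fit(X, y)
    print(direction, sfs.get_support(indices=True))
```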

Classification of Epilepsy Using Distance-Based Feature Selection (거리 기반의 특징 선택을 이용한 간질 분류)

  • Lee, Sang-Hong
    • Journal of Digital Convergence, v.12 no.8, pp.321-327, 2014
  • Feature selection improves classification performance by using a minimal feature set, removing features that are irrelevant or redundant. This study proposed a new feature selection method that uses the distance between the centers of gravity of the bounded sums of weighted fuzzy membership functions (BSWFMs) provided by a neural network with weighted fuzzy membership functions (NEWFM). The distance-based feature selection removes, one by one, the worst feature, i.e., the one with the shortest distance between the centers of gravity of its BSWFMs, starting from the 24 initial features; the 22 remaining features give the highest performance. With these 22 features, the proposed methodology achieves a sensitivity, specificity, and accuracy of 97.7%, 99.7%, and 98.7%, respectively.
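
The BSWFM centers of gravity come from a trained NEWFM, which is not reproduced here; as a loose stand-in, this sketch keeps only the elimination idea, ranking each feature by the (normalized) distance between the two classes' centers on that feature and removing the shortest-distance features first:

```python
# Distance-based elimination order: smallest class-center separation first.
import numpy as np

def elimination_order(X, y):
    """Indices from worst (smallest class-center distance, scaled by the
    feature's spread) to best."""
    dist = np.abs(X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0))
    dist = dist / (X.std(axis=0) + 1e-12)
    return np.argsort(dist)

rng = np.random.default_rng(4)
y = rng.integers(0, 2, 300)
X = rng.normal(size=(300, 5))
X[:, 2] += 1.5 * y                 # one well-separated feature
print(elimination_order(X, y))     # feature 2 appears last (kept longest)
```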

A progressive study of the sausage mode wave on the pore: the pore-selection technique

  • Cho, Il-Hyun; Kim, Yeon-Han; Cho, Kyung-Suk; Bong, Su-Chan; Park, Young-Deuk
    • The Bulletin of The Korean Astronomical Society, v.38 no.1, pp.66.2-66.2, 2013
  • In this study, we present a pore-selection technique for estimating the size of a pore. Estimating a pore's size is important for examining the temporal evolution of the size itself and of the corresponding intensity. The size of a pore is typically estimated by applying an intensity threshold within a fixed box that contains the entire pore. This typical method has disadvantages in the following circumstances: when there are small features near the pore, or when the image has low spatial resolution. In the former case, it is difficult to define a box that contains the pore only, excluding the small nearby features. In the latter, the background and threshold intensities are unreliable due to the insufficient number of pixels in the box. To avoid these difficulties, we use a pore-selection technique based simply on the measurement of distances from the pore center. In addition, we discuss the advantages of the technique for imaging spectrograph data such as the NST FISS.
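
One plausible reading of the center-distance idea, sketched on a toy image: grow the pore outward from its center over connected dark pixels instead of thresholding everything inside a fixed box, so nearby small dark features are never counted. The threshold, the toy image, and the flood-fill formulation are assumptions, not the authors' exact procedure:

```python
# Estimate pore size by region growing from the pore center.
import numpy as np
from collections import deque

def pore_size(image, center, threshold):
    """Flood fill from `center` over pixels with intensity < threshold."""
    h, w = image.shape
    seen, queue = {center}, deque([center])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in seen
                    and image[nr, nc] < threshold):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return len(seen)  # pore area in pixels

# Toy image: bright background, a dark disk (the pore), and a separate dark
# speck that a fixed box around the pore would have swept up.
img = np.ones((64, 64))
rr, cc = np.ogrid[:64, :64]
img[(rr - 32) ** 2 + (cc - 32) ** 2 < 36] = 0.2   # pore of radius ~6
img[5, 5] = 0.1                                    # nearby small feature
print(pore_size(img, (32, 32), threshold=0.5))     # counts only the disk
```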

A comparative study of filter methods based on information entropy

  • Kim, Jung-Tae; Kum, Ho-Yeun; Kim, Jae-Hwan
    • Journal of Advanced Marine Engineering and Technology, v.40 no.5, pp.437-446, 2016
  • Feature selection has become an essential technique for reducing the dimensionality of data sets. Many features are frequently irrelevant or redundant for classification tasks, and the purpose of feature selection is to select the relevant features while removing the irrelevant and redundant ones. Applications of feature selection range from text processing, face recognition, bioinformatics, speaker verification, and medical diagnosis to financial domains. In this study, we focus on filter methods based on information entropy: IG (Information Gain), FCBF (Fast Correlation-Based Filter), and mRMR (minimum Redundancy Maximum Relevance). FCBF has the advantage of reducing the computational burden by eliminating redundant features that satisfy the condition of an approximate Markov blanket. However, FCBF considers only the relevance between each feature and the class when selecting the best features, failing to take the interaction between features into consideration. In this paper, we propose an improved FCBF to overcome this shortcoming, and we perform a comparative study to evaluate the performance of the proposed method.
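
A minimal sketch of the entropy-based quantities behind these filters: symmetrical uncertainty SU(X, Y) = 2*IG(X; Y) / (H(X) + H(Y)), plus the standard FCBF redundancy test (drop f_j when some already-kept f_i has SU(f_i, f_j) >= SU(f_j, class)). Discrete features are assumed, and the relevance threshold delta and toy data are illustrative; the paper's improved FCBF is not reproduced:

```python
# Entropy, symmetrical uncertainty, and the FCBF selection loop.
import numpy as np
from collections import Counter

def entropy(x):
    counts = np.array(list(Counter(x).values()), dtype=float)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def symmetrical_uncertainty(x, y):
    ig = entropy(x) + entropy(y) - entropy(list(zip(x, y)))  # I(X; Y)
    denom = entropy(x) + entropy(y)
    return 2.0 * ig / denom if denom else 0.0

def fcbf(X, y, delta=0.0):
    d = len(X[0])
    su_c = [symmetrical_uncertainty([row[j] for row in X], y) for j in range(d)]
    order = sorted((j for j in range(d) if su_c[j] > delta),
                   key=lambda j: -su_c[j])
    kept = []
    for j in order:  # approximate Markov blanket test against kept features
        col_j = [row[j] for row in X]
        if all(symmetrical_uncertainty([row[i] for row in X], col_j) < su_c[j]
               for i in kept):
            kept.append(j)
    return kept

X = [(0, 0, 1), (0, 0, 0), (1, 1, 1), (1, 1, 0)]  # f1 duplicates f0; f2 is noise
y = [0, 0, 1, 1]
print(fcbf(X, y))  # keeps only one of the duplicated features
```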