• Title/Summary/Keyword: Methods selection

AutoFe-Sel: A Meta-Learning-Based Methodology for Recommending Feature Subset Selection Algorithms

  • Irfan Khan;Xianchao Zhang;Ramesh Kumar Ayyasam;Rahman Ali
    • KSII Transactions on Internet and Information Systems (TIIS), v.17 no.7, pp.1773-1793, 2023
  • Automated machine learning, often referred to as "AutoML," is the process of automating the time-consuming and iterative procedures associated with building machine learning models. There have been significant contributions in this area across several stages of the data-mining task, including model selection, hyper-parameter optimization, and preprocessing method selection. Among them, preprocessing method selection is a relatively new and fast-growing research area. The current work focuses on the recommendation of preprocessing methods, i.e., feature subset selection (FSS) algorithms. One limitation of existing studies on FSS algorithm recommendation is the use of a single learner for meta-modeling, which restricts the capability of the meta-model. Moreover, the meta-modeling in existing studies is typically based on a single group of data characterization measures (DCMs). Nonetheless, there are several complementary DCM groups, and combining them leverages their diversity, resulting in improved meta-modeling. This study addresses these limitations by proposing an architecture for preprocessing method selection, AutoFE-Sel, that uses ensemble learning for meta-modeling. To evaluate the proposed method, we performed an extensive experimental evaluation involving 8 FSS algorithms, 3 groups of DCMs, and 125 datasets. Results show that the proposed method achieves better performance than three baseline methods. The proposed architecture can also be easily extended to other preprocessing method selection tasks, e.g., noise-filter selection and imbalance-handling method selection.
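
A minimal sketch of the meta-learning recipe summarized above, not the authors' implementation: each training dataset is reduced to a few data characterization measures (DCMs), and an ensemble meta-model learns to map those DCMs to the best-performing FSS algorithm. The specific DCMs and the use of a random forest as the ensemble meta-learner are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def meta_features(X, y):
    """Toy DCMs: size, dimensionality, sample/feature ratio, class entropy, mean feature correlation."""
    n, d = X.shape
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    class_entropy = float(-(p * np.log2(p)).sum())
    corr = np.corrcoef(X, rowvar=False)
    mean_abs_corr = float(np.abs(corr[np.triu_indices(d, k=1)]).mean())
    return [n, d, n / d, class_entropy, mean_abs_corr]

def fit_recommender(dataset_collection, best_algorithm_labels):
    """dataset_collection: list of (X, y); labels: FSS algorithm that scored best offline."""
    M = np.array([meta_features(X, y) for X, y in dataset_collection])
    meta_model = RandomForestClassifier(n_estimators=200, random_state=0)
    meta_model.fit(M, best_algorithm_labels)
    return meta_model

def recommend(meta_model, X_new, y_new):
    # Recommend an FSS algorithm for an unseen dataset from its DCMs alone.
    return meta_model.predict([meta_features(X_new, y_new)])[0]
```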

Improving an Ensemble Model Using Instance Selection Method (사례 선택 기법을 활용한 앙상블 모형의 성능 개선)

  • Min, Sung-Hwan
    • Journal of Korean Society of Industrial and Systems Engineering, v.39 no.1, pp.105-115, 2016
  • Ensemble classification combines individually trained classifiers to yield more accurate predictions than individual models, and ensemble techniques are very useful for improving the generalization ability of classifiers. The random subspace ensemble technique is a simple but effective method for constructing ensemble classifiers; it trains each classifier in the ensemble on a randomly drawn subset of the features. The instance selection technique selects critical instances while removing irrelevant and noisy instances from the original dataset. Both instance selection and the random subspace method are well known in the field of data mining and have proven very effective in many applications. However, few studies have focused on integrating them. Therefore, this study proposes a new hybrid ensemble model that integrates instance selection and random subspace techniques using genetic algorithms (GAs) to improve the performance of a random subspace ensemble model. GAs are used to select optimal (or near-optimal) instances, which are used as input data for the random subspace ensemble model. The proposed model was applied to both Kaggle credit data and corporate credit data, and the results were compared with those of other models in terms of classification accuracy, level of diversity, and average classification rate of the base classifiers in the ensemble. The experimental results demonstrate that the proposed model outperforms the single model, the instance selection model, and the original random subspace ensemble model.
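
A hedged sketch of the hybrid idea above: a simple genetic algorithm searches over binary instance masks, and fitness is the cross-validated accuracy of a random-subspace ensemble built on the selected instances. The GA settings, the truncation selection scheme, and the ensemble configuration are illustrative, not the paper's exact setup.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def subspace_ensemble():
    # bootstrap=False with max_features<1.0 approximates the random subspace method
    return BaggingClassifier(DecisionTreeClassifier(), n_estimators=25,
                             bootstrap=False, max_features=0.5, random_state=0)

def fitness(mask, X, y):
    if mask.sum() < 10:          # guard against degenerate instance subsets
        return 0.0
    return cross_val_score(subspace_ensemble(), X[mask], y[mask], cv=3).mean()

def ga_instance_selection(X, y, pop=20, gens=30, p_mut=0.01, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    population = rng.random((pop, n)) < 0.8           # start with ~80% of instances kept
    for _ in range(gens):
        scores = np.array([fitness(ind, X, y) for ind in population])
        parents = population[np.argsort(scores)[::-1][: pop // 2]]   # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)                  # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n) < p_mut            # bit-flip mutation
            children.append(child)
        population = np.vstack([parents, children])
    best = population[np.argmax([fitness(ind, X, y) for ind in population])]
    return best                                       # boolean mask of selected instances
```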

Arabic Text Clustering Methods and Suggested Solutions for Theme-Based Quran Clustering: Analysis of Literature

  • Bsoul, Qusay;Abdul Salam, Rosalina;Atwan, Jaffar;Jawarneh, Malik
    • Journal of Information Science Theory and Practice, v.9 no.4, pp.15-34, 2021
  • Text clustering is one of the most commonly used methods for detecting themes or types of documents. It is used in many fields, but its effectiveness is still insufficient for understanding Arabic text, especially with respect to term extraction, unsupervised feature selection, and clustering algorithms. In most cases, term extraction focuses on nouns. Clustering simplifies the understanding of an Arabic text like the text of the Quran, which is important not only for Muslims but for all people who want to know more about Islam. This paper discusses the complexity and limitations of clustering the Arabic text of the Quran by theme. Unsupervised feature selection does not consider the relationships between the selected features, and one weakness of clustering algorithms is that selecting good initial centroids still depends on chance and manual settings. Consequently, this paper reviews the literature on the three major stages of Arabic text clustering: term extraction, unsupervised feature selection, and clustering. Six experiments were conducted to demonstrate previously undiscussed problems related to the metrics used for feature selection and clustering. Suggestions for improving theme-based clustering of the Quran are presented and discussed.
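
A minimal pipeline sketch of the three stages the review discusses, using generic tools only; it does not implement any Arabic-specific processing from the surveyed work. The variance-based selector and the choice of k-means (whose result depends on the random initial centroids, the weakness noted above) are assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def cluster_documents(docs, n_clusters=5, keep_fraction=0.5, seed=0):
    # 1) term extraction: TF-IDF over word tokens
    X = TfidfVectorizer().fit_transform(docs).toarray()
    # 2) unsupervised feature selection: keep the highest-variance terms
    #    (ignores relationships between the selected features)
    keep = np.argsort(X.var(axis=0))[::-1][: int(keep_fraction * X.shape[1])]
    X_sel = X[:, keep]
    # 3) clustering: k-means; the outcome varies with the initial centroids
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    return km.fit_predict(X_sel)
```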

The Controlled Selection: Do Algorithms for Optimal Sampling Plan Exist?

  • Kim, Sun-Woong;Ryu, Jae-Bok;Yum, Joon-Keun
    • Proceedings of the Korean Statistical Society Conference, 2002.11a, pp.175-178, 2002
  • A number of controlled selection methods, which offer practical advantages for surveys by handling controls beyond stratification, have been developed over the last half-century. With respect to optimizing the sampling plan, it is clear that optimal controlled selection is preferable to merely satisfactory controlled selection. However, there are currently certain restrictions on the use of optimal controlled selection. We present further research directed at improving an algorithm for optimal controlled selection and developing standard software.

Comparison of Feature Selection Processes for Image Retrieval Applications

  • Choi, Young-Mee;Choo, Moon-Won
    • Journal of Korea Multimedia Society, v.14 no.12, pp.1544-1548, 2011
  • The process of choosing a subset of the original features, so-called feature selection, is considered a crucial preprocessing step in image processing applications, and large pools of techniques have already been developed in the machine learning and data mining fields. In this paper, two approaches, classification without feature selection and classification with feature selection, are investigated to compare their predictive effectiveness. Color co-occurrence features are used to define the image features. The standard Sequential Forward Selection algorithm is used for feature selection to identify relevant features and redundancy among them. Four color spaces, RGB, YCbCr, HSV, and Gaussian, are considered for computing the color co-occurrence features, and a gray-level image feature is also included for performance comparison. The experimental results are presented.
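
A sketch of the with/without-selection comparison described above, assuming the color co-occurrence features have already been computed into a matrix X (one row per image) with class labels y; the k-NN classifier and the number of features kept are assumptions.

```python
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def compare_with_and_without_sfs(X, y, n_keep=10):
    clf = KNeighborsClassifier(n_neighbors=5)
    baseline = cross_val_score(clf, X, y, cv=5).mean()         # all features
    sfs = SequentialFeatureSelector(clf, n_features_to_select=n_keep,
                                    direction="forward", cv=5)  # standard SFS
    X_sel = sfs.fit_transform(X, y)
    selected = cross_val_score(clf, X_sel, y, cv=5).mean()      # SFS-chosen subset
    return baseline, selected
```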

Incremental Antenna Selection Based on Lattice-Reduction for Spatial Multiplexing MIMO Systems

  • Kim, Sangchoon
    • Journal of Advanced Information Technology and Convergence, v.10 no.1, pp.1-14, 2020
  • Antenna selection is a method for enhancing the performance of spatial multiplexing multiple-input multiple-output (MIMO) systems that can achieve the diversity order of the full MIMO system. Although various selection criteria have been studied in the literature, they must be adapted to the detection operation implemented at the receiver. In this paper, antenna selection methods that optimize the post-processing signal-to-noise ratio (SNR) and the eigenvalue are considered for the lattice reduction (LR)-based receiver. To develop a complexity-efficient antenna selection algorithm, an incremental selection strategy is adopted. Moreover, to further improve performance, an additional iterative selection method is presented in combination with the incremental strategy.
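
A sketch of the incremental (greedy) selection strategy mentioned above: transmit antennas (columns of the channel matrix H) are added one at a time, each step keeping the antenna that maximizes the smallest eigenvalue of the selected submatrix Gram matrix. This generic eigenvalue criterion is a stand-in; the paper's LR-domain post-processing SNR criterion and the lattice-reduction step itself are not reproduced here.

```python
import numpy as np

def incremental_antenna_selection(H, n_select):
    """H: (n_rx, n_tx) complex channel matrix; returns indices of selected transmit antennas."""
    remaining = list(range(H.shape[1]))
    selected = []
    for _ in range(n_select):
        best_idx, best_metric = None, -np.inf
        for j in remaining:
            Hs = H[:, selected + [j]]
            # smallest eigenvalue of Hs^H Hs as the selection metric
            metric = np.linalg.eigvalsh(Hs.conj().T @ Hs).min()
            if metric > best_metric:
                best_idx, best_metric = j, metric
        selected.append(best_idx)
        remaining.remove(best_idx)
    return selected

# Example: pick 2 of 4 transmit antennas for a 4x4 Rayleigh-fading channel
rng = np.random.default_rng(0)
H = (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))) / np.sqrt(2)
print(incremental_antenna_selection(H, n_select=2))
```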

Logistic Regression Classification by Principal Component Selection

  • Kim, Kiho;Lee, Seokho
    • Communications for Statistical Applications and Methods, v.21 no.1, pp.61-68, 2014
  • We propose binary classification methods obtained by modifying logistic regression classification: variable selection procedures are applied to principal components rather than to the original variables. We describe the resulting classifiers and discuss their properties. The performance of our proposals is illustrated numerically and compared with other existing classification methods using synthetic and real datasets.
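
A sketch of logistic regression on selected principal components: PCA is applied first, and a selection step chooses among the component scores rather than the original variables. The forward-selection criterion used here (cross-validated accuracy) and the component counts are assumptions, not necessarily the paper's procedure.

```python
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.pipeline import make_pipeline

def pc_logistic_classifier(n_components=10, n_select=3):
    return make_pipeline(
        PCA(n_components=n_components),                         # principal component scores
        SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                                  n_features_to_select=n_select, cv=5),
        LogisticRegression(max_iter=1000),                      # final classifier on selected PCs
    )

# Usage: clf = pc_logistic_classifier(); clf.fit(X_train, y_train); clf.predict(X_test)
```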

Vertex Selection Scheme for Shape Approximation Based on Dynamic Programming (동적 프로그래밍에 기반한 윤곽선 근사화를 위한 정점 선택 방법)

  • 이시웅;최재각;남재열
    • Journal of the Institute of Electronics Engineers of Korea SP, v.41 no.3, pp.121-127, 2004
  • This paper presents a new vertex selection scheme for shape approximation. In the proposed method, the final vertices are determined by a two-step procedure. In the first step, initial vertices are selected on the contour, forming a subset of the original contour points, using conventional methods such as the iterated refinement method (IRM) or the progressive vertex selection (PVS) method. In the second step, a vertex adjustment process generates the final vertices, which are no longer confined to the contour and are optimal with respect to the given distortion measure. For the optimality of the final vertices, a dynamic programming (DP)-based solution for the adjustment of vertices is proposed. This work makes two main contributions. First, we show that DP can be successfully applied to vertex adjustment. Second, by using DP, global optimality in the vertex selection can be achieved without iterative processing. Experimental results are presented to show the superiority of our method over traditional methods.
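
A Viterbi-style sketch of the DP-based adjustment idea for an open vertex chain: each initial vertex gets a small set of candidate positions around it (so adjusted vertices need not lie on the contour), and DP picks one candidate per vertex so that the summed segment distortion is minimal. The candidate grid, the open-chain simplification, and the distortion measure (maximum distance from contour points to the approximating segment) are assumptions, not the paper's exact formulation.

```python
import numpy as np

def segment_cost(p, q, contour_points):
    """Max distance from the given contour points to the segment p-q."""
    d = q - p
    t = np.clip(((contour_points - p) @ d) / (d @ d + 1e-12), 0.0, 1.0)
    proj = p + t[:, None] * d
    return np.linalg.norm(contour_points - proj, axis=1).max()

def adjust_vertices(init_vertices, contour_segments, offsets):
    """init_vertices: (K, 2); contour_segments[k]: contour points between vertices k and k+1;
    offsets: (C, 2) candidate displacements tried for every vertex."""
    K, C = len(init_vertices), len(offsets)
    cand = init_vertices[:, None, :] + offsets[None, :, :]      # (K, C, 2) candidate positions
    cost = np.full((K, C), np.inf)
    cost[0] = 0.0
    back = np.zeros((K, C), dtype=int)
    for k in range(1, K):                                        # DP over vertices
        for j in range(C):
            costs = [cost[k - 1, i] + segment_cost(cand[k - 1, i], cand[k, j],
                                                   contour_segments[k - 1])
                     for i in range(C)]
            back[k, j] = int(np.argmin(costs))
            cost[k, j] = costs[back[k, j]]
    j = int(np.argmin(cost[-1]))                                 # backtrack cheapest chain
    path = [j]
    for k in range(K - 1, 0, -1):
        j = back[k, j]
        path.append(j)
    return cand[np.arange(K), path[::-1]]                        # adjusted vertex positions
```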

Bayesian Model Selection for Nonlinear Regression under Noninformative Prior

  • Na, Jonghwa;Kim, Jeongsuk
    • Communications for Statistical Applications and Methods, v.10 no.3, pp.719-729, 2003
  • We propose a Bayesian model selection procedure for nonlinear regression models under a noninformative prior. For informative priors, Na and Kim (2002) suggested a Bayesian model selection procedure based on MCMC techniques; we extend this method to the noninformative case. The difficulty with a noninformative prior is that it is typically improper and hence defined only up to an arbitrary constant. Methods such as the Intrinsic Bayes Factor (IBF) and the Fractional Bayes Factor (FBF) are used to resolve this problem. We illustrate the detailed model selection procedure on a specific real data set.
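
For reference, the standard form of the fractional Bayes factor (O'Hagan, 1995) for comparing models $M_1$ and $M_2$ with likelihoods $f_i(x \mid \theta_i)$ and improper priors $\pi_i(\theta_i)$ is shown below; the exact variant and the choice of the fraction $b$ used in the paper may differ.

```latex
% Fractional Bayes factor with training fraction b
q_i(b, x) \;=\; \frac{\int \pi_i(\theta_i)\, f_i(x \mid \theta_i)\, d\theta_i}
                      {\int \pi_i(\theta_i)\, f_i(x \mid \theta_i)^{\,b}\, d\theta_i},
\qquad
B^{F}_{12} \;=\; \frac{q_1(b, x)}{q_2(b, x)}, \qquad 0 < b < 1 .
```

Because the undetermined normalizing constant of each improper prior appears in both the numerator and denominator of $q_i(b, x)$, it cancels, which is why the FBF (like the IBF) yields a well-defined model comparison under noninformative priors.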

A Unit Selection Method Using Variable Break in a Japanese TTS (일본어 TTS의 가변 Break를 이용한 합성단위 선택 방법)

  • Na, Deok-Su;Bae, Myung-Jin
    • Proceedings of the IEEK Conference, 2008.06a, pp.983-984, 2008
  • This paper proposes a variable break that can offset break prediction errors, as well as a pre-selection method based on the variable break for enhanced unit selection. In Japanese, a sentence consists of several accentual phrases (APs) and major phrases (MPs), and the breaks between these phrases must be predicted to realize a text-to-speech system. An MP consists of several APs and plays a decisive role in making synthetic speech natural and understandable, because short pauses appear at its boundaries. The variable break is defined as a break that can easily change from an AP to an MP boundary, or from an MP to an AP boundary. Using CART (Classification and Regression Trees), the variable break is modeled stochastically, and candidate units are then pre-selected in the unit-selection process. The experimental results show that it is possible to compensate for break prediction errors and improve the naturalness of the synthetic speech.
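
A hedged sketch of the variable-break pre-selection idea: a CART model predicts the boundary type (AP vs. MP), and when the prediction is not confident the break is treated as "variable", so unit pre-selection keeps candidates recorded with either boundary type. The boundary features, the confidence margin, and the unit-database layout are illustrative assumptions.

```python
from sklearn.tree import DecisionTreeClassifier

def train_break_model(boundary_features, boundary_labels):
    """boundary_labels: 'AP' or 'MP' for each phrase boundary in the training corpus."""
    return DecisionTreeClassifier(max_depth=6).fit(boundary_features, boundary_labels)

def preselect_units(model, features_at_boundary, unit_db, margin=0.2):
    proba = model.predict_proba([features_at_boundary])[0]
    classes = list(model.classes_)
    if abs(proba[classes.index('AP')] - proba[classes.index('MP')]) < margin:
        allowed = {'AP', 'MP'}                     # variable break: accept both boundary types
    else:
        allowed = {classes[int(proba.argmax())]}   # confident prediction: keep one type only
    return [u for u in unit_db if u['break_type'] in allowed]
```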
