• Title/Summary/Keyword: Random Subspace Method


Optimization of Random Subspace Ensemble for Bankruptcy Prediction (재무부실화 예측을 위한 랜덤 서브스페이스 앙상블 모형의 최적화)

  • Min, Sung-Hwan
    • Journal of Information Technology Services
    • /
    • v.14 no.4
    • /
    • pp.121-135
    • /
    • 2015
  • Ensemble classification uses multiple classifiers instead of a single classifier. Ensemble classifiers have recently attracted much attention in the data mining community, and ensemble learning techniques have proven very useful for improving prediction accuracy. Bagging, boosting, and random subspace are the most popular ensemble methods. In random subspace, each base classifier is trained on a randomly chosen feature subspace of the original feature space, and the outputs of the base classifiers are usually aggregated by a simple majority vote. In this study, we applied the random subspace method to the bankruptcy prediction problem and proposed a method for optimizing the random subspace ensemble: a genetic algorithm was used to select an optimal subset of the ensemble's base classifiers. The proposed genetic algorithm based random subspace ensemble model was applied to bankruptcy prediction on a real data set and compared with other models. Experimental results showed that the proposed model outperformed the other models.
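
A minimal sketch of the random subspace idea described in this abstract: base classifiers trained on random feature subsets and combined by majority vote. The dataset, subspace size, and tree base learner are illustrative assumptions; the paper's GA-based selection of an optimal classifier subset is not shown.

```python
# Minimal random subspace ensemble: each base tree sees a random feature subset,
# and predictions are combined by simple majority vote.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=30, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

n_members, subspace_size = 25, 10          # illustrative ensemble settings
members = []
for _ in range(n_members):
    feats = rng.choice(X.shape[1], size=subspace_size, replace=False)
    clf = DecisionTreeClassifier(random_state=0).fit(X_tr[:, feats], y_tr)
    members.append((feats, clf))

# Majority vote over the base classifiers' predictions.
votes = np.stack([clf.predict(X_te[:, feats]) for feats, clf in members])
y_hat = (votes.mean(axis=0) >= 0.5).astype(int)
print("random subspace accuracy:", accuracy_score(y_te, y_hat))
```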

Improving an Ensemble Model Using Instance Selection Method (사례 선택 기법을 활용한 앙상블 모형의 성능 개선)

  • Min, Sung-Hwan
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.39 no.1
    • /
    • pp.105-115
    • /
    • 2016
  • Ensemble classification involves combining individually trained classifiers to yield more accurate predictions than individual models. Ensemble techniques are very useful for improving the generalization ability of classifiers. The random subspace ensemble technique is a simple but effective method for constructing ensemble classifiers; it involves randomly drawing a subset of the features for each classifier in the ensemble. The instance selection technique involves selecting critical instances while removing irrelevant and noisy instances from the original dataset. The instance selection and random subspace methods are both well known in the field of data mining and have proven to be very effective in many applications. However, few studies have focused on integrating the two. Therefore, this study proposed a new hybrid ensemble model that integrates instance selection and random subspace techniques using genetic algorithms (GAs) to improve the performance of a random subspace ensemble model. GAs are used to select optimal (or near-optimal) instances, which are used as input data for the random subspace ensemble model. The proposed model was applied to both Kaggle credit data and corporate credit data, and the results were compared with those of other models in terms of classification accuracy, diversity, and the average classification rates of the base classifiers in the ensemble. The experimental results demonstrated that the proposed model outperformed the other models, including the single model, the instance selection model, and the original random subspace ensemble model.
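
A rough sketch of GA-based instance selection as described above: a binary chromosome marks which training instances to keep, and fitness is holdout accuracy. To keep the sketch short, a single decision tree stands in for the random subspace ensemble that the paper uses as the evaluator; all settings are illustrative.

```python
# Sketch of GA-based instance selection: a binary chromosome marks which training
# instances are kept; fitness is holdout accuracy of a classifier trained on them.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=500, n_features=20, n_informative=8, random_state=1)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=1, stratify=y)

def fitness(mask):
    if mask.sum() < 10:                       # guard against near-empty selections
        return 0.0
    clf = DecisionTreeClassifier(random_state=1).fit(X_tr[mask], y_tr[mask])
    return accuracy_score(y_val, clf.predict(X_val))

pop_size, n_gen, p_mut = 20, 15, 0.02
pop = rng.random((pop_size, len(X_tr))) < 0.8  # start with most instances selected

for _ in range(n_gen):
    scores = np.array([fitness(m) for m in pop])
    # Tournament selection of parents (no elitism, to keep the sketch short).
    idx = np.array([max(rng.choice(pop_size, 2, replace=False), key=lambda i: scores[i])
                    for _ in range(pop_size)])
    parents = pop[idx]
    # One-point crossover and bit-flip mutation.
    children = parents.copy()
    for i in range(0, pop_size - 1, 2):
        cut = rng.integers(1, len(X_tr))
        children[i, cut:], children[i + 1, cut:] = parents[i + 1, cut:], parents[i, cut:]
    children ^= rng.random(children.shape) < p_mut
    pop = children

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected", best.sum(), "of", len(X_tr), "instances")
```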

Genetic Algorithm based Hybrid Ensemble Model (유전자 알고리즘 기반 통합 앙상블 모형)

  • Min, Sung-Hwan
    • Journal of Information Technology Applications and Management
    • /
    • v.23 no.1
    • /
    • pp.45-59
    • /
    • 2016
  • An ensemble classifier combines the outputs of multiple classifiers, and it is widely accepted that ensemble classifiers can improve prediction accuracy. Recently, ensemble techniques have been successfully applied to bankruptcy prediction. Bagging and random subspace are the most popular ensemble techniques, and each has proven very effective in improving generalization ability. However, few studies have focused on integrating bagging and random subspace. In this study, we proposed a new hybrid ensemble model that integrates the bagging and random subspace methods using a genetic algorithm to improve model performance. The proposed model was applied to bankruptcy prediction for Korean companies and compared with other models. The experimental results showed that the proposed model performs better than the other models, such as the single classifier, the original ensemble model, and the simple hybrid model.
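
A brief sketch of the hybrid idea of combining bagging and random subspace: each base learner receives both a bootstrap sample and a random feature subset. The GA-based integration proposed in the paper is only noted in a comment; data and settings are illustrative.

```python
# Hybrid member generation: each base learner gets a bootstrap sample (bagging)
# AND a random feature subset (random subspace); a GA could then select an
# optimal subset of these members, which is not shown here.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)
X, y = make_classification(n_samples=600, n_features=30, n_informative=12, random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=2, stratify=y)

members = []
for _ in range(30):
    rows = rng.choice(len(X_tr), size=len(X_tr), replace=True)   # bootstrap sample
    feats = rng.choice(X.shape[1], size=12, replace=False)       # feature subspace
    clf = DecisionTreeClassifier(random_state=2).fit(X_tr[rows][:, feats], y_tr[rows])
    members.append((feats, clf))

votes = np.stack([clf.predict(X_te[:, feats]) for feats, clf in members])
y_hat = (votes.mean(axis=0) >= 0.5).astype(int)
print("hybrid ensemble accuracy:", accuracy_score(y_te, y_hat))
```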

Developing an Ensemble Classifier for Bankruptcy Prediction (부도 예측을 위한 앙상블 분류기 개발)

  • Min, Sung-Hwan
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.17 no.7
    • /
    • pp.139-148
    • /
    • 2012
  • An ensemble of classifiers employs a set of individually trained classifiers and combines their predictions. It has been found that, in most cases, ensembles produce more accurate predictions than the base classifiers. Combining the outputs of multiple classifiers, known as ensemble learning, is one of the standard and most important techniques for improving classification accuracy in machine learning. An ensemble of classifiers is effective only if the individual classifiers make decisions as diverse as possible. Bagging is the most popular ensemble learning method for generating a diverse set of classifiers. Diversity in bagging is obtained by using different training sets: the training data subsets are randomly drawn with replacement from the entire training dataset. The random subspace method is an ensemble construction technique that uses different attribute subsets; the training data are also modified as in bagging, but the modification is performed in the feature space. Bagging and random subspace are well-known and popular ensemble algorithms. However, few studies have dealt with the integration of bagging and random subspace using SVM classifiers, even though this area has great potential for useful applications. The focus of this paper is to propose methods for improving SVM performance using a hybrid ensemble strategy for bankruptcy prediction. This paper applies the proposed ensemble model to the bankruptcy prediction problem using a real data set from Korean companies.
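
A hedged sketch of the bagging-plus-random-subspace strategy with SVM base classifiers, including a simple pairwise disagreement measure to illustrate the diversity point made in the abstract. This is not the paper's exact ensemble strategy; all parameters are illustrative.

```python
# SVM base classifiers built with bootstrap samples plus random feature subsets,
# combined by majority vote; pairwise disagreement illustrates ensemble diversity.
import numpy as np
from itertools import combinations
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
X, y = make_classification(n_samples=500, n_features=20, n_informative=8, random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=3, stratify=y)

preds = []
for _ in range(15):
    rows = rng.choice(len(X_tr), size=len(X_tr), replace=True)   # bagging step
    feats = rng.choice(X.shape[1], size=8, replace=False)        # random subspace step
    clf = SVC(kernel="rbf", gamma="scale").fit(X_tr[rows][:, feats], y_tr[rows])
    preds.append(clf.predict(X_te[:, feats]))

votes = np.stack(preds)
y_hat = (votes.mean(axis=0) >= 0.5).astype(int)
disagreement = np.mean([np.mean(a != b) for a, b in combinations(votes, 2)])
print("accuracy:", accuracy_score(y_te, y_hat),
      "avg pairwise disagreement:", round(disagreement, 3))
```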

Camera Source Identification of Digital Images Based on Sample Selection

  • Wang, Zhihui;Wang, Hong;Li, Haojie
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.7
    • /
    • pp.3268-3283
    • /
    • 2018
  • With the advent of the Information Age, the source identification of digital images, as a part of digital image forensics, has attracted increasing attention, and an effective technique for identifying the source of digital images is urgently needed. In this paper, we first study and implement previous work on image source identification based on sensor pattern noise, such as the Lukas method, the principal component analysis method, and the random subspace method. Second, to extract a purer sensor pattern noise, we propose a sample selection method that improves the random subspace method: by analyzing image texture features, we select patches with less complexity in order to extract more reliable sensor pattern noise, which improves identification accuracy. Finally, experimental results reveal that the proposed sample selection method extracts a purer sensor pattern noise and further improves the accuracy of image source identification. At the same time, the approach is less complex than deep learning models while approaching state-of-the-art performance.
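
A hedged sketch of the sample selection idea: rank image patches by local variance as a simple texture-complexity proxy, keep the smoothest patches, and take a denoising residual as a rough sensor pattern noise estimate. The Gaussian filter, patch size, and thresholds are assumptions, not the authors' pipeline.

```python
# Pick low-texture patches (low local variance) and use the denoising residual
# from those patches as a rough sensor-pattern-noise estimate.
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_patch_residuals(img, patch=64, keep=8, sigma=1.0):
    h, w = img.shape
    patches = []
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            block = img[r:r + patch, c:c + patch].astype(float)
            patches.append((block.var(), block))          # variance as texture proxy
    patches.sort(key=lambda t: t[0])                      # smoothest patches first
    residuals = [b - gaussian_filter(b, sigma) for _, b in patches[:keep]]
    return np.mean(residuals, axis=0)                     # averaged noise residual

img = np.random.default_rng(4).integers(0, 256, (512, 512))  # stand-in for a real image
prnu_estimate = smooth_patch_residuals(img)
print(prnu_estimate.shape)
```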

Simultaneous Optimization of KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.139-157
    • /
    • 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy. Thus, bankruptcy prediction is an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research on bankruptcy prediction employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later on, many studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification involves combining multiple classifiers to obtain more accurate predictions than those obtained using individual models. Ensemble learning techniques are known to be very useful for improving the generalization ability of the classifier. Base classifiers in the ensemble must be as accurate and diverse as possible in order to enhance the generalization ability of an ensemble model. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and random subspace. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers of an ensemble. Each ensemble member is trained on a randomly chosen feature subspace from the original feature set, and predictions from the ensemble members are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but is very sensitive to changes in the feature space. For this reason, KNN is a good classifier for the random subspace method. The KNN random subspace ensemble model has been shown to be very effective for improving an individual KNN model. The k parameter of the KNN base classifiers and the feature subsets selected for the base classifiers play an important role in determining the performance of the KNN ensemble model. However, few studies have focused on optimizing the k parameter and the feature subsets of the base classifiers in the ensemble. This study proposed a new ensemble method that improves upon the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of the base classifiers. A genetic algorithm was used to optimize the KNN ensemble model and improve its prediction accuracy. The proposed model was applied to a bankruptcy prediction problem using a real dataset from Korean companies. The research data included 1,800 externally non-audited firms that filed for bankruptcy (900 cases) or non-bankruptcy (900 cases). Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent-sample t-test of each financial ratio as an input variable and bankruptcy or non-bankruptcy as the output variable. Of these, 24 financial ratios were selected using a logistic regression backward feature selection method. The complete dataset was separated into two parts: training and validation. The training dataset was further divided into two portions: one for training the model and the other for avoiding overfitting; the prediction accuracy on the latter portion was used as the fitness value to avoid overfitting. The validation dataset was used to evaluate the effectiveness of the final model. A 10-fold cross-validation was implemented to compare the performance of the proposed model with that of other models. To evaluate the effectiveness of the proposed model, its classification accuracy was compared with that of other models, and the Q-statistic values and average classification accuracies of the base classifiers were investigated. The experimental results showed that the proposed model outperformed the other models, such as the single model and the random subspace ensemble model.
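
A small sketch of the search space described above: a KNN random subspace ensemble in which each member has its own k value and feature subset. Here the (k, subset) pairs are drawn at random and only the fitness evaluation (holdout accuracy of the ensemble) is shown; a GA would evolve this chromosome. Data and settings are illustrative.

```python
# KNN random subspace ensemble where each member has its own k and feature subset.
# A GA would evolve the (k, subset) choices below; here they are drawn at random
# and only the fitness evaluation (holdout accuracy of the ensemble) is shown.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(5)
X, y = make_classification(n_samples=600, n_features=24, n_informative=10, random_state=5)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=5, stratify=y)

def ensemble_fitness(chromosome):
    """chromosome: list of (k, feature_index_array) pairs, one per ensemble member."""
    votes = []
    for k, feats in chromosome:
        clf = KNeighborsClassifier(n_neighbors=k).fit(X_tr[:, feats], y_tr)
        votes.append(clf.predict(X_val[:, feats]))
    y_hat = (np.stack(votes).mean(axis=0) >= 0.5).astype(int)
    return accuracy_score(y_val, y_hat)

chromosome = [(int(rng.integers(1, 16)), rng.choice(X.shape[1], size=8, replace=False))
              for _ in range(20)]
print("ensemble fitness:", ensemble_fitness(chromosome))
```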

Classification for Imbalanced Breast Cancer Dataset Using Resampling Methods

  • Hana Babiker, Nassar
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.1
    • /
    • pp.89-95
    • /
    • 2023
  • Analyzing breast cancer patient files is becoming an exciting area of medical information analysis, especially with the increasing number of patient files. In this paper, breast cancer data collected from a Khartoum state hospital is classified into recurrence and no recurrence. The data is imbalanced, meaning that one of the two classes has more samples than the other. Several pre-processing techniques, namely resampling, attribute selection, and handling of missing values, are applied to this imbalanced data, and then different classifier models are built. In the first experiment, individual classifiers (ANN, REP TREE, SVM, and J48) are used; in the second experiment, meta-learning algorithms (bagging, boosting, and random subspace) are used; finally, an ensemble model is used. The best result was obtained from the ensemble model (boosting with J48), with the highest accuracy among all the algorithms at 95.2797%, followed by bagging with J48 (90.559%) and random subspace with J48 (84.2657%). The imbalanced breast cancer dataset was thus classified into recurrence and no recurrence with different classification algorithms, and the best result was obtained from the ensemble model.
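
A minimal sketch of the resampling step on synthetic data (not the Khartoum dataset): randomly oversample the minority class until the classes are balanced, then train a classifier.

```python
# Random oversampling of the minority class before training a classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.utils import resample
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=10, weights=[0.9, 0.1], random_state=6)
X_min, y_min = X[y == 1], y[y == 1]
X_maj, y_maj = X[y == 0], y[y == 0]

# Sample the minority class with replacement until both classes are the same size.
X_up, y_up = resample(X_min, y_min, replace=True, n_samples=len(y_maj), random_state=6)
X_bal = np.vstack([X_maj, X_up])
y_bal = np.concatenate([y_maj, y_up])

clf = DecisionTreeClassifier(random_state=6).fit(X_bal, y_bal)
print("class counts after resampling:", np.bincount(y_bal))
```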

Eigenspace-Based Adaptive Array Robust to Steering Errors By Effective Interference Subspace Estimation (효과적인 간섭 부공간 추정을 통한 조향에러에 강인한 고유공간 기반 적응 어레이)

  • Choi, Yang-Ho
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.4A
    • /
    • pp.269-277
    • /
    • 2012
  • When there are mismatches between the beamforming steering vector and the array response vector of the desired signal, performance can be severely degraded because the adaptive array attempts to suppress the desired signal along with the interferences. In this paper, a robust method is proposed for adaptive arrays in the presence of both direction errors and random errors in the steering vector. The proposed method first finds a signal-plus-interference subspace (SIS) from the correlation matrix, which in turn is exploited to extract an interference subspace based on the structure of a uniform linear array (ULA), with the effect of the desired signal's direction vector reduced as much as possible. The weight vector is then chosen to be orthogonal to the interference subspace. Simulations show that the proposed method outperforms existing ones, such as the doubly constrained robust Capon beamformer (DCRCB), in terms of signal-to-interference-plus-noise ratio (SINR).
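
A generic eigenspace-projection beamformer sketch, shown only to illustrate the eigenspace idea; it does not implement the paper's ULA-based interference subspace estimation. The array geometry, signal directions, and subspace dimension are assumptions.

```python
# Generic eigenspace-based beamformer for a ULA: project the MVDR weight onto the
# dominant (signal-plus-interference) eigen-subspace of the sample correlation matrix.
import numpy as np

rng = np.random.default_rng(7)
M, N = 10, 2000                       # sensors, snapshots
d = 0.5                               # element spacing in wavelengths

def steering(theta_deg):
    theta = np.deg2rad(theta_deg)
    return np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta))

# Simulated snapshots: desired signal at 0 deg, interference at 30 deg, white noise.
sig = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
intf = 3 * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
X = np.outer(steering(0), sig) + np.outer(steering(30), intf) + noise

R = X @ X.conj().T / N
a = steering(2)                       # presumed steering vector with a 2-degree error

eigval, eigvec = np.linalg.eigh(R)    # eigenvalues in ascending order
Es = eigvec[:, -2:]                   # 2 dominant eigenvectors: signal + 1 interference
w_mvdr = np.linalg.solve(R, a)
w_mvdr /= a.conj() @ w_mvdr
w_eig = Es @ (Es.conj().T @ w_mvdr)   # eigenspace projection reduces mismatch loss
print("output power:", np.real(w_eig.conj() @ R @ w_eig))
```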

Determination of flutter derivatives by stochastic subspace identification technique

  • Qin, Xian-Rong;Gu, Ming
    • Wind and Structures
    • /
    • v.7 no.3
    • /
    • pp.173-186
    • /
    • 2004
  • Flutter derivatives provide the basis for predicting the critical wind speed in the flutter and buffeting analysis of long-span cable-supported bridges. In this paper, a popular stochastic system identification technique, covariance-driven stochastic subspace identification (SSI), is first presented for estimating the flutter derivatives of bridge decks from their random responses in turbulent flow. Then, wind tunnel tests of a streamlined thin plate model and a Π-type blunt bridge section model are conducted in turbulent flow, and the flutter derivatives are determined by SSI. The flutter derivatives of the thin plate model identified by SSI are very comparable to those identified by the unifying least-squares method and to Theodorsen's theoretical values. As for the Π-type section model, the effect of turbulence on aerodynamic damping appears to be notable; therefore, wind tunnel tests for estimating the flutter derivatives of models with similarly blunt sections should perhaps be conducted in turbulent flow.
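
A minimal covariance-driven SSI sketch on a simulated single-degree-of-freedom response; it identifies frequency and damping from output covariances but is not the flutter derivative estimation procedure of the paper. The block size, model order, and simulated system are assumptions.

```python
# Minimal covariance-driven stochastic subspace identification (SSI-cov):
# build a block Toeplitz matrix of output covariances, factor it by SVD into an
# observability matrix, recover A by shift invariance, and read modal parameters
# from its eigenvalues.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(8)
dt, n_samp = 0.01, 20000
f_true, zeta_true = 2.0, 0.02                  # 2 Hz mode, 2% damping
wn = 2 * np.pi * f_true

# Simulate a randomly excited SDOF oscillator (exact homogeneous discretization).
Ac = np.array([[0.0, 1.0], [-wn**2, -2 * zeta_true * wn]])
Ad = expm(Ac * dt)
x = np.zeros(2)
y = np.empty(n_samp)
for k in range(n_samp):
    x = Ad @ x + np.array([0.0, rng.standard_normal()]) * dt   # white-noise excitation
    y[k] = x[0]                                                # displacement output

# Output covariances R_i and the block Toeplitz matrix (scalar blocks here).
b = 30
Rcov = [np.mean(y[i:] * y[:n_samp - i]) for i in range(2 * b)]
T = np.array([[Rcov[b + i - j] for j in range(b)] for i in range(b)])

U, S, Vt = np.linalg.svd(T)
n_order = 2
Obs = U[:, :n_order] * np.sqrt(S[:n_order])    # observability matrix estimate
A = np.linalg.pinv(Obs[:-1]) @ Obs[1:]         # shift invariance of observability
lam = np.log(np.linalg.eigvals(A)) / dt        # continuous-time poles
freq = np.abs(lam) / (2 * np.pi)
damping = -np.real(lam) / np.abs(lam)
print("identified frequency [Hz]:", freq, "damping ratio:", damping)
```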

Nonnegative estimates of variance components in a two-way random model

  • Choi, Jaesung
    • Communications for Statistical Applications and Methods
    • /
    • v.26 no.4
    • /
    • pp.337-346
    • /
    • 2019
  • This paper discusses a method for obtaining nonnegative estimates of variance components in a random effects model. A variance component is nonnegative by definition; nevertheless, estimates of variance components are sometimes negative, which is undesirable. The proposed method is based on two basic ideas: identifying the orthogonal vector subspaces associated with the factors, and determining the projection onto each orthogonal vector subspace. The observation vector can then be expressed as the sum of these projections, and the method always produces nonnegative estimates by using the projections. Hartley's synthesis is used to calculate the expected values of the quadratic forms. The paper also discusses how to set up a residual model for each projection.
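
An illustrative decomposition of an observation vector into orthogonal projections for a balanced one-way layout, showing that each sum of squares is a nonnegative quadratic form; this mirrors the two ideas stated in the abstract but is not the paper's two-way estimation procedure. The design and data are hypothetical.

```python
# Decompose an observation vector into orthogonal projections: overall mean
# subspace, factor subspace, and residual subspace for a balanced one-way layout.
# Each sum of squares y'Py is a nonnegative quadratic form.
import numpy as np

rng = np.random.default_rng(9)
a, n = 4, 5                                   # levels, replicates per level
y = (rng.normal(0, 2, size=a).repeat(n)       # random level effects
     + rng.normal(0, 1, size=a * n) + 10)     # residual error plus grand mean

X_mu = np.ones((a * n, 1))                    # design for the overall mean
X_a = np.kron(np.eye(a), np.ones((n, 1)))     # design for the factor levels

def proj(M):
    return M @ np.linalg.pinv(M)              # orthogonal projection onto col(M)

P_mu = proj(X_mu)
P_a = proj(X_a) - P_mu                        # factor subspace orthogonal to the mean
P_e = np.eye(a * n) - proj(X_a)               # residual subspace

# The observation vector equals the sum of its projections.
assert np.allclose(y, P_mu @ y + P_a @ y + P_e @ y)
print("SS(mean), SS(factor), SS(error):", y @ P_mu @ y, y @ P_a @ y, y @ P_e @ y)
```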