• Title/Abstract/Keyword: Tree classifiers

Search results: 79 (processing time: 0.021 s)

유전자 알고리즘 기반의 불완전 데이터 학습을 위한 속성값계층구조의 생성 (Genetic Algorithm Based Attribute Value Taxonomy Generation for Learning Classifiers with Missing Data)

  • 주진우;양지훈
    • 정보처리학회논문지B
    • /
    • Vol. 13B, No. 2
    • /
    • pp.133-138
    • /
    • 2006
  • It has been shown that, when learning from partially missing data, or from data whose attribute values are described at different levels of detail, learning over an Attribute Value Taxonomy (AVT) yields classifiers that are more accurate and more compact than those produced by conventional learning algorithms. Such a taxonomy, however, has to be supplied up front by an expert or by someone with knowledge of the data domain. Building an AVT by hand takes considerable time, is error-prone, and for some domains no expert is available to provide one. Against this background, this paper proposes GA-AVT-Learner, an algorithm that automatically generates a near-optimal attribute value taxonomy using a genetic algorithm. In experiments on a variety of real-world datasets, the taxonomies generated by GA-AVT-Learner were compared with other taxonomies. The experiments show that the AVTs produced by GA-AVT-Learner yield accurate and compact classifiers and handle missing data efficiently.
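
The abstract does not spell out GA-AVT-Learner's encoding or fitness function, so the following is only a minimal sketch of the general idea: a genetic algorithm evolves a grouping of one categorical attribute's values (a single flat cut of a taxonomy rather than a full hierarchy), scoring each grouping by the cross-validated accuracy of a decision tree trained on the regrouped data, with a small penalty that rewards compactness. The data, group count, and GA settings are illustrative assumptions.

```python
import random
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
random.seed(0)
n, n_values, n_groups = 600, 8, 3
x = rng.integers(0, n_values, size=n)                              # one categorical attribute, values 0..7
y = ((x >= 4).astype(int) ^ (rng.random(n) < 0.15)).astype(int)    # hidden coarse structure plus label noise

def fitness(ind):
    """Accuracy of a tree on the regrouped attribute, minus a small penalty per group used."""
    grouped = np.array([ind[v] for v in x]).reshape(-1, 1)
    acc = cross_val_score(DecisionTreeClassifier(), grouped, y, cv=3).mean()
    return acc - 0.01 * len(set(ind))

def mutate(ind):
    ind = list(ind)
    ind[random.randrange(n_values)] = random.randrange(n_groups)   # reassign one value to a random group
    return tuple(ind)

def crossover(a, b):
    return tuple(a[i] if random.random() < 0.5 else b[i] for i in range(n_values))

# Each individual maps every raw attribute value to one of n_groups abstract values.
pop = [tuple(random.randrange(n_groups) for _ in range(n_values)) for _ in range(20)]
for gen in range(15):
    pop = sorted(pop, key=fitness, reverse=True)
    parents = pop[:10]
    children = [mutate(crossover(random.choice(parents), random.choice(parents))) for _ in range(10)]
    pop = parents + children

best = max(pop, key=fitness)
print("best value -> group assignment:", dict(enumerate(best)))
print("fitness:", round(fitness(best), 3))
```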

A Novel Feature Selection Method in the Categorization of Imbalanced Textual Data

  • Pouramini, Jafar;Minaei-Bidgoli, Behrouze;Esmaeili, Mahdi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 12, No. 8
    • /
    • pp.3725-3748
    • /
    • 2018
  • Text data distribution is often imbalanced. Imbalanced data is one of the challenges in text classification, as it degrades classifier performance, and many studies have addressed it. The proposed solutions fall into several general categories, including sampling-based and algorithm-based methods. In recent studies, feature selection has also been considered as a remedy for the imbalance problem. In this paper, a novel one-sided feature selection method known as probabilistic feature selection (PFS) is presented for imbalanced text classification. PFS is a probabilistic method computed from the feature distribution. Compared with similar methods, PFS has more parameters. To evaluate the proposed method, the feature selection methods Gini, MI, FAST, and DFS were implemented, and classifiers including the C4.5 decision tree and Naive Bayes were used. F-measure results on the Reuters-21875 and WebKB datasets suggest that the proposed feature selection significantly improves classifier performance.
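
The abstract does not give the PFS scoring formula, so the snippet below is only a stand-in sketch of a generic one-sided, probability-based selector: each term is scored by how much its smoothed probability under the minority class exceeds its probability over the whole corpus, and only the top-scoring terms are kept before training a classifier. The toy corpus and the cutoff are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

docs = ["cheap loan offer now", "win a cheap prize offer", "limited loan offer",      # minority class (1)
        "meeting agenda for monday", "project status report", "lunch with the team",  # majority class (0)
        "quarterly report draft", "monday project meeting", "team status update"]
y = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0])

vec = CountVectorizer()
X = vec.fit_transform(docs).toarray()

p_term_pos = (X[y == 1].sum(axis=0) + 1) / (X[y == 1].sum() + X.shape[1])   # P(t | minority), Laplace-smoothed
p_term_all = (X.sum(axis=0) + 1) / (X.sum() + X.shape[1])                   # P(t) over the whole corpus
score = p_term_pos / p_term_all           # one-sided: large when a term concentrates in the minority class

top = np.argsort(score)[::-1][:5]
print("selected terms:", list(vec.get_feature_names_out()[top]))

clf = MultinomialNB().fit(X[:, top], y)
print("train accuracy on selected terms:", clf.score(X[:, top], y))
```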

Prediction Model for Gastric Cancer via Class Balancing Techniques

  • Danish, Jamil;Sellappan, Palaniappan;Sanjoy Kumar, Debnath;Muhammad, Naseem;Susama, Bagchi;Asiah, Lokman
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 23, No. 1
    • /
    • pp.53-63
    • /
    • 2023
  • Many researchers are trying hard to minimize the incidence of cancers, mainly Gastric Cancer (GC). For GC, the five-year survival rate is generally 5-25%, but for Early Gastric Cancer (EGC) it is almost 90%. Predicting the onset of stomach cancer from risk factors would allow earlier diagnosis and more effective treatment. Although several models exist for predicting stomach cancer, most are trained on unbalanced datasets and therefore favour the majority class, whereas it is imperative to correctly identify the cancer patients in the minority class. This research applies three class-balancing approaches to the NHS dataset before developing supervised learning strategies: oversampling (Synthetic Minority Oversampling Technique, SMOTE), undersampling (SpreadSubsample), and a hybrid of the two (SMOTE + SpreadSubsample). The study uses Naive Bayes, Bayesian Network, Random Forest, and Decision Tree (C4.5) classifiers, whose efficacy is measured with Receiver Operating Characteristic (ROC) curves, sensitivity, and specificity. The balancing strategies were tested on validation data, and the final prediction model was built on the one that performed best overall.
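
A hedged sketch of the hybrid balancing step with the imbalanced-learn library. The NHS data is not public, so a synthetic imbalanced set stands in; SpreadSubsample is a Weka filter, so RandomUnderSampler is used here as a rough analogue, and scikit-learn's CART decision tree stands in for C4.5.

```python
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score, recall_score

X, y = make_classification(n_samples=3000, n_features=12, weights=[0.93, 0.07], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = Pipeline([
    ("smote", SMOTE(random_state=0)),                  # oversample the minority class
    ("under", RandomUnderSampler(random_state=0)),     # then trim the majority class
    ("tree", DecisionTreeClassifier(random_state=0)),
])
model.fit(X_tr, y_tr)

proba = model.predict_proba(X_te)[:, 1]
print("ROC AUC:", round(roc_auc_score(y_te, proba), 3))
print("sensitivity (recall on the minority class):", round(recall_score(y_te, model.predict(X_te)), 3))
```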

Hyperparameter Tuning Based Machine Learning classifier for Breast Cancer Prediction

  • Md. Mijanur Rahman;Asikur Rahman Raju;Sumiea Akter Pinky;Swarnali Akter
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 24, No. 2
    • /
    • pp.196-202
    • /
    • 2024
  • Breast Cancer (BC) is currently the second most devastating form of cancer in people, particularly in women, and Machine Learning (ML) is widely employed in the healthcare industry for predicting fatal diseases. Because breast cancer has a favorable prognosis when detected early, a model is built on the Wisconsin Diagnostic Breast Cancer (WDBC) dataset. The aim is to compare the effectiveness of five well-known ML classifiers, namely Logistic Regression (LR), Decision Tree (DT), Random Forest (RF), K-Nearest Neighbor (KNN), and Naive Bayes (NB), against the conventional approach. The main tactic used against the conventional approach is hyperparameter tuning with the grid search method, which improved accuracy, precision, recall, and F1 score. With hyperparameter tuning, accuracy increased from 94.15% to 98.83%, whereas the conventional method improved from 93.56% to 97.08%. KNN outperformed all other classifiers in terms of accuracy, achieving 98.83%. In conclusion, the study shows that KNN works well with hyperparameter tuning, and that the proposed prediction approach yields viable performance and more accurate findings for prognosticating breast cancer in women than the conventional approach.
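
A minimal sketch of grid-search tuning for the KNN classifier on the WDBC data, which ships with scikit-learn as load_breast_cancer; the parameter grid below is an assumption for illustration, not the paper's exact search space.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pipe = Pipeline([("scale", StandardScaler()), ("knn", KNeighborsClassifier())])
grid = {"knn__n_neighbors": [3, 5, 7, 9, 11],
        "knn__weights": ["uniform", "distance"],
        "knn__p": [1, 2]}                      # Manhattan vs. Euclidean distance

search = GridSearchCV(pipe, grid, cv=5, scoring="accuracy")
search.fit(X_tr, y_tr)
print("best params:", search.best_params_)
print("cross-validated accuracy:", round(search.best_score_, 4))
print("held-out accuracy:", round(search.score(X_te, y_te), 4))
```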

합성곱 신경망을 이용한 주가방향 예측: 상관관계 속성선택 방법을 중심으로 (Stock Price Direction Prediction Using Convolutional Neural Network: Emphasis on Correlation Feature Selection)

  • 어균선;이건창
    • 경영정보학연구
    • /
    • Vol. 22, No. 4
    • /
    • pp.21-39
    • /
    • 2020
  • Deep learning has shown strong performance in fields such as pattern analysis and image classification. Stock market analysis is one of the more difficult problems in machine learning research, which makes it a natural area for deep learning. This study proposes a stock price direction prediction method using a Convolutional Neural Network (CNN), a deep learning model with strong pattern-analysis and classification ability, together with a Feature Selection (FS) step that makes training the CNN more efficient. The CNN's performance is validated objectively by benchmarking it against single and ensemble machine learning classifiers: logistic regression, decision tree, artificial neural network, support vector machine, AdaBoost, bagging, and random forest. The empirical results show that the CNN with feature selection achieves relatively higher classification performance than the benchmark classifiers, which suggests that a prediction method combining a CNN with feature selection can analyze the information contained in corporate financial data more precisely.
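
A hedged sketch of the two-stage idea: correlation-based feature selection followed by a small 1-D convolutional network. Random features stand in for the study's financial variables, and the network shape, the number of selected features, and the training settings are illustrative assumptions rather than the architecture used in the paper.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 30))                               # placeholder indicators (not real financial data)
y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=1000) > 0).astype("float32")

# Correlation feature selection: keep the k features most correlated with the target direction.
corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
selected = np.argsort(corr)[::-1][:10]
X_sel = X[:, selected][..., None]                             # shape (samples, features, 1) for Conv1D

model = tf.keras.Sequential([
    layers.Input(shape=(X_sel.shape[1], 1)),
    layers.Conv1D(16, kernel_size=3, activation="relu"),
    layers.Flatten(),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),                    # predicted up/down direction
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
hist = model.fit(X_sel, y, epochs=10, batch_size=32, validation_split=0.2, verbose=0)
print("validation accuracy:", round(hist.history["val_accuracy"][-1], 3))
```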

패킷 분류를 위한 이차원 이진 프리픽스 트리 (A Two-Dimensional Binary Prefix Tree for Packet Classification)

  • 정여진;김혜란;임혜숙
    • 한국정보과학회논문지:정보통신
    • /
    • Vol. 32, No. 4
    • /
    • pp.543-550
    • /
    • 2005
  • As the Internet has grown rapidly, routers have been required to provide ever better services, and intelligent packet classification is considered indispensable in next-generation Internet routers. Packet classification is the process of finding, for an incoming packet, the highest-priority rule that matches it in a predefined classifier. Many existing packet classification schemes narrow down the candidate rules based on the source and destination prefix fields. Most of them, however, perform sequential one-dimensional searches over trie-based structures for the source and destination prefixes and require a very large amount of memory. This paper proposes a memory-efficient two-dimensional packet classification structure based on source-destination prefix pairs. By building a binary prefix tree composed of codewords, the source prefix search and the destination prefix search are carried out simultaneously in a single binary tree. Moreover, because the proposed two-dimensional binary prefix tree contains no empty internal nodes, it completely eliminates the memory inefficiency of trie-based structures.
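
The paper's codeword-based structure is not described here in enough detail to reproduce, so the sketch below shows only the baseline idea it improves on: a hierarchical source-destination trie in which every matching source prefix leads to a destination trie, and the highest-priority rule found along the way wins. Prefixes are written as bit strings, and the rule set is an illustrative assumption.

```python
# Rules: (source prefix, destination prefix, priority); lower number = higher priority.
rules = [("10", "0",  1),
         ("1",  "01", 2),
         ("",   "",   3)]          # default rule matching everything

def insert(trie, bits):
    """Walk/create the path for a bit-string prefix and return its node."""
    node = trie
    for b in bits:
        node = node.setdefault(b, {})
    return node

src_trie = {}
for src, dst, prio in rules:
    dst_trie = insert(src_trie, src).setdefault("_dst", {})
    node = insert(dst_trie, dst)
    node["_prio"] = min(node.get("_prio", prio), prio)

def lookup_dst(dst_trie, dst_bits, best):
    """Collect the best rule priority along all matching destination prefixes."""
    d = dst_trie
    for db in dst_bits + "$":      # "$" sentinel so the full-length node is visited too
        if d is None:
            break
        if "_prio" in d:
            best = d["_prio"] if best is None else min(best, d["_prio"])
        d = d.get(db)
    return best

def classify(src_bits, dst_bits):
    """For every matching source prefix, search its destination trie; return the best rule."""
    best, s = None, src_trie
    for sb in src_bits + "$":
        if s is None:
            break
        if "_dst" in s:
            best = lookup_dst(s["_dst"], dst_bits, best)
        s = s.get(sb)
    return best

print(classify("1011", "0110"))    # matches ("10", "0", 1) -> prints 1
print(classify("0101", "1100"))    # only the default rule matches -> prints 3
```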

A Tree Regularized Classifier-Exploiting Hierarchical Structure Information in Feature Vector for Human Action Recognition

  • Luo, Huiwu;Zhao, Fei;Chen, Shangfeng;Lu, Huanzhang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 11, No. 3
    • /
    • pp.1614-1632
    • /
    • 2017
  • Bag of visual words is a popular model in human action recognition, but it usually suffers from the loss of spatial and temporal configuration information of local features and from large quantization error in its feature coding procedure. In this paper, to overcome these two deficiencies, we combine sparse coding with a spatio-temporal pyramid for human action recognition and regard this method as the baseline. More importantly, and this is the focus of this paper, we find that the feature vector constructed by the baseline method has a hierarchical structure. To exploit this hierarchical structure information for better recognition accuracy, we propose a tree regularized classifier that encodes it. The main contributions of this paper can be summarized as follows: First, we introduce a tree regularized classifier that encodes the hierarchical structure information in the feature vector for human action recognition. Second, we present an optimization algorithm to learn the parameters of the proposed classifier. Third, the performance of the proposed classifier is evaluated on the YouTube, Hollywood2, and UCF50 datasets; the experimental results show that the proposed tree regularized classifier performs better than SVM and other popular classifiers and achieves promising results on the three datasets.
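
The abstract does not give the exact form of the tree regularizer or the optimization algorithm, so the following is only a generic sketch of the idea: a linear classifier whose logistic loss is augmented with a group-norm penalty over groups of weights that follow a hypothetical feature hierarchy, optimized by plain subgradient descent. The data, the hierarchy, and the regularization weight are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 400, 12
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:4] = 1.0                                              # only the first child group is informative
y = np.where(X @ w_true + 0.3 * rng.normal(size=n) > 0, 1.0, -1.0)

# Hypothetical hierarchy over the 12 dimensions: a root group plus three child groups
# (e.g., pyramid cells nested inside a pyramid level).
groups = [np.arange(12), np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]
lam = 0.05                                                    # regularization weight

def objective(w):
    margins = y * (X @ w)
    return np.mean(np.logaddexp(0.0, -margins)) + lam * sum(np.linalg.norm(w[g]) for g in groups)

def subgradient(w):
    margins = y * (X @ w)
    sig = 1.0 / (1.0 + np.exp(np.clip(margins, -30, 30)))     # derivative factor of the logistic loss
    grad = -(X * (y * sig)[:, None]).mean(axis=0)
    for g in groups:                                          # subgradient of the group penalty
        norm = np.linalg.norm(w[g])
        if norm > 1e-12:
            grad[g] += lam * w[g] / norm
    return grad

w = np.zeros(d)
for t in range(2000):                                         # plain subgradient descent
    w -= 0.5 / np.sqrt(t + 1.0) * subgradient(w)

print("objective:", round(float(objective(w)), 4))
print("per-group weight norms:", [round(float(np.linalg.norm(w[g])), 3) for g in groups])
```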

개선된 데이터마이닝을 위한 혼합 학습구조의 제시 (Hybrid Learning Architectures for Advanced Data Mining:An Application to Binary Classification for Fraud Management)

  • Kim, Steven H.;Shin, Sung-Woo
    • 정보기술응용연구
    • /
    • Vol. 1
    • /
    • pp.173-211
    • /
    • 1999
  • The task of classification permeates all walks of life, from business and economics to science and public policy. In this context, nonlinear techniques from artificial intelligence have often proven to be more effective than the methods of classical statistics. The objective of knowledge discovery and data mining is to support decision making through the effective use of information. The automated approach to knowledge discovery is especially useful when dealing with large data sets or complex relationships. For many applications, automated software may find subtle patterns which escape the notice of manual analysis, or whose complexity exceeds the cognitive capabilities of humans. This paper explores the utility of a collaborative learning approach involving integrated models in the preprocessing and postprocessing stages. For instance, a genetic algorithm effects feature-weight optimization in a preprocessing module, while inductive tree, artificial neural network (ANN), and k-nearest neighbor (kNN) techniques serve as postprocessing modules. More specifically, the postprocessors act as second-order classifiers which determine the best first-order classifier on a case-by-case basis. In addition to the second-order models, a voting scheme is investigated as a simple but efficient postprocessing model. The first-order models consist of statistical and machine learning models such as logistic regression (logit), multivariate discriminant analysis (MDA), ANN, and kNN. The genetic algorithm, inductive decision tree, and voting scheme act as kernel modules for collaborative learning. These ideas are explored against the background of a practical application relating to financial fraud management, which exemplifies a binary classification problem.
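
As a small illustration of the simplest postprocessing model mentioned above, the sketch below compares three first-order classifiers with a hard-voting combination in scikit-learn; the per-case second-order selector described in the paper is not reproduced here, and the synthetic data is an assumption.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1500, n_features=15, weights=[0.8, 0.2], random_state=0)

first_order = [("logit", LogisticRegression(max_iter=1000)),
               ("tree", DecisionTreeClassifier(random_state=0)),
               ("knn", KNeighborsClassifier())]

# Compare each first-order model against a simple majority-vote postprocessor.
for name, clf in first_order + [("vote", VotingClassifier(first_order, voting="hard"))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name:6s} accuracy: {acc:.3f}")
```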


Learning to Prevent Inactive Student of Indonesia Open University

  • Tama, Bayu Adhi
    • Journal of Information Processing Systems
    • /
    • Vol. 11, No. 2
    • /
    • pp.165-172
    • /
    • 2015
  • The inactive student rate is a major problem in most open universities worldwide; in Indonesia, roughly 36% of students were found to be inactive in 2005. Data mining has been employed successfully to solve problems in many domains, including education. We propose a method for preventing inactive students by mining knowledge from student record systems with several state-of-the-art ensemble methods, such as Bagging, AdaBoost, Random Subspace, Random Forest, and Rotation Forest. The most influential attributes affecting whether a student becomes inactive, including the demographic attributes of marital status and employment, were successfully identified. The complexity and accuracy of the classification techniques were also compared, and the experimental results show that Rotation Forest, with a decision tree as the base classifier, delivers the best performance among the compared classifiers.
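
A hedged comparison sketch with the ensemble learners available in scikit-learn over a decision-tree base learner; the student record data is not public, so a synthetic set stands in, and Rotation Forest (the paper's best performer) is not included because scikit-learn does not ship it.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

models = {
    "single tree": DecisionTreeClassifier(random_state=0),
    "bagging": BaggingClassifier(random_state=0),        # default base learner is a decision tree
    "adaboost": AdaBoostClassifier(random_state=0),      # boosts decision stumps by default
    "random forest": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    print(f"{name:14s} accuracy: {cross_val_score(model, X, y, cv=5).mean():.3f}")
```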

Generation of Pattern Classifiers Based on Linear Nongroup CA

  • Choi, Un-Sook;Cho, Sung-Jin;Kim, Han-Doo
    • 한국멀티미디어학회논문지
    • /
    • Vol. 18, No. 11
    • /
    • pp.1281-1288
    • /
    • 2015
  • Nongroup Cellular Automata(CA) having two trees in the state transition diagram of a CA is suitable for pattern classifier which divides pattern set into two classes. Maji et al. [1] classified patterns by using multiple attractor cellular automata as a pattern classifier with dependency vector. In this paper we propose a method of generation of a pattern classifier using feature vector which is the extension of dependency vector. In addition, we propose methods for finding nonreachable states in the 0-tree of the state transition diagram of TPMACA corresponding to the given feature vector for the analysis of the state transition behavior of the generated pattern classifier.