• Title/Abstract/Keyword: boosting classification trees

Search results: 17 items, processing time 0.036 sec

Split Effect in Ensemble

  • Chung, Dong-Jun; Kim, Hyun-Joong
    • 한국통계학회 Conference Proceedings / 2005 Autumn Conference / pp.193-197 / 2005
  • Classification trees are among the most suitable base learners for ensembles. Over the past decade it has been found that bagging gives the most accurate predictions when used with unpruned trees, while boosting works best with stumps. Researchers have tried to understand the relationship between tree size and ensemble accuracy. Experiments show that large trees make boosting overfit the data set, and that stumps help avoid this. In other words, the accuracy of each classifier must be sacrificed for better weighting at each iteration. Hence, the split effect in boosting can be explained by a trade-off between the accuracy of each classifier and better weighting of the misclassified points. In bagging, combining larger trees gives more accurate predictions because bagging has no such trade-off, so it is advisable to make each classifier as accurate as possible.

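The stump-versus-large-tree trade-off described above can be sketched in scikit-learn. This is an illustrative reconstruction, not the paper's experiment: the synthetic data set and hyperparameters are arbitrary choices.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

scores = {}
for depth in (1, None):  # 1 = stump, None = fully grown tree
    base = DecisionTreeClassifier(max_depth=depth, random_state=0)
    clf = AdaBoostClassifier(base, n_estimators=100, random_state=0)
    clf.fit(X_tr, y_tr)
    scores[depth] = clf.score(X_te, y_te)

print(scores)
```

On many data sets the stump-based ensemble generalizes at least as well as the one built on fully grown trees, matching the split effect the abstract describes.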

Tree size determination for classification ensemble

  • Choi, Sung Hoon; Kim, Hyunjoong
    • Journal of the Korean Data and Information Science Society / Vol. 27, No. 1 / pp.255-264 / 2016
  • Classification is predictive modeling for a categorical target variable. Classification ensemble methods, which achieve better accuracy by combining multiple classifiers, have become a powerful machine learning and data mining paradigm. Well-known ensemble methodologies include boosting, bagging, and random forest. In this article, we assume that decision trees are used as the classifiers in the ensemble, and we hypothesize that tree size affects classification accuracy. To study how tree size influences accuracy, we performed experiments on twenty-eight data sets, comparing the performance of the ensemble algorithms bagging, double-bagging, boosting, and random forest with different tree sizes.
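The kind of experiment the abstract describes, varying tree size inside an ensemble, can be sketched as follows. This is a stand-in on one synthetic data set, not a reproduction of the paper's twenty-eight data sets or its algorithm grid.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=15, random_state=1)

for depth in (1, 3, None):  # None lets each tree grow to full size
    bag = BaggingClassifier(
        DecisionTreeClassifier(max_depth=depth, random_state=1),
        n_estimators=50, random_state=1)
    acc = cross_val_score(bag, X, y, cv=5).mean()
    print(f"max_depth={depth}: bagging CV accuracy {acc:.3f}")
```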

Classification of Class-Imbalanced Data: Effect of Over-sampling and Under-sampling of Training Data

  • 김지현; 정종빈
    • 응용통계연구 / Vol. 17, No. 3 / pp.445-457 / 2004
  • In two-class classification problems where the numbers of observations in the two classes are severely imbalanced, it is common practice to artificially balance the class sizes before analysis. This study examines the validity of this way of composing the training sample, and also investigates how the composition of the training sample affects boosting. Experiments on 12 real data sets lead to the conclusion that, when boosting is applied with tree models, it is better to leave the training sample as it is.
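The comparison the study describes can be sketched on synthetic imbalanced data (the 12 real data sets are not reproduced here): boosting trees on the raw training sample versus on a randomly under-sampled, balanced one.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

# Roughly 9:1 class imbalance
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

def fit_score(Xt, yt):
    clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(Xt, yt)
    return balanced_accuracy_score(y_te, clf.predict(X_te))

# Random under-sampling of the majority class down to a 1:1 ratio
rng = np.random.default_rng(0)
minority = np.flatnonzero(y_tr == 1)
majority = rng.choice(np.flatnonzero(y_tr == 0), size=minority.size,
                      replace=False)
idx = np.concatenate([minority, majority])

print("raw training set:   ", fit_score(X_tr, y_tr))
print("under-sampled (1:1):", fit_score(X_tr[idx], y_tr[idx]))
```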

A review of tree-based Bayesian methods

  • Linero, Antonio R.
    • Communications for Statistical Applications and Methods / Vol. 24, No. 6 / pp.543-559 / 2017
  • Tree-based regression and classification ensembles form a standard part of the data-science toolkit. Many commonly used methods take an algorithmic view, proposing greedy methods for constructing decision trees; examples include the classification and regression trees algorithm, boosted decision trees, and random forests. Recent history has seen a surge of interest in Bayesian techniques for constructing decision tree ensembles, with these methods frequently outperforming their algorithmic counterparts. The goal of this article is to survey the landscape surrounding Bayesian decision tree methods and to discuss recent modeling and computational developments. We provide connections between Bayesian tree-based methods and existing machine learning techniques, and outline several recent theoretical developments establishing frequentist consistency and rates of convergence for the posterior distribution. The methodology we present is applicable to a wide variety of statistical tasks including regression, classification, and modeling of count data, among many others. We illustrate the methodology on both simulated and real datasets.

Effectiveness of Repeated Examination to Diagnose Enterobiasis in Nursery School Groups

  • Remm, Mare; Remm, Kalle
    • Parasites, Hosts and Diseases / Vol. 47, No. 3 / pp.235-241 / 2009
  • The aim of this study was to estimate the benefit from repeated examinations in the diagnosis of enterobiasis in nursery school groups, and to test the effectiveness of individual-based risk predictions using different methods. A total of 604 children were examined using double, and 96 using triple, anal swab examinations. The questionnaires for parents, structured observations, and interviews with supervisors were used to identify factors of possible infection risk. In order to model the risk of enterobiasis at individual level, a similarity-based machine learning and prediction software Constud was compared with data mining methods in the Statistica 8 Data Miner software package. Prevalence according to a single examination was 22.5%; the increase as a result of double examinations was 8.2%. Single swabs resulted in an estimated prevalence of 20.1% among children examined 3 times; double swabs increased this by 10.1%, and triple swabs by 7.3%. Random forest classification, boosting classification trees, and Constud correctly predicted about 2/3 of the results of the second examination. Constud estimated a mean prevalence of 31.5% in groups. Constud was able to yield the highest overall fit of individual-based predictions while boosting classification tree and random forest models were more effective in recognizing Enterobius positive persons. As a rule, the actual prevalence of enterobiasis is higher than indicated by a single examination. We suggest using either the values of the mean increase in prevalence after double examinations compared to single examinations or group estimations deduced from individual-level modelled risk predictions.

Cognitive Impairment Prediction Model Using AutoML and Lifelog

  • Hyunchul Choi; Chiho Yoon; Sae Bom Lee
    • 한국컴퓨터정보학회논문지 / Vol. 28, No. 11 / pp.53-63 / 2023
  • This study developed a cognitive impairment prediction model using automated machine learning (AutoML) as a screening tool for dementia prevention in older adults. The study used the "wearable lifelog data for a dementia high-risk group" of the National Information Society Agency (한국지능정보사회진흥원). The analysis used PyCaret 3.0.0 in a Google Colab environment to select the five models with the best classification performance, combined them through ensemble learning, and then evaluated the final model. The results showed high predictive performance in the order of Voting Classifier, Gradient Boosting Classifier, Extreme Gradient Boosting, Light Gradient Boosting Machine, Extra Trees Classifier, and Random Forest Classifier. In particular, "average breaths per minute during sleep" and "average heart rate per minute during sleep" were identified as the most important features. These results suggest that machine learning combined with lifelog data should be considered as a means of managing and preventing cognitive impairment in older adults more effectively.
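The study's workflow (select strong models, then combine them by voting) was run in PyCaret; a rough plain-scikit-learn approximation on synthetic data, using model families named in the abstract, might look like the following. None of this is the study's actual pipeline or data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (ExtraTreesClassifier,
                              GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=800, n_features=12, random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=7)

voter = VotingClassifier(
    estimators=[("gbc", GradientBoostingClassifier(random_state=7)),
                ("et", ExtraTreesClassifier(random_state=7)),
                ("rf", RandomForestClassifier(random_state=7))],
    voting="soft")  # soft voting averages predicted class probabilities
voter.fit(X_tr, y_tr)
print("voting ensemble accuracy:", voter.score(X_te, y_te))
```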

Boosting neural networks with an application to bankruptcy prediction

  • 김명종; 강대기
    • 한국정보통신학회 Conference Proceedings / 한국해양정보통신학회 2009 Spring Conference / pp.872-875 / 2009
  • In a bankruptcy prediction model, accuracy is a crucial performance measure because of its significant economic impact. Ensembling is a widely used method for improving the performance of classification and prediction models. Two popular ensemble methods, bagging and boosting, have been applied with great success to various machine learning problems, mostly using decision trees as base classifiers. In this paper, we analyze how boosted neural networks improve on traditional neural networks for bankruptcy prediction. Experimental results on Korean firms indicate that the boosted neural networks outperformed traditional neural networks.

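Boosting neural networks can be sketched with weighted resampling: since scikit-learn's MLPClassifier does not accept the per-example weights AdaBoost normally passes to its base learner, each round instead draws a training sample in proportion to the current weights. This is an illustrative reconstruction, not the authors' implementation, and all parameters are arbitrary.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=600, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
w = np.full(len(y_tr), 1 / len(y_tr))  # uniform initial example weights
models, alphas = [], []
for _ in range(5):
    idx = rng.choice(len(y_tr), size=len(y_tr), p=w)  # weighted resample
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300,
                        random_state=0).fit(X_tr[idx], y_tr[idx])
    miss = net.predict(X_tr) != y_tr
    err = np.clip(w[miss].sum(), 1e-10, 1 - 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)   # AdaBoost vote weight
    w *= np.exp(alpha * np.where(miss, 1, -1))
    w /= w.sum()
    models.append(net)
    alphas.append(alpha)

# Weighted majority vote over the boosted networks (labels coded as ±1)
votes = sum(a * (2 * m.predict(X_te) - 1) for m, a in zip(models, alphas))
print("boosted MLP accuracy:", np.mean((votes > 0) == y_te))
```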

Pruning the Boosting Ensemble of Decision Trees

  • Yoon, Young-Joo; Song, Moon-Sup
    • Communications for Statistical Applications and Methods / Vol. 13, No. 2 / pp.449-466 / 2006
  • We propose using variable selection methods based on penalized regression for pruning decision tree ensembles. Pruning methods based on the LASSO and SCAD are compared with the cluster pruning method in comparative studies on artificial and real data sets. According to the results, the proposed penalized-regression methods reduce the size of boosting ensembles without significantly decreasing accuracy, and they perform better than the cluster pruning method. In the presence of classification noise, the proposed pruning methods can mitigate the weakness of AdaBoost to some degree.
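The LASSO-based pruning idea can be sketched as follows: fit a boosting ensemble, regress the ±1-coded labels on the individual trees' predictions with a Lasso penalty, and keep only trees receiving nonzero coefficients. The penalty level here is an arbitrary illustrative choice, not the paper's tuned value.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import Lasso

X, y = make_classification(n_samples=500, random_state=0)
ens = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X, y)

# Column t holds tree t's ±1 predictions on the training data
P = np.column_stack([2 * t.predict(X) - 1 for t in ens.estimators_])
lasso = Lasso(alpha=0.01).fit(P, 2 * y - 1)
kept = np.flatnonzero(lasso.coef_)  # trees surviving the pruning
print(f"kept {kept.size} of {P.shape[1]} trees")
```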

Study on the ensemble methods with kernel ridge regression

  • Kim, Sun-Hwa; Cho, Dae-Hyeon; Seok, Kyung-Ha
    • Journal of the Korean Data and Information Science Society / Vol. 23, No. 2 / pp.375-383 / 2012
  • The purpose of ensemble methods is to increase prediction accuracy by combining many classifiers. Recent studies show that random forests and forward stagewise regression achieve good accuracy in classification problems; however, they have large prediction errors near the separation boundary because they use decision trees as base learners. In this study, we use kernel ridge regression instead of decision trees in random forests and boosting. The usefulness of the proposed ensemble methods is shown by simulation results on the prostate cancer and Boston housing data.
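Swapping kernel ridge regression in as the base learner of a bagged ensemble can be sketched as follows; the ±1 label coding, kernel, and parameters are illustrative choices, not the paper's settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
z_tr = 2 * y_tr - 1                      # regress on ±1-coded labels

rng = np.random.default_rng(0)
preds = []
for _ in range(25):                      # bootstrap rounds
    idx = rng.integers(0, len(z_tr), len(z_tr))
    krr = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1)
    preds.append(krr.fit(X_tr[idx], z_tr[idx]).predict(X_te))

# Average the regression outputs, then threshold at 0 to classify
y_hat = (np.mean(preds, axis=0) > 0).astype(int)
print("bagged KRR accuracy:", np.mean(y_hat == y_te))
```

Because kernel ridge regression produces smooth decision functions, averaging these base learners avoids the piecewise-constant boundaries that make tree-based ensembles inaccurate near the separation boundary.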

Development of Machine Learning Models Classifying Nitrogen Deficiency Based on Leaf Chemical Properties in Shiranuhi (Citrus unshiu × C. sinensis)

  • 박원표; 허성
    • 한국자원식물학회지 / Vol. 35, No. 2 / pp.192-200 / 2022
  • In this study, we developed machine learning models that classify nitrogen deficiency in Shiranuhi leaves from the concentrations of mineral nutrients other than nitrogen, based on measured leaf nutrient profiles. To this end, 36 data points from leaf samples of nitrogen-deficient and control trees were expanded by bootstrapping into a training data set of about 1,000 records. Testing the models trained on these data showed that the gradient boosting model had the best classification performance. When nitrogen content cannot be analyzed directly, this model can flag trees likely to be nitrogen-deficient based on the mineral composition of their leaves, prompting an exact nitrogen measurement and, on that basis, appropriate nitrogen fertilization.
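The described pipeline (bootstrap a 36-sample table up to about 1,000 rows, then fit a gradient boosting classifier) can be sketched with synthetic stand-in data; the study's actual nutrient features are not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Stand-in for the 36 measured leaf samples (nutrient levels, N status)
X_small, y_small = make_classification(n_samples=36, n_features=8,
                                       random_state=3)

rng = np.random.default_rng(3)
idx = rng.integers(0, len(y_small), 1000)   # bootstrap to 1,000 rows
X_boot, y_boot = X_small[idx], y_small[idx]

X_tr, X_te, y_tr, y_te = train_test_split(X_boot, y_boot, random_state=3)
gbc = GradientBoostingClassifier(random_state=3).fit(X_tr, y_tr)
print("gradient boosting accuracy:", gbc.score(X_te, y_te))
```

One caveat with this design: bootstrapping before the train/test split, as sketched here, lets duplicated rows appear on both sides of the split, so the test accuracy is optimistic; splitting the original 36 samples first and resampling only the training portion gives a more honest estimate.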