• Title/Abstract/Keyword: Interpretability of trees.


Interpretability Comparison of Popular Decision Tree Algorithms

  • 홍정식;황근성
    • 산업경영시스템학회지 / Vol. 44, No. 2 / pp.15-23 / 2021
  • Most open-source decision tree algorithms are based on one of three splitting criteria (Entropy, Gini Index, and Gain Ratio), so the advantages and disadvantages of these three popular algorithms need to be studied more thoroughly. Previous comparisons of the three algorithms have mainly focused on predictive performance. In this work, we conduct a comparative experiment on the splitting criteria of the three decision trees, focusing on their interpretability. Depth, homogeneity, coverage, lift, and stability are used as indicators for measuring interpretability. To measure the stability of decision trees, we present measures of the stability of the root node and of the dominating rules, based on a measure of tree similarity. Using 10 datasets collected from UCI and Kaggle, we compare the interpretability of DT (Decision Tree) algorithms based on the three splitting criteria. The results show that the GR (Gain Ratio) based DT algorithm performs well in terms of lift and homogeneity, while the GINI (Gini Index) and ENT (Entropy) based DT algorithms perform well in terms of coverage. With respect to stability, considering both the similarity of the dominating rules and the similarity of the root node, the DT algorithm based on the ENT splitting criterion shows the best results.
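For readers unfamiliar with the three criteria compared above, the following minimal sketch (not taken from the paper; the function names and the toy split are illustrative assumptions) shows how Entropy-based information gain, Gini-index gain, and Gain Ratio can each be computed for a candidate split.

```python
# Minimal sketch of the three splitting criteria the abstract compares:
# Entropy (information gain), Gini index, and Gain Ratio.
# All names and the toy data below are illustrative assumptions.
from collections import Counter
import math

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def information_gain(parent, children):
    """Entropy reduction achieved by a split (ENT criterion)."""
    n = len(parent)
    return entropy(parent) - sum(len(ch) / n * entropy(ch) for ch in children)

def gini_gain(parent, children):
    """Gini-impurity reduction achieved by a split (GINI criterion)."""
    n = len(parent)
    return gini(parent) - sum(len(ch) / n * gini(ch) for ch in children)

def gain_ratio(parent, children):
    """Information gain normalized by split information (GR criterion, as in C4.5)."""
    n = len(parent)
    split_info = -sum(len(ch) / n * math.log2(len(ch) / n) for ch in children if ch)
    return information_gain(parent, children) / split_info if split_info > 0 else 0.0

# Toy usage: a binary split of a two-class parent node.
parent = ["yes"] * 6 + ["no"] * 4
children = [["yes"] * 5 + ["no"], ["yes"] + ["no"] * 3]
print(information_gain(parent, children), gini_gain(parent, children), gain_ratio(parent, children))
```

Gain Ratio differs from information gain only by the normalization with split information, which is what penalizes splits into many small branches.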

Interesting Node Finding Criteria for Regression Trees

  • 이영섭
    • 응용통계연구 / Vol. 16, No. 1 / pp.45-53 / 2003
  • Regression trees, one of the decision tree methods, are used to predict a continuous response variable. When growing the tree, traditional splitting criteria combine the impurities of the left and right child nodes. The new splitting criterion proposed in this paper, however, selects only the child node of interest and ignores the other, so the two impurities are no longer combined. The resulting tree may be unbalanced, but it is easy to understand: by finding the subset of interest as early as possible, the tree can be expressed with only a few conditions, giving somewhat lower accuracy but much higher explanatory power.
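The following sketch illustrates this contrast under assumed definitions rather than the paper's exact formulation: a traditional regression-tree criterion combines the weighted variances of both child nodes, while a one-sided criterion scores a split by the variance of the interesting child node alone.

```python
# Illustrative sketch (not the paper's exact criterion): for a regression tree,
# a traditional split score combines the impurities of both child nodes, while a
# one-sided score looks only at the child node of interest. Variance is used as
# the impurity measure; all names and values are assumptions for illustration.
import statistics

def variance(y):
    """Impurity of a node: variance of the response values (0 for a single value)."""
    return statistics.pvariance(y) if len(y) > 1 else 0.0

def traditional_score(y_left, y_right):
    """Classical criterion: weighted sum of both children's impurities (lower is better)."""
    n = len(y_left) + len(y_right)
    return (len(y_left) / n) * variance(y_left) + (len(y_right) / n) * variance(y_right)

def one_sided_score(y_interesting):
    """One-sided criterion: impurity of the interesting child only (lower is better);
    the other child is ignored, which can yield an unbalanced but easy-to-read tree."""
    return variance(y_interesting)

# Toy usage: a split that isolates a small, very homogeneous group of high responses.
y_left = [9.8, 10.1, 10.0]           # interesting node: nearly pure
y_right = [1.2, 3.4, 7.8, 5.5, 2.0]  # ignored by the one-sided criterion
print(traditional_score(y_left, y_right), one_sided_score(y_left))
```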

Rule Selection Method in Decision Tree Models

  • 손지은;김성범
    • 대한산업공학회지 / Vol. 40, No. 4 / pp.375-381 / 2014
  • Data mining is the process of discovering useful patterns or information from large amounts of data. The decision tree is a data mining algorithm that can be used for both classification and prediction, and it has been widely applied because of its flexibility and interpretability. Decision trees for classification generally generate a number of rules, each belonging to one of the predefined categories, and several rules may belong to the same category. In this case, it is necessary to determine the significance of each rule so as to provide users with rule priorities. The purpose of this paper is to propose a rule selection method for classification tree models that accommodates the number of observations, accuracy, and effectiveness of each rule. Our experiments demonstrate that the proposed method produces better performance than existing rule selection methods.
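As a rough illustration of the kind of rule scoring described above (the paper's exact formulation and weights are not reproduced here), the sketch below ranks classification-tree rules within each predicted class by a weighted combination of support (number of covered observations) and accuracy. The Rule fields, weights, and toy rules are all assumptions.

```python
# Minimal sketch (assumed definitions, not the paper's exact method) of ranking
# rules extracted from a classification tree by how many observations they
# cover (support) and how often they are correct (accuracy).
from dataclasses import dataclass

@dataclass
class Rule:
    description: str      # human-readable condition, e.g. "tenure <= 6"
    predicted_class: str
    n_covered: int        # observations satisfying the rule's conditions
    n_correct: int        # covered observations whose label matches predicted_class

def rule_score(rule: Rule, n_total: int, w_support: float = 0.5, w_accuracy: float = 0.5):
    """Weighted combination of support and accuracy; the weights are illustrative."""
    support = rule.n_covered / n_total
    accuracy = rule.n_correct / rule.n_covered if rule.n_covered else 0.0
    return w_support * support + w_accuracy * accuracy

def select_top_rules(rules, n_total, k=1):
    """Return the k highest-scoring rules for each predicted class."""
    by_class = {}
    for rule in rules:
        by_class.setdefault(rule.predicted_class, []).append(rule)
    return {
        cls: sorted(rs, key=lambda r: rule_score(r, n_total), reverse=True)[:k]
        for cls, rs in by_class.items()
    }

# Toy usage: two rules predict "churn"; the better-covering, accurate one ranks first.
rules = [
    Rule("tenure <= 6", "churn", n_covered=120, n_correct=95),
    Rule("tenure <= 6 and contract == 'monthly'", "churn", n_covered=60, n_correct=55),
    Rule("tenure > 24", "stay", n_covered=300, n_correct=280),
]
print(select_top_rules(rules, n_total=500, k=1))
```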