• Title/Abstract/Keywords: decision trees


Comparison among Algorithms for Decision Tree based on Sasang Constitutional Clinical Data

  • 진희정;이수경;이시우
    • Korean Journal of Oriental Medicine (한국한의학연구원논문집), Vol. 17, No. 2, pp. 121-127, 2011
  • Objectives: In the clinical field, it is important to understand the factors that affect a certain disease or symptom, and many researchers apply data mining methods to the clinical data they have collected. One of the most efficient data mining methods is decision tree induction. Many researchers have tried to find the best split criterion for decision trees, yet various split criteria coexist. Methods: In this paper, we applied several split criteria (information gain, Gini index, chi-square) to Sasang constitutional clinical data and compared the resulting decision trees in order to find the optimal split criterion. Results & Conclusion: By analyzing the decision trees produced with the different split measures, we found that BMI and body measurement factors are important for Sasang constitution. The decision tree using information gain had the highest accuracy; however, which split criterion yields the highest accuracy depends on the given data, so researchers should look for the split criterion appropriate to their data by understanding its attributes.
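
The three split criteria the paper compares can be sketched in a few lines. The class counts below are made up for illustration, not taken from the Sasang clinical data:

```python
import math

def entropy(counts):
    """Shannon entropy of a node's class-count vector (bits)."""
    n = sum(counts)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

def gini(counts):
    """Gini index of a node's class-count vector."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def chi_square(left, right):
    """Pearson chi-square statistic for the 2 x k split table."""
    total = [l + r for l, r in zip(left, right)]
    n = sum(total)
    stat = 0.0
    for side in (left, right):
        side_n = sum(side)
        for obs, col in zip(side, total):
            exp = side_n * col / n
            if exp:
                stat += (obs - exp) ** 2 / exp
    return stat

def weighted(metric, left, right):
    """Size-weighted child impurity, as used to score a candidate split."""
    n = sum(left) + sum(right)
    return (sum(left) / n) * metric(left) + (sum(right) / n) * metric(right)

# Hypothetical candidate split: per-class counts in each child node
left, right = [30, 10], [5, 25]
parent = [l + r for l, r in zip(left, right)]

info_gain = entropy(parent) - weighted(entropy, left, right)
gini_gain = gini(parent) - weighted(gini, left, right)
print(round(info_gain, 3), round(gini_gain, 3), round(chi_square(left, right), 1))
```

All three criteria reward the same thing, purer children, but on different scales, which is why the best-performing tree can change with the data.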

Splitting Decision Tree Nodes with Multiple Target Variables

  • 김성준
    • Proceedings of the 2003 Spring Conference of the Korea Fuzzy and Intelligent Systems Society (한국퍼지및지능시스템학회 2003년도 춘계 학술대회), pp. 243-246, 2003
  • Data mining is the process of discovering useful patterns for decision making in large amounts of data, and it has recently received much attention in a wide range of business and engineering fields. Classifying a group into subgroups is one of the most important subjects in data mining, and tree-based methods, known as decision trees, provide an efficient way of finding classification models. The primary concern in tree learning is to minimize node impurity, which is evaluated using a target variable in the data set. However, there are situations where multiple target variables should be taken into account, such as manufacturing process monitoring, marketing science, and clinical and health analysis. The purpose of this article is to present several methods for measuring node impurity that are applicable to data sets with multiple target variables. Numerical examples are given for illustration, with discussion.

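
One way to extend node impurity to several target variables is to average per-target impurities; this is only one of the possible measures, and the column names and uniform weights below are hypothetical:

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a list of categorical labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def multi_target_impurity(rows, targets, weights=None):
    """Weighted average Gini impurity over several target columns."""
    weights = weights or [1.0 / len(targets)] * len(targets)
    return sum(w * gini([row[t] for row in rows])
               for w, t in zip(weights, targets))

# Hypothetical node in a manufacturing data set with two categorical targets
node = [
    {"defect": "A", "pass": "yes"},
    {"defect": "A", "pass": "yes"},
    {"defect": "B", "pass": "no"},
    {"defect": "B", "pass": "yes"},
]
print(multi_target_impurity(node, ["defect", "pass"]))
```

A split is then chosen to minimize this combined impurity over both children, exactly as in the single-target case.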

Rule Selection Method in Decision Tree Models

  • 손지은;김성범
    • Journal of the Korean Institute of Industrial Engineers (대한산업공학회지), Vol. 40, No. 4, pp. 375-381, 2014
  • Data mining is a process of discovering useful patterns or information in large amounts of data. The decision tree is a data mining algorithm that can be used for both classification and prediction, and it has been widely applied because of its flexibility and interpretability. Decision trees for classification generally generate a number of rules, each belonging to one of the predefined categories, and several rules may belong to the same category. In this case, it is necessary to determine the significance of each rule so that users can be given the rules' priorities. The purpose of this paper is to propose a rule selection method for classification tree models that accommodates the number of observations, accuracy, and effectiveness of each rule. Our experiments demonstrate that the proposed method produces better performance than existing rule selection methods.
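
How coverage and accuracy might be combined into a rule priority can be sketched as follows. The weighting scheme and the rule statistics are illustrative assumptions, not the scoring function proposed in the paper:

```python
def rule_score(n_obs, accuracy, total_obs, w=(0.5, 0.5)):
    """Toy priority score: weighted mix of support (coverage) and accuracy.
    The actual weighting in the paper may differ; this only shows the idea."""
    support = n_obs / total_obs
    return w[0] * support + w[1] * accuracy

rules = [  # (rule id, observations covered, accuracy) -- made-up values
    ("r1", 120, 0.90),
    ("r2", 15, 0.99),
    ("r3", 60, 0.80),
]
total = sum(n for _, n, _ in rules)
ranked = sorted(rules, key=lambda r: rule_score(r[1], r[2], total), reverse=True)
print([r[0] for r in ranked])
```

Note how r2, despite its near-perfect accuracy, ranks last because it covers too few observations: a combined score prevents rare but lucky rules from dominating.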

A Methodology for Internet Customer Segmentation Using Decision Trees

  • Cho, Y.B.;Kim, S.H.
    • Proceedings of the 2003 Spring Conference of the Korea Intelligent Information Systems Society (한국지능정보시스템학회 2003년도 춘계학술대회), pp. 206-213, 2003
  • Applying existing decision tree algorithms to Internet retail customer classification tends to construct a bushy tree because of imprecise source data, and even exhaustive analysis may not guarantee business effectiveness, although its results are derived from fully detailed segments. It is therefore necessary to determine an appropriate number of segments at a certain level of abstraction. In this study, we developed a stopping rule that considers the total amount of information gained while generating a rule tree. In addition to growing the tree forward from the root to intermediate nodes at a certain level of abstraction, the decision tree is refined by backtracking pruning using misclassification loss information.

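
A stopping rule based on accumulated information gain, in the spirit of the one described above, might look like this. The budget and minimum-gain thresholds are invented for illustration and are not the paper's values:

```python
def should_stop(path_gains, total_budget=1.0, min_gain=0.01):
    """Toy stopping rule for growing one branch of the tree.
    path_gains: information gains of the splits from the root down this path.
    Stop when the accumulated gain exhausts a budget (the branch is already
    informative enough) or when the last split added almost nothing."""
    return sum(path_gains) >= total_budget or bool(
        path_gains and path_gains[-1] < min_gain
    )

print(should_stop([0.4, 0.3, 0.35]))  # budget exhausted -> True
print(should_stop([0.4, 0.005]))      # negligible marginal gain -> True
print(should_stop([0.4, 0.3]))        # keep splitting -> False
```

Capping the total gain per path is one way to hold the whole tree at a chosen level of abstraction instead of growing a bushy tree and relying on pruning alone.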

Multivariate Decision Tree for High-dimensional Response Vector with Its Application

  • Lee, Seong-Keon
    • Communications for Statistical Applications and Methods, Vol. 11, No. 3, pp. 539-551, 2004
  • Multiple responses are often observed in many application fields, such as a customer's time-of-day pattern of Internet use. Decision trees for multiple responses have been constructed by many researchers. However, if the response is a high-dimensional vector that can be thought of as a discretized function, fitting a multivariate decision tree may be unsuccessful. Yu and Lambert (1999) suggested the spline tree and the principal component tree, which analyze a high-dimensional response vector using dimension-reduction techniques. In this paper, we propose the factor tree, which is more interpretable and competitive. Using data from a Korean Internet company, we then analyze users' time-of-day patterns.

Investment, Export, and Exchange Rate on Prediction of Employment with Decision Tree, Random Forest, and Gradient Boosting Machine Learning Models

  • 이재득
    • Korea Trade Review (무역학회지), Vol. 46, No. 2, pp. 281-299, 2021
  • This paper analyzes the feasibility of using machine learning methods to forecast employment. Machine learning methods (a decision tree, an artificial neural network, and ensemble models such as a random forest and a gradient boosting regression tree) were used to forecast employment in the Busan regional economy. Comparing their predictive abilities yielded the following main findings. First, machine learning methods can predict employment well. Second, the employment forecasts of the decision tree models varied somewhat with tree depth. Third, the artificial neural network model did not show high predictive power. Fourth, the ensemble models (the random forest and the gradient boosting regression tree) showed higher predictive power. Since machine learning methods can predict employment accurately, their use should be extended to improve the accuracy of employment forecasting.
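
The gradient-boosting idea behind the best-performing ensemble above, fitting each new tree to the residuals of the current ensemble, can be sketched with depth-1 regression stumps. This is a minimal pure-Python illustration on made-up data, not the authors' setup:

```python
def stump_fit(xs, ys):
    """Fit a depth-1 regression tree (stump): one threshold, two leaf means."""
    best = None
    for t in sorted(set(xs))[:-1]:
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = sum((y - lm) ** 2 for y in left) + sum((y - rm) ** 2 for y in right)
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def boost(xs, ys, rounds=50, lr=0.1):
    """Gradient boosting for squared loss: each stump fits current residuals."""
    pred = [0.0] * len(xs)
    stumps = []
    for _ in range(rounds):
        resid = [y - p for y, p in zip(ys, pred)]
        s = stump_fit(xs, resid)
        stumps.append(s)
        pred = [p + lr * s(x) for p, x in zip(pred, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)

# Toy step-shaped series standing in for a predictor/employment relation
xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [1.0, 1.0, 1.0, 1.0, 3.0, 3.0, 3.0, 3.0]
model = boost(xs, ys)
```

Each round shrinks the residuals by a factor controlled by the learning rate, which is why the ensemble can fit patterns a single shallow tree cannot.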

Using CART to Evaluate Performance of Tree Model

  • 정용규;권나연;이영호
    • Journal of Service Research and Studies (서비스연구), Vol. 3, No. 1, pp. 9-16, 2013
  • The most representative general-purpose classification technique, one that demands little effort from the data analyst and whose results users can easily understand, is the decision tree developed by Breiman. Two core ideas underlie decision trees: recursively partitioning the space of the independent variables, and pruning with evaluation data. In classification problems, the response variable must be categorical. Recursive partitioning divides the variable space into non-overlapping multidimensional rectangles, where the variables may be continuous, binary, or ordinal. In this paper, to evaluate performance in classifying new cases, we examine the accuracy, precision, and recall of classification trees.

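
The accuracy, precision, and recall measures used in the evaluation can be computed directly from the confusion counts, as in this sketch (the labels are made up):

```python
def classification_metrics(y_true, y_pred, positive="yes"):
    """Accuracy, precision, and recall for a binary classification result."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return {
        "accuracy": correct / len(y_true),            # all correct / all cases
        "precision": tp / (tp + fp) if tp + fp else 0.0,  # of predicted positives
        "recall": tp / (tp + fn) if tp + fn else 0.0,     # of actual positives
    }

y_true = ["yes", "yes", "no", "no", "yes", "no"]
y_pred = ["yes", "no", "no", "yes", "yes", "no"]
print(classification_metrics(y_true, y_pred))
```

Reporting all three matters for trees, since a tree that is accurate overall can still have poor recall on a minority class.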

Interesting Node Finding Criteria for Regression Trees

  • 이영섭
    • The Korean Journal of Applied Statistics (응용통계연구), Vol. 16, No. 1, pp. 45-53, 2003
  • The regression tree, one of the decision tree techniques, is used to predict a continuous response variable. When growing the tree, conventional splitting criteria combine the impurities of the left and right child nodes. The new splitting criterion proposed in this paper, however, selects only the child node of interest and ignores the other, so the two impurities are no longer combined. The resulting tree may be unbalanced but is easy to understand: by finding the subset of interest as early as possible, it can be expressed with only a few conditions, and although its accuracy is somewhat lower, its explanatory power is very high.
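
The one-sided idea, scoring only the child node of interest instead of combining both children's impurities, can be sketched for a regression tree as follows. Taking the higher-mean child as the interesting one is an illustrative choice, not necessarily the paper's definition:

```python
import statistics

def one_sided_split(xs, ys, min_size=2):
    """Pick the threshold whose higher-mean child has the smallest variance.
    Unlike conventional criteria, the other child's impurity is ignored."""
    best_t, best_var = None, float("inf")
    for t in sorted(set(xs))[:-1]:
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        node = max(left, right, key=statistics.fmean)  # the child of interest
        if len(node) < min_size:
            continue
        var = statistics.pvariance(node)  # lower variance = purer node
        if var < best_var:
            best_t, best_var = t, var
    return best_t, best_var

# Toy data: responses jump for x > 3, so the interesting subset is found early
xs = [1, 2, 3, 4, 5, 6]
ys = [1.0, 1.2, 0.9, 5.0, 5.1, 4.9]
print(one_sided_split(xs, ys))
```

Here a single condition, x > 3, isolates a pure high-response subset, which is exactly the kind of short, interpretable description the criterion aims for.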

Building a Model to Estimate Soil Organic Carbon Using a Decision Tree Algorithm

  • 유수홍;허준;정재훈;한수희
    • Journal of the Korean Society for Geospatial Information Science (대한공간정보학회지), Vol. 18, No. 3, pp. 29-35, 2010
  • Soil organic carbon supports forest formation and is one of the important factors influencing global warming by regulating the amount of atmospheric carbon dioxide. Determining the exact distribution of a soil factor would require sampling every region, which is impractical, so a model that can estimate the distribution of soil organic carbon would be highly useful. In this study, a decision tree algorithm was used to build a model that identifies the environmental factors associated with relatively high soil organic carbon from slope, aspect, Digital Elevation Model (DEM), and vegetation-type data; accuracy was verified by 10-fold cross-validation. The See5 and Weka software packages were used. With See5, the amount of soil organic carbon in the surface layer was determined by vegetation type, while in the middle layer it varied with the DEM; the model's accuracy was 70.8% for the surface layer and 64.7% for the middle layer. With Weka, the surface-layer result was the same as with See5, but for the middle layer, aspect was found to matter in addition to the DEM and vegetation type; the model's accuracy was 68.98% for the surface layer and 60.65% for the middle layer. This study is expected to be useful for estimating soil organic carbon and producing soil organic carbon maps.
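
The 10-fold cross-validation used to verify the models amounts to a simple fold assignment like the one below; the sample size and seed are illustrative, since the paper does not specify them:

```python
import random

def ten_fold_indices(n, seed=0):
    """Assign n sample indices to 10 disjoint folds for cross-validation.
    Each fold serves once as the test set while the rest train the tree."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::10] for i in range(10)]

folds = ten_fold_indices(97)
print([len(f) for f in folds])
```

Averaging the per-fold accuracies then yields figures comparable to the 70.8% / 64.7% reported above.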

An Efficient Pedestrian Detection Approach Using a Novel Split Function of Hough Forests

  • Do, Trung Dung;Vu, Thi Ly;Nguyen, Van Huan;Kim, Hakil;Lee, Chongho
    • Journal of Computing Science and Engineering, Vol. 8, No. 4, pp. 207-214, 2014
  • In pedestrian detection applications, one of the most popular frameworks to have received extensive attention in recent years is widely known as the Hough forest (HF). To improve detection accuracy, this paper proposes a novel split function that exploits the statistical information of the training set stored in each node during construction of the forest. The proposed split function makes the trees in the forest more robust to noise and illumination changes. Moreover, the errors at each stage of training the forest are minimized with a global loss function that helps the trees track harder training samples. Once the forest is trained, the standard HF detector is applied to search for and localize pedestrian instances in the image. Experimental results show that the detection performance of the proposed framework improves significantly over the standard HF and the alternating decision forest (ADF) on several public datasets.