• Title/Abstract/Keyword: Decision-trees

Search results: 312 (processing time: 0.027 s)

Threshold를 이용한 의사결정나무의 생성 (Induction of Decision Trees Using the Threshold Concept)

  • 이후석;김재련
    • Journal of Society of Korea Industrial and Systems Engineering
    • /
    • Vol. 21, No. 45
    • /
    • pp.57-65
    • /
    • 1998
  • This paper addresses data classification using the induction of decision trees. A weakness of other decision tree induction techniques is that the resulting trees are too large, because they keep splitting until every leaf node contains a single class. Our study aims both to overcome this weakness and to construct decision trees that are small and accurate. First, we construct decision trees using a classification threshold and an exception threshold in the construction stage. Next, we present a two-stage pruning method using the classification threshold and reduced-error pruning in the pruning stage. Empirical results show that our method obtains decision trees that are accurate and small.

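The abstract does not spell out the exact stopping rule, so here is a minimal Python sketch of the idea of halting splits early with a classification threshold while discounting rare "exception" classes. The function and parameter semantics are assumptions for illustration, not the paper's definitions.

```python
import numpy as np

def gini(y):
    """Gini impurity of an integer label vector."""
    p = np.bincount(y) / len(y)
    return 1.0 - np.sum(p ** 2)

def grow(X, y, cls_thr=0.9, exc_thr=0.05):
    """Sketch: stop splitting once one class holds at least `cls_thr` of a
    node's examples, after discounting 'exception' classes whose share is
    below `exc_thr` (assumed semantics of the two thresholds)."""
    counts = np.bincount(y)
    kept = counts[counts / len(y) >= exc_thr].sum()   # non-exception mass
    if counts.max() / max(kept, 1) >= cls_thr:
        return {"leaf": int(counts.argmax())}         # early leaf -> smaller tree
    # Otherwise take the usual Gini-minimising axis-parallel split.
    best = None
    for f in range(X.shape[1]):
        for cut in np.unique(X[:, f])[:-1]:
            m = X[:, f] <= cut
            score = m.mean() * gini(y[m]) + (~m).mean() * gini(y[~m])
            if best is None or score < best[0]:
                best = (score, f, cut, m)
    _, f, cut, m = best
    return {"feature": f, "cut": cut,
            "left": grow(X[m], y[m], cls_thr, exc_thr),
            "right": grow(X[~m], y[~m], cls_thr, exc_thr)}

X = np.array([[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]])
y = np.array([0, 0, 1, 1, 1, 1])
print(grow(X, y))
```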

특징공간을 사선 분할하는 퍼지 결정트리 유도 (Fuzzy Decision Tree Induction for Obliquely Partitioning a Feature Space)

  • 이우향;이건명
    • Journal of KIISE: Software and Applications
    • /
    • Vol. 29, No. 3
    • /
    • pp.156-166
    • /
    • 2002
  • Decision tree induction is one of the most useful machine learning methods for extracting classification rules from examples described by feature values. Decision trees fall broadly into univariate and multivariate trees according to how they partition the feature space. Data obtained in the field often contain errors in the feature values themselves, owing to observation error, uncertainty, and subjective judgment. To build decision trees robust to such errors, methods that incorporate fuzzy techniques have been studied. Most fuzzy decision tree research to date has applied fuzzy techniques to univariate trees, and applications to multivariate trees are hard to find. This paper proposes a method that applies fuzzy techniques to multivariate trees, producing what we call fuzzy oblique decision trees, and presents experimental results that show the characteristics of the proposed method.
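
The oblique, fuzzified splits described above can be sketched with a sigmoid membership over the signed distance to a split hyperplane, so that small feature-value errors shift memberships gradually instead of flipping the branch. A minimal sketch; the paper's actual membership functions and induction procedure may differ.

```python
import numpy as np

def oblique_membership(X, w, b, gamma=4.0):
    """Fuzzy oblique split: membership in the 'left' child is a sigmoid of
    the signed distance to the hyperplane w.x + b = 0; `gamma` controls
    fuzziness (larger = closer to a crisp split)."""
    d = X @ w + b
    left = 1.0 / (1.0 + np.exp(gamma * d))
    return left, 1.0 - left        # right membership is the complement

# Two points against the split x1 + 0.5*x2 - 1 = 0: one clearly left
# (membership ~0.92), one clearly right (~0.02).
X = np.array([[0.2, 0.4], [1.5, 1.0]])
print(oblique_membership(X, w=np.array([1.0, 0.5]), b=-1.0))
```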

Knowledge Representation Using Decision Trees Constructed Based on Binary Splits

  • Azad, Mohammad
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 14, No. 10
    • /
    • pp.4007-4024
    • /
    • 2020
  • It is tremendously important to construct decision trees as a tool for knowledge representation from a given decision table. However, the usual algorithms may split the decision table on each attribute value, which is inefficient for numerical attributes. The methodology of this paper is to split the given decision table into binary groups, as in the CART algorithm, which uses binary splits for both categorical and numerical attributes. The difference is that our method derives the split for each attribute from a directed acyclic graph in a dynamic-programming fashion, whereas CART chooses a binary split among all considered attributes greedily. The aim of this paper is to study the effect of binary splits, in comparison with per-value splits, when building decision trees. This effect is studied by comparing the number of nodes and the local and global misclassification rates of the decision trees constructed by the three proposed algorithms.
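
The contrast drawn above, one branch per observed value versus CART-style binary thresholds, is easy to see in a short sketch (the paper's dynamic-programming construction over a directed acyclic graph is not reproduced here):

```python
import numpy as np

def per_value_branches(values):
    """Each-value split: one branch per distinct value, which can blow up
    the tree when a numerical attribute has many distinct values."""
    return np.unique(values)

def binary_split_candidates(values):
    """CART-style binary candidates: one threshold between each pair of
    consecutive distinct values, i.e. at most k-1 cuts for k values."""
    v = np.unique(values)
    return (v[1:] + v[:-1]) / 2.0

x = np.array([3.1, 3.1, 4.7, 5.0, 8.2])
print(per_value_branches(x))       # [3.1 4.7 5.  8.2] -> four branches
print(binary_split_candidates(x))  # [3.9  4.85 6.6 ] -> three binary cuts
```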

순차적으로 선택된 특성과 유전 프로그래밍을 이용한 결정나무 (A Decision Tree Induction Using Genetic Programming with Sequentially Selected Features)

  • 김효중;박종선
    • Korean Management Science Review
    • /
    • Vol. 23, No. 1
    • /
    • pp.63-74
    • /
    • 2006
  • Decision tree induction is one of the most widely used methods for classification problems. However, a tree algorithm that uses top-down search can be trapped in a local minimum with no reasonable means of escaping it. Further, if irrelevant or redundant features are included in the data set, tree algorithms produce trees that are less accurate than those built from only the relevant features. We propose a hybrid algorithm that generates decision trees using genetic programming with sequentially selected features. The Correlation-based Feature Selection (CFS) method is adopted to find relevant features, which are fed to genetic programming sequentially to find an optimal tree at each iteration. The proposed algorithm produces simpler and more understandable decision trees than other methods, and it is also effective in producing similar or better trees from a relatively smaller set of features in terms of cross-validation accuracy.
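
The CFS criterion adopted above scores a feature subset by its average feature-class correlation, penalized for redundancy among the features. A small sketch of Hall's merit formula; the genetic-programming part of the algorithm is not reproduced.

```python
import numpy as np

def cfs_merit(feature_class_corr, feature_feature_corr):
    """CFS merit of a k-feature subset (Hall, 1999):
        merit = k * mean(r_cf) / sqrt(k + k*(k-1) * mean(r_ff))
    r_cf: correlations of each selected feature with the class.
    r_ff: pairwise correlations among the selected features."""
    k = len(feature_class_corr)
    r_cf = np.mean(feature_class_corr)
    r_ff = np.mean(feature_feature_corr) if k > 1 else 0.0
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

# Two relevant but non-redundant features beat the same features
# when they are highly correlated with each other.
print(cfs_merit([0.6, 0.5], [0.1]))  # ~0.74
print(cfs_merit([0.6, 0.5], [0.9]))  # ~0.56
```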

의사결정나무를 활용한 신경망 모형의 입력특성 선택: 주택가격 추정 사례 (Decision Tree-Based Feature-Selective Neural Network Model: Case of House Price Estimation)

  • 윤한성
    • Journal of the Korea Society of Digital Industry and Information Management
    • /
    • Vol. 19, No. 1
    • /
    • pp.109-118
    • /
    • 2023
  • Data-based methods are increasingly used to estimate or predict housing prices, and neural network models and decision trees from the big-data field are applied ever more widely. Neural network models are often evaluated as superior to existing statistical models in estimation or prediction accuracy. However, there is ambiguity in determining the input features of a neural network's input layer, that is, their type and number, and decision trees are sometimes used to overcome this disadvantage. In this paper, we evaluate existing ways of using decision trees and propose a method that uses a decision tree to prioritize input-feature selection for neural network models. This can serve as a complementary or combined analysis of the neural network model and the decision tree, and its validity was confirmed by applying the proposed method to house-price estimation. Several comparisons show that selecting appropriate input features according to priority can increase the estimation power of the model.
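
A minimal sketch of the general idea, ranking inputs by decision-tree importance and feeding only the top-ranked ones to a neural network. scikit-learn and the California housing data are stand-ins here; the paper's own data, model settings, and prioritization rule are not reproduced.

```python
import numpy as np
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor

X, y = fetch_california_housing(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 1) Rank input features by decision-tree importance.
tree = DecisionTreeRegressor(max_depth=6, random_state=0).fit(X_tr, y_tr)
order = np.argsort(tree.feature_importances_)[::-1]

# 2) Train the neural network on the top-k features only.
top_k = order[:4]
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
mlp.fit(X_tr[:, top_k], y_tr)
print("R^2 with top-4 tree-selected inputs:", mlp.score(X_te[:, top_k], y_te))
```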

A Recursive Partitioning Rule for Binary Decision Trees

  • Kim, Sang-Guin
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 10, No. 2
    • /
    • pp.471-478
    • /
    • 2003
  • In this paper, we reconsider the Kolmogorov-Smirnov distance as a split criterion for binary decision trees and suggest an algorithm that obtains the Kolmogorov-Smirnov distance more efficiently when the input variable has more than three categories. The Kolmogorov-Smirnov distance is shown to have the property of exclusive preference. Empirical results comparing the Kolmogorov-Smirnov distance to the Gini index show that the Kolmogorov-Smirnov distance grows more accurate trees in terms of misclassification rate.
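
For a binary class variable, the Kolmogorov-Smirnov criterion picks the cut that maximizes the gap between the two class-conditional empirical CDFs. A minimal sketch; the paper's more efficient algorithm for many-category inputs is not shown.

```python
import numpy as np

def ks_distance(x, y, t):
    """|F1(t) - F0(t)|: gap between the class-conditional empirical CDFs
    of feature x at candidate threshold t, for binary labels y."""
    return abs(np.mean(x[y == 1] <= t) - np.mean(x[y == 0] <= t))

def best_ks_split(x, y):
    # The maximiser can only change at observed values, so scanning the
    # distinct values of x is sufficient.
    cands = np.unique(x)
    scores = [ks_distance(x, y, t) for t in cands]
    return cands[int(np.argmax(scores))], max(scores)

x = np.array([1.0, 2.0, 3.0, 8.0, 9.0, 10.0])
y = np.array([0, 0, 0, 1, 1, 1])
print(best_ks_split(x, y))  # (3.0, 1.0): x <= 3 separates the classes
```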

Modeling of Environmental Survey by Decision Trees

  • 박희창;조광현
    • Korean Data and Information Science Society: Conference Proceedings
    • /
    • Korean Data and Information Science Society 2004 Fall Conference
    • /
    • pp.63-75
    • /
    • 2004
  • The decision tree approach is most useful in classification problems, where it divides the search space into rectangular regions. Decision tree algorithms are used extensively for data mining in many domains, such as retail target marketing, fraud detection, data reduction and variable screening, and category merging. We analyze Gyeongnam social indicator survey data using decision tree techniques to obtain environmental information. These decision tree outputs can be used for environmental preservation and improvement.


Modeling of Environmental Survey by Decision Trees

  • Park, Hee-Chang;Cho, Kwang-Hyun
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol. 15, No. 4
    • /
    • pp.759-771
    • /
    • 2004
  • The decision tree approach is most useful in classification problems, where it divides the search space into rectangular regions. Decision tree algorithms are used extensively for data mining in many domains, such as retail target marketing, fraud detection, data reduction and variable screening, and category merging. We analyze Gyeongnam social indicator survey data using decision tree techniques to obtain environmental information. These decision tree outputs can be used for environmental preservation and improvement.


A review of tree-based Bayesian methods

  • Linero, Antonio R.
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 24, No. 6
    • /
    • pp.543-559
    • /
    • 2017
  • Tree-based regression and classification ensembles form a standard part of the data-science toolkit. Many commonly used methods take an algorithmic view, proposing greedy methods for constructing decision trees; examples include the classification and regression trees algorithm, boosted decision trees, and random forests. Recent history has seen a surge of interest in Bayesian techniques for constructing decision tree ensembles, with these methods frequently outperforming their algorithmic counterparts. The goal of this article is to survey the landscape surrounding Bayesian decision tree methods, and to discuss recent modeling and computational developments. We provide connections between Bayesian tree-based methods and existing machine learning techniques, and outline several recent theoretical developments establishing frequentist consistency and rates of convergence for the posterior distribution. The methodology we present is applicable for a wide variety of statistical tasks including regression, classification, modeling of count data, and many others. We illustrate the methodology on both simulated and real datasets.
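
As one concrete example of the priors this review surveys, Bayesian CART and BART regularize trees with a branching prior under which a node at depth d splits with probability alpha * (1 + d)^(-beta) (Chipman et al.). A minimal sketch of sampling a tree skeleton from that prior:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_tree(depth=0, alpha=0.95, beta=2.0):
    """Draw a tree skeleton from the Bayesian CART/BART branching prior:
    split a node at depth d with probability alpha * (1 + d) ** (-beta),
    so deep, bushy trees are penalised a priori."""
    if rng.random() < alpha * (1 + depth) ** (-beta):
        return {"split_at_depth": depth,
                "left": sample_tree(depth + 1, alpha, beta),
                "right": sample_tree(depth + 1, alpha, beta)}
    return {"leaf_at_depth": depth}

print(sample_tree())  # typically shallow under the default alpha, beta
```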

퍼지의사결정을 이용한 RC구조물의 건전성평가 (Integrity Assessment for Reinforced Concrete Structures Using Fuzzy Decision Making)

  • 박철수;손용우;이증빈
    • Computational Structural Engineering Institute of Korea: Conference Proceedings
    • /
    • Proceedings of the 2002 Spring Conference of the Computational Structural Engineering Institute of Korea
    • /
    • pp.274-283
    • /
    • 2002
  • This paper presents an efficient model for reinforced concrete structures using CART-ANFIS (classification and regression trees combined with an adaptive neuro-fuzzy inference system). A fuzzy decision tree partitions the input space of a data set into mutually exclusive regions, each of which is assigned a label, a value, or an action to characterize its data points. Fuzzy decision trees used for classification problems are often called fuzzy classification trees, and each terminal node contains a label that indicates the predicted class of a given feature vector. In the same vein, decision trees used for regression problems are often called fuzzy regression trees, and the terminal-node labels may be constants or equations that specify the predicted output value of a given input vector. Note that CART can select relevant inputs and partition the input space, while ANFIS refines the regression and makes it everywhere continuous and smooth. CART and ANFIS are therefore complementary, and their combination constitutes a solid approach to fuzzy modeling.

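The complementarity described above, CART supplying a crisp partition and ANFIS smoothing it into an everywhere continuous regression, can be illustrated on a one-split toy example: replacing the step at the cut with sigmoid memberships blends the two leaf values near the boundary. A hypothetical sketch, not the paper's CART-ANFIS procedure.

```python
import numpy as np

def crisp_tree(x, cut=0.5, left_val=1.0, right_val=3.0):
    """One-split regression tree: a step function, discontinuous at the cut."""
    return np.where(x <= cut, left_val, right_val)

def fuzzy_tree(x, cut=0.5, left_val=1.0, right_val=3.0, gamma=10.0):
    """Same split with sigmoid memberships: leaf predictions are blended
    near the cut, giving a continuous, smooth output."""
    mu_left = 1.0 / (1.0 + np.exp(gamma * (x - cut)))
    return mu_left * left_val + (1.0 - mu_left) * right_val

x = np.linspace(0.0, 1.0, 5)
print(crisp_tree(x))  # [1. 1. 1. 3. 3.] -- jumps at x = 0.5
print(fuzzy_tree(x))  # smooth transition through the cut
```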