• Title/Summary/Keyword: Decision Tree Model (의사결정 나무모형)


An Analysis of Traffic Accident Types Using a Decision Tree Model (의사결정나무모형을 이용한 교통사고 유형 분석)

  • 김유진;최종후;이의용
    • Proceedings of the Korean Statistical Society Conference
    • /
    • 2000.11a
    • /
    • pp.257-260
    • /
    • 2000
  • This study attempts an analysis of traffic accident types using a decision tree model. The data used in the analysis are the in-depth traffic accident survey data collected by the Road Traffic Safety Authority. The target variable is 'accident outcome', and the explanatory variables relate to 'human factors', 'vehicle factors', and 'road-environment factors'. We identify the main explanatory variables contributing to the target variable and, based on the resulting decision tree model, classify traffic accidents into types.


An Analysis of Choice Behavior for Tour Type of Commercial Vehicle using Decision Tree (의사결정나무를 이용한 화물자동차 투어유형 선택행태 분석)

  • Kim, Han-Su;Park, Dong-Ju;Kim, Chan-Seong;Choe, Chang-Ho;Kim, Gyeong-Su
    • Journal of Korean Society of Transportation
    • /
    • v.28 no.6
    • /
    • pp.43-54
    • /
    • 2010
  • In recent years there have been studies on tour-based approaches to freight travel demand modelling. The purpose of this paper is to analyze the tour type choice behavior of commercial vehicles, with tours divided into round trips and chained tours. The methods of the study are based on the decision tree and the logit model. The results indicate that the explanatory variables for classifying the tour types of commercial vehicles are loading factor, average goods quantity, and total goods quantity. The results of the decision tree method are similar to those of the logit model. In addition, the explanatory variables for the tour type classification of small trucks do not differ from those for medium trucks, implying that the most important factor in vehicle tour planning is how goods are loaded, in terms of shipment size and total quantity.

A study on decision tree creation using marginally conditional variables (주변조건부 변수를 이용한 의사결정나무모형 생성에 관한 연구)

  • Cho, Kwang-Hyun;Park, Hee-Chang
    • Journal of the Korean Data and Information Science Society
    • /
    • v.23 no.2
    • /
    • pp.299-307
    • /
    • 2012
  • Data mining is a method of searching for interesting relationships among items in a given database. The decision tree is a typical data mining algorithm; it classifies or predicts a group by dividing it into subgroups. In general, when researchers create a decision tree model, the generated model can become complicated depending on the model-creation criteria and the number of input variables. In particular, a decision tree with many input variables can be complex and difficult to analyze. The input variables may include marginally conditional variables (intervening variables, external variables) that are not directly relevant to the target. In this study, we suggest a method of creating a decision tree using marginally conditional variables and apply it to actual data to examine its efficiency.

A Study on Exploration of the Recommended Model of Decision Tree to Predict a Hard-to-Measure Measurement in Anthropometric Survey (인체측정조사에서 측정곤란부위 예측을 위한 의사결정나무 추천 모형 탐지에 관한 연구)

  • Choi, J.H.;Kim, S.K.
    • The Korean Journal of Applied Statistics
    • /
    • v.22 no.5
    • /
    • pp.923-935
    • /
    • 2009
  • This study aims to explore a recommended decision tree model for predicting a hard-to-measure measurement in an anthropometric survey. We carry out a cross-validation experiment to obtain a recommended decision tree model, using three split rules: CHAID, Exhaustive CHAID, and CART. CART gives the best results on real-world data.
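The split rules compared in the abstract above differ in how they score candidate splits: CART minimizes Gini impurity, while CHAID uses chi-square tests (omitted here). A minimal sketch of CART-style split scoring, on hypothetical labels, might look like this:

```python
# Sketch of CART-style binary split scoring via Gini impurity.
# The labels below are hypothetical, not from the paper's data.

def gini(labels):
    """Gini impurity of a list of class labels (0 = perfectly pure)."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def split_score(left, right):
    """Weighted Gini impurity after a binary split (lower is better)."""
    n = len(left) + len(right)
    return len(left) / n * gini(left) + len(right) / n * gini(right)

# Hypothetical target: 1 = hard-to-measure case, 0 = otherwise.
pure_split = split_score([0, 0, 0], [1, 1, 1])   # separates the classes
mixed_split = split_score([0, 1, 0], [1, 0, 1])  # no separation
assert pure_split < mixed_split
```

CART grows the tree by choosing, at each node, the split with the lowest such score; the different split rules in the study amount to swapping this scoring function.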

A study on decision tree creation using intervening variable (매개 변수를 이용한 의사결정나무 생성에 관한 연구)

  • Cho, Kwang-Hyun;Park, Hee-Chang
    • Journal of the Korean Data and Information Science Society
    • /
    • v.22 no.4
    • /
    • pp.671-678
    • /
    • 2011
  • Data mining searches for interesting relationships among items in a given database. Data mining methods include decision trees, association rules, clustering, neural networks, and so on. The decision tree approach is most useful in classification problems and divides the search space into rectangular regions. Decision tree algorithms are used extensively for data mining in many domains such as retail target marketing and customer classification. When a decision tree model is created, the model-creation criteria and the number of input variables can make the model complicated; in particular, a large number of input variables makes both model creation and analysis difficult. In this study, we examine decision tree creation using intervening variables. We apply the method to actual data, suggesting a way to remove input variables that are unnecessary for the created model, and examine its efficiency.

A study on removal of unnecessary input variables using multiple external association rule (다중외적연관성규칙을 이용한 불필요한 입력변수 제거에 관한 연구)

  • Cho, Kwang-Hyun;Park, Hee-Chang
    • Journal of the Korean Data and Information Science Society
    • /
    • v.22 no.5
    • /
    • pp.877-884
    • /
    • 2011
  • The decision tree is a representative data mining algorithm used in many domains such as retail target marketing, fraud detection, data reduction, variable screening, and category merging. The method is most useful in classification problems, making predictions for a target group after dividing it into several small groups. When we create a decision tree model with a large number of input variables, the resulting trees are complex and difficult to explore and analyze. Moreover, an association can often appear to exist between input variables because of external variables, despite there being no intrinsic association. In this paper, we study a method of removing unnecessary input variables using multiple external association rules, and apply it to actual data to verify its efficiency.
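The phenomenon the abstract above describes, an apparent association between two input variables that is entirely induced by an external variable, can be illustrated with a small constructed example. Everything below (the data, the rule notation) is hypothetical and only shows the idea, not the paper's actual removal procedure:

```python
# Two inputs x and y each track an external variable z, so the rule
# "x=1 -> y=1" looks strong marginally but adds nothing within a z stratum.

def records(n, x, y, z):
    return [(x, y, z)] * n

# z=1 stratum: x and y each equal 1 with prob 0.8, independently.
# z=0 stratum: the mirror image with prob 0.2. 100 records per stratum.
data = (records(64, 1, 1, 1) + records(16, 1, 0, 1) +
        records(16, 0, 1, 1) + records(4, 0, 0, 1) +
        records(4, 1, 1, 0) + records(16, 1, 0, 0) +
        records(16, 0, 1, 0) + records(64, 0, 0, 0))

def confidence(rows, antecedent, consequent):
    """P(consequent | antecedent) estimated from rows."""
    matched = [r for r in rows if antecedent(r)]
    return sum(1 for r in matched if consequent(r)) / len(matched)

x1 = lambda r: r[0] == 1
y1 = lambda r: r[1] == 1

marginal = confidence(data, x1, y1)          # 0.68: x=1 -> y=1 looks strong
z1_rows = [r for r in data if r[2] == 1]
within_z1 = confidence(z1_rows, x1, y1)      # 0.80
baseline_z1 = confidence(z1_rows, lambda r: True, y1)  # also 0.80
# Equal confidence within the stratum: the x-y link is external only,
# so x is a candidate for removal once z is in the model.
```

A removal rule in this spirit would drop an input whose association with the others disappears after conditioning on an external variable.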

Development of Forecasting Model for the Initial Sale of Apartment Using Data Mining: The Case of Unsold Apartment Complex in Wirye New Town (데이터 마이닝을 이용한 아파트 초기계약 예측모형 개발: 위례 신도시 미분양 아파트 단지를 사례로)

  • Kim, Ji Young;Lee, Sang-Kyeong
    • Journal of Digital Convergence
    • /
    • v.16 no.12
    • /
    • pp.217-229
    • /
    • 2018
  • This paper applies data mining methods such as decision tree, neural network, and logistic regression to an unsold apartment complex in Wirye new town and develops a model forecasting the result of the initial sale contract for each housing unit. The raw data are divided into training data and test data. On the training data, the order of predictive power is neural network, decision tree, and logistic regression. On the test data, by contrast, logistic regression is the best model, meaning that logistic regression adapts to new data better than the neural network, which was optimized for the training data. The determinants of initial sale are floor location, direction, unit location, proximity to the electricity and generator room, the subscriber's residential region, and the type of subscription. This suggests that using the two models together is more effective in exploring the determinants of initial sales. This paper contributes to the development of the convergence field by expanding the scope of data mining.

A Determining System for the Category of Need in Long-Term Care Insurance System using Decision Tree Model (의사결정나무기법을 이용한 노인장기요양보험 등급결정모형 개발)

  • Han, Eun-Jeong;Kwak, Min-Jeong;Kan, Im-Oak
    • The Korean Journal of Applied Statistics
    • /
    • v.24 no.1
    • /
    • pp.145-159
    • /
    • 2011
  • National long-term care insurance started in July 2008. We try to make up for its weak points and develop the long-term care insurance system; in particular, it is important to continually upgrade the rating model for the category of need for long-term care. We improve the rating model using data collected after enforcement of the system, to reflect the rapidly changing long-term care marketplace. A decision tree model was adopted to upgrade the rating model because it makes comparison with the current system easy. The model is based on two assumptions: first, a person with worse functional conditions needs more long-term care services than others; second, the volume of long-term care services is defined as a service time. This study was conducted to reflect changing circumstances; rating models have to be improved continually to reflect changes such as the infrastructure of the system or the characteristics of the insurance beneficiaries.

On the Tree Model Grown by One-sided Purity (단측 순수성에 의한 나무모형의 성장에 대하여)

  • 김용대;최대우
    • Proceedings of the Korea Intelligent Information System Society Conference
    • /
    • 2000.11a
    • /
    • pp.341-348
    • /
    • 2000
  • The tree model, also called the decision tree, has attracted much attention as a classification and prediction model in data mining because its results are easy to interpret. The most widely used tree models, CART by Breiman et al. and C4.5 by Quinlan, both grow trees so that the data in each generated node become purer in terms of the class composition of the target variable. However, when modelling churn prediction, the most common topic in CRM, the churners of interest form only a tiny fraction of the whole data set, so considering the purity of every node, as conventional splitting methods do, is impossible. Buja and Lee introduced a tree-growing method for finding such a small class of interest: instead of the conventional approach of separating churners from non-churners, they proposed one-sided purity, an exploratory splitting criterion that concentrates on finding the churners within the whole data set. In this study, we apply tree modelling based on one-sided purity to churner data from a PC communication company, compare it with the conventional method, and examine, through several simulated data sets, the problems of one-sided purity and the issues that remain to be solved.
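The one-sided purity idea referenced above can be sketched as a split criterion that cares only about how concentrated the rare class becomes in one child node, rather than reducing impurity in both children. The data, score, and minimum-size threshold below are hypothetical illustrations, not Buja and Lee's exact formulation:

```python
# One-sided purity sketch: pick the split whose best child is as
# concentrated in the rare class (e.g., churners) as possible.

def one_sided_score(left, right, rare=1, min_size=2):
    """Highest rare-class proportion among sufficiently large children."""
    best = 0.0
    for child in (left, right):
        if len(child) >= min_size:
            best = max(best, child.count(rare) / len(child))
    return best

# Hypothetical churn labels (1 = churner, a small minority),
# ordered by some feature along which we scan binary cut points.
labels = [0, 0, 0, 0, 1, 0, 0, 1, 1]

best_cut = max(range(1, len(labels)),
               key=lambda i: one_sided_score(labels[:i], labels[i:]))
left, right = labels[:best_cut], labels[best_cut:]
# The chosen cut isolates a child made up entirely of churners,
# even though the other child remains mixed.
```

A conventional two-sided criterion would penalize the mixed child; the one-sided score ignores it, which is exactly what makes the criterion suited to hunting a rare class.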


Study on Detection Technique for Cochlodinium polykrikoides Red tide using Logistic Regression Model and Decision Tree Model (로지스틱 회귀모형과 의사결정나무 모형을 이용한 Cochlodinium polykrikoides 적조 탐지 기법 연구)

  • Bak, Su-Ho;Kim, Heung-Min;Kim, Bum-Kyu;Hwang, Do-Hyun;Unuzaya, Enkhjargal;Yoon, Hong-Joo
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.13 no.4
    • /
    • pp.777-786
    • /
    • 2018
  • This study proposes a new method to detect Cochlodinium polykrikoides in satellite images using logistic regression and a decision tree. We used 918 spectral profiles extracted from red tide, clear water, and turbid water as training data. 70% of the entire data set was extracted and used for model training, and the classification accuracy of the model was evaluated using the remaining 30%. In the accuracy evaluation, the logistic regression model showed about 97% classification accuracy, and the decision tree model about 86%.
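The 70/30 holdout evaluation described in the abstract above follows a standard protocol. A minimal sketch of that protocol follows; the data and the classifier (a majority-class baseline) are hypothetical stand-ins, not the paper's spectral profiles or its logistic regression and decision tree models:

```python
# Holdout evaluation sketch: train on 70% of labelled samples,
# measure classification accuracy on the held-out 30%.

import random

def holdout_split(rows, train_frac=0.7, seed=0):
    """Shuffle and split rows into (train, test) by train_frac."""
    rng = random.Random(seed)
    shuffled = rows[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

def accuracy(model, rows):
    """Fraction of (feature, label) rows the model classifies correctly."""
    return sum(1 for x, y in rows if model(x) == y) / len(rows)

# Hypothetical labelled profiles: (feature, label).
data = [(i, "red_tide" if i % 3 == 0 else "water") for i in range(30)]
train, test = holdout_split(data)

# Majority-class baseline fitted on the 70% training split.
train_labels = [y for _, y in train]
majority = max(set(train_labels), key=train_labels.count)
model = lambda x: majority
acc = accuracy(model, test)
```

Any classifier with the same `model(x) -> label` interface can be dropped in, which is how the two models in the study would be compared on an equal footing.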