• Title/Summary/Keyword: 의사결정나무 분석 (decision tree analysis)


Effect of Mothers' Oral Health Knowledge and Behaviour on Dental Caries in Their Preschool Children (데이터마이닝을 이용한 유치치아우식증 관련요인 분석)

  • Kim, Jin-Soo; Kim, Hyo-Jin; Jorn, Hong-Suk
    • Journal of Korean Society of Dental Hygiene / v.5 no.2 / pp.171-184 / 2005
  • To investigate the correlation between mothers' dental care for their children and the children's dental caries, this study analyzed the dental examination records and matching maternal questionnaires of 365 children aged three to six and their mothers in Yeoncheon, Gyeonggi province, in June 2004. Frequencies and percentages were computed for the subjects' general characteristics and the mothers' oral health care behaviors; cross-tabulation and correlation analyses (chi-square) were performed for the presence of dental caries in deciduous teeth against oral health care behaviors; and decision tree analysis, a data mining technique, was applied to the factors associated with deciduous caries. The conclusions were as follows. 1. Regarding mothers' oral health care behaviors and attitudes, 225 mothers (61.6%) confirmed their children's tooth-brushing; 278 (76.2%) used no fluoride; 286 (78.6%) observed their children's teeth; 322 (88.2%) instructed their children in tooth-brushing; 268 (73.4%) provided dental care; 232 (63.7%) treated their children's cavities; 290 (79.4%) believed their children had good dental conditions; and 294 (80.5%) answered that they began providing dental care with the deciduous teeth. 2. For the presence of deciduous caries versus dental health care behaviors, statistically significant differences were found in employment, confirmation after tooth-brushing, teeth observation, instruction in tooth-brushing time, use of fluoride, cavity treatment, time of dental care, and perception of dental conditions (p<0.05). 3. Children of mothers who worked, who believed their children's dental condition was poor, or who thought dental care should begin with the permanent teeth were more likely to have deciduous caries, as were children whose mothers did not confirm tooth-brushing, used no fluoride, did not observe their teeth, or gave no instruction in tooth-brushing time. 4. The variables determining the presence of deciduous caries were cavity treatment, mother's employment, time of dental care, and observation of the children's teeth. The first node of the decision tree was cavity treatment; the next splitting criteria after cavity treatment were mother's employment and time of dental care, while for children with no cavity they were mother's employment and teeth observation.
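The tree structure described above, with cavity treatment emerging as the root split, can be sketched with scikit-learn's CART-style decision tree. The data below is invented survey-style binary data, not the study's Yeoncheon records; the variable names and effect strengths are illustrative assumptions only.

```python
# Hedged sketch: a decision tree over made-up binary survey variables.
# Column 0 = cavity treatment, 1 = mother's employment, 2 = teeth observation.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 365  # same sample size as the study, but synthetic data
X = rng.integers(0, 2, size=(n, 3))
# Make caries depend mainly on column 0, mimicking a dominant root split
y = (X[:, 0] & (rng.random(n) < 0.8)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
root_feature = tree.tree_.feature[0]  # variable chosen at the first node
```

With a strong association concentrated in one variable, that variable is selected at the root, which is the sense in which the study reads "cavity treatment" as the primary classifier.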


A Recidivism Prediction Model Based on XGBoost Considering Asymmetric Error Costs (비대칭 오류 비용을 고려한 XGBoost 기반 재범 예측 모델)

  • Won, Ha-Ram; Shim, Jae-Seung; Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.127-137 / 2019
  • Recidivism prediction has been a subject of constant research since the early 1970s, and it has become more important as crimes committed by recidivists steadily increase. In particular, after the US and Canada adopted the 'Recidivism Risk Assessment Report' as a decisive criterion in trials and parole screening in the 1990s, research on recidivism prediction became more active; in the same period, empirical studies on recidivism factors also began in Korea. Although most recidivism prediction studies have so far focused on the factors of recidivism or on prediction accuracy, it is also important to minimize the misclassification cost, because recidivism prediction has an asymmetric error cost structure. In general, the cost of wrongly classifying a person who will not reoffend as likely to reoffend is lower than the cost of wrongly classifying a person who will reoffend as safe: the former only adds monitoring costs, while the latter incurs large social and economic costs. Therefore, this paper proposes an XGBoost (eXtreme Gradient Boosting; XGB) based recidivism prediction model that considers asymmetric error costs. In the first step of the model, XGB, recognized as a high-performance ensemble method in data mining, was applied, and its results were compared with various prediction models such as LOGIT (logistic regression), DT (decision trees), ANN (artificial neural networks), and SVM (support vector machines). In the next step, the classification threshold was optimized to minimize the total misclassification cost, the weighted average of the FNE (False Negative Error) and FPE (False Positive Error) costs. To verify its usefulness, the model was applied to a real recidivism prediction dataset. The results confirmed that the XGB model not only showed better prediction accuracy than the other models but also reduced the misclassification cost most effectively.
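The second step above, sweeping the classification threshold to minimize a weighted misclassification cost, can be sketched briefly. Scikit-learn's gradient boosting stands in for XGBoost here, and the synthetic data and 5:1 FNE:FPE cost ratio are illustrative assumptions, not the paper's values.

```python
# Hedged sketch of cost-sensitive threshold optimization.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=1000, random_state=42)
clf = GradientBoostingClassifier(random_state=42).fit(X, y)
proba = clf.predict_proba(X)[:, 1]  # predicted probability of "recidivism"

C_FNE, C_FPE = 5.0, 1.0  # assumed: missing a recidivist costs 5x over-flagging

def total_cost(threshold):
    """Weighted misclassification cost at a given decision threshold."""
    pred = (proba >= threshold).astype(int)
    fn = np.sum((y == 1) & (pred == 0))  # recidivists classified as safe
    fp = np.sum((y == 0) & (pred == 1))  # non-recidivists over-flagged
    return C_FNE * fn + C_FPE * fp

thresholds = np.linspace(0.05, 0.95, 19)
best = min(thresholds, key=total_cost)  # cost-minimizing cutoff
```

Because the false negative is weighted more heavily, the cost-minimizing cutoff generally sits at or below the accuracy-oriented default of 0.5.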

A Study on Foreign Exchange Rate Prediction Based on KTB, IRS and CCS Rates: Empirical Evidence from the Use of Artificial Intelligence (국고채, 금리 스왑 그리고 통화 스왑 가격에 기반한 외환시장 환율예측 연구: 인공지능 활용의 실증적 증거)

  • Lim, Hyun Wook; Jeong, Seung Hwan; Lee, Hee Soo; Oh, Kyong Joo
    • Knowledge Management Research / v.22 no.4 / pp.71-85 / 2021
  • The purpose of this study is to determine which artificial intelligence methodology is most suitable for building a foreign exchange rate prediction model from bond market and interest rate market indicators. KTBs and MSBs, representative products of the Korean bond market, are sold on a large scale when risk aversion occurs, and in such cases the USD/KRW exchange rate often rises. When USD liquidity problems occur in the onshore Korean market, the KRW cross-currency swap price in the interest rate market falls, which acts as a signal to buy USD/KRW in the foreign exchange market. Considering that the prices and movements of products traded in the bond and interest rate markets directly or indirectly affect the foreign exchange market, the three markets can be regarded as closely and complementarily related. Previous studies have revealed the relationships and correlations among the bond, interest rate, and foreign exchange markets, but most past exchange rate prediction studies relied on macroeconomic indicators such as GDP, the current account balance, and inflation; active research on predicting exchange rates with artificial intelligence based on bond and interest rate market indicators has not yet been conducted. Using these indicators, this study runs an artificial neural network (suited to nonlinear data), logistic regression (suited to linear data), and a decision tree (suited to both), and shows that the artificial neural network is the most suitable methodology for predicting foreign exchange rates, which are nonlinear time series data. Beyond revealing a simple correlation among the three markets, capturing the trading signals between them to demonstrate their active correlation and mutual organic movement not only provides foreign exchange traders with a new trading model but can also be expected to contribute to the efficiency and knowledge management of the entire financial market.
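The three-way model comparison described above can be sketched with scikit-learn. The synthetic nonlinear features below merely stand in for the KTB/IRS/CCS indicators; no claim is made about which model wins on the paper's actual data.

```python
# Hedged sketch: LOGIT vs. decision tree vs. small neural network on
# invented nonlinear "market indicator" data (not the study's dataset).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(800, 3))  # three illustrative indicator series
y = (np.sin(X[:, 0]) + X[:, 1] * X[:, 2] > 0).astype(int)  # nonlinear target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
models = {
    "LOGIT": LogisticRegression(),
    "DT": DecisionTreeClassifier(max_depth=5, random_state=1),
    "ANN": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=1),
}
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
```

Running the same train/test split through all three models, as here, is what makes the methodology comparison in the study a controlled one.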

Development of Predictive Models for Rights Issues Using Financial Analysis Indices and Decision Tree Technique (경영분석지표와 의사결정나무기법을 이용한 유상증자 예측모형 개발)

  • Kim, Myeong-Kyun; Cho, Yoonho
    • Journal of Intelligence and Information Systems / v.18 no.4 / pp.59-77 / 2012
  • This study focuses on predicting which firms will increase capital by issuing new stocks in the near future. Many stakeholders, including banks, credit rating agencies, and investors, perform a variety of analyses of firms' growth, profitability, stability, activity, productivity, etc., and regularly report firms' financial analysis indices. In this paper, we develop predictive models for rights issues using these financial analysis indices and data mining techniques. We approach the model building from two perspectives. The first is the analysis period: we divide it into before and after the IMF financial crisis and examine whether the two periods differ. The second is the prediction horizon: to predict when firms will issue new stocks, the prediction time is categorized as one, two, or three years later. A total of six prediction models are therefore developed and analyzed. We employ the decision tree technique, the most widely used prediction method for labeling or categorizing cases into a set of known classes. In contrast to neural networks, logistic regression, and SVM, decision tree techniques are well suited to high-dimensional applications and have strong explanation capabilities. Among the well-known induction algorithms such as CHAID, CART, QUEST, and C5.0, we use the C5.0 algorithm, the most recently developed, which yields better performance than the others. We obtained the rights issue and financial analysis data from TS2000 of the Korea Listed Companies Association. Each financial analysis record consists of 89 variables: 9 growth, 30 profitability, 23 stability, 6 activity, and 8 productivity indices. For model building and testing, we used 10,925 financial analysis records of 658 listed firms. PASW Modeler 13 was used to build C5.0 decision trees for the six prediction models. Of the financial analysis variables, 84 were selected as inputs for each model, and the rights issue status (issued or not issued) is the output variable. To develop the models with the C5.0 node (Node Options: Output type = Rule set, Use boosting = false, Cross-validate = false, Mode = Simple, Favor = Generality), we used 60% of the data for model building and 40% for testing. The experimental results show that the prediction accuracies after the IMF financial crisis (68.78% to 71.41%) are about 10 percentage points higher than those before it (59.04% to 60.43%). This indicates that since the crisis, the reliability of the financial analysis indices has increased and firms' intentions regarding rights issues have become more evident. The results also show that stability-related indices have a major impact on rights issues in short-term prediction, while long-term prediction is affected by indices of profitability, stability, activity, and productivity. All the prediction models include the industry code as a significant variable, meaning that companies in different industries show different patterns of rights issues. We conclude that stakeholders should consider stability-related indices for short-term prediction and a wider range of financial analysis indices for long-term prediction. The current study has several limitations. First, the differences in accuracy should be compared using other data mining techniques such as neural networks, logistic regression, and SVM. Second, new prediction models should be developed and evaluated that include variables which capital structure theory suggests are relevant to rights issues.
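The 60/40 split and rule-set output described above can be approximated with scikit-learn's entropy-based decision tree, which serves here as an open stand-in for the proprietary C5.0 algorithm. The synthetic features below replace the 89 TS2000 indices; all names and thresholds are illustrative.

```python
# Hedged sketch of the modeling setup: entropy-criterion tree, 60/40 split.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(7)
X = rng.normal(size=(1000, 10))                # stand-in "financial indices"
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # illustrative "rights issue" label

# 60% for model building, 40% for model test, as in the paper
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.6, random_state=7)
tree = DecisionTreeClassifier(criterion="entropy", max_depth=4, random_state=7)
tree.fit(X_tr, y_tr)
accuracy = tree.score(X_te, y_te)
rules = export_text(tree)  # textual rules, a rough analogue of C5.0's rule set
```

C5.0 itself adds boosting and rule-set post-processing that this sketch omits; the entropy splitting criterion is the shared core.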

Changes in Corporate Governance and Competitiveness in Vietnam: Strategies for the Equitization of Vinacafe (베트남 기업 지배구조의 변화와 경쟁력: 비나카페의 주식회사화 전략)

  • Ji, Hochul; Lee, Sung-Cheol
    • Journal of the Economic Geographical Society of Korea / v.18 no.4 / pp.415-430 / 2015
  • Since the late 1990s, Vinacafe has undergone strategic changes in corporate governance and management due to the increasing entry of coffee MNCs, growing global demand for sustainable coffee, aging coffee trees, and the deterioration of coffee production under climate change in Vietnam. Vinacafe has attempted to cope with these changes through equitization strategies. The main aim of this paper is therefore to identify strategies for enhancing the competitiveness of the Vietnamese coffee industry by investigating changes in corporate governance and in the processes of coffee production and distribution. The equitization of Vinacafe has enhanced coffee competitiveness in two respects. First, as decision-making has been decentralized from headquarters, subsidiaries have become able to strengthen their own competitiveness by introducing new technologies, improving coffee quality, and encouraging eco-friendly production methods through cooperative relationships with the stakeholders involved in coffee production and distribution in Vietnam. Second, it has enhanced competitiveness through the diversification and increased effectiveness of coffee management, by making contracts with coffee farmers more flexible and by diversifying coffee sales and supply chains in Vietnam.


The Characteristics and Survival Rates of Evergreen Broad-Leaved Tree Plantations in Korea (난대상록활엽수종 조림지 활착률과 영향인자)

  • Park, Joon-Hyung; Jung, Su-Young; Lee, Kwang-Soo; Lee, Ho-Sang
    • Journal of Korean Society of Forest Science / v.108 no.4 / pp.513-521 / 2019
  • With rapid climate change and increasing global warming, the distribution of evergreen broad-leaved trees (EBLTs) is gradually expanding into the inland regions of Korea. The aim of the present study was to analyze the survival rate of 148 EBLT plantations covering 180 ha and to determine the optimal plantation size for coping with climate change in the warm-temperate climate zone of the Korean peninsula. To enhance the reliability of the estimated survival model, we selected a set of 11 control variables that may also have influenced survival rates in the 148 plantations. Partial correlation analysis showed that the survival rate in the initial plantation year (67.0 ± 26.9%) was primarily correlated with plantation type by crown closure of the upper forest story, wind exposure, and precipitation. To predict the probability of survival by quantification theory, the 148 plots were surveyed and analyzed with the 11 environmental site factors. In the estimated survival model, the score ranges were largest, in descending order, for plantation type by crown closure of the upper story, wind exposure, total cumulative precipitation in the two weeks before planting, and slope steepness; survival rates increased to some extent with the shading of the upper story.

A Literature Review and Classification of Recommender Systems on Academic Journals (추천시스템관련 학술논문 분석 및 분류)

  • Park, Deuk-Hee; Kim, Hyea-Kyeong; Choi, Il-Young; Kim, Jae-Kyeong
    • Journal of Intelligence and Information Systems / v.17 no.1 / pp.139-152 / 2011
  • Recommender systems have become an important research field since the emergence of the first paper on collaborative filtering in the mid-1990s. In general, recommender systems are defined as supporting systems that help users find information, products, or services (such as books, movies, music, digital products, web sites, and TV programs) by aggregating and analyzing suggestions from other users, reviews from various authorities, and user attributes. As academic research on recommender systems has increased significantly over the last ten years, however, more research applicable to real-world situations is required, because the field is still broad and less mature than other research fields. Accordingly, the existing articles on recommender systems need to be reviewed with a view toward the next generation of recommender systems. Given the nature of recommender system research, it would not be easy to confine it to specific disciplines, so we reviewed all articles on recommender systems from 37 journals published from 2001 to 2010. The 37 journals were selected from the top 125 journals of the MIS Journal Rankings, and the literature search was based on the descriptors "Recommender system", "Recommendation system", "Personalization system", "Collaborative filtering" and "Contents filtering". The full text of each article was reviewed to eliminate those not actually related to recommender systems; many articles, such as conference papers, master's and doctoral dissertations, textbooks, unpublished working papers, non-English publications, and news items, were excluded as unfit for our research. We classified the articles by year of publication, journal, recommendation field, and data mining technique. The recommendation fields and data mining techniques of the 187 remaining articles were reviewed and classified into eight recommendation fields (book, document, image, movie, music, shopping, TV program, and others) and eight data mining techniques (association rule, clustering, decision tree, k-nearest neighbor, link analysis, neural network, regression, and other heuristic methods). The results presented in this paper have several significant implications. First, based on previous publication rates, interest in recommender system research will grow significantly in the future. Second, 49 articles are related to movie recommendation, whereas image and TV program recommendation appear in only 6 articles each; this imbalance stems from the easy availability of the MovieLens data set, so data sets for other fields need to be prepared. Third, although social network analysis has recently been used in various applications, studies on recommender systems using it are scarce; we expect new recommendation approaches using social network analysis to be developed, making it an interesting area for further research. These results reveal the trends in recommender system research from the published literature and provide practitioners and researchers with insight and future directions on recommender systems. We hope this research helps anyone interested in recommender systems gain insight for future work.
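One of the eight data mining techniques tallied above, k-nearest neighbor, underlies classical user-based collaborative filtering and can be sketched in a few lines. The ratings matrix below is invented for illustration; real systems operate on sparse matrices with millions of entries.

```python
# Hedged sketch: user-based k-NN collaborative filtering on toy ratings.
import numpy as np

# rows = users, columns = items; 0 means "not yet rated"
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine(u, v):
    """Cosine similarity between two rating vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def predict(user, item, k=2):
    """Predict user's rating for item from the k most similar raters."""
    raters = [u for u in range(len(R)) if u != user and R[u, item] > 0]
    raters.sort(key=lambda u: cosine(R[user], R[u]), reverse=True)
    top = raters[:k]
    weights = np.array([cosine(R[user], R[u]) for u in top])
    return float(weights @ R[top, item] / weights.sum())

score = predict(user=0, item=2)  # user 0 has not rated item 2
```

Here user 0's closest neighbor rated item 2 poorly, so the predicted score is low; swapping in item-based similarity or a regression model changes the technique category in the taxonomy above.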

Usefulness of Data Mining in Criminal Investigation (데이터 마이닝의 범죄수사 적용 가능성)

  • Kim, Joon-Woo; Sohn, Joong-Kweon; Lee, Sang-Han
    • Journal of Forensic and Investigative Science / v.1 no.2 / pp.5-19 / 2006
  • Data mining is an information extraction activity that discovers hidden facts contained in databases. Using a combination of machine learning, statistical analysis, modeling techniques, and database technology, data mining finds patterns and subtle relationships in data and infers rules that allow the prediction of future results. Typical applications include market segmentation, customer profiling, fraud detection, evaluation of retail promotions, and credit risk analysis. Law enforcement agencies deal with massive amounts of data when investigating crime, and the amount is increasing with computerized data processing; the new challenge confronting us is to discover knowledge in that data. Data mining can be applied in criminal investigation to find offenders through the analysis of complex relational data structures and free text, using criminal records or statement texts. This study aimed to evaluate the possible applications of data mining, and its limitations, in practical criminal investigation. Clustering of criminal cases is feasible for habitual crimes such as fraud and burglary, where data mining can identify the crime pattern. Neural network modeling, one of the data mining tools, can be applied to matching a suspect's photograph or handwriting against those of convicts, or to criminal profiling. A case study of practical insurance fraud showed that data mining was useful against organized crime such as gangs, terrorism, and money laundering. However, the products of data mining in criminal investigation should be evaluated cautiously, because data mining offers clues rather than conclusions. Legal regulation is needed to control abuse by law enforcement agencies and to protect personal privacy and human rights.
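The case-clustering idea above can be sketched with k-means. The two numeric "case features" below (e.g. hour of offense, scaled loss amount) and the two well-separated crime groups are invented for illustration; real case data would need careful feature engineering.

```python
# Hedged sketch: grouping invented case records by modus operandi features.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# Illustrative columns: hour of offense, loss amount (scaled)
night_burglaries = rng.normal(loc=[2.0, 0.5], scale=0.3, size=(50, 2))
daytime_frauds = rng.normal(loc=[14.0, 3.0], scale=0.3, size=(50, 2))
cases = np.vstack([night_burglaries, daytime_frauds])

km = KMeans(n_clusters=2, n_init=10, random_state=3).fit(cases)
labels = km.labels_  # cluster membership for each case record
```

When habitual offenders produce consistent patterns, cases from the same series fall into the same cluster, which is the clue (not the conclusion) that the abstract cautions about.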


An Integrated Model based on Genetic Algorithms for Implementing Cost-Effective Intelligent Intrusion Detection Systems (비용효율적 지능형 침입탐지시스템 구현을 위한 유전자 알고리즘 기반 통합 모형)

  • Lee, Hyeon-Uk; Kim, Ji-Hun; Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems / v.18 no.1 / pp.125-141 / 2012
  • These days, malicious attacks and hacks on networked systems are increasing dramatically, and their patterns are changing rapidly. Consequently, it has become more important to handle them appropriately, and there is substantial interest in and demand for effective network security systems such as intrusion detection systems, which detect, identify, and respond appropriately to unauthorized or abnormal activities. Conventional intrusion detection systems have generally been designed using experts' implicit knowledge of network intrusions or hackers' abnormal behaviors. Although they perform well under normal conditions, they cannot handle new or unknown attack patterns. As a result, recent studies on intrusion detection systems use artificial intelligence techniques, which can respond proactively to unknown threats. Researchers have long adopted and tested various artificial intelligence techniques, such as artificial neural networks, decision trees, and support vector machines, to detect network intrusions, but most have applied these techniques singly, even though combining them may lead to better detection. For this reason, we propose a new integrated model for intrusion detection, designed to combine the prediction results of four different binary classification models, namely logistic regression (LOGIT), decision trees (DT), artificial neural networks (ANN), and support vector machines (SVM), which may be complementary to each other. Genetic algorithms (GA) are used as the tool for finding the optimal combining weights. The proposed model is built in two steps. In the first step, the integration model with the smallest prediction error (i.e., erroneous classification rate) is generated. In the second step, the model searches for the optimal classification threshold for flagging intrusions, the one that minimizes the total misclassification cost. Calculating this cost requires understanding the asymmetric error cost structure of intrusion detection. There are two common error types. The first is the False Positive Error (FPE), in which normal activity is misjudged as an intrusion, resulting in unnecessary responses. The second is the False Negative Error (FNE), which misjudges malicious programs as normal. Compared with the FPE, the FNE is more damaging, so the total misclassification cost is affected more by the FNE than by the FPE. To validate the practical applicability of our model, we applied it to a real-world network intrusion detection dataset collected from the IDS sensor of an official institution in Korea from January to June 2010. We collected 15,000 log records in total and selected 10,000 samples from them by random sampling. We also compared our model's results with those of the single techniques to confirm the superiority of the proposed model: LOGIT and DT were run using PASW Statistics v18.0, ANN using Neuroshell R4.0, and SVM using LIBSVM v2.90, a free tool for training SVM classifiers. The empirical results showed that the proposed GA-based model outperformed all the comparative models in detecting network intrusions, both in accuracy and in total misclassification cost. Consequently, our study may contribute to building cost-effective intelligent intrusion detection systems.
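The first step above, a GA searching for combining weights over the four base classifiers, can be sketched compactly. The base-model outputs below are simulated noisy probabilities rather than trained LOGIT/DT/ANN/SVM models, and the population size, generation count, and mutation scale are illustrative choices, not the paper's settings.

```python
# Hedged sketch: a tiny genetic algorithm optimizing ensemble weights.
import numpy as np

rng = np.random.default_rng(5)
n = 400
y = rng.integers(0, 2, size=n)
# Simulated probability outputs of four base classifiers (noisy views of y);
# smaller noise scale = stronger classifier
P = np.clip(y[:, None] + rng.normal(scale=[0.3, 0.4, 0.5, 0.6], size=(n, 4)), 0, 1)

def fitness(w):
    """Accuracy of the weighted-average ensemble at a 0.5 threshold."""
    combined = P @ (w / w.sum())
    return np.mean((combined >= 0.5) == y)

pop = rng.random((30, 4))                      # population of weight vectors
for _ in range(40):                            # generations
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]    # selection: keep the fittest
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(10, size=2)]
        child = np.where(rng.random(4) < 0.5, a, b)   # uniform crossover
        child += rng.normal(scale=0.05, size=4)       # mutation
        children.append(np.clip(child, 1e-6, None))   # keep weights positive
    pop = np.vstack([parents, *children])

best_w = max(pop, key=fitness)  # fittest weight vector found
```

The paper's second step would then sweep the decision threshold of this weighted ensemble against the asymmetric FNE/FPE costs, as described above.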

A Study on the Effect of Network Centralities on Recommendation Performance (네트워크 중심성 척도가 추천 성능에 미치는 영향에 대한 연구)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.23-46 / 2021
  • Collaborative filtering, often used in personalized recommendation, is recognized as a very useful technique for finding similar customers and recommending products to them based on their purchase histories. However, the traditional collaborative filtering technique has difficulty calculating similarities for new customers or products, because it computes similarity from direct connections and common features among customers. For this reason, hybrid techniques were designed that also use content-based filtering, and efforts have been made to solve the problem by applying the structural characteristics of social networks: similarities are calculated indirectly, through the similar customers placed between two customers. That is, a customer network is built from purchase data, and the similarity between two customers is calculated from the features of the network that indirectly connects them. Such similarity can be used as a measure to predict whether a target customer will accept a recommendation, and the centrality metrics of the network can be used in its calculation. Different centrality metrics matter in that they may affect recommendation performance differently; furthermore, in this study the effect of these metrics on recommendation performance may vary depending on the recommender algorithm. In addition, recommendation techniques using network analysis can be expected to increase recommendation performance not only for new customers or products but for all customers and products. By treating a customer's purchase of an item as a link between the customer and the item on the network, predicting user acceptance of a recommendation becomes predicting whether a new link will be created between them. Since this is a binary classification problem of whether the link is engaged or not, decision tree, k-nearest neighbors (KNN), logistic regression, artificial neural network, and support vector machine (SVM) models were selected for the research. The performance evaluation used order data collected from an online shopping mall over four years and two months: the first three years and eight months of data were used to construct the social network, and the following four months' records were used to train and evaluate the recommender models. Experiments applying the centrality metrics to each model show that the recommendation acceptance rates of the centrality metrics differ meaningfully across algorithms. This work analyzed four commonly used centrality metrics: degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality. Eigenvector centrality recorded the lowest performance in all models except the support vector machine. Closeness centrality and betweenness centrality showed similar performance across all models. Degree centrality ranked in the middle overall, with betweenness centrality always ranking above it. Finally, closeness centrality showed distinct performance differences across models: it ranked first, with numerically high performance, in logistic regression, artificial neural network, and decision tree, but ranked very low, with poor performance, in the support vector machine and KNN models. As the experimental results reveal, in a classification model, network centrality metrics over a subnetwork connecting two nodes can effectively predict the connectivity between the two nodes in a social network, and each metric performs differently depending on the classification model. This result implies that choosing appropriate metrics for each algorithm can achieve higher recommendation performance. In general, betweenness centrality can guarantee a high level of performance in any model, and closeness centrality could be considered to obtain higher performance for certain models.
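Two of the four centrality metrics discussed above, degree and closeness, can be computed by hand on a toy graph to make the definitions concrete. The adjacency list below is invented; in the study the nodes would be customers and items linked by purchases.

```python
# Hedged sketch: degree and closeness centrality on a small invented graph.
from collections import deque

graph = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B"],
    "D": ["B", "E"],
    "E": ["D"],
}
n = len(graph)

def degree_centrality(node):
    """Fraction of the other nodes directly linked to `node`."""
    return len(graph[node]) / (n - 1)

def closeness_centrality(node):
    """Inverse of the average shortest-path distance from `node` (via BFS)."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        cur = queue.popleft()
        for nb in graph[cur]:
            if nb not in dist:
                dist[nb] = dist[cur] + 1
                queue.append(nb)
    return (n - 1) / sum(dist[v] for v in dist if v != node)

ranking = sorted(graph, key=closeness_centrality, reverse=True)
```

Node B, which bridges the two halves of the graph, scores highest on both metrics here; betweenness and eigenvector centrality, the study's other two metrics, would require shortest-path counting and an adjacency-matrix eigenvector respectively.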