• Title/Summary/Keyword: Decision Making Tree Model


Predicting Corporate Bankruptcy using Simulated Annealing-based Random Forest (시뮬레이티드 어니일링 기반의 랜덤 포레스트를 이용한 기업부도예측)

  • Park, Hoyeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.155-170
    • /
    • 2018
  • Predicting a company's financial bankruptcy is traditionally one of the most crucial forecasting problems in business analytics. In previous studies, prediction models have been proposed by applying or combining statistical and machine learning-based techniques. In this paper, we propose a novel intelligent prediction model based on simulated annealing, one of the best-known optimization techniques. Simulated annealing is known to offer optimization performance comparable to that of genetic algorithms. Nevertheless, since there has been little research on prediction and classification for business decision-making problems using simulated annealing, it is meaningful to confirm the usefulness of the proposed model in business analytics. In this study, we use a combined model of simulated annealing and machine learning to select the input features of the bankruptcy prediction model. Typical ways of combining optimization and machine learning techniques are feature selection, feature weighting, and instance selection. This study proposes a combined model for feature selection, the most widely studied of the three. To confirm the superiority of the proposed model, we apply it to real-world financial data from Korean companies and analyze the results. The results show that the predictive accuracy of the proposed model is better than that of the naïve model. Notably, the performance is significantly improved compared with the traditional decision tree, random forests, artificial neural network, SVM, and logistic regression models.
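The feature-selection loop this abstract describes, simulated annealing searching over input-feature subsets scored by a machine learning model, can be sketched minimally as follows. The `toy_score` objective stands in for the paper's cross-validated classifier accuracy, and all parameter values here are illustrative assumptions, not the authors' settings.

```python
import math
import random

def sa_feature_selection(score, n_features, t0=1.0, cooling=0.95, steps=200, seed=42):
    """Simulated annealing over feature subsets (sets of column indices).

    `score` maps a frozenset of selected features to a value to maximize
    (cross-validated model accuracy in the paper; a toy proxy here)."""
    rng = random.Random(seed)
    current = frozenset(i for i in range(n_features) if rng.random() < 0.5)
    cur_score = score(current)
    best, best_score = current, cur_score
    t = t0
    for _ in range(steps):
        cand = current ^ {rng.randrange(n_features)}  # neighbor: toggle one feature
        cand_score = score(cand)
        delta = cand_score - cur_score
        # always accept improvements; accept worse moves with Boltzmann probability
        if delta >= 0 or rng.random() < math.exp(delta / t):
            current, cur_score = cand, cand_score
            if cur_score > best_score:
                best, best_score = current, cur_score
        t *= cooling  # geometric cooling schedule
    return best, best_score

# toy stand-in for classifier accuracy: features {0, 2, 5} are informative,
# and every irrelevant feature kept costs 0.1
INFORMATIVE = {0, 2, 5}
def toy_score(subset):
    return len(subset & INFORMATIVE) - 0.1 * len(subset - INFORMATIVE)

best_subset, best_score = sa_feature_selection(toy_score, n_features=10)
```

In the paper's setting, `score` would wrap training and validating a random forest on the chosen columns, which is what makes the search a feature-selection wrapper rather than a generic optimizer.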

Development of Prediction Model to Improve Dropout of Cyber University (사이버대학 중도탈락 개선을 위한 예측모형 개발)

  • Park, Chul
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.7
    • /
    • pp.380-390
    • /
    • 2020
  • Cyber-universities have a higher dropout rate among freshmen than students in their twenties at offline-based universities, owing to various educational factors such as social background, economic factors, IT knowledge, and IT utilization ability. These students require dropout prevention and improvement methods different from those of offline-based universities. This study examined the main factors affecting dropout during the first semesters of 2017 and 2018 at A Cyber University, identifying management and counseling factors with a 'Decision Tree Analysis Model'. The management and counseling factors were presented as decision-making methods and weekly methods. As a result, a 'Dropout Improvement Model' was implemented and applied to cyber-university freshmen in the first semester of 2019. The dropout rate of freshmen under the 'Dropout Improvement Model' decreased by 4.2%, and the learning-persistence rate increased by 11.4%. This study applied a questionnaire survey, and the cyber-university students' LMS (Learning Management System) learning results were analyzed objectively. On the other hand, the students' learning results were analyzed only quantitatively; qualitative analysis was not reflected, so further study is necessary. The 'Dropout Improvement Model' of this study can be applied to help improve the dropout rate and learning-persistence rate of cyber-universities.

The Detection of Online Manipulated Reviews Using Machine Learning and GPT-3 (기계학습과 GPT-3를 사용한 조작된 리뷰의 탐지)

  • Chernyaeva, Olga;Hong, Taeho
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.4
    • /
    • pp.347-364
    • /
    • 2022
  • Fraudulent companies or sellers strategically manipulate reviews to influence customers' purchase decisions; therefore, the reliability of reviews has become crucial for customer decision-making. Since customers increasingly rely on online reviews for detailed information about products or services before purchasing, many researchers have focused on detecting manipulated reviews. The main difficulty, however, is obtaining enough manipulated-review data to train machine learning techniques. The number of manipulated reviews is also small compared with the number of non-manipulated reviews, so a class imbalance problem occurs: the under-represented class can hamper a model's accuracy, and solving the imbalance is important for building an accurate manipulated-review detector. Thus, we propose an OpenAI-based review generation model to address the manipulated-review imbalance problem and thereby enhance the accuracy of manipulated-review detection. In this research, we applied GPT-3, a novel autoregressive language model, to generate reviews based on manipulated reviews. We found that using GPT-3 to oversample manipulated reviews recovers a satisfactory portion of the performance loss and yields better classification performance (logit, decision tree, neural networks) than traditional oversampling models such as random oversampling and SMOTE.
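The random oversampling baseline that the GPT-3 generator is compared against can be sketched in a few lines. The review strings and counts below are hypothetical, and the paper's actual pipeline generates new synthetic reviews with GPT-3 rather than duplicating existing ones as this baseline does.

```python
import random

def random_oversample(samples, labels, seed=0):
    """Balance a dataset by duplicating minority-class examples until every
    class matches the majority count (the 'random oversampling' baseline)."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    majority_size = max(len(xs) for xs in by_class.values())
    out_x, out_y = list(samples), list(labels)
    for y, xs in by_class.items():
        for _ in range(majority_size - len(xs)):
            out_x.append(rng.choice(xs))  # duplicate a random minority sample
            out_y.append(y)
    return out_x, out_y

# hypothetical tiny corpus: 5 genuine reviews (0), 1 manipulated review (1)
X = ["r1", "r2", "r3", "r4", "r5", "m1"]
y = [0, 0, 0, 0, 0, 1]
X_bal, y_bal = random_oversample(X, y)  # both classes now have 5 examples
```

SMOTE interpolates between minority neighbors instead of duplicating, and the paper's GPT-3 approach goes further by generating entirely new review text; both slot into the same place in this pipeline.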

Improving Performance of Recommendation Systems Using Topic Modeling (사용자 관심 이슈 분석을 통한 추천시스템 성능 향상 방안)

  • Choi, Seongi;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.3
    • /
    • pp.101-116
    • /
    • 2015
  • Recently, due to the development of smart devices and social media, vast amounts of information in various forms have accumulated. In particular, considerable research effort is being directed toward analyzing unstructured big data to resolve various social problems, and the focus of data-driven decision-making is shifting from structured data analysis to unstructured data analysis. In the field of recommendation systems, a typical area of data-driven decision-making, the need to use unstructured data to improve system performance has also steadily increased. Approaches to improving the performance of recommendation systems fall into two categories: improving algorithms and acquiring useful, high-quality data. Traditionally, most efforts were made through the former approach, while the latter has attracted relatively little attention. In this sense, efforts to utilize unstructured data from various sources are timely and necessary. In particular, as the interests of users are directly connected with their needs, identifying user interests through unstructured big data analysis can be a key to improving the performance of recommendation systems. This study therefore proposes a methodology for improving recommendation systems by measuring user interests. Specifically, it proposes a method to quantify user interests by analyzing internet usage patterns and to predict repurchase based on the discovered preferences. There are two important modules in this study. The first module predicts the repurchase probability of each category by analyzing users' purchase history; we include it in our research scope to compare the accuracy of the traditional purchase-based prediction model with the new model presented in the second module. This procedure extracts the purchase history of users.
The core part of our methodology is in the second module, which extracts users' interests by analyzing the news articles they have read. The second module constructs a correspondence matrix between topics and news articles by performing topic modeling on real-world news articles. It then analyzes users' news access patterns and constructs a correspondence matrix between articles and users. By merging the results of these processes, we obtain a correspondence matrix between users and topics, which describes users' interests in a structured manner. Finally, using this matrix, the second module builds a model for predicting the repurchase probability of each category. In this paper, we also provide the results of our performance evaluation. The data used in our experiments are as follows. We acquired web transaction data of 5,000 panels from a company that specializes in analyzing internet-site rankings. First, we extracted 15,000 URLs of news articles published from July 2012 to June 2013 from the original data and crawled the main contents of those articles. We then selected 2,615 users who had read at least one of the extracted articles; among them, 359 target users had purchased at least one item from our target shopping mall 'G'. In the experiments, we analyzed the purchase history and news access records of these 359 internet users. The performance evaluation showed that our prediction model using both user interests and purchase history outperforms a model using only purchase history in terms of misclassification ratio. In detail, our model outperformed the traditional one in the appliance, beauty, computer, culture, digital, fashion, and sports categories when artificial neural network-based models were used.
Similarly, our model outperformed the traditional one in the beauty, computer, digital, fashion, food, and furniture categories when decision tree-based models were used, although the improvement was very small.
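The chain of correspondence matrices in the second module (users × articles combined with articles × topics to give users × topics) reduces to a matrix product. Below is a minimal sketch with invented toy read-counts and topic weights; the paper derives the real weights from topic modeling on crawled news articles.

```python
def matmul(a, b):
    """Plain-list matrix product: (m x n) @ (n x p) -> (m x p)."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

# users x articles: how often each user accessed each article (toy counts)
user_article = [[1, 0, 2],
                [0, 1, 0]]

# articles x topics: per-article topic weights from topic modeling (toy values)
article_topic = [[0.9, 0.1],
                 [0.2, 0.8],
                 [0.7, 0.3]]

# users x topics: each user's aggregate interest in each topic
user_topic = matmul(user_article, article_topic)
```

Each row of `user_topic` is the structured interest profile the abstract describes, which then feeds the repurchase-probability model alongside purchase history.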

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.21-44
    • /
    • 2018
  • In recent years, the rapid development of internet technology and the popularization of smart devices have resulted in massive amounts of text data, produced and distributed through various media platforms such as the World Wide Web, internet news feeds, microblogs, and social media. However, this enormous amount of easily obtained information lacks organization, a problem that has drawn the interest of many researchers and that requires the ability to classify relevant information; hence, text classification was introduced. Text classification is a challenging task in modern data analysis, in which a text document must be assigned to one or more predefined categories or classes. Different techniques are available for text classification, such as K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machine, Decision Tree, and Artificial Neural Network. However, when dealing with a huge amount of text data, model performance and accuracy become a challenge: depending on the type of words used in the corpus and the type of features created for classification, the performance of a text classification model can vary. Most previous attempts propose a new algorithm or modify an existing one, a line of research that can be said to have reached its limits. In this study, rather than proposing or modifying an algorithm, we focus on modifying how the data are used. It is widely known that classifier performance is influenced by the quality of the training data upon which the classifier is built. Real-world datasets usually contain noise, which can affect the decisions made by classifiers built from them.
In this study, we consider that data from different domains, i.e., heterogeneous data, may carry noise-like characteristics that can be utilized in the classification process. To build a classifier, machine learning algorithms are applied under the assumption that the characteristics of the training data and the target data are the same or very similar. However, in the case of unstructured data such as text, the features are determined by the vocabulary of the documents; if the viewpoints of the training data and target data differ, the features may differ between the two. In this study, we attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into the process of constructing it. Data coming from various sources are likely formatted differently, which causes difficulties for traditional machine learning algorithms because they were not developed to recognize different types of data representation at once and generalize over them together. Therefore, to utilize heterogeneous data in the learning process of the document classifier, we apply semi-supervised learning. However, unlabeled data may degrade the performance of the document classifier, so we further propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA) to select only the documents that contribute to the accuracy improvement of the classifier. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data. The most confident classification rules are selected and applied for the final decision making.
In this paper, three different types of real-world data sources were used: news, Twitter, and blogs.
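The final decision step of RSESLA, keeping only the most confident classification rule across views, can be illustrated with a deliberately simplified sketch. The `(label, confidence)` view outputs below are hypothetical, and the full algorithm also performs the semi-supervised document selection the abstract describes.

```python
def most_confident_decision(view_predictions):
    """Simplified RSESLA-style final decision: each 'view' (a classifier
    built on one feature manipulation / data source) reports a
    (label, confidence) pair; keep the label of the most confident rule."""
    label, _confidence = max(view_predictions, key=lambda lc: lc[1])
    return label

# hypothetical outputs of three views for one document
views = [("sports", 0.62), ("politics", 0.91), ("sports", 0.70)]
final_label = most_confident_decision(views)  # the 0.91-confidence rule wins
```

A majority vote would instead return "sports" here; selecting by confidence is what lets a single strong view override several weak ones.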

A Development of Defect Prediction Model Using Machine Learning in Polyurethane Foaming Process for Automotive Seat (머신러닝을 활용한 자동차 시트용 폴리우레탄 발포공정의 불량 예측 모델 개발)

  • Choi, Nak-Hun;Oh, Jong-Seok;Ahn, Jong-Rok;Kim, Key-Sun
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.22 no.6
    • /
    • pp.36-42
    • /
    • 2021
  • With recent developments of the Fourth Industrial Revolution, the manufacturing industry has changed rapidly. Through hyper-connectivity and super-intelligence, key aspects of the Fourth Industrial Revolution, machine learning can be used to predict defects during the foam-making process. Polyol and isocyanate are the components of polyurethane foam, and much research has shown that the specific mixture ratio and temperature can affect the characteristics of the products. Based on these characteristics, this study collects data on each factor during the foam-making process and applies machine learning to predict defects. The algorithms used are the decision tree, kNN, and an ensemble algorithm, trained on 5,147 cases. On 1,000 validation cases, the learning results show up to 98.5% accuracy with the ensemble algorithm. The model can therefore identify defects in currently produced parts by collecting real-time data on each factor during the foam-making process, and controlling each of the factors may improve the defect rate.
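The ensemble step can be illustrated with a minimal hard-voting sketch; the base-learner votes below are hypothetical, and the abstract does not specify which ensemble algorithm (voting, bagging, boosting) the study actually used.

```python
def majority_vote(predictions):
    """Hard-voting ensemble: combine several base learners' defect (1) /
    no-defect (0) votes into one decision, breaking ties toward 'defect'
    since missing a defect is the costlier error in this setting."""
    ones = sum(predictions)
    return 1 if ones * 2 >= len(predictions) else 0

# hypothetical votes from three base learners (e.g. decision tree, kNN, ...)
decision = majority_vote([1, 0, 1])  # two of three flag a defect
```

In practice each vote would come from a model trained on the process factors (mixture ratio, temperature, etc.) the abstract lists.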

Dynamic quantitative risk assessment of accidents induced by leakage on offshore platforms using DEMATEL-BN

  • Meng, Xiangkun;Chen, Guoming;Zhu, Gaogeng;Zhu, Yuan
    • International Journal of Naval Architecture and Ocean Engineering
    • /
    • v.11 no.1
    • /
    • pp.22-32
    • /
    • 2019
  • On offshore platforms, oil and gas leaks are apt to be the initial events of major accidents that may result in significant loss of life and property damage. To prevent accidents induced by leakage, it is vital to perform a case-specific and accurate risk assessment. This paper presents an integrated method of Dynamic Quantitative Risk Assessment (DQRA), using the Decision Making Trial and Evaluation Laboratory (DEMATEL) and a Bayesian Network (BN), for evaluating system vulnerabilities and predicting the occurrence probabilities of accidents induced by leakage. In the method, three-level indicators are established to identify factors, events, and subsystems that may lead to leakage, fire, and explosion. The critical indicators that directly influence the evolution of risk are identified using DEMATEL. Then, a sequential model is developed to describe the escalation of initial events using an Event Tree (ET), which is converted into a BN to calculate the posterior probabilities of indicators. Using newly introduced accident precursor data, the failure probabilities of safety barriers and basic factors, and the occurrence probabilities of different consequences, can be updated through the BN. The proposed method overcomes the limitations of traditional methods that cannot effectively utilize the operational data of platforms. This work shows trends of accident risks over time and provides useful information for risk control of floating marine platforms.
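The BN updating step, revising a probability when accident precursor data are observed, rests on Bayes' rule. Below is a minimal two-node sketch with invented probabilities; the paper's network covers many indicators, safety barriers, and consequence nodes rather than a single leak/alarm pair.

```python
def posterior(prior, likelihood_true, likelihood_false):
    """Bayes update: P(H | E) from P(H), P(E | H), and P(E | not H)."""
    num = likelihood_true * prior
    return num / (num + likelihood_false * (1.0 - prior))

# invented numbers: prior leak probability and gas-detector characteristics
p_leak = 0.01               # prior from generic failure data
p_alarm_given_leak = 0.95   # detector sensitivity
p_alarm_given_no_leak = 0.05  # false-alarm rate

# a gas alarm (an accident precursor) is observed on the platform
p_leak_post = posterior(p_leak, p_alarm_given_leak, p_alarm_given_no_leak)
```

Propagating such updates through the ET-derived network is what makes the assessment "dynamic": each new precursor observation shifts the occurrence probabilities of the downstream consequences.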

Development of Diameter Distribution Change and Site Index in a Stand of Robinia pseudoacacia, a Major Honey Plant (꿀샘식물 아까시나무의 지위지수 도출 및 직경분포 변화)

  • Kim, Sora;Song, Jungeun;Park, Chunhee;Min, Suhui;Hong, Sunghee;Yun, Junhyuk;Son, Yeongmo
    • Journal of Korean Society of Forest Science
    • /
    • v.111 no.2
    • /
    • pp.311-318
    • /
    • 2022
  • We conducted this study to derive the site index, a criterion for the planting of Robinia pseudoacacia, a honey plant, and to investigate the change in diameter distribution by the derived site index. We applied the Chapman-Richards equation model to estimate the site index of the Robinia pseudoacacia stand. The site index was distributed within the range 16-22 at a base age of 30 years. The fitness index of the site index estimation model was low, but we judged that there was no problem in its application because the residual distribution of the equation was not shifted to one side. We used the Weibull diameter distribution function to determine the diameter distribution of the stand by site index, with the mean diameter and the dominant tree height as independent variables; our analysis procedure was to estimate and recover the parameters of the Weibull function. Using the mean diameter and dominant tree height of the stand to show the distribution by diameter class, the fitness index for the dbh distribution estimation was about 80.5%. When the diameter distributions by site index were plotted at a stand age of 30 years, we found that the higher the site index, the further the curve of the diameter distribution moved to the right. This suggests that if a plantation were established on a high site index stand, considering the trees suited to the site, the growth of Robinia pseudoacacia would become active, and not only the production of wood but also the production of honey would increase. We therefore anticipate that the site index classification table and curves for this Robinia pseudoacacia stand will become a standard for decision making in the plantation and management of this tree.
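The Chapman-Richards model used to derive the site index, and the projection of an observed dominant height to the base age of 30 years, can be sketched as follows. The parameter values and the proportional (anamorphic) projection are illustrative assumptions, not the paper's fitted results.

```python
import math

def chapman_richards(age, a, b, c):
    """Chapman-Richards growth curve: H(t) = a * (1 - exp(-b*t))**c,
    where a is the asymptotic height, b the growth rate, c the shape."""
    return a * (1.0 - math.exp(-b * age)) ** c

# hypothetical guide-curve parameters (for illustration only)
A, B, C = 24.0, 0.08, 1.3

def site_index(dominant_height, age, base_age=30):
    """Project an observed dominant height to the base age along the guide
    curve, assuming curves for different sites are proportional."""
    return dominant_height * chapman_richards(base_age, A, B, C) / chapman_richards(age, A, B, C)
```

For example, a stand whose dominant height at age 30 is 20 m has site index 20 by definition, while a younger stand's height is scaled up along the curve; the paper's site indices of 16-22 correspond to such projected heights at base age 30.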

Taper Equations and Stem Volume Table of Eucalyptus pellita and Acacia mangium Plantations in Indonesia (인도네시아 유칼립투스 및 아카시아 조림지의 수간곡선식 및 수간재적표 조제)

  • Son, Yeong Mo;Kim, Hoon;Lee, Ho Young;Kim, Cheol Min;Kim, Cheol Sang;Kim, Jae Weon;Joo, Rin Won;Lee, Kyeong Hak
    • Journal of Korean Society of Forest Science
    • /
    • v.98 no.6
    • /
    • pp.633-638
    • /
    • 2009
  • This study was conducted to develop stem taper equations and stem volume tables for Eucalyptus pellita and Acacia mangium plantations in Kalimantan, Indonesia. To derive the most adequate taper equation for the plantations, three models (Max & Burkhart, Kozak, and Lee) were applied, and their fitness was statistically analyzed using the fitness index, bias, and standard error of bias. The results showed no significant difference among the three models, but the fitness index was slightly higher for the Kozak model, which was therefore chosen to generate the stem taper equations and stem volume tables for the Eucalyptus pellita and Acacia mangium plantations. The resulting stem volume table was compared with the local volume table used in the Kalimantan region, and no significant difference was found in the stem volume estimation. The results of this study are expected to provide good information about tree growth in overseas plantations and to support reliable decision-making for their management.

VKOSPI Forecasting and Option Trading Application Using SVM (SVM을 이용한 VKOSPI 일 중 변화 예측과 실제 옵션 매매에의 적용)

  • Ra, Yun Seon;Choi, Heung Sik;Kim, Sun Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.4
    • /
    • pp.177-192
    • /
    • 2016
  • Machine learning is a field of artificial intelligence. It refers to an area of computer science concerned with giving machines the ability to perform their own data analysis, decision making, and forecasting. One representative machine learning model is the artificial neural network, a statistical learning algorithm inspired by the neural network structure of biology. Other machine learning models include the decision tree, naïve Bayes, and SVM (support vector machine) models. Among them, we use the SVM model in this study because it is mainly used for classification and regression analysis, which fits our study well. The core principle of the SVM is to find a reasonable hyperplane that distinguishes different groups in the data space. Given information about the data in any two groups, the SVM model judges to which group new data belong based on the hyperplane obtained from the given data set. Thus, the greater the amount of meaningful data, the better the machine learning ability. In recent years, many financial experts have focused on machine learning, seeing the possibility of combining it with the financial field, where vast amounts of financial data exist. Machine learning techniques have proved powerful in describing non-stationary and chaotic stock price dynamics, and many studies have successfully forecast stock prices using machine learning algorithms. Recently, financial companies have begun to provide Robo-Advisor services (a compound of 'robot' and 'advisor') that perform various financial tasks through advanced algorithms using rapidly changing, huge amounts of data. A Robo-Advisor's main tasks are to advise investors according to their personal investment propensity and to manage their portfolios automatically.
In this study, we propose a method of forecasting the Korean volatility index, VKOSPI, using the SVM model, and apply it to real option trading to increase trading performance. VKOSPI is a measure of the future volatility of the KOSPI 200 index based on KOSPI 200 index option prices; it is similar to the VIX index in the United States, which is based on S&P 500 option prices. The Korea Exchange (KRX) calculates and announces the real-time VKOSPI index. VKOSPI behaves like ordinary volatility and affects option prices: the directions of VKOSPI and option prices show a positive relation regardless of the option type (call and put options with various strike prices). If volatility increases, all call and put option premiums increase, because the probability of the option being exercised increases. An investor can know, in real time, the change in an option price with respect to a change in volatility through Vega, the Black-Scholes measure of an option's sensitivity to changes in volatility. Therefore, accurate forecasting of VKOSPI movements is one of the important factors that can generate profit in option trading. In this study, we verified on real option data that an accurate forecast of VKOSPI can produce a large profit in real option trading. To the best of our knowledge, there have been no studies on predicting the direction of VKOSPI with machine learning and applying the prediction to actual option trading. We predicted daily VKOSPI changes with the SVM model and then opened an intraday option strangle position, which profits as option prices fall, only when VKOSPI was expected to decline during the day. We analyzed the results and tested whether the approach is applicable to real option trading based on the SVM's predictions.
The results showed that the prediction accuracy for VKOSPI was 57.83% on average and that positions were entered 43.2 times, less than half of the benchmark (100 times). A small number of trades indicates trading efficiency. In addition, the experiment showed that the trading performance was significantly higher than the benchmark.
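The hyperplane decision rule at the core of the SVM, as described in the abstract above, can be sketched minimally. The weights, bias, and the up/down labeling below are hypothetical, not the study's trained model, which would be fit to historical VKOSPI features.

```python
def svm_predict(w, b, x):
    """Linear SVM decision rule: report which side of the trained
    hyperplane w.x + b = 0 a new observation falls on
    (+1 = 'VKOSPI up', -1 = 'VKOSPI down' in this toy labeling)."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else -1

# hypothetical trained weights over two input features
w, b = [0.8, -0.5], 0.1
label = svm_predict(w, b, [1.0, 0.2])  # score = 0.8 - 0.1 + 0.1 = 0.8 -> +1
```

In the study's setup, a -1 (expected VKOSPI decline) prediction would be the trigger for entering the intraday strangle position; training finds `w` and `b` as the maximum-margin separator, which this sketch takes as given.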