• Title/Summary/Keyword: 10-fold cross-validation


A Study on Building a Credit Scoring Model Using Enterprise Human Resource Factors (기업 인적자원 관련 변수를 이용한 기업 신용점수 모형 구축에 관한 연구)

  • Lee, Yung-Seop;Park, Joo-Wan
    • The Korean Journal of Applied Statistics
    • /
    • v.20 no.3
    • /
    • pp.423-440
    • /
    • 2007
  • Although various models have been developed to establish enterprise credit scoring, no model so far has utilized enterprise human resource factors. The purpose of this study was to build an enterprise credit scoring model using such factors. The data used to measure the enterprise credit score combined the first-year research material of the HCCP, which surveys enterprise human resources, with the 2004 credit rating scores generated by the KIS Credit Scoring Model. The independent variables were chosen from the HCCP questionnaires based on McLagan's (1989) HR wheel model, and the credit score from the Korea Information Service was used as the dependent variable. The statistical method used for data analysis was logistic regression. The resulting model selected 22 variables: by large area, 6 variables in human resource development (HRD), 15 in human resource management (HRM), and 1 in other areas. Under 10-fold cross-validation, the misclassification rate and G-mean were 30.81 and 68.27, respectively. The decile with the highest response rate exceeded the decile with the lowest response rate by a factor of 6.08, with a decreasing tendency across deciles. The results therefore show that the proposed model is appropriate for measuring enterprise credit scores using enterprise human resource variables.
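The evaluation described above, 10-fold cross-validation of a logistic regression model scored by misclassification rate and G-mean, can be sketched as follows. This is an illustrative sketch on synthetic data, not the HCCP/KIS dataset; only the feature count mimics the 22-variable design.

```python
# Sketch: 10-fold cross-validated logistic regression scored by
# misclassification rate and G-mean. Synthetic data for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=500, n_features=22, random_state=0)

mis_rates, g_means = [], []
for train_idx, test_idx in StratifiedKFold(n_splits=10, shuffle=True,
                                           random_state=0).split(X, y):
    model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    tn, fp, fn, tp = confusion_matrix(y[test_idx], pred).ravel()
    mis_rates.append((fp + fn) / (tn + fp + fn + tp))
    # G-mean: geometric mean of sensitivity and specificity
    g_means.append(np.sqrt((tp / (tp + fn)) * (tn / (tn + fp))))

mis_rate = float(np.mean(mis_rates))
g_mean = float(np.mean(g_means))
```

The G-mean is useful here because credit data is often class-imbalanced; it penalizes a model that does well on one class at the expense of the other.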

A Comparative Analysis of Ensemble Learning-Based Classification Models for Explainable Term Deposit Subscription Forecasting (설명 가능한 정기예금 가입 여부 예측을 위한 앙상블 학습 기반 분류 모델들의 비교 분석)

  • Shin, Zian;Moon, Jihoon;Rho, Seungmin
    • The Journal of Society for e-Business Studies
    • /
    • v.26 no.3
    • /
    • pp.97-117
    • /
    • 2021
  • Predicting term deposit subscriptions is a representative financial marketing task in banks, and banks can build a prediction model using various kinds of customer information. In order to improve the classification accuracy for term deposit subscriptions, many studies have been conducted based on machine learning techniques. However, even when these models achieve satisfactory performance, utilizing them in industry is not easy if their decision-making process is not adequately explained. To address this issue, this paper proposes an explainable scheme for term deposit subscription forecasting. For this, we first construct several classification models using decision tree-based ensemble learning methods, which yield excellent performance on tabular data, such as random forest, gradient boosting machine (GBM), extreme gradient boosting (XGB), and light gradient boosting machine (LightGBM). We then analyze their classification performance in depth through 10-fold cross-validation. After that, we provide a rationale for interpreting the influence of customer information and the decision-making process by applying Shapley additive explanations (SHAP), an explainable artificial intelligence technique, to the best classification model. To verify the practicality and validity of our scheme, experiments were conducted with the bank marketing dataset provided by Kaggle; we applied SHAP to the GBM and LightGBM models, respectively, according to different dataset configurations and then performed analysis and visualization for explainable term deposit subscriptions.
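A rough sketch of the workflow above: 10-fold cross-validation of a tree-based ensemble followed by a feature-attribution step. scikit-learn's GradientBoostingClassifier and permutation_importance are used here as a lightweight stand-in for the paper's GBM/LightGBM plus SHAP pipeline (SHAP itself requires the shap package); the data is synthetic, not the Kaggle bank marketing set.

```python
# Sketch: 10-fold CV of a gradient boosting classifier, then a
# feature-attribution step via permutation importance (a stand-in for SHAP).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=400, n_features=10, random_state=1)

gbm = GradientBoostingClassifier(random_state=1)
cv_scores = cross_val_score(gbm, X, y, cv=10)          # 10-fold accuracy

gbm.fit(X, y)
imp = permutation_importance(gbm, X, y, n_repeats=5, random_state=1)
ranked = np.argsort(imp.importances_mean)[::-1]        # most influential first
```

Unlike permutation importance, SHAP additionally attributes each individual prediction to its input features, which is what enables the per-customer explanations described in the abstract.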

U-Net Cloud Detection for the SPARCS Cloud Dataset from Landsat 8 Images (Landsat 8 기반 SPARCS 데이터셋을 이용한 U-Net 구름탐지)

  • Kang, Jonggu;Kim, Geunah;Jeong, Yemin;Kim, Seoyeon;Youn, Youjeong;Cho, Soobin;Lee, Yangwon
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_1
    • /
    • pp.1149-1161
    • /
    • 2021
  • With the growing use of computer vision for satellite images, cloud detection using deep learning has also attracted attention recently. In this study, we conducted U-Net cloud detection modeling using the SPARCS (Spatial Procedures for Automated Removal of Cloud and Shadow) cloud dataset with image data augmentation, and carried out 10-fold cross-validation for an objective assessment of the model. In a blind test on 1,800 images of 512 by 512 pixels, the model achieved relatively high performance, with an accuracy of 0.821, a precision of 0.847, a recall of 0.821, an F1-score of 0.831, and an IoU (Intersection over Union) of 0.723. Although 14.5% of actual cloud shadows were misclassified as land and 19.7% of actual clouds were misidentified as land, this can be overcome by increasing the quality and quantity of the label datasets. Moreover, a state-of-the-art DeepLab V3+ model and the NAS (Neural Architecture Search) optimization technique can help cloud detection for CAS500 (Compact Advanced Satellite 500) in South Korea.
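The segmentation metrics reported above (accuracy, precision, recall, F1, IoU) all derive from the same four pixel-wise confusion counts. A minimal sketch on two made-up binary cloud masks, not SPARCS data:

```python
# Sketch: segmentation metrics from binary masks via confusion counts.
import numpy as np

def seg_metrics(y_true, y_pred):
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {
        "accuracy":  (tp + tn) / (tp + tn + fp + fn),
        "precision": tp / (tp + fp),
        "recall":    tp / (tp + fn),
        "f1":        2 * tp / (2 * tp + fp + fn),
        "iou":       tp / (tp + fp + fn),   # Intersection over Union
    }

y_true = np.array([[1, 1, 0], [0, 1, 0]])   # toy ground-truth cloud mask
y_pred = np.array([[1, 0, 0], [0, 1, 1]])   # toy predicted mask
m = seg_metrics(y_true, y_pred)             # iou = 2/4 = 0.5 here
```

Note that IoU is always the strictest of these scores, which is why the reported IoU (0.723) sits below the F1-score (0.831) on the same predictions.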

A TBM data-based ground prediction using deep neural network (심층 신경망을 이용한 TBM 데이터 기반의 굴착 지반 예측 연구)

  • Kim, Tae-Hwan;Kwak, No-Sang;Kim, Taek Kon;Jung, Sabum;Ko, Tae Young
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.23 no.1
    • /
    • pp.13-24
    • /
    • 2021
  • Tunnel boring machines (TBMs) are widely used for tunnel excavation in hard rock and soft ground. From the perspective of TBM-based tunneling, one of the main challenges is to drive the machine optimally under varying geological conditions, which can yield significant cost savings by reducing total operation time. Generally, drilling investigations are conducted to survey the geological ground before TBM tunneling. However, it is difficult to provide operators with precise ground information over the whole tunnel path, because drilling acquires only sparse, irregular samples around the path. To overcome this issue, in this study we proposed a geological type classification system using TBM operating data recorded at a 5 s sampling rate. We first categorized the various geological conditions (here limited to granite) into three geological types (i.e., rock, soil, and mixed type). Then, we applied preprocessing methods including outlier rejection, normalization, and input feature extraction. We adopted a deep neural network (DNN) with 6 hidden layers to classify the geological types based on the TBM operating data, and evaluated the classification system using 10-fold cross-validation. The average classification accuracy was 75.4% (on a total of 388,639 samples). Although accuracy still needs to be improved, our experimental results show that a geology classification technique based on TBM operating data could be used in real environments to complement sparse ground information.
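The pipeline described above (outlier rejection, normalization, a 6-hidden-layer DNN, 10-fold cross-validation) can be sketched as follows. scikit-learn's MLPClassifier stands in for the paper's DNN; the layer sizes, the 3-sigma outlier rule, and the synthetic three-class data are assumptions for illustration, not the TBM dataset.

```python
# Sketch: outlier rejection -> normalization -> 6-layer MLP -> 10-fold CV.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=8, n_classes=3,
                           n_informative=5, random_state=0)

# Simple outlier rejection: drop samples more than 3 std from the mean
z = np.abs((X - X.mean(axis=0)) / X.std(axis=0))
keep = (z < 3).all(axis=1)
X, y = X[keep], y[keep]

dnn = make_pipeline(
    StandardScaler(),                                  # normalization
    MLPClassifier(hidden_layer_sizes=(32,) * 6,        # 6 hidden layers
                  max_iter=300, random_state=0),
)
scores = cross_val_score(dnn, X, y, cv=10)             # 10-fold CV
mean_acc = float(scores.mean())
```

Putting the scaler inside the pipeline matters: it is refit on each training fold, so no statistics leak from the test fold into normalization.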

Simultaneous Optimization of a KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.139-157
    • /
    • 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy. Thus, bankruptcy prediction is an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research on bankruptcy prediction employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later on, many studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification involves combining multiple classifiers to obtain more accurate predictions than those obtained using individual models. Ensemble learning techniques are known to be very useful for improving the generalization ability of the classifier. Base classifiers in the ensemble must be as accurate and diverse as possible in order to enhance the generalization ability of an ensemble model. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and random subspace. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers of an ensemble. Each ensemble member is trained by a randomly chosen feature subspace from the original feature set, and predictions from each ensemble member are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but is very sensitive to changes in the feature space. For this reason, KNN is a good classifier for the random subspace method. The KNN random subspace ensemble model has been shown to be very effective for improving an individual KNN model. 
The k parameter of the KNN base classifiers and the feature subsets selected for the base classifiers play an important role in determining the performance of the KNN ensemble model. However, few studies have focused on optimizing the k parameter and feature subsets of base classifiers in the ensemble. This study proposed a new ensemble method that improves the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of the base classifiers. A genetic algorithm was used to optimize the KNN ensemble model and improve its prediction accuracy. The proposed model was applied to a bankruptcy prediction problem using a real dataset from Korean companies. The research data included 1,800 externally non-audited firms that filed for bankruptcy (900 cases) or non-bankruptcy (900 cases). Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent-sample t-test of each financial ratio as an input variable and bankruptcy or non-bankruptcy as the output variable. Of these, 24 financial ratios were selected using a logistic regression backward feature selection method. The complete dataset was separated into two parts: training and validation. The training dataset was further divided into two portions, one for training the model and the other for guarding against overfitting; the prediction accuracy on the latter was used as the fitness value in order to avoid overfitting. The validation dataset was used to evaluate the effectiveness of the final model. A 10-fold cross-validation was implemented to compare the performance of the proposed model with that of other models. To evaluate the effectiveness of the proposed model, its classification accuracy was compared with that of other models, and the Q-statistic values and average classification accuracies of the base classifiers were investigated.
The experimental results showed that the proposed model outperformed other models, such as the single model and random subspace ensemble model.
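The KNN random subspace ensemble described above can be sketched with scikit-learn's BaggingClassifier: with bootstrap disabled and max_features below 1.0, each base KNN is trained on a random feature subset, which is the random subspace method. The genetic-algorithm search over k and feature subsets is not reproduced here; the values of k, the subspace fraction, and the synthetic data are assumptions.

```python
# Sketch: KNN random subspace ensemble, evaluated by 10-fold CV.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# 24 features, echoing the 24 financial ratios retained in the study
X, y = make_classification(n_samples=400, n_features=24, random_state=0)

ensemble = BaggingClassifier(
    KNeighborsClassifier(n_neighbors=5),
    n_estimators=30,
    max_features=0.5,    # each base KNN sees a random half of the features
    bootstrap=False,     # subspace method: vary features, not samples
    random_state=0,
)
scores = cross_val_score(ensemble, X, y, cv=10)   # 10-fold CV as in the study
```

This setup exploits exactly the property the abstract notes: KNN is sensitive to the feature space, so randomizing feature subsets yields diverse base classifiers whose aggregated vote generalizes better than a single KNN.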

Building a Model to Estimate Soil Organic Carbon Using a Decision Tree Algorithm (의사결정나무를 이용한 토양유기탄소 추정 모델 제작)

  • Yoo, Su-Hong;Heo, Joon;Jung, Jae-Hoon;Han, Su-Hee
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.18 no.3
    • /
    • pp.29-35
    • /
    • 2010
  • Soil organic carbon (SOC), which aids forest formation and helps regulate carbon dioxide in the air, is an important factor influencing global warming. Excavating samples over an entire area is a very inefficient way to determine the distribution of SOC, so developing a suitable model for estimating its relative amount is of practical value. In the present study, a model based on a decision tree algorithm is introduced to estimate the amount of SOC while assessing influencing factors such as altitude, aspect, slope, and type of trees. The model was applied to a real site and validated by 10-fold cross-validation using two software packages, See5 and Weka. From the results given by See5, it can be concluded that the amount of SOC in surface layers is highly related to the type of trees, while in middle-depth layers it is dominated by both the type of trees and altitude. The estimation accuracy was 70.8% in surface layers and 64.7% in middle-depth layers. A similar result for surface layers was given by Weka, but in middle-depth layers aspect was found to be a meaningful factor along with type of trees and altitude; the estimation accuracy was 68.87% and 60.65% in surface and middle-depth layers, respectively. Based on these tests, the introduced model is considered useful for estimating SOC amounts and for producing SOC maps over wide areas.
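The See5/Weka workflow above, a decision tree over terrain attributes validated by 10-fold cross-validation, can be sketched in scikit-learn. The four features (altitude, aspect, slope, encoded tree type), the toy target rule, and the data are synthetic placeholders, not the study's field measurements.

```python
# Sketch: decision tree classifier over terrain features, 10-fold CV.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300
X = np.column_stack([
    rng.uniform(100, 1500, n),   # altitude (m)
    rng.uniform(0, 360, n),      # aspect (deg)
    rng.uniform(0, 45, n),       # slope (deg)
    rng.integers(0, 4, n),       # tree type (encoded)
])
y = (X[:, 0] > 800).astype(int)  # toy SOC class driven by altitude

tree = DecisionTreeClassifier(max_depth=4, random_state=0)
scores = cross_val_score(tree, X, y, cv=10)
accuracy = float(scores.mean())
```

A side benefit the study relies on: the fitted tree's split order reveals which factors dominate (here altitude, by construction), mirroring how See5 exposed tree type and altitude as the key variables per depth layer.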

Calibration of a Portable Particulate Matter-Monitoring Device using Web Query and Machine Learning

  • Loh, Byoung Gook;Choi, Gi Heung
    • Safety and Health at Work
    • /
    • v.10 no.4
    • /
    • pp.452-460
    • /
    • 2019
  • Background: Monitoring and control of PM2.5 are recognized as key to addressing health issues attributed to PM2.5. The availability of low-cost PM2.5 sensors has made it possible to introduce a number of portable PM2.5 monitors based on light scattering to the consumer market at an affordable price. The accuracy of light scattering-based PM2.5 monitors depends significantly on the method of calibration. A static calibration curve is the most popular calibration method for low-cost PM2.5 sensors, particularly because of its ease of application; its drawback, however, is a lack of accuracy. Methods: This study discusses the calibration of a low-cost PM2.5-monitoring device (PMD) to improve its accuracy and reliability for practical use. The proposed method is based on constructing a PM2.5 sensor network using the Message Queuing Telemetry Transport (MQTT) protocol and web queries of reference measurement data available at a government-authorized PM monitoring station (GAMS) in the Republic of Korea. Four machine learning (ML) algorithms, support vector machine, k-nearest neighbors, random forest, and extreme gradient boosting, were used as regression models to calibrate the PMD measurements of PM2.5. The performance of each ML algorithm was evaluated using stratified K-fold cross-validation, with a linear regression model as a reference. Results: Based on the performance of the ML algorithms used, regressing the output of the PMD to the PM2.5 concentration data available from the GAMS through web query was effective. The extreme gradient boosting algorithm showed the best performance, with a mean coefficient of determination (R2) of 0.78 and a standard error of 5.0 ㎍/㎥, corresponding to an 8% increase in R2 and a 12% decrease in root mean square error compared with the linear regression model. A minimum calibration period of 100 hours was found to be required to calibrate the PMD to its full capacity.
The proposed calibration method is limited in that the PMD must be located in the vicinity of the GAMS. As the number of PMDs participating in the sensor network increases, however, calibrated PMDs can serve as reference devices for nearby PMDs that require calibration, forming a calibration chain through the MQTT protocol. Conclusions: Calibration of a low-cost PMD, based on constructing a PM2.5 sensor network using the MQTT protocol and web queries of reference measurement data available at a GAMS, significantly improves the accuracy and reliability of the PMD, thereby making practical use of low-cost PMDs possible.
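The core regression step above, learning a mapping from raw sensor output to reference concentrations and comparing it with a linear baseline under K-fold cross-validation, can be sketched as follows. scikit-learn's GradientBoostingRegressor stands in for extreme gradient boosting, and the sensor data is simulated (a nonlinear bias plus noise), not GAMS measurements.

```python
# Sketch: sensor calibration by regression, boosted trees vs linear baseline.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
ref = rng.uniform(5, 80, 500)                              # reference PM2.5 (ug/m3)
raw = 0.7 * ref + 0.004 * ref**2 + rng.normal(0, 2, 500)   # simulated sensor output
X = raw.reshape(-1, 1)

r2_linear = cross_val_score(LinearRegression(), X, ref,
                            cv=5, scoring="r2").mean()
r2_gbm = cross_val_score(GradientBoostingRegressor(random_state=0), X, ref,
                         cv=5, scoring="r2").mean()
```

In a real deployment the feature matrix would also carry humidity, temperature, and time-of-day covariates, which is where tree ensembles tend to gain most over a static linear curve.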

Panamax Second-hand Vessel Valuation Model (파나막스 중고선가치 추정모델 연구)

  • Lim, Sang-Seop;Lee, Ki-Hwan;Yang, Huck-Jun;Yun, Hee-Sung
    • Journal of Navigation and Port Research
    • /
    • v.43 no.1
    • /
    • pp.72-78
    • /
    • 2019
  • The second-hand ship market provides shipping investors with immediate access to the freight market. When acquiring second-hand vessels, a precise estimate of the price is crucial to the decision-making process because it directly affects investors' future capital-cost burden. Previous studies on the second-hand market have mainly focused on market efficiency, and the number of papers on estimating second-hand vessel values is very limited. This study proposes an artificial neural network model that has not been attempted in previous studies. Six factors affecting the second-hand ship price, namely freight, new-building price, orderbook, scrap price, age, and vessel size, were identified through a literature review. The data comprise 366 real trading records of Panamax second-hand vessels reported to Clarkson between January 2016 and December 2018. Statistical filtering was carried out through correlation analysis and stepwise regression analysis, and three parameters, freight, age, and size, were selected. Ten-fold cross-validation was used to estimate the hyper-parameters of the artificial neural network model. The results confirm that the performance of the artificial neural network model is better than that of simple stepwise regression analysis. The application of a statistical verification process together with an artificial neural network model differentiates this paper from others. In addition, a scientific model that satisfies both statistical rationality and accuracy of results is expected to contribute to real-life practice.
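The hyper-parameter step above, tuning an ANN via 10-fold cross-validation over the three selected inputs, can be sketched with a small grid search. MLPRegressor, the grid values, and the synthetic freight/age/size data are assumptions for illustration, not the Clarkson records or the paper's actual architecture.

```python
# Sketch: 10-fold CV grid search over ANN hyper-parameters.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
n = 200
freight = rng.uniform(5, 20, n)     # freight-rate proxy
age = rng.uniform(0, 25, n)         # vessel age (years)
size = rng.uniform(60, 100, n)      # size (thousand dwt)
X = np.column_stack([freight, age, size])
X = (X - X.mean(axis=0)) / X.std(axis=0)   # scale inputs for the ANN
price = 2.0 * freight - 0.8 * age + 0.1 * size + rng.normal(0, 1, n)

search = GridSearchCV(
    MLPRegressor(max_iter=1000, random_state=0),
    param_grid={"hidden_layer_sizes": [(8,), (16,)], "alpha": [1e-4, 1e-2]},
    cv=10, scoring="r2",
)
search.fit(X, price)
best = search.best_params_     # hyper-parameters chosen by 10-fold CV
```

The cross-validated R2 from the search is what would then be compared against the stepwise regression baseline, as the study does.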

Forecasting the Precipitation of the Next Day Using Deep Learning (딥러닝 기법을 이용한 내일강수 예측)

  • Ha, Ji-Hun;Lee, Yong Hee;Kim, Yong-Hyuk
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.26 no.2
    • /
    • pp.93-98
    • /
    • 2016
  • For accurate precipitation forecasts, the choice of weather factors and the prediction method are very important. Recently, machine learning has been widely used for forecasting precipitation, and the artificial neural network, one of the machine learning techniques, has shown good performance. In this paper, we suggest a new method for forecasting precipitation using a DBN (deep belief network), one of the deep learning techniques. A DBN has the advantage that its initial weights are set by unsupervised learning, which compensates for a defect of artificial neural networks. We used past precipitation, temperature, and the parameters of the sun's and moon's motion as features for forecasting precipitation. The dataset consists of observation data measured over 40 years from the AWS in Seoul. Experiments were based on 8-fold cross-validation. The model outputs probabilities for the test dataset, so a threshold was used to decide precipitation occurrence. CSI and Bias were used to indicate the accuracy of the precipitation forecasts. Our experimental results showed that the DBN performed better than an MLP.
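The CSI and Bias scores mentioned above are computed from a rain/no-rain contingency table (hits, misses, false alarms). A minimal sketch with made-up counts:

```python
# Sketch: standard precipitation verification scores from a contingency table.
def csi(hits, misses, false_alarms):
    """Critical Success Index: hits / (hits + misses + false alarms)."""
    return hits / (hits + misses + false_alarms)

def bias(hits, misses, false_alarms):
    """Frequency bias: forecast rain events divided by observed rain events."""
    return (hits + false_alarms) / (hits + misses)

h, m, f = 42, 18, 10
csi_score = csi(h, m, f)    # 42 / 70 = 0.6
bias_score = bias(h, m, f)  # 52 / 60 ~= 0.867 (slight under-forecasting)
```

CSI ignores correct "no rain" forecasts entirely, which makes it robust for rare-event verification, while Bias above or below 1 indicates over- or under-forecasting of rain frequency.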

Application of Machine Learning to Predict Weight Loss in Overweight, and Obese Patients on Korean Medicine Weight Management Program (한의 체중 조절 프로그램에 참여한 과체중, 비만 환자에서의 머신러닝 기법을 적용한 체중 감량 예측 연구)

  • Kim, Eunjoo;Park, Young-Bae;Choi, Kahye;Lim, Young-Woo;Ok, Ji-Myung;Noh, Eun-Young;Song, Tae Min;Kang, Jihoon;Lee, Hyangsook;Kim, Seo-Young
    • The Journal of Korean Medicine
    • /
    • v.41 no.2
    • /
    • pp.58-79
    • /
    • 2020
  • Objectives: The purpose of this study is to predict weight loss by applying machine learning to real-world clinical data from overweight and obese adults on a weight-loss program at 4 Korean Medicine obesity clinics. Methods: From January 2017 to May 2019, we collected data from overweight and obese adults (BMI ≥ 23 kg/m2) who registered for a 3-month Gamitaeeumjowi-tang prescription program. Predictive analysis was conducted at the time of three prescriptions, and the expected reduction rate and reduced weight at the next prescription were predicted as binary classifications (classification benchmarks: highest quartile, median, lowest quartile). For the median, further analysis was conducted after applying a variable selection method. The data sets comprised 25,988 records in the first analysis, 6,304 in the second, and 833 in the third. 5-fold cross-validation was used to prevent overfitting. Results: Prediction accuracy increased from the 1st to the 2nd and 3rd analyses. After selecting variables based on the median, the artificial neural network showed the highest accuracy in the 1st (54.69%), 2nd (73.52%), and 3rd (81.88%) prediction analyses based on the reduction rate. The prediction performance was additionally confirmed through AUC; the Random Forest showed the highest AUC in the 1st (0.640), 2nd (0.816), and 3rd (0.939) prediction analyses based on reduced weight. Conclusions: The prediction of weight loss by applying machine learning showed that accuracy improved when initial weight-loss information was used. The model could potentially be used to screen patients who need intensive intervention when expected weight loss is low.
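The framing above, turning a continuous outcome (weight-loss rate) into a binary target at a benchmark such as the median and then evaluating with 5-fold cross-validation on accuracy and AUC, can be sketched as follows. The data is synthetic, not the clinic records, and Random Forest is used only because the study reports it among its best models.

```python
# Sketch: median-split binary target, 5-fold CV on accuracy and AUC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))                      # baseline patient features
loss_rate = X[:, 0] * 2 + rng.normal(size=400)     # toy continuous outcome
y = (loss_rate > np.median(loss_rate)).astype(int) # binary target at the median

rf = RandomForestClassifier(n_estimators=100, random_state=0)
acc = cross_val_score(rf, X, y, cv=5, scoring="accuracy").mean()
auc = cross_val_score(rf, X, y, cv=5, scoring="roc_auc").mean()
```

A median split guarantees balanced classes, which is why accuracy is a meaningful score here; quartile benchmarks would be imbalanced, and AUC is the safer comparison across those settings.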