• Title/Summary/Keyword: Sequential investment


Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.29-45 / 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings with statistical and machine learning techniques has been a popular research topic. The statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. However, one major drawback is that they rely on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited the application of traditional statistics to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). In particular, SVM is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories; it is simple enough to be analyzed mathematically and achieves high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound on the generalization error. In addition, the solution of SVM is a global optimum, so overfitting is unlikely to occur, and SVM does not require many training samples, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have shown that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance. First, SVM was originally proposed for binary-class classification. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance on multi-class problems as much as SVM does on binary-class problems. Second, approximation algorithms (e.g., decomposition methods and the sequential minimal optimization algorithm) can reduce the computation time of multi-class problems, but they can deteriorate classification performance. Third, multi-class prediction suffers from the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another. Such data sets often cause a default classifier to be built due to a skewed boundary, reducing classification accuracy. SVM ensemble learning is one of the machine learning methods for coping with these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the widely used ensemble learning techniques: it constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations through iterations. Observations that are incorrectly predicted by previous classifiers are chosen more often than examples that are correctly predicted, so Boosting attempts to produce new classifiers that are better able to predict examples on which the current ensemble performs poorly. In this way, it can reinforce the training of the misclassified observations of the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, it can perform the learning process considering the geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. 10-fold cross-validation is performed three times with different random seeds to ensure that the comparison among the three classifiers does not happen by chance. For each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine sets; that is, the cross-validated folds are tested independently for each algorithm. Through these steps, results are obtained for each classifier on each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy among individual classifiers, MGM-Boost (52.95%) shows higher prediction accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher prediction accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of each classifier over the 30 folds is significantly different. The results indicate that the performance of MGM-Boost differs significantly from the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating. (Illustrative code sketches of the multi-class SVM schemes, the boosting-and-evaluation step, and the repeated cross-validation protocol follow this abstract.)
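The abstract above notes that SVM is inherently a binary classifier and must be combined via schemes such as One-Against-One or One-Against-All for multi-class rating prediction. A minimal scikit-learn sketch of those two combining schemes follows; the synthetic data, kernel, and C value are placeholders, not the paper's Korean bond-rating sample.

```python
# Minimal sketch of multi-class SVM via One-Against-One (OvO) and
# One-Against-All (OvR). Synthetic data stands in for the paper's
# bond-rating features; kernel and C are arbitrary placeholders.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ovo = OneVsOneClassifier(SVC(kernel="rbf", C=1.0)).fit(X_tr, y_tr)   # One-Against-One
ovr = OneVsRestClassifier(SVC(kernel="rbf", C=1.0)).fit(X_tr, y_tr)  # One-Against-All

print("One-Against-One accuracy:", round(ovo.score(X_te, y_te), 3))
print("One-Against-All accuracy:", round(ovr.score(X_te, y_te), 3))
```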
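The ensemble the paper builds on is AdaBoost-style reweighting of misclassified observations, compared under both arithmetic mean-based and geometric mean-based accuracy. The abstract does not give MGM-Boost's exact reweighting rule, so the sketch below uses plain multi-class AdaBoost (SAMME) over SVM base learners and only illustrates the two evaluation metrics; it is an approximation, not the authors' MGM-Boost. It assumes scikit-learn >= 1.2 (the `estimator` keyword).

```python
# Boosted SVMs scored with arithmetic-mean accuracy and with the geometric
# mean of per-class recall ("geometric mean-based accuracy").
# NOT the authors' MGM-Boost; its geometric-mean reweighting rule is not
# specified in the abstract. The class weights below simulate imbalance.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=800, n_features=20, n_informative=10,
                           n_classes=5, n_clusters_per_class=1,
                           weights=[0.45, 0.25, 0.15, 0.10, 0.05],
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=1)

boosted_svm = AdaBoostClassifier(
    estimator=SVC(kernel="rbf", C=1.0),   # base learner must accept sample_weight
    n_estimators=50, algorithm="SAMME", random_state=1).fit(X_tr, y_tr)

y_pred = boosted_svm.predict(X_te)
arithmetic_acc = accuracy_score(y_te, y_pred)
per_class_recall = recall_score(y_te, y_pred, average=None)
geometric_acc = float(np.prod(per_class_recall) ** (1.0 / len(per_class_recall)))

print(f"arithmetic mean-based accuracy: {arithmetic_acc:.3f}")
print(f"geometric mean-based accuracy:  {geometric_acc:.3f}")
```

The geometric mean collapses to zero whenever any single rating class is entirely missed, which is why it is the stricter yardstick for imbalanced multi-class problems.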
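The evaluation protocol described above (10-fold cross-validation repeated with three different seeds, giving 30 folds per classifier, followed by a t-test) can be sketched as follows. The data and the two classifiers compared are stand-ins for the paper's MGM-Boost/AdaBoost/SVM experiment, the fold-wise t-test is read here as a paired test, and the same scikit-learn version assumption as the previous sketch applies.

```python
# 10-fold cross-validation repeated 3 times (30 folds), then a paired t-test
# on the per-fold accuracies of two classifiers, judged at the 1% level.
from scipy import stats
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=800, n_features=20, n_informative=10,
                           n_classes=5, n_clusters_per_class=1, random_state=2)

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=2)  # 30 folds

svm_scores = cross_val_score(SVC(kernel="rbf", C=1.0), X, y, cv=cv)
ada_scores = cross_val_score(
    AdaBoostClassifier(estimator=SVC(kernel="rbf", C=1.0),
                       algorithm="SAMME", random_state=2),
    X, y, cv=cv)

t_stat, p_value = stats.ttest_rel(ada_scores, svm_scores)  # same folds -> paired test
print(f"mean accuracy: SVM={svm_scores.mean():.3f}, boosted SVM={ada_scores.mean():.3f}")
print(f"paired t-test: t={t_stat:.2f}, p={p_value:.4f}  (compare against alpha = 0.01)")
```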

The Effect of Customer Satisfaction on Corporate Credit Ratings (고객만족이 기업의 신용평가에 미치는 영향)

  • Jeon, In-soo; Chun, Myung-hoon; Yu, Jung-su
    • Asia Marketing Journal / v.14 no.1 / pp.1-24 / 2012
  • Nowadays, customer satisfaction has become one of a company's major objectives, and indices that measure and communicate customer satisfaction have been widely adopted in business practice. The major issues around the CSI (customer satisfaction index) are three questions: (a) what level of customer satisfaction is tolerable, (b) whether customer satisfaction and company performance have a positive causal relationship, and (c) what can be done to improve customer satisfaction. Among these, the second issue has recently attracted academic research from several perspectives, and it is the issue addressed in this study. Many researchers, including Anderson, have regarded customer satisfaction as a core competency, such as brand equity or customer equity. They seek to verify the following causal chain based on the process model of marketing performance: customer satisfaction → market performance (market share, sales growth rate) → financial performance (operating margin, profitability) → corporate value performance (stock price, credit ratings). On the other hand, Insoo Jeon and Aeju Jeong (2009) tested this sequential causality on domestic data; because several hypotheses were rejected, they suggested the balance model of marketing performance as an alternative. The objective of this study, based on the existing process model, is to examine the causal relationship between customer satisfaction and corporate value performance. Anderson and Mansi (2009) demonstrated the relationship between the ACSI (American Customer Satisfaction Index) and credit ratings using 2,574 samples from 1994 to 2004, on the assumption that credit ratings can serve as an indicator of corporate value performance. A similar study (Sangwoon Yoon, 2010) was conducted on Korean data but did not confirm the relationship between the KCSI (Korean CSI) and credit ratings, unlike the results of Anderson and Mansi (2009). A summary of these studies is given in Table 1. The two studies analyzing the relationship between customer satisfaction and credit ratings thus reached inconsistent results, so this study tests the conflicting findings with a research model that reflects Korean credit ratings. To test the hypothesis, we propose a research model with two important features: the inclusion of important variables from the existing Korean credit rating system, and of government support in the case of financial institutions, including banks. To control their influence on credit ratings, we included three variables of the Korean credit rating system, ROA, ER, and TA, chosen from among various financial indicators because they appear most frequently in previous studies, together with a government-support dummy. The results of the research model are relatively favorable: R², the F-value, and the p-value are .631, 233.15, and .000, respectively. Thus the explanatory power of the research model as a whole is good and the model is statistically significant. The regression coefficient of KCSI is .096 (positive), with a t-value of 2.220 and a p-value of .0135, so the hypothesis is supported. Meanwhile, all other explanatory variables, including ROA, ER, log(TA), and GS_DV, are significant, and each has a positive relationship with CRS. In particular, the t-value of log(TA) is 23.557, so log(TA) as an explanatory variable of corporate credit ratings shows a very high level of statistical significance. Considering the interrelationship between financial indicators such as ROA and ER, which include total assets in their formulas, a multicollinearity problem could be expected; however, indicators such as VIF and tolerance limits show no statistically significant multicollinearity among the explanatory variables. KCSI, the main subject of this study, is statistically significant even though its standardized regression coefficient (.055) and t-value (2.220) are relatively low among the explanatory variables. Considering that the other explanatory variables were chosen for their explanatory power from among many indicators in previous studies, KCSI is validated as one of the most significant explanatory variables for the credit rating score, and this result provides new insight into the determinants of credit ratings. However, KCSI has a smaller impact than the main financial indicators such as log(TA) and ER; it is therefore one of the determinants of credit ratings, but not an exceedingly strong one. In addition, this study found that customer satisfaction has a more meaningful impact on corporations of small asset size than on those of large asset size, and on service companies than on manufacturers. These findings are consistent with Anderson and Mansi (2009) but differ from Sangwoon Yoon (2010). Although the research model of this study differs somewhat from that of Anderson and Mansi (2009), we can conclude that customer satisfaction has a significant influence on a company's credit ratings in both Korea and the United States. To date there have been only a few studies on the relationship between customer satisfaction and various measures of business performance, some supportive and some not. The contribution of this study is that credit ratings are applied as a measure of corporate value performance in addition to stock price. This is important because credit ratings determine the cost of debt, yet it has so far received little attention in marketing research. Based on this study, customer satisfaction can be said to be at least partially related to all indicators of corporate business performance. The practical implication for customer satisfaction departments is that active investment in customer satisfaction is worthwhile, because it also contributes to higher credit ratings and other business performance. A suggestion for credit evaluators is to design a new credit rating model that reflects qualitative customer satisfaction as well as existing variables such as ROA, ER, and TA. (A sketch of this regression specification and the VIF check follows this abstract.)
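The regression and multicollinearity screening described above can be reproduced in outline with statsmodels. The file name and column names below (KCSI, ROA, ER, TA, GS_DV, CRS) are hypothetical placeholders mirroring the variables named in the abstract; the study's actual KCSI/credit-rating panel is not part of this listing.

```python
# Sketch of the credit-rating regression described above:
#   CRS ~ const + KCSI + ROA + ER + log(TA) + GS_DV
# followed by variance inflation factors (VIF) to screen for multicollinearity.
# "kcsi_credit_ratings.csv" and all column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("kcsi_credit_ratings.csv")        # hypothetical input file
df["log_TA"] = np.log(df["TA"])                    # log of total assets

X = sm.add_constant(df[["KCSI", "ROA", "ER", "log_TA", "GS_DV"]])
y = df["CRS"]                                      # credit rating score

model = sm.OLS(y, X).fit()
print(model.summary())                             # R^2, F-statistic, coefficients, t-values

vif = pd.Series([variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
                index=X.columns)
print(vif)                                         # large VIFs would flag multicollinearity
```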
