An Empirical Analysis of Boosting of Neural Networks for Bankruptcy Prediction

  • Kim, Myoung-Jong (Division of Business Administration, Dongseo University)
  • Kang, Dae-Ki (Division of Computer and Information Engineering, Dongseo University)
  • Published : 2010.01.30

Abstract

Ensemble learning is one of the most widely studied of the methods recently proposed in machine learning for improving classifier accuracy. However, while ensemble learning yields substantial performance gains for unstable learning algorithms such as decision trees, the reported results for stable learning algorithms such as artificial neural networks are mixed, with conclusions varying by application domain and implementation. In this study, we apply a neural network classifier and boosting, a representative ensemble learning technique, to the bankruptcy prediction problem for Korean firms, and verify that ensemble learning can improve the performance of traditional neural networks on this task.

Ensemble learning is one of the most widely used methods for improving the performance of classification and prediction models. Two popular ensemble methods, Bagging and Boosting, have been applied with great success to various machine learning problems, mostly using decision trees as base classifiers. This paper performs an empirical comparison of boosted neural networks and traditional neural networks on bankruptcy prediction tasks. Experimental results on Korean firms indicated that the boosted neural networks showed improved performance over traditional neural networks.
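
The comparison described above can be reproduced in spirit with a short script. The following is a minimal sketch of AdaBoost.M1 with small multilayer perceptrons as base learners, measured against a single network; the synthetic data, network size, and number of boosting rounds are illustrative assumptions, not the data set or configuration used in the paper. Because scikit-learn's MLPClassifier does not accept sample weights, each round trains on a weighted bootstrap resample instead.

```python
# Minimal AdaBoost.M1 sketch with small neural networks (MLPs) as base learners.
# Illustrative only: the synthetic data, network size, and number of rounds are
# assumptions, not the setup reported in the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.RandomState(0)

# Stand-in for a financial-ratio data set with a binary bankrupt/healthy label.
X, y = make_classification(n_samples=1000, n_features=10, n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

n_rounds, n = 10, len(X_tr)
w = np.full(n, 1.0 / n)                      # AdaBoost.M1 example weights
learners, alphas = [], []

for t in range(n_rounds):
    # MLPClassifier has no sample_weight support, so train each base network
    # on a bootstrap resample drawn according to the current weights.
    idx = rng.choice(n, size=n, replace=True, p=w)
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=t)
    clf.fit(X_tr[idx], y_tr[idx])

    miss = clf.predict(X_tr) != y_tr         # errors on the full training set
    err = np.dot(w, miss)                    # weighted training error
    if err >= 0.5:                           # no better than chance: stop boosting
        break
    err = max(err, 1e-10)                    # guard against a perfect round
    alpha = 0.5 * np.log((1.0 - err) / err)
    w *= np.exp(alpha * miss)                # up-weight misclassified examples
    w /= w.sum()
    learners.append(clf)
    alphas.append(alpha)

# Weighted majority vote of the boosted networks (labels mapped to {-1, +1}).
votes = sum(a * (2 * clf.predict(X_te) - 1) for a, clf in zip(alphas, learners))
boosted_pred = (votes > 0).astype(int)

single = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=0).fit(X_tr, y_tr)
print("single MLP accuracy :", accuracy_score(y_te, single.predict(X_te)))
print("boosted MLP accuracy:", accuracy_score(y_te, boosted_pred))
```

On a real bankruptcy sample one would typically also report a ranking measure such as AUC (e.g., via sklearn.metrics.roc_auc_score) rather than accuracy alone, in the spirit of the ROC analysis described by Fawcett (2006) in the reference list.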

Keywords

References

  1. Alfaro, E., Gamez, M., & Garcia, N. (2007). Multiclass corporate failure prediction by AdaBoost.M1. International Advances in Economic Research, 13, 301-312. https://doi.org/10.1007/s11294-007-9090-2
  2. Alfaro, E., Garcia, N., Gamez, M., & Elizondo, D. (2008). Bankruptcy forecasting: an empirical comparison of AdaBoost and neural networks. Decision Support Systems, 45, 110-122. https://doi.org/10.1016/j.dss.2007.12.002
  3. Bauer, E., & Kohavi, R. (1999). An empirical comparison of voting classification algorithms: Bagging, boosting, and variants. Machine Learning, 36, 105-139. https://doi.org/10.1023/A:1007515423169
  4. Breiman, L. (1996). Bagging predictors. Machine Learning, 24(2), 123-140.
  5. Breiman, L. (1996). Bias, variance, and arcing classifiers (Tech. Rep. No. 460). Berkeley: Statistics Department, University of California at Berkeley.
  6. Buciu, I., Kotropoulos, C., & Pitas, I. (2001). Combining support vector machines for accurate face detection. Proc. ICIP, 1054-1057.
  7. Dong, Y. S., & Han, K. S. (2004). A comparison of several ensemble methods for text categorization. IEEE International Conference on Services Computing.
  8. Drucker, H., & Cortes, C. (1996). Boosting decision trees. Advances in Neural Information Processing Systems, 8.
  9. Evgeniou, T., Perez-Breva, L., Pontil, M., & Poggio, T. (2000). Bounds on the generalization performance of kernel machine ensembles. Proc. ICML, 271-278.
  10. Fawcett, T. (2006). An introduction to ROC analysis. Pattern Recognition Letters, 27, 861-874. https://doi.org/10.1016/j.patrec.2005.10.010
  11. Freund, Y. (1995). Boosting a weak learning algorithm by majority. Information and Computation, 121(2), 256-285. https://doi.org/10.1006/inco.1995.1136
  12. Freund, Y., & Schapire, R. E. (1996). Experiments with a new boosting algorithm. Machine Learning: Proceedings of the Thirteenth International Conference, 148-156.
  13. Freund, Y., & Schapire, R. E. (1997). A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1), 119-139. https://doi.org/10.1006/jcss.1997.1504
  14. Hansen, L., & Salamon, P. (1990). Neural network ensembles. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(10), 993-1001.
  15. Kim, H. C., Pang, S., Je, H. M., Kim, D. J., & Bang, S. Y. (2003). Constructing support vector machine ensemble. Pattern Recognition, 36, 2757-2767. https://doi.org/10.1016/S0031-3203(03)00175-4
  16. Maclin, R., & Opitz, D. (1997). An empirical evaluation of bagging and boosting. Proceedings of the Fourteenth National Conference on Artificial Intelligence, 546-551.
  17. Opitz, D., & Maclin, R. (1999). Popular ensemble methods: an empirical study. Journal of Artificial Intelligence Research, 11, 169-198.
  18. Quinlan, J. R. (1996). Bagging, boosting, and C4.5. Proceedings of the Thirteenth National Conference on Artificial Intelligence, 725-730.

Cited by

  1. A Comparative Study on Failure Prediction Models for Small and Medium Manufacturing Company, vol. 11, no. 3, 2016, https://doi.org/10.16972/apjbve.11.3.201606.1