Acknowledgement
This study was supported by the Research Program funded by SeoulTech (Seoul National University of Science and Technology).