References
- Bradley, A. P. (1997). The use of the area under the ROC curve in the evaluation of machine learning algorithms, Pattern Recognition, 30, 1145-1159. https://doi.org/10.1016/S0031-3203(96)00142-2
- Brasil, P. (2010). DiagnosisMed: Diagnostic test accuracy evaluation for medical professionals, R package.
- Cantor, S. B., Sun, C. C., Tortolero-Luna, G., Richards-Kortum, R., and Follen, M. (1999). A comparison of C/B ratios from studies using receiver operating characteristic curve analysis, Journal of Clinical Epidemiology, 52, 885-892. https://doi.org/10.1016/S0895-4356(99)00075-X
- Centor, R. M. (1991). Signal detectability: The use of ROC curves and their analyses, Medical Decision Making, 11, 102-106. https://doi.org/10.1177/0272989X9101100205
- Connell, F. A. and Koepsell, T. D. (1985). Measures of gain in certainty from a diagnostic test, American Journal of Epidemiology, 121, 744-753. https://doi.org/10.1093/aje/121.5.744
- Egan, J. P. (1975). Signal detection theory and ROC analysis, Academic Press, New York.
- Engelmann, B., Hayden, E., and Tasche, D. (2003). Testing rating accuracy, Risk, 16, 82-86.
- Fawcett, T. (2006). An introduction to ROC analysis, Pattern Recognition Letters, 27, 861-874. https://doi.org/10.1016/j.patrec.2005.10.010
- Fawcett, T. and Provost, F. (1997). Adaptive fraud detection, Data Mining and Knowledge Discovery, 1, 291-316. https://doi.org/10.1023/A:1009700419189
- Freeman, E. A. and Moisen, G. G. (2008). A comparison of the performance of threshold criteria for binary classification in terms of predicted prevalence and kappa, Ecological Modelling, 217, 48-58. https://doi.org/10.1016/j.ecolmodel.2008.05.015
- Greiner, M. M. and Gardner, I. A. (2000). Epidemiologic issues in the validation of veterinary diagnostic tests, Preventive Veterinary Medicine, 45, 3-22. https://doi.org/10.1016/S0167-5877(00)00114-8
- Hanley, J. A. and McNeil, B. J. (1982). The meaning and use of the area under a receiver operating characteristic (ROC) curve, Radiology, 143, 29-36. https://doi.org/10.1148/radiology.143.1.7063747
- Hong, C. S. and Lee, S. J. (2018). TROC curve and accuracy measures, Journal of the Korean Data & Information Science Society, 29, 861-872. https://doi.org/10.7465/jkdi.2018.29.4.861
- Hong, C. S., Joo, J. S., and Choi, J. S. (2010). Optimal thresholds from mixture distributions, The Korean Journal of Applied Statistics, 23, 13-28. https://doi.org/10.5351/KJAS.2010.23.1.013
- Hong, C. S., Lin, M. H., Hong, S. W., and Kim, G. C. (2011). Classification accuracy measures with minimum error rate for normal mixture, Journal of the Korean Data & Information Science Society, 22, 619-630.
- Hsieh, F. and Turnbull, B. W. (1996). Nonparametric and semiparametric estimation of the receiver operating characteristic curve, The Annals of Statistics, 24, 25-40. https://doi.org/10.1214/aos/1033066197
- Krzanowski, W. J. and Hand, D. J. (2009). ROC Curves for Continuous Data, Chapman & Hall/CRC, Boca Raton.
- Lambert, J. and Lipkovich, I. (2008). A macro for getting more out of your ROC curve, SAS Global Forum, 231.
- Liu, C., White, M., and Newell, G. (2009). Measuring the accuracy of species distribution models: A review. In Proceedings of the 18th World IMACS/MODSIM Congress, 4241-4247.
- Metz, C. E. and Kronman, H. B. (1980). Statistical significance tests for binormal ROC curves, Journal of Mathematical Psychology, 22, 218-243. https://doi.org/10.1016/0022-2496(80)90020-6
- Moses, L. E., Shapiro, D., and Littenberg, B. (1993). Combining independent studies of a diagnostic test into a summary ROC curve: Data-analytic approaches and some additional considerations, Statistics in Medicine, 12, 1293-1316. https://doi.org/10.1002/sim.4780121403
- Pepe, M. S. (2000). Receiver operating characteristic methodology, Journal of the American Statistical Association, 95, 308-311. https://doi.org/10.1080/01621459.2000.10473930
- Pepe, M. S. (2003). The Statistical Evaluation of Medical Tests for Classification and Prediction, Oxford University Press, Oxford.
- Perkins, N. J. and Schisterman, E. F. (2006). The inconsistency of "optimal" cutpoints obtained using two criteria based on the receiver operating characteristic curve, American Journal of Epidemiology, 163, 670-675. https://doi.org/10.1093/aje/kwj063
- Provost, F. and Fawcett, T. (2001). Robust classification for imprecise environments, Machine Learning, 42, 203-231. https://doi.org/10.1023/A:1007601015854
- Spackman, K. A. (1989). Signal detection theory: Valuable tools for evaluating inductive learning. In Proceedings of the Sixth International Workshop on Machine Learning, Morgan Kaufmann, San Mateo, 160-163.
- Tasche, D. (2006). Validation of internal rating systems and PD estimates, The Analytics of Risk Model Validation, 169-196.
- Unal, I. (2017). Defining an optimal cut-point value in ROC analysis: an alternative approach, Computational & Mathematical Methods in Medicine, 2017, 1-14. https://doi.org/10.1155/2017/3762651
- Vuk, M. and Curk, T. (2006). ROC curve, lift chart and calibration plot, Metodološki Zvezki, 3, 89-108.
- Yoo, H. S. and Hong, C. S. (2011). Optimal criterion of classification accuracy measures for normal mixture, Communications for Statistical Applications and Methods, 18, 343-355. https://doi.org/10.5351/CKSS.2011.18.3.343
- Youden, W. J. (1950). Index for rating diagnostic tests, Cancer, 3, 32-35. https://doi.org/10.1002/1097-0142(1950)3:1<32::AID-CNCR2820030106>3.0.CO;2-3
- Zweig, M. and Campbell, G. (1993). Receiver-operating characteristics (ROC) plots: A fundamental evaluation tool in clinical medicine, Clinical Chemistry, 39, 561-577. https://doi.org/10.1093/clinchem/39.4.561