Quantitative Estimation Method for ML Model Performance Change Due to Concept Drift

  • 안순홍 (Asiana IDT, AI Big Data Research Institute) ;
  • 이훈석 (Asiana IDT, AI Big Data Research Institute) ;
  • 김승훈 (Dankook University, Department of Computer Engineering)
  • Received : 2022.09.30
  • Accepted : 2023.01.25
  • Published : 2023.06.30

Abstract

Measuring the performance of a machine learning model once it is deployed in a business service is very difficult, so operational departments cannot manage model performance effectively. Academically, various concept drift detection methods have been studied to determine whether a model's state is still appropriate. Operational departments, however, want a quantitative measure of the performance of the model in operation, whereas concept drift detection only characterizes the state of the model relative to the data; it cannot estimate the model's quantitative performance. In this study, we propose a performance prediction model (PPM) that quantitatively estimates precision from concept drift statistics. The proposed model induces artificial drift in data sampled from the training data, measures the precision on the sampled data, builds a dataset of drift statistics and precision values, and learns from it. The difference between actual and predicted precision on test data is then used to correct the error of the performance prediction model. The proposed PPM was applied to two models usable in real business, a loan underwriting model and a credit card fraud detection model, and it was confirmed that precision was predicted effectively.

A model trained through machine learning is very difficult to evaluate in terms of actual performance when it is used in business, so the operational department cannot manage the model's performance effectively. For this reason, various concept drift detection methods have been studied to judge the state of a model. The operational department wants to manage the performance of the operating model quantitatively, but concept drift detection only judges the state of the model in relation to the data; it cannot estimate a quantitative performance figure for the model. In this study, we propose a performance prediction model (PPM) that quantitatively estimates precision from concept drift statistics. In Algorithm 1 of the proposed model, artificial drift is induced in data sampled with replacement from the training data, the precision at each drift level is measured, and a dataset of drift and precision pairs is built and learned. In Algorithm 2, the difference between actual and predicted precision is measured on test data to correct the error of the performance prediction model. The PPM was applied to a loan underwriting model and a credit card fraud detection model that can be used in real business, and the validity of the performance prediction was confirmed.
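
For illustration only, the sketch below mirrors the two-step procedure described above on synthetic data. The drift statistic (a mean per-feature Kolmogorov-Smirnov statistic), the artificial-drift mechanism (an additive Gaussian shift), the base classifier, and the linear regressor standing in for the PPM are assumptions made for this example; the abstract does not fix these choices.

```python
# Minimal sketch of the PPM idea under stated assumptions: the drift statistic,
# the artificial-drift mechanism, the base classifier, and the regressor used as
# the PPM are placeholders, not the authors' exact procedure.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression
from sklearn.metrics import precision_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Base classifier trained on historical data (synthetic stand-in).
X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
base_model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

def drift_statistic(X_ref, X_cur):
    """One possible drift measure: mean KS statistic across features."""
    return float(np.mean([ks_2samp(X_ref[:, j], X_cur[:, j]).statistic
                          for j in range(X_ref.shape[1])]))

# Algorithm 1 (sketch): resample the training data with replacement, inject
# increasing artificial drift, record (drift statistic, precision) pairs, and
# fit the performance prediction model (PPM) on those pairs.
pairs = []
for level in np.linspace(0.0, 2.0, 21):
    idx = rng.choice(len(X_train), size=1000, replace=True)
    X_s, y_s = X_train[idx].copy(), y_train[idx]
    X_s += level * rng.normal(0.3, 0.1, size=X_s.shape)      # artificial drift
    pairs.append((drift_statistic(X_train, X_s),
                  precision_score(y_s, base_model.predict(X_s))))

D = np.array([[d] for d, _ in pairs])
P = np.array([p for _, p in pairs])
ppm = LinearRegression().fit(D, P)

# Algorithm 2 (sketch): compare actual and predicted precision on test data and
# keep the difference as a simple additive correction of the PPM.
actual = precision_score(y_test, base_model.predict(X_test))
predicted = float(ppm.predict([[drift_statistic(X_train, X_test)]])[0])
bias = actual - predicted

def estimate_precision(X_new):
    """Estimate precision on unlabeled serving data from its drift statistic alone."""
    return float(ppm.predict([[drift_statistic(X_train, X_new)]])[0] + bias)

print(f"actual={actual:.3f}, predicted={predicted:.3f}, corrected={predicted + bias:.3f}")
```

In practice, the drift statistic would be whichever concept drift statistic the monitoring pipeline already computes, and the correction step would be rerun whenever labeled test data becomes available.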
