Acknowledgments
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (Ministry of Science and ICT) (NRF-2020R1A2C1014768, Mid-Career Researcher Program) and by the Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea government (Ministry of Trade, Industry and Energy) (P0012724, HRD Program for Industrial Innovation).
References
- I.J. Goodfellow, J. Shlens, C. Szegedy, "Explaining and Harnessing Adversarial Examples," International Conference on Learning Representations(ICLR), 2015.
- A. Nguyen, J. Yosinski, J. Clune, "Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images," Computer Vision and Pattern Recognition(CVPR), pp. 427-436, 2015.
- H. Zhang, M. Cisse, Y.N. Dauphin, D. Lopez-Paz, "Mixup: Beyond Empirical Risk Minimization," International Conference on Learning Representations(ICLR), 2018.
- K. Lee, H. Lee, K. Lee, J. Shin, "Training Confidence-Calibrated Classifiers for Detecting Out-of-Distribution Samples," International Conference on Learning Representations(ICLR), 2018.
- D. Hendrycks, M. Mazeika, T. Dietterich, "Deep Anomaly Detection with Outlier Exposure," International Conference on Learning Representations(ICLR), 2019.
- S. Hawkins, H. He, G. Williams, R. Baxter, "Outlier Detection Using Replicator Neural Networks," International Conference on Data Warehousing and Knowledge Discovery, Springer, pp. 170-180, 2002.
- L. Ruff, R.A. Vandermeulen, N. Görnitz, L. Deecke, S.A. Siddiqui, A. Binder, E. Müller, M. Kloft, "Deep One-Class Classification," Proceedings of the 35th International Conference on Machine Learning(ICML), pp. 4393-4402, 2018.
- J. Ren, P.J. Liu, E. Fertig, J. Snoek, R. Poplin, M. Depristo, J. Dillon, B. Lakshminarayanan, "Likelihood Ratios for Out-of-Distribution Detection," 33rd Conference on Neural Information Processing Systems(NeurIPS), pp. 14707-14718, 2019.
- D. Hendrycks, K. Gimpel, "A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks," International Conference on Learning Representations(ICLR), 2017.
- S. Liang, Y. Li, R. Srikant, "Enhancing The Reliability of Out-of-Distribution Image Detection in Neural Networks," International Conference on Learning Representations(ICLR), 2018.
- C. Guo, G. Pleiss, Y. Sun, K.Q. Weinberger, "On Calibration of Modern Neural Networks," Proceedings of the 34th International Conference on Machine Learning(ICML), 2017.
- K. Gwon, J. Yoo, "Out-of-Distribution Data Detection Using Mahalanobis Distance for Reliable Deep Neural Networks," Proceedings of 2020 IEMEK Symposium on Embedded Technology(ISET 2020), 2020 (in Korean).
- K. Lee, K. Lee, H. Lee, J. Shin, "A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks," 32nd Conference on Neural Information Processing Systems(NeurIPS), pp. 7167-7177, 2018.
- S. Thulasidasan, G. Chennupati, J. Bilmes, T. Bhattacharya, S. Michalak, "On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks," 33rd Conference on Neural Information Processing Systems(NeurIPS), 2019.
- A. Kurakin, I.J. Goodfellow, S. Bengio, "Adversarial Examples in the Physical World," International Conference on Learning Representations(ICLR), 2017.
- C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I.J. Goodfellow, R. Fergus, "Intriguing Properties of Neural Networks," International Conference on Learning Representations(ICLR), 2014.
- A. Laugros, A. Caplier, M. Ospici, "Addressing Neural Network Robustness with Mixup and Targeted Labeling Adversarial Training," European Conference on Computer Vision(ECCV), 2020.