References
- T. Gu, B. Dolan-Gavitt, and S. Garg, "BadNets: Identifying vulnerabilities in the machine learning model supply chain," CoRR, vol. abs/1708.06733, 2017. [Online]. Available: https://arxiv.org/abs/1708.06733
- R. R. Wiyatno, A. Xu, O. Dia, and A. de Berker, "Adversarial examples in modern machine learning: A review," CoRR, vol. abs/1911.05268, 2019. [Online]. Available: https://arxiv.org/abs/1911.05268
- C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, "Intriguing properties of neural networks," in International Conference on Learning Representations, 2014. [Online]. Available: http://arxiv.org/abs/1312.6199
- I. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and harnessing adversarial examples," in International Conference on Learning Representations, 2015. [Online]. Available: http://arxiv.org/abs/1412.6572
- A. Kurakin, I. J. Goodfellow, and S. Bengio, "Adversarial machine learning at scale," in International Conference on Learning Representations, 2017. [Online]. Available: https://arxiv.org/abs/1611.01236
- A. Kurakin, I. J. Goodfellow, and S. Bengio, "Adversarial examples in the physical world," CoRR, vol. abs/1607.02533, 2016. [Online]. Available: http://arxiv.org/abs/1607.02533
- N. Papernot, P. D. McDaniel, I. J. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, "Practical black-box attacks against deep learning systems using adversarial examples," CoRR, vol. abs/1602.02697, 2016. [Online]. Available: http://arxiv.org/abs/1602.02697
- S. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard, "Universal adversarial perturbations," CoRR, vol. abs/1610.08401, 2016. [Online]. Available: http://arxiv.org/abs/1610.08401
- J. Kos, I. Fischer, and D. Song, "Adversarial examples for generative models," CoRR, vol. abs/1702.06832, 2017. [Online]. Available: http://arxiv.org/abs/1702.06832
- M. Sharif, S. Bhagavatula, L. Bauer, and M. K. Reiter, "Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition," in Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, ser. CCS '16. New York, NY, USA: ACM, 2016, pp. 1528-1540. [Online]. Available: http://doi.acm.org/10.1145/2976749.2978392
- I. Evtimov, K. Eykholt, E. Fernandes, T. Kohno, B. Li, A. Prakash, A. Rahmati, and D. Song, "Robust physical-world attacks on machine learning models," CoRR, vol. abs/1707.08945, 2017. [Online]. Available: http://arxiv.org/abs/1707.08945
- A. Athalye, L. Engstrom, A. Ilyas, and K. Kwok, "Synthesizing robust adversarial examples," in Proceedings of the 35th International Conference on Machine Learning, ser. Proceedings of Machine Learning Research, J. Dy and A. Krause, Eds., vol. 80. Stockholmsmässan, Stockholm, Sweden: PMLR, 10-15 Jul 2018, pp. 284-293. [Online]. Available: http://proceedings.mlr.press/v80/athalye18b.html
- S. Gu and L. Rigazio, "Towards deep neural network architectures robust to adversarial examples," CoRR, vol. abs/1412.5068, 2014. [Online]. Available: http://arxiv.org/abs/1412.5068
- F. Tramèr, A. Kurakin, N. Papernot, I. Goodfellow, D. Boneh, and P. McDaniel, "Ensemble adversarial training: Attacks and defenses," in International Conference on Learning Representations, 2018. [Online]. Available: https://openreview.net/forum?id=rkZvSe-RZ
- C. Xie, J. Wang, Z. Zhang, Z. Ren, and A. Yuille, "Mitigating adversarial effects through randomization," in International Conference on Learning Representations, 2018. [Online]. Available: https://openreview.net/forum?id=Sk9yuql0Z
- I. Diakonikolas, G. Kamath, D. Kane, J. Li, A. Moitra, and A. Stewart, "Robust estimators in high dimensions without the computational intractability," CoRR, vol. abs/1604.06443, 2016. [Online]. Available: https://arxiv.org/abs/1604.06443