References
- LeCun Y, Bengio Y, Hinton G, "Deep learning", Nature, 521(7553), pp. 436-444, May 2015. https://doi.org/10.1038/nature14539
- Kim YJ, Kim YS, "Trends in adversarial learning techniques for deep learning", 정보과학회지 (Communications of KIISE), 36(2), pp. 9-13, 2018.
- Szegedy C, Zaremba W, Sutskever I, Bruna J, Erhan D, Goodfellow I, Fergus R, "Intriguing properties of neural networks", arXiv preprint arXiv:1312.6199, 2013.
- Goodfellow IJ, Shlens J, Szegedy C, "Explaining and harnessing adversarial examples", arXiv preprint arXiv:1412.6572, 2014.
- Silver D, Huang A, Maddison CJ, Guez A, Sifre L, van den Driessche G, Schrittwieser J, Antonoglou I, Panneershelvam V, Lanctot M, Dieleman S, "Mastering the game of Go with deep neural networks and tree search", Nature, 529(7587), pp. 484-489, 2016. https://doi.org/10.1038/nature16961
- Papernot N, McDaniel P, Jha S, Fredrikson M, Celik ZB, Swami A, "The limitations of deep learning in adversarial settings", 2016 IEEE European Symposium on Security and Privacy (EuroS&P), pp. 372-387, 2016.
- Biggio B, Roli F, "Wild patterns: Ten years after the rise of adversarial machine learning", Pattern Recognition, 84, pp. 317-331, 2018. https://doi.org/10.1016/j.patcog.2018.07.023
- Moosavi-Dezfooli SM, Fawzi A, Frossard P, "DeepFool: a simple and accurate method to fool deep neural networks", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2574-2582, 2016.
- Papernot N, McDaniel P, Wu X, Jha S, Swami A, "Distillation as a defense to adversarial perturbations against deep neural networks", 2016 IEEE Symposium on Security and Privacy (SP), pp. 582-597, 2016.
- Shokri R, Stronati M, Song C, Shmatikov V, "Membership inference attacks against machine learning models", 2017 IEEE Symposium on Security and Privacy (SP), pp. 3-18, 2017.
- Kurakin A, Goodfellow I, Bengio S, "Adversarial examples in the physical world", arXiv preprint arXiv:1607.02533, 2016.
- Carlini N, Wagner D, "Towards evaluating the robustness of neural networks", 2017 IEEE Symposium on Security and Privacy (SP), pp. 39-57, 2017.
- Elsayed G, Goodfellow I, Sohl-Dickstein J, "Adversarial reprogramming of neural networks", arXiv preprint arXiv:1806.11146, 2018.
- Tramèr F, Zhang F, Juels A, Reiter MK, Ristenpart T, "Stealing machine learning models via prediction APIs", 25th USENIX Security Symposium (USENIX Security 16), pp. 601-618, 2016.
- Fredrikson M, Jha S, Ristenpart T, "Model inversion attacks that exploit confidence information and basic countermeasures", Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security (CCS), pp. 1322-1333, 2015.
- Liu Y, Ma S, Aafer Y, Lee WC, Zhai J, Wang W, Zhang X, "Trojaning attack on neural networks", Network and Distributed System Security Symposium (NDSS), 2018.
- Gu T, Dolan-Gavitt B, Garg S, "BadNets: Identifying vulnerabilities in the machine learning model supply chain", arXiv preprint arXiv:1708.06733, 2017.
- Liu K, Dolan-Gavitt B, Garg S, "Fine-pruning: Defending against backdooring attacks on deep neural networks", International Symposium on Research in Attacks, Intrusions, and Defenses (RAID), pp. 273-294, Springer, Cham, 2018.