Funding
This work was supported by a 2021 writing-activity grant (21-군학-3) from the Hwarangdae Research Institute.
References
- J. Schmidhuber, "Deep learning in neural networks: An overview," Neural Netw., vol. 61, pp. 85-117, Jan. 2015. https://doi.org/10.1016/j.neunet.2014.09.003
- J. Kleesiek et al., "Deep MRI brain extraction: A 3D convolutional neural network for skull stripping," NeuroImage, vol. 129, pp. 460-469, 2016. https://doi.org/10.1016/j.neuroimage.2016.01.024
- M. Barreno et al., "The security of machine learning," Mach. Learn., vol. 81, no. 2, pp. 121-148, 2010. https://doi.org/10.1007/s10994-010-5188-5
- B. Biggio, B. Nelson, and P. Laskov, "Poisoning attacks against support vector machines," arXiv preprint arXiv:1206.6389, 2012.
- C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, and R. Fergus, "Intriguing properties of neural networks," in Proc. 2nd Int. Conf. Learn. Represent. (ICLR), Banff, AB, Canada, Apr. 2014.
- W. He et al., "Adversarial example defense: Ensembles of weak defenses are not strong," in Proc. 11th USENIX Workshop Offensive Technol. (WOOT), 2017.
- W. Xu, D. Evans, and Y. Qi, "Feature squeezing: Detecting adversarial examples in deep neural networks," arXiv preprint arXiv:1704.01155, 2017.
- F. Tramer et al., "Ensemble adversarial training: Attacks and defenses," arXiv preprint arXiv:1705.07204, 2017.
- A. Kurakin, I. Goodfellow, and S. Bengio, "Adversarial machine learning at scale," arXiv preprint arXiv:1611.01236, 2016.
- S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard, "DeepFool: A simple and accurate method to fool deep neural networks," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2016.
- N. Carlini and D. Wagner, "Towards evaluating the robustness of neural networks," in Proc. IEEE Symp. Security Privacy (SP), 2017.
- Y. LeCun, C. Cortes, and C. J. Burges, "MNIST handwritten digit database," AT&T Labs, 2010. [Online]. Available: http://yann.lecun.com/exdb/mnist
- N. Papernot et al., "Distillation as a defense to adversarial perturbations against deep neural networks," in Proc. IEEE Symp. Security Privacy (SP), 2016.
- G. E. Nasr, E. A. Badr, and C. Joun, "Cross entropy error function in neural networks: Forecasting gasoline demand," in Proc. FLAIRS Conf., 2002.
- M. Abadi et al., "TensorFlow: A system for large-scale machine learning," in Proc. 12th USENIX Symp. Operating Syst. Design Implement. (OSDI), 2016.
- J. Li et al., "Fully connected network-based intra prediction for image coding," IEEE Trans. Image Process., vol. 27, no. 7, pp. 3236-3247, 2018. https://doi.org/10.1109/TIP.2018.2817044
- H. Kwon et al., "Classification score approach for detecting adversarial example in deep neural network," Multimedia Tools Appl., vol. 80, no. 7, pp. 10339-10360, 2021. https://doi.org/10.1007/s11042-020-09167-z
- H. Kwon et al., "Selective audio adversarial example in evasion attack on speech recognition system," IEEE Trans. Inf. Forensics Security, vol. 15, pp. 526-538, 2019. https://doi.org/10.1109/tifs.2019.2925452
- H. Kwon, "Friend-guard Textfooler attack on text classification system," IEEE Access, 2021.
- H. Kwon, "Detecting backdoor attacks via class difference in deep neural networks," IEEE Access, vol. 8, pp. 191049-191056, 2020. https://doi.org/10.1109/access.2020.3032411
- H. Kwon, H. Yoon, and K.-W. Park, "Multi-targeted backdoor: Indentifying backdoor attack for multiple deep neural networks," IEICE Trans. Inf. Syst., vol. E103.D, no. 4, pp. 883-887, 2020.