Figure 1. Flowchart of the proposed method
Figure 2. Image comparison: (a) original images; (b) images with adversarial noise; (c) images denoised with the proposed method
Table 1. Classification accuracy (%)