References
- "Poison attacks against machine learning, Security and spam-detection programs could be affected", The Kurzweil Accelerating Intelligence , July, 2012
- Mozaffari-Kermani, Mehran, et al. "Systematic poisoning attacks on and defenses for machine learning in healthcare." IEEE Journal of Biomedical and Health Informatics, vol. 19, no. 6, pp. 1893-1905, 2015. https://doi.org/10.1109/JBHI.2014.2344095
- Szegedy, Christian, et al. "Intriguing properties of neural networks." arXiv preprint arXiv:1312.6199, 2013.
- Vaidya, Tavish, et al. "Cocaine noodles: exploiting the gap between human and machine speech recognition." 9th USENIX Workshop on Offensive Technologies (WOOT 15), 2015.
- Tramèr, Florian, et al. "Stealing machine learning models via prediction APIs." 25th USENIX Security Symposium (USENIX Security 16), 2016.
- Fredrikson, Matt, Somesh Jha, and Thomas Ristenpart. "Model inversion attacks that exploit confidence information and basic countermeasures." Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security (CCS '15), 2015.
- "Sanitization (classified information)." Wikipedia, https://en.wikipedia.org/wiki/Sanitization_(classified_information)