Fig. 1. Neural Network
Fig. 2. Differential Privacy [7]
Fig. 3. Centralized and distributed training models
Fig. 4. GAN attack
Fig. 5. Model Extraction Attack
References
- M. Ribeiro, K. Grolinger & M. A. M. Capretz. (2015). MLaaS: Machine Learning as a Service. In IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 896-902.
- M. Fredrikson, S. Jha & T. Ristenpart. (2015). Model inversion attacks that exploit confidence information and basic countermeasures. In ACM CCS, pp. 1322-1333.
- A. Krizhevsky, I. Sutskever & G. E. Hinton. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097-1105.
- S. Hochreiter & J. Schmidhuber. (1997). Long short-term memory. Neural Computation, 9(8), 1735-1780. https://doi.org/10.1162/neco.1997.9.8.1735
- B. Hitaj, G. Ateniese & F. Perez-Cruz. (2017). Deep models under the GAN: Information leakage from collaborative deep learning. In ACM CCS, pp. 603-618.
- C. Dwork & A. Roth. (2014). The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3-4), 211-407.
- K. Ligett. (2017). Introduction to differential privacy, randomized response, basic properties. The 7th BIU Winter School on Cryptography, BIU.
- C. Gentry. (2009). A fully homomorphic encryption scheme. PhD thesis, Stanford University, California.
- P. Martins, L. Sousa & A. Mariano. (2018). A survey on fully homomorphic encryption: An engineering perspective. ACM Computing Surveys (CSUR), 50(6), 83.
- Y. Lindell & B. Pinkas. (2008). Secure multiparty computation for privacy-preserving data mining. IACR Cryptology ePrint Archive, 197.
- H. Bae, J. Jang, D. Jung, H. Jang, H. Ha & S. Yoon. (2018). Security and privacy issues in deep learning. ACM Computing Surveys.
- S. Chang & C. Li. (2018). Privacy in neural network learning: Threats and countermeasures. IEEE Network, 32(4), 61-67. https://doi.org/10.1109/mnet.2018.1700447
- R. Shokri, M. Stronati, C. Song & V. Shmatikov. (2017). Membership inference attacks against machine learning models. In IEEE Symposium on Security and Privacy (SP), pp. 3-18.
- F. Tramer, F. Zhang, A. Juels, M. K. Reiter & T. Ristenpart. (2016). Stealing machine learning models via prediction APIs. In USENIX Security Symposium, pp. 601-618. Vancouver: USENIX.
- P. Mohassel & Y. Zhang. (2017). SecureML: A system for scalable privacy-preserving machine learning. In IEEE Symposium on Security and Privacy (SP), pp. 19-38.
- L. Xie, K. Lin, S. Wang, F. Wang & J. Zhou. (2018). Differentially private generative adversarial network. arXiv preprint arXiv:1802.06739.
- J. Yuan & S. Yu. (2014). Privacy preserving back-propagation neural network learning made practical with cloud computing. IEEE Transactions on Parallel and Distributed Systems, 212-221.
- P. Li et al. (2017). Multi-key privacy-preserving deep learning in cloud computing. Future Generation Computer Systems, 74, 76-85. https://doi.org/10.1016/j.future.2017.02.006
- M. Abadi et al. (2016). Deep learning with differential privacy. In ACM CCS, pp. 308-318. Vienna: ACM.
- G. Acs, L. Melis, C. Castelluccia & E. De Cristofaro. (2017). Differentially private mixture of generative neural networks. IEEE Transactions on Knowledge and Data Engineering, 31(6), 1109-1121. https://doi.org/10.1109/tkde.2018.2855136
- C. Dwork & G. N. Rothblum. (2016). Concentrated differential privacy. CoRR, abs/1603.01887.
- L. Yu, L. Liu, C. Pu, M. E. Gursoy & S. Truex. (2019). Differentially private model publishing for deep learning. IEEE.
- X. Zhang, S. Ji, H. Wang & T. Wang. (2017). Private, yet practical, multiparty deep learning. In IEEE ICDCS, pp. 1442-1452.
- K. Bonawitz et al. (2017). Practical secure aggregation for privacy-preserving machine learning. In ACM CCS, pp. 1175-1191.