Analysis of privacy issues and countermeasures in neural network learning


  • Hong, Eun-Ju (Dept. of Convergence Science, Kongju National University) ;
  • Lee, Su-Jin (Dept. of Mathematics, Kongju National University) ;
  • Hong, Do-won (Dept. of Applied Mathematics, Kongju National University) ;
  • Seo, Chang-Ho (Dept. of Applied Mathematics, Kongju National University)
  • Received : 2019.04.11
  • Accepted : 2019.07.20
  • Published : 2019.07.28


With the popularization of PCs, SNS, and IoT devices, vast amounts of data are being generated, and the volume is growing exponentially. Artificial neural network learning, which exploits such huge amounts of data, has attracted attention in many fields in recent years. It has shown tremendous potential in speech recognition and image recognition and is widely applied to complex areas such as medical diagnosis, artificial intelligence games, and face recognition; its results are accurate enough to surpass humans on some tasks. Despite these many advantages, privacy problems remain in artificial neural network learning. Training data for neural networks contains various information, including sensitive personal information, so privacy can be exposed by malicious attackers. Privacy risks arise both when an attacker interferes with training to degrade the model and when an attacker targets a model that has already been trained. In this paper, we analyze recently proposed attacks on neural network models and the corresponding privacy protection methods.
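To make one of the countermeasures surveyed here concrete, the Laplace mechanism of differential privacy [6] releases a query result after adding noise calibrated to the query's sensitivity and the privacy budget ε. The following is a minimal sketch, not the method of any specific paper analyzed below; the toy dataset, query, and parameter values are hypothetical.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value with Laplace noise of scale sensitivity/epsilon.

    Satisfies epsilon-differential privacy for a query whose output
    changes by at most `sensitivity` when one record is added or removed.
    """
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Hypothetical example: privately release a counting query over a toy dataset.
rng = np.random.default_rng(0)
ages = np.array([23, 35, 41, 29, 52, 47])   # toy records
true_count = int(np.sum(ages >= 40))        # counting query has sensitivity 1
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5, rng=rng)
print(noisy_count)
```

Smaller ε gives stronger privacy but larger noise; the same idea, applied to gradients rather than query answers, underlies differentially private deep learning [19].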


Artificial Neural Network; Privacy; Differential Privacy; Homomorphic Encryption; Attack


Fig. 1. Neural Network


Fig. 2. Differential Privacy[7]


Fig. 3. Centralized and distributed training models


Fig. 4. GAN attack


Fig. 5. Model Extraction Attack


Supported by: National Research Foundation of Korea (NRF)


  1. M. Ribeiro, K. Grolinger & M. A. M. Capretz. (2015). MLaaS: Machine Learning as a Service. In IEEE International Conference on Machine Learning and Applications (ICMLA), (pp. 896-902).
  2. M. Fredrikson, S. Jha & T. Ristenpart. (2015). Model inversion attacks that exploit confidence information and basic countermeasures. In Proc. ACM CCS, (pp. 1322-1333). USA: ACM.
  3. A. Krizhevsky, I. Sutskever & G. E. Hinton. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, (pp. 1097-1105).
  4. S. Hochreiter & J. Schmidhuber. (1997). Long short-term memory. Neural Computation, 9(8), 1735-1780.
  5. B. Hitaj, G. Ateniese & F. Perez-Cruz. (2017). Deep models under the GAN: Information leakage from collaborative deep learning. In Proc. ACM CCS, (pp. 603-618).
  6. C. Dwork & A. Roth. (2014). The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3-4), 211-407.
  7. K. Ligett. (2017). Introduction to differential privacy, randomized response, basic properties. The 7th BIU Winter School on Cryptography, BIU.
  8. C. Gentry. (2009). A fully homomorphic encryption scheme. PhD thesis, Stanford University, California.
  9. P. Martins, L. Sousa & A. Mariano. (2018). A survey on fully homomorphic encryption: An engineering perspective. ACM Computing Surveys (CSUR), 50(6), 83.
  10. Y. Lindell & B. Pinkas. (2008). Secure multiparty computation for privacy-preserving data mining. IACR Cryptology ePrint Archive, 197.
  11. H. Bae, J. Jang, D. Jung, H. Jang, H. Ha & S. Yoon. (2018). Security and privacy issues in deep learning. ACM Computing Surveys.
  12. S. Chang & C. Li. (2018). Privacy in neural network learning: Threats and countermeasures. IEEE Network, 32(4), 61-67.
  13. R. Shokri, M. Stronati, C. Song & V. Shmatikov. (2017). Membership inference attacks against machine learning models. In IEEE Symposium on Security and Privacy (SP), (pp. 3-18).
  14. F. Tramer, F. Zhang, A. Juels, M. K. Reiter & T. Ristenpart. (2016). Stealing machine learning models via prediction APIs. In USENIX Security Symposium, (pp. 601-618). Vancouver: USENIX.
  15. P. Mohassel & Y. Zhang. (2017). SecureML: A system for scalable privacy-preserving machine learning. In IEEE Symposium on Security and Privacy (SP), (pp. 19-38).
  16. L. Xie, K. Lin, S. Wang, F. Wang & J. Zhou. (2018). Differentially private generative adversarial network. arXiv preprint arXiv:1802.06739.
  17. J. Yuan & S. Yu. (2014). Privacy preserving back-propagation neural network learning made practical with cloud computing. IEEE Transactions on Parallel and Distributed Systems, 25(1), 212-221.
  18. P. Li et al. (2017). Multi-key privacy-preserving deep learning in cloud computing. Future Generation Computer Systems, 74, 76-85.
  19. M. Abadi et al. (2016). Deep learning with differential privacy. In Proc. ACM CCS, (pp. 308-318). Vienna: ACM.
  20. G. Acs, L. Melis, C. Castelluccia & E. De Cristofaro. (2017). Differentially private mixture of generative neural networks. IEEE Transactions on Knowledge and Data Engineering, 31(6), 1109-1121.
  21. C. Dwork & G. N. Rothblum. (2016). Concentrated differential privacy. CoRR, abs/1603.01887.
  22. L. Yu, L. Liu, C. Pu, M. E. Gursoy & S. Truex. (2019). Differentially private model publishing for deep learning. IEEE.
  23. X. Zhang, S. Ji, H. Wang & T. Wang. (2017). Private, yet practical, multiparty deep learning. In IEEE ICDCS, (pp. 1442-1452).
  24. K. Bonawitz et al. (2017). Practical secure aggregation for privacy-preserving machine learning. In Proc. ACM CCS, (pp. 1175-1191).