Privacy-preserving Federated Learning via Selective Encryption

  • Younghan Lee (Dept. of Convergence Security Engineering, Sungshin Women's University)
  • Published : 2024.10.31

Abstract

We introduce a novel method to defend against model inversion attacks in Federated Learning (FL). FL enables the training of a global model by sharing local gradients without sharing clients' private data. However, model inversion attacks can reconstruct that data from the shared gradients. Traditional defense mechanisms, such as Differential Privacy (DP) and Homomorphic Encryption (HE), struggle to balance privacy against model accuracy and computational cost. Our approach selectively encrypts only the most important gradients, those that carry the most information about the training data, trading off privacy against computational efficiency. Additionally, optional DP noise is applied to the unencrypted gradients for enhanced security. Comprehensive evaluations demonstrate that our method significantly improves both privacy and model accuracy compared to existing defenses.
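The selective-encryption idea described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: it assumes gradient magnitude as the importance score, a hypothetical `encrypt()` stub standing in for a real HE scheme (e.g., Paillier or CKKS), and Gaussian noise for the optional DP step.

```python
import numpy as np

def encrypt(x):
    # Stand-in for homomorphic encryption of a single value;
    # a real system would produce a ciphertext here.
    return ("ct", float(x))

def protect_gradients(grads, encrypt_frac=0.1, dp_sigma=0.01, rng=None):
    """Selectively encrypt the most important gradient entries and
    optionally add DP noise to the rest before sharing.

    Importance is approximated by absolute magnitude (an assumption
    for this sketch; the paper's notion of importance may differ).
    Returns the plaintext (noised) gradient tensor with encrypted
    positions zeroed out, plus a dict of ciphertexts by flat index.
    """
    rng = np.random.default_rng(rng)
    flat = grads.ravel()
    k = max(1, int(encrypt_frac * flat.size))

    # Indices of the k largest-magnitude entries: these leak the most
    # information about the training data, so they get encrypted.
    top_idx = np.argpartition(np.abs(flat), -k)[-k:]
    mask = np.zeros(flat.size, dtype=bool)
    mask[top_idx] = True

    encrypted = {int(i): encrypt(flat[i]) for i in top_idx}

    noisy = flat.copy()
    # Optional DP noise on the unencrypted (less sensitive) entries.
    noisy[~mask] += rng.normal(0.0, dp_sigma, size=flat.size - k)
    # Encrypted entries are never sent in plaintext.
    noisy[mask] = 0.0
    return noisy.reshape(grads.shape), encrypted
```

In a full FL round, each client would call `protect_gradients` on its local update, the server would aggregate ciphertexts homomorphically and plaintexts directly, and only the decrypted aggregate would be applied to the global model.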
