Gain without pain: offsetting DP-injected noises stealthily in cross-device federated learning

Wenzhuo Yang, Yipeng Zhou, Miao Hu, Di Wu*, Xi Zheng, Jessie Hui Wang, Song Guo, Chao Li

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Federated Learning (FL) is an emerging paradigm through which decentralized devices can collaboratively train a common model. However, a serious concern is the leakage of private information from the gradients exchanged between clients and the parameter server (PS) in FL. To protect gradient information, clients can adopt differential privacy (DP) to add noises that distort the original gradients before they are uploaded to the PS. Nevertheless, model accuracy is significantly impaired by DP noises, making DP impractical in real systems. In this work, we propose a novel Noise Information Secretly Sharing (NISS) algorithm that alleviates the disturbance of DP noises by sharing negated noises among clients. We theoretically prove that: 1) if clients are trustworthy, DP noises can be perfectly offset at the PS; 2) clients can easily distort the negated DP noises to protect themselves in case other clients are not fully trustworthy, though at the cost of lower model accuracy. NISS is particularly applicable to FL across multiple IoT (Internet of Things) systems, in which all IoT devices need to collaboratively train a model. To verify the effectiveness and superiority of NISS, we conduct experiments on the MNIST and CIFAR-10 datasets. The results confirm our analysis and demonstrate that NISS improves model accuracy by 19% on average and provides better privacy protection when clients are trustworthy.
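The core idea of offsetting shared negated noises can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a simplified setting in which clients are paired, each pair secretly agrees on one Gaussian noise vector, one client uploads its gradient plus the noise and the other uploads its gradient minus the same noise, so the noises cancel in the server-side aggregate (the function names and the even-number-of-clients assumption are illustrative):

```python
import random

def dp_noise(dim, scale, rng):
    # Gaussian noise, as in the Gaussian mechanism of DP (illustrative scale)
    return [rng.gauss(0.0, scale) for _ in range(dim)]

def niss_round(gradients, scale=1.0, seed=0):
    """Hypothetical NISS-style round: clients are paired, and each pair
    shares one secret noise vector that one member adds and the other
    negates, so the aggregate of all uploads equals the true aggregate.
    Assumes an even number of clients for simplicity."""
    rng = random.Random(seed)
    dim = len(gradients[0])
    uploads = []
    for i in range(0, len(gradients), 2):
        n = dp_noise(dim, scale, rng)
        # first client of the pair distorts its gradient with +n
        uploads.append([g + x for g, x in zip(gradients[i], n)])
        # its partner distorts with the negated noise -n
        uploads.append([g - x for g, x in zip(gradients[i + 1], n)])
    # the parameter server averages the uploads; paired noises cancel
    agg = [sum(col) / len(uploads) for col in zip(*uploads)]
    return uploads, agg

grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
uploads, agg = niss_round(grads, scale=5.0)
# each individual upload is distorted, yet the aggregate matches the
# true mean gradient [4.0, 5.0] up to floating-point rounding
```

Note that the PS only ever sees the distorted uploads, while the aggregate it computes is noise-free; if a client does not fully trust its partner, it can further perturb the shared noise before negating it, at the accuracy cost discussed in the abstract.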

Original language: English
Journal: IEEE Internet of Things Journal
DOIs
Publication status: E-pub ahead of print - 3 Aug 2021

Bibliographical note

Publisher Copyright:
IEEE


Keywords

  • Computational modeling
  • Differential privacy
  • Distortion
  • Federated Learning
  • Internet of Things
  • Machine learning
  • Privacy
  • Secretly offsetting
  • Training
