An optimized sparse response mechanism for differentially private federated learning

Jiating Ma, Yipeng Zhou, Laizhong Cui, Song Guo

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Federated Learning (FL) enables geo-distributed clients to collaboratively train a learning model without exposing their private data. By exposing only local model parameters, FL largely preserves clients' data privacy. Yet, it remains possible to recover raw samples from frequently exposed parameters, resulting in privacy leakage. Differentially private federated learning (DPFL) has recently been proposed to protect these parameters by injecting noise, so that even if attackers obtain the exposed parameters, they cannot exactly infer the true parameters from the noisy information. Directly incorporating Differential Privacy (DP) into FL, however, can severely degrade model utility. In this article, we present an optimized sparse response mechanism (OSRM) that seamlessly incorporates DP into FL to reduce privacy budget consumption and improve model accuracy. Through OSRM, each FL client exposes only a selected set of large gradients, so that the privacy budget is not wasted on protecting valueless gradients. We theoretically derive the convergence rate of DPFL with OSRM under non-convex loss, and then optimize OSRM by minimizing the loss term in the convergence rate. Based on this analysis, we present an effective algorithm for optimizing OSRM. Extensive experiments are conducted on public datasets, including MNIST, Fashion-MNIST and CIFAR-10. The results show that OSRM achieves an average accuracy improvement of 18.42% compared with state-of-the-art baselines under a fixed privacy budget.
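The core idea described in the abstract, releasing only a selected set of large gradients and adding calibrated noise, can be illustrated with standard building blocks. The sketch below is not the paper's OSRM algorithm; it assumes plain top-k magnitude selection, L2 clipping, and the Gaussian mechanism, and all function and parameter names are hypothetical.

```python
import numpy as np

def sparse_dp_response(grad, k, clip_norm, noise_multiplier, rng=None):
    """Illustrative sparse response with Gaussian DP noise.

    A minimal sketch of the general idea: expose only the k largest-magnitude
    gradient coordinates so the privacy budget is not spent on small,
    less informative ones. Not the paper's exact OSRM mechanism.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Select the k coordinates with the largest magnitude.
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]

    # Clip the selected gradient to bound its L2 sensitivity.
    norm = np.linalg.norm(sparse)
    sparse *= min(1.0, clip_norm / (norm + 1e-12))

    # Add Gaussian noise calibrated to the clipping bound (Gaussian mechanism).
    # Here noise is added only to the released coordinates; variants may noise
    # the full vector depending on the privacy analysis being used.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    sparse[idx] += noise[idx]
    return sparse
```

In this toy setting, each client would apply such a response to its local gradient before uploading; the paper's contribution is in optimizing how the selection is performed so that the convergence loss under a fixed privacy budget is minimized.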

Original language: English
Pages (from-to): 2285-2295
Number of pages: 11
Journal: IEEE Transactions on Dependable and Secure Computing
Volume: 21
Issue number: 4
Early online date: 7 Aug 2023
DOIs
Publication status: Published - 2024
