TY - JOUR
T1 - An optimized sparse response mechanism for differentially private federated learning
AU - Ma, Jiating
AU - Zhou, Yipeng
AU - Cui, Laizhong
AU - Guo, Song
PY - 2024
Y1 - 2024
N2 - Federated Learning (FL) enables geo-distributed clients to collaboratively train a learning model without exposing their private data. By exposing only local model parameters, FL largely preserves clients' data privacy. Yet it remains possible to recover raw samples from frequently exposed parameters, resulting in privacy leakage. Differentially private federated learning (DPFL) has recently been proposed to protect these parameters by adding noise, so that even if attackers obtain the parameters, they cannot exactly infer the true values from the noisy information. However, directly incorporating Differential Privacy (DP) into FL can severely degrade model utility. In this article, we present an optimized sparse response mechanism (OSRM) that seamlessly incorporates DP into FL to reduce privacy budget consumption and improve model accuracy. With OSRM, each FL client exposes only a selected set of large gradients, so that privacy budget is not wasted on protecting valueless gradients. We theoretically derive the convergence rate of DPFL with OSRM under non-convex loss, and then optimize OSRM by minimizing the loss bound of the convergence rate. Based on this analysis, we present an effective algorithm for optimizing OSRM. Extensive experiments are conducted on public datasets, including MNIST, Fashion-MNIST, and CIFAR-10. The results suggest that OSRM achieves an average accuracy improvement of 18.42% over state-of-the-art baselines under a fixed privacy budget.
UR - http://www.scopus.com/inward/record.url?scp=85167827558&partnerID=8YFLogxK
U2 - 10.1109/TDSC.2023.3302864
DO - 10.1109/TDSC.2023.3302864
M3 - Article
AN - SCOPUS:85167827558
SN - 1545-5971
VL - 21
SP - 2285
EP - 2295
JO - IEEE Transactions on Dependable and Secure Computing
JF - IEEE Transactions on Dependable and Secure Computing
IS - 4
ER -