TY - JOUR
T1 - Exploring the practicality of differentially private federated learning: a local iteration tuning approach
T2 - IEEE Transactions on Dependable and Secure Computing
AU - Zhou, Yipeng
AU - Wang, Runze
AU - Liu, Jiahao
AU - Wu, Di
AU - Yu, Shui
AU - Wen, Yonggang
PY - 2024
Y1 - 2024
N2 - Although Federated Learning (FL) prevents the exposure of original data samples when machine learning models are collaboratively trained among decentralized clients, it has been revealed that vanilla FL is still susceptible to adversarial attacks if model parameters are leaked to malicious attackers. To enhance the protection level of FL, Differentially Private Federated Learning (DPFL) has been proposed in recent years. DPFL injects zero-mean noise, randomly generated by differentially private (DP) mechanisms, into local model parameters before they are disclosed. Nevertheless, DP noise can significantly deteriorate model utility, jeopardizing the practicality of DPFL. In this article, we are among the first to explore how to improve the model utility of DPFL by tuning the number of local iterations (LIs) on DPFL clients. Our work shows that such a local iteration tuning approach can effectively mitigate the adverse influence of DP noise on the final model utility. Formally, we derive the sensitivity (a measure of the maximum change of the output given two adjacent inputs) with respect to the number of LIs conducted on DPFL clients for the Laplace mechanism, as well as the aggregated variance of the Laplace noise at the server side. We further conduct a convergence rate analysis to quantify the influence of the Laplace noise on the final model accuracy and to determine how to optimally set the number of LIs. Finally, to verify our theoretical findings, we perform extensive experiments on three real-world datasets, namely Lending Club, MNIST, and Fashion-MNIST. The results not only corroborate our analysis but also demonstrate that our approach significantly improves the practicality of DPFL.
UR - http://www.scopus.com/inward/record.url?scp=85174858923&partnerID=8YFLogxK
DO - 10.1109/TDSC.2023.3325889
M3 - Article
AN - SCOPUS:85174858923
SN - 1545-5971
VL - 21
SP - 3280
EP - 3294
JO - IEEE Transactions on Dependable and Secure Computing
JF - IEEE Transactions on Dependable and Secure Computing
IS - 4
ER -