TY - JOUR
T1 - FedDP-SA: Boosting Differentially Private Federated Learning via Local Dataset Splitting
AU - Liu, Xuezheng
AU - Zhou, Yipeng
AU - Wu, Di
AU - Hu, Miao
AU - Wang, Jessie Hui
AU - Guizani, Mohsen
PY - 2024/10/1
N2 - Federated learning (FL) has emerged as an attractive collaborative machine learning framework that enables models to be trained across decentralized devices while exposing only model parameters. However, malicious attackers can still hijack the communicated parameters to expose clients' raw samples, resulting in privacy leakage. To defend against such attacks, differentially private FL (DPFL) was devised, which protects privacy by adding noise at negligible computation overhead. Nevertheless, low model utility and communication efficiency make DPFL hard to deploy in real environments. To overcome these deficiencies, we propose a novel DPFL algorithm called FedDP-SA (namely, federated learning with differential privacy by splitting local data sets and averaging parameters). Specifically, FedDP-SA splits a local data set into multiple subsets for parameter updating. The parameters averaged over all subsets, plus differential privacy (DP) noise, are then returned to the parameter server. FedDP-SA offers dual benefits: 1) it enhances model accuracy by efficiently lowering sensitivity, thereby reducing the noise required to ensure DP, and 2) it improves communication efficiency by communicating model parameters at a lower frequency. These advantages are validated through sensitivity and convergence rate analyses. Finally, we conduct comprehensive experiments to verify the performance of FedDP-SA against other state-of-the-art baseline algorithms.
UR - http://www.scopus.com/inward/record.url?scp=85197494077&partnerID=8YFLogxK
DO - 10.1109/JIOT.2024.3421991
M3 - Article
AN - SCOPUS:85197494077
SN - 2327-4662
VL - 11
SP - 31687
EP - 31698
JF - IEEE Internet of Things Journal
IS - 19
ER -