Abstract
Federated learning (FL) has emerged as an attractive collaborative machine learning framework that enables models to be trained across decentralized devices by exchanging only model parameters. However, malicious attackers can still hijack the communicated parameters to expose clients' raw samples, resulting in privacy leakage. To defend against such attacks, differentially private federated learning (DPFL) has been devised, which protects privacy by adding noise while incurring negligible computation overhead. Nevertheless, low model utility and communication efficiency make DPFL hard to deploy in real environments. To overcome these deficiencies, we propose a novel DPFL algorithm called FedDP-SA (Federated Learning with Differential Privacy by Splitting Local Datasets and Averaging Parameters). Specifically, FedDP-SA splits a local dataset into multiple subsets for parameter updating. Then, the parameters averaged over all subsets, perturbed with DP noise, are returned to the parameter server (PS). FedDP-SA offers dual benefits: 1) it enhances model accuracy by efficiently lowering sensitivity, thereby reducing the noise required to ensure differential privacy; and 2) it improves communication efficiency by communicating model parameters less frequently. These advantages are validated through sensitivity analysis and convergence rate analysis. Finally, we conduct comprehensive experiments to verify the performance of FedDP-SA against other state-of-the-art baseline algorithms.
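The abstract outlines the client-side procedure of FedDP-SA only at a high level. The following minimal Python sketch illustrates one plausible reading of it: split the local dataset into subsets, update parameters on each subset from the same global model, average the results, and add Gaussian noise before sending to the PS. Function names, the clipping step, and the noise calibration here are assumptions for illustration, not the authors' actual implementation.

```python
# Illustrative sketch of one FedDP-SA client round (assumptions, not the paper's code).
import numpy as np

def local_update(global_params, subset, lr=0.1, local_steps=5):
    """Hypothetical local training on one subset: a few gradient steps
    starting from the global model (the gradient here is a placeholder)."""
    params = global_params.copy()
    for _ in range(local_steps):
        grad = np.random.randn(*params.shape) * 0.01  # placeholder gradient on `subset`
        params -= lr * grad
    return params

def feddp_sa_client_round(global_params, local_dataset, num_subsets=4,
                          clip_norm=1.0, noise_std=0.5, rng=None):
    """Split the local dataset into subsets, update on each subset,
    average the updates, then add Gaussian noise before returning to the PS."""
    rng = rng or np.random.default_rng()
    subsets = np.array_split(local_dataset, num_subsets)

    # Parameter update on each subset, each starting from the same global model.
    updates = [local_update(global_params, s) for s in subsets]

    # Averaging over all subsets lowers the sensitivity of the released
    # parameters, so less noise is needed for the same DP guarantee.
    averaged = np.mean(updates, axis=0)

    # Clip the parameter change (assumed) and add Gaussian-mechanism noise.
    delta = averaged - global_params
    delta *= min(1.0, clip_norm / (np.linalg.norm(delta) + 1e-12))
    noisy_params = global_params + delta + rng.normal(0.0, noise_std, size=delta.shape)
    return noisy_params  # sent to the parameter server once per round

# Toy usage: one client, 10-dimensional model, 100 local samples.
global_params = np.zeros(10)
local_dataset = np.random.randn(100, 3)
print(feddp_sa_client_round(global_params, local_dataset))
```

In this reading, the client communicates only the noisy averaged parameters once per round, which is consistent with the abstract's claim of lower communication frequency.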
Original language | English |
---|---|
Number of pages | 12 |
Journal | IEEE Internet of Things Journal |
DOIs | |
Publication status | E-pub ahead of print - 2 Jul 2024 |
Keywords
- Accuracy
- Computational modeling
- Data models
- data splitting
- Differential privacy
- Federated learning
- Gaussian mechanism
- Internet of Things
- Noise
- Privacy
- sensitivity and convergence rate