Mitigating poisoning attacks in federated learning through deep one-class classification

Abstract
Federated learning (FL) enables clients to collaboratively train a machine learning model without sharing their private local data with the server. However, due to its distributed structure, FL is vulnerable to poisoning attacks, in which adversaries deliberately send poisoned local model parameters to the server and thereby corrupt the behavior of the global model. Existing defenses against poisoning attacks in FL struggle to accurately characterize the relationships among high-dimensional model parameters. Moreover, they cannot guarantee Byzantine robustness when the majority of clients are compromised (i.e., the Byzantine ratio exceeds 0.5) and clients' local datasets are highly non-independent and identically distributed (non-IID). In this paper, we present a pioneering effort to introduce deep one-class classification into mitigating poisoning attacks in FL, which guarantees Byzantine robustness even when both the Byzantine ratio and the non-IID degree exceed 0.5, with the assistance of a root dataset. Our key idea is to learn vector features of benign local model parameters using Deep Support Vector Data Description (Deep SVDD) and to achieve optimal classification by training a deep learning-based one-class classifier with a proper decision boundary derived from the root dataset. To further optimize the classifier, we employ a regularizer based on random noise injection, which addresses the hypersphere collapse problem inherent in Deep SVDD. Extensive experiments on MNIST, F-MNIST, and CIFAR-10 demonstrate that, compared with five typical Byzantine-robust methods, our defense achieves excellent effectiveness in mitigating targeted and untargeted poisoning attacks as well as an adaptive attack in FL. Even with a similar but different root dataset, it still maintains strong Byzantine robustness.
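The record contains no code, so the following is only a rough numpy sketch of the general Deep SVDD idea the abstract builds on: map benign local updates into a feature space, pull them toward a fixed center, and flag updates whose feature distance exceeds a decision boundary fitted on benign (root-set) scores. All names, dimensions, and the synthetic data are illustrative assumptions, not the paper's actual method; in particular, the paper's noise-injection regularizer is approximated here by plain weight decay.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins (illustrative only): benign local updates cluster
# around +1 per coordinate; poisoned updates are sign-flipped and scaled.
d, h = 20, 5
benign = rng.normal(0.0, 0.1, size=(200, d)) + 1.0
poisoned = -3.0 * (rng.normal(0.0, 0.1, size=(20, d)) + 1.0)

# One-layer "deep" SVDD: phi(x) = tanh(x W). The center c is set from the
# initial benign feature mean and then frozen, as in Deep SVDD.
W = rng.normal(0.0, 0.1, size=(d, h))
c = np.tanh(benign @ W).mean(axis=0)

lr = 0.05
for _ in range(300):
    z = np.tanh(benign @ W)               # benign features
    diff = z - c                          # pull features toward the center
    grad_z = 2.0 * diff / len(benign)     # d/dz of mean squared distance
    grad_W = benign.T @ (grad_z * (1.0 - z ** 2))  # tanh backprop
    # Weight decay stands in for the paper's noise-based regularizer: it
    # discourages the trivial collapse of mapping every input onto c.
    W -= lr * (grad_W + 1e-3 * W)

def score(X):
    """Anomaly score: squared feature-space distance to the center."""
    return ((np.tanh(X @ W) - c) ** 2).sum(axis=1)

# Decision boundary: e.g. the 95th percentile of benign (root-set) scores.
radius = np.quantile(score(benign), 0.95)
flags = score(poisoned) > radius
print("fraction of poisoned updates flagged:", flags.mean())
```

In an FL setting, `benign` would correspond to model updates produced on the trusted root dataset and `score` would be applied to each client's submitted update before aggregation; updates with `score > radius` would be excluded.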
| Original language | English |
|---|---|
| Pages (from-to) | 545-558 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Cognitive Communications and Networking |
| Volume | 12 |
| Early online date | 25 Apr 2025 |
| DOIs | |
| Publication status | Published - 2026 |