Mitigating poisoning attacks in federated learning through deep one-class classification

Anqi Zhang, Ping Zhao, Wenke Lu, Yipeng Zhou, Wenqian Zhang*, Guanglin Zhang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Federated learning (FL) enables clients to collaboratively learn a machine learning model without sharing their private local data with the server. However, due to its distributed structure, FL is vulnerable to poisoning attacks in which adversaries intentionally send poisoned local model parameters to the server and thereby corrupt the behavior of the global model. Existing works on mitigating poisoning attacks in FL struggle to accurately characterize the data relationships of high-dimensional parameters. Moreover, they cannot guarantee Byzantine robustness when the fraction of compromised clients (the Byzantine ratio) exceeds 0.5 and clients' local datasets are highly non-independent and identically distributed (non-IID). In this paper, we present a pioneering effort to introduce deep one-class classification into mitigating poisoning attacks in FL, which can guarantee Byzantine robustness even under a Byzantine ratio and non-IID degree greater than 0.5 with the assistance of a root dataset. Our key idea is to adequately learn vector features of benign local model parameters using Deep Support Vector Data Description (Deep SVDD) and to achieve optimal classification by training a deep learning-based one-class classifier, equipped with a proper decision boundary, on the root dataset. To further optimize the classifier, we employ a regularizer based on random noise injections, which addresses the hypersphere collapse problem inherent in Deep SVDD. Exhaustive experiments on MNIST, F-MNIST, and CIFAR-10 demonstrate that, compared with five typical Byzantine-robust methods, our defense strategy achieves excellent effectiveness in mitigating targeted/untargeted poisoning attacks and an adaptive attack in FL. Even with a root dataset that is similar to but different from the clients' data, it still maintains good Byzantine robustness.
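The abstract's core filtering idea can be sketched in simplified form. The snippet below is a minimal illustration, not the paper's method: it uses the raw (flattened) update vectors as the embedding and a fixed radius, whereas the actual defense learns the embedding with a Deep SVDD network trained on the root dataset and applies a noise-injection regularizer; all function names here are hypothetical.

```python
def svdd_score(update, center):
    """Squared distance of a flattened model update from the hypersphere center."""
    return sum((u - c) ** 2 for u, c in zip(update, center))

def fit_center(benign_updates):
    """Center c = mean of benign (root-dataset) updates in the embedding space.
    The real method embeds updates with a learned Deep SVDD network first."""
    n = len(benign_updates)
    dim = len(benign_updates[0])
    return [sum(u[i] for u in benign_updates) / n for i in range(dim)]

def filter_updates(updates, center, radius_sq):
    """Keep only updates whose score falls inside the decision boundary."""
    return [u for u in updates if svdd_score(u, center) <= radius_sq]

# Toy example: benign updates cluster near the origin; a poisoned one is scaled up.
benign = [[0.1, -0.2], [0.0, 0.1], [-0.1, 0.0]]
poisoned = [5.0, 5.0]
center = fit_center(benign)
kept = filter_updates(benign + [poisoned], center, radius_sq=1.0)
print(len(kept))  # prints 3: the poisoned update is filtered out
```

The server would then aggregate only the kept updates, so a majority of poisoned clients cannot steer the global model as long as the boundary fitted on the root dataset is tight.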

Original language: English
Journal: IEEE Transactions on Cognitive Communications and Networking
Early online date: 25 Apr 2025
DOIs
Publication status: E-pub ahead of print - 25 Apr 2025

Keywords

  • Byzantine robustness
  • deep one-class classification
  • federated learning
  • poisoning attacks
