TY - JOUR
T1 - Mitigating poisoning attacks in federated learning through deep one-class classification
AU - Zhang, Anqi
AU - Zhao, Ping
AU - Lu, Wenke
AU - Zhou, Yipeng
AU - Zhang, Wenqian
AU - Zhang, Guanglin
PY - 2025/4/25
Y1 - 2025/4/25
N2 - Federated learning (FL) enables clients to collaboratively learn a machine learning model without sharing their private local data with the server. However, due to its distributed structure, FL is vulnerable to poisoning attacks, in which adversaries intentionally send poisoned local model parameters to the server and thereby corrupt the behavior of the global model. Existing works on mitigating poisoning attacks in FL struggle to accurately characterize the data relationships of high-dimensional model parameters. Moreover, they cannot guarantee Byzantine robustness when the majority of clients are compromised (a high Byzantine ratio) and clients’ local datasets are highly non-independent and identically distributed (non-IID). In this paper, we present a pioneering work that introduces deep one-class classification into mitigating poisoning attacks in FL, which guarantees Byzantine robustness even when both the Byzantine ratio and the non-IID degree exceed 0.5, with the assistance of a root dataset. Our key idea is to adequately learn vector features of benign local model parameters using Deep Support Vector Data Description (Deep SVDD) and to achieve optimal classification by training a deep learning-based one-class classifier equipped with a proper decision boundary based on the root dataset. To further optimize the classifier, we employ a regularizer based on random noise injections, which addresses the hypersphere collapse problem inherent in Deep SVDD. Extensive experiments on MNIST, F-MNIST, and CIFAR-10 demonstrate that, compared with five typical Byzantine-robust methods, our defense achieves excellent effectiveness in mitigating targeted/untargeted poisoning attacks and an adaptive attack in FL. Even with a similar but different root dataset, it still maintains good Byzantine robustness.
AB - Federated learning (FL) enables clients to collaboratively learn a machine learning model without sharing their private local data with the server. However, due to its distributed structure, FL is vulnerable to poisoning attacks, in which adversaries intentionally send poisoned local model parameters to the server and thereby corrupt the behavior of the global model. Existing works on mitigating poisoning attacks in FL struggle to accurately characterize the data relationships of high-dimensional model parameters. Moreover, they cannot guarantee Byzantine robustness when the majority of clients are compromised (a high Byzantine ratio) and clients’ local datasets are highly non-independent and identically distributed (non-IID). In this paper, we present a pioneering work that introduces deep one-class classification into mitigating poisoning attacks in FL, which guarantees Byzantine robustness even when both the Byzantine ratio and the non-IID degree exceed 0.5, with the assistance of a root dataset. Our key idea is to adequately learn vector features of benign local model parameters using Deep Support Vector Data Description (Deep SVDD) and to achieve optimal classification by training a deep learning-based one-class classifier equipped with a proper decision boundary based on the root dataset. To further optimize the classifier, we employ a regularizer based on random noise injections, which addresses the hypersphere collapse problem inherent in Deep SVDD. Extensive experiments on MNIST, F-MNIST, and CIFAR-10 demonstrate that, compared with five typical Byzantine-robust methods, our defense achieves excellent effectiveness in mitigating targeted/untargeted poisoning attacks and an adaptive attack in FL. Even with a similar but different root dataset, it still maintains good Byzantine robustness.
KW - Byzantine robustness
KW - deep one-class classification
KW - federated learning
KW - poisoning attacks
UR - http://www.scopus.com/inward/record.url?scp=105003698534&partnerID=8YFLogxK
U2 - 10.1109/TCCN.2025.3564476
DO - 10.1109/TCCN.2025.3564476
M3 - Article
AN - SCOPUS:105003698534
SN - 2332-7731
JO - IEEE Transactions on Cognitive Communications and Networking
JF - IEEE Transactions on Cognitive Communications and Networking
ER -