FGLBA: enabling highly-effective and stealthy backdoor attack on federated graph learning

Qing Lu, Miao Hu*, Di Wu, Yipeng Zhou, Mohsen Guizani, Quan Z. Sheng

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution › peer-review

Abstract

Federated graph learning (FGL) has emerged as a promising paradigm for collaboratively training graph neural networks while safeguarding data privacy. Nevertheless, the distributed nature of FGL also renders it susceptible to backdoor attacks. Although backdoor attacks are recognized as a significant threat to both centralized graph learning and federated learning (FL), the study of such attacks in FGL remains very limited. Current research on FGL backdoor attacks often merely adapts centralized graph backdoor attacks, or FL backdoor attacks designed for image classification tasks, to the FGL context, leaving key issues such as the effectiveness of triggers and the stealthiness of malicious models largely unexplored. To bridge this research gap, in this paper, we propose a novel backdoor attack, named FGLBA, targeting the FGL paradigm. Specifically, we design an input-aware trigger generator that produces a customized trigger for each target node based on its feature vector and neighborhood information, so that poisoned nodes injected with triggers are more likely to be misclassified into the category specified by the attacker. Additionally, we develop a stealthy federated backdoor training strategy that leverages collaborative optimization among multiple malicious clients to circumvent existing server-side defenses. The trigger generator and the malicious clients' local models are iteratively optimized through a bilevel optimization framework, enabling the malicious models to achieve optimal attack performance under the optimal trigger generator. Extensive experiments on 4 real-world datasets demonstrate the effectiveness and superiority of our attack, outperforming all baseline attacks and successfully bypassing 6 state-of-the-art and classical FL backdoor defenses.
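The paper itself is not reproduced in this record, so the following is only a minimal PyTorch sketch of the two ideas summarized in the abstract: an input-aware trigger generator conditioned on a node's own features and its neighborhood, and a bilevel loop on a malicious client that alternates between training the local model on clean plus triggered nodes and updating the trigger generator toward the attacker-chosen label. All names (`TriggerGenerator`, `bilevel_round`, `local_gnn`, `lam`, the dense feature/adjacency interface, etc.) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class TriggerGenerator(nn.Module):
    """Input-aware trigger generator (sketch): maps a node's feature vector and
    the mean of its neighbors' features to a node-specific feature trigger."""

    def __init__(self, feat_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, feat_dim),
            nn.Tanh(),  # bounded perturbation keeps poisoned features plausible
        )

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Mean-aggregate each node's neighborhood from a dense adjacency matrix.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neigh = (adj @ node_feats) / deg
        return self.net(torch.cat([node_feats, neigh], dim=-1))


def bilevel_round(local_gnn, trigger_gen, feats, adj, labels,
                  poison_idx, target_label, lam=1.0,
                  inner_steps=5, outer_steps=1):
    """One malicious-client round (sketch): the inner (lower-level) loop fits the
    local model to clean + triggered nodes; the outer (upper-level) loop updates
    the trigger generator so triggered nodes are driven to the target class.
    `local_gnn(feats, adj)` is assumed to return per-node class logits."""
    ce = nn.CrossEntropyLoss()
    model_opt = torch.optim.Adam(local_gnn.parameters(), lr=1e-3)
    gen_opt = torch.optim.Adam(trigger_gen.parameters(), lr=1e-3)
    target = torch.full((len(poison_idx),), target_label, dtype=torch.long)

    def poisoned(detach_trigger: bool) -> torch.Tensor:
        trig = trigger_gen(feats, adj)[poison_idx]
        if detach_trigger:
            trig = trig.detach()
        x = feats.clone()
        x[poison_idx] = x[poison_idx] + trig  # inject per-node triggers
        return x

    for _ in range(inner_steps):  # lower level: local (malicious) model update
        out_clean = local_gnn(feats, adj)
        out_bd = local_gnn(poisoned(detach_trigger=True), adj)
        loss = ce(out_clean, labels) + lam * ce(out_bd[poison_idx], target)
        model_opt.zero_grad()
        loss.backward()
        model_opt.step()

    for _ in range(outer_steps):  # upper level: trigger generator update
        out_bd = local_gnn(poisoned(detach_trigger=False), adj)
        gen_loss = ce(out_bd[poison_idx], target)
        gen_opt.zero_grad()
        gen_loss.backward()
        gen_opt.step()
```

In an FGL round, each malicious client would run a routine like `bilevel_round` locally before submitting its model update to the server; the abstract's stealthiness claim rests on coordinating these updates across malicious clients so they pass server-side defenses, which this sketch does not attempt to reproduce.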

Original language: English
Title of host publication: ICDM 2024: 24th IEEE International Conference on Data Mining
Subtitle of host publication: proceedings
Editors: Elena Baralis, Kun Zhang, Ernesto Damiani, Merouane Debbah, Panos Kalnis, Xindong Wu
Place of Publication: Piscataway, NJ
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Pages: 791-796
Number of pages: 6
ISBN (Electronic): 9798331506681
DOIs
Publication status: Published - 2024
Event: IEEE International Conference on Data Mining (24th : 2024) - Abu Dhabi, United Arab Emirates
Duration: 9 Dec 2024 - 12 Dec 2024

Publication series

Name: Proceedings - IEEE International Conference on Data Mining
ISSN (Print): 1550-4786
ISSN (Electronic): 2374-8486

Conference

Conference: IEEE International Conference on Data Mining (24th : 2024)
Abbreviated title: ICDM 2024
Country/Territory: United Arab Emirates
City: Abu Dhabi
Period: 9/12/24 - 12/12/24

Keywords

  • backdoor attack
  • federated graph learning
