Source inference attacks: beyond membership inference attacks in federated learning

Hongsheng Hu, Xuyun Zhang*, Zoran Salcic, Lichao Sun, Kim-Kwang Raymond Choo, Gillian Dobbie

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

9 Citations (Scopus)

Abstract

Federated learning (FL) is a popular approach to privacy-aware machine learning because it allows multiple clients to collaboratively train a global model without granting others access to their private data. It is, however, known that FL can be vulnerable to membership inference attacks (MIAs), in which the training records of the global model are distinguished from the testing records. Surprisingly, research investigating the source inference problem, i.e., identifying which client a training record originates from, appears to be lacking. We also observe that identifying a training record's source client can result in privacy breaches extending beyond MIAs. Seeking to fill this gap in the literature, we take the first step toward investigating source privacy in FL. Specifically, we propose a new inference attack, hereafter referred to as the source inference attack (SIA), which enables an honest-but-curious server to identify the source client of a training record. The proposed SIAs leverage Bayes' theorem to carry out the attack non-intrusively, without deviating from the defined FL protocol. We then evaluate SIAs in three different FL frameworks to show that clients sharing gradients, model parameters, or predictions on a public dataset leak such source information to the server. The experimental results validate the efficacy of the proposed SIAs; for example, an attack success rate of 67.1% (against a 10% baseline) is achieved when clients share model parameters with the server. Comprehensive ablation studies demonstrate that the success of an SIA is directly related to the overfitting of the local models.
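The abstract describes a Bayes'-theorem-based attack in which an honest-but-curious server scores each client's likelihood of owning a given training record. The sketch below is only illustrative, not the authors' implementation: it assumes the server can evaluate one local model per client (e.g., reconstructed from shared parameters or gradients), uses a uniform prior over clients, and treats a lower per-client loss on the target record as a higher posterior probability of being the source. All names (`source_inference`, the toy linear models) are hypothetical.

```python
# Illustrative sketch of a loss-based, Bayesian-style source inference
# by an honest-but-curious server. Assumes the server holds a local model
# for each client; this is NOT the paper's reference code.
import torch
import torch.nn.functional as F


def source_inference(client_models, x, y):
    """Return the index of the client most likely to own record (x, y).

    With a uniform prior over clients, the posterior that client k owns
    the record is taken proportional to exp(-loss_k), so the attack
    reduces to picking the client whose local model fits the record best.
    """
    losses = []
    for model in client_models:
        model.eval()
        with torch.no_grad():
            logits = model(x.unsqueeze(0))            # shape: (1, num_classes)
            losses.append(F.cross_entropy(logits, y.unsqueeze(0)))
    losses = torch.stack(losses)                       # shape: (num_clients,)
    posterior = F.softmax(-losses, dim=0)              # uniform prior assumed
    return int(torch.argmax(posterior)), posterior


# Toy usage with hypothetical linear client models and a random record.
if __name__ == "__main__":
    torch.manual_seed(0)
    clients = [torch.nn.Linear(20, 10) for _ in range(10)]
    x, y = torch.randn(20), torch.tensor(3)
    predicted_client, posterior = source_inference(clients, x, y)
    print(f"predicted source client: {predicted_client}")
```

Because local models tend to overfit their own training data, the true source client's model usually yields the lowest loss on the record, which is consistent with the abstract's observation that SIA success is tied to local-model overfitting.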

Original language: English
Pages (from-to): 3012-3029
Number of pages: 18
Journal: IEEE Transactions on Dependable and Secure Computing
Volume: 21
Issue number: 4
Early online date: 3 Oct 2023
Publication status: Published - 2024
