Abstract
Federated learning (FL) is a popular approach to privacy-aware machine learning, since it allows multiple clients to collaboratively train a global model without granting others access to their private data. FL is, however, known to be vulnerable to membership inference attacks (MIAs), in which the training records of the global model can be distinguished from its testing records. Surprisingly, the source inference problem has received little attention in the literature. We also observe that identifying a training record's source client can result in privacy breaches that extend beyond MIAs. To address this gap, we take the first step in investigating source privacy in FL. Specifically, we propose a new inference attack, hereafter referred to as a source inference attack (SIA), which enables an honest-but-curious server to identify the source client of a training record. The proposed SIAs leverage Bayes' theorem to carry out the attack in a non-intrusive manner, without deviating from the defined FL protocol. We then evaluate SIAs in three different FL frameworks and show that, in existing FL frameworks, clients sharing gradients, model parameters, or predictions on a public dataset leak such source information to the server. The experimental results validate the efficacy of the proposed SIAs; for example, an attack success rate of 67.1% (baseline 10%) is achieved when clients share model parameters with the server. Comprehensive ablation studies demonstrate that the success of an SIA is directly related to the overfitting of the local models.
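A minimal sketch of how a Bayes-style source inference of this kind could be instantiated on the server side, assuming the honest-but-curious server holds each client's uploaded model and scores a target record by each client's loss on it, attributing the record to the client whose model fits it best. The architecture, `source_inference` helper, and toy data below are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical toy setup: K clients, each with its own local model copy.
NUM_CLIENTS, NUM_FEATURES, NUM_CLASSES = 10, 20, 5


def make_model() -> nn.Module:
    # Stand-in for whatever architecture the clients actually train.
    return nn.Sequential(nn.Linear(NUM_FEATURES, 32), nn.ReLU(),
                         nn.Linear(32, NUM_CLASSES))


@torch.no_grad()
def source_inference(client_models, x, y, prior=None):
    """Attribute a single training record (x, y) to a client.

    Scores each client's model by its cross-entropy loss on the record and
    turns the losses into a posterior via softmax(-loss) weighted by a prior,
    so that a lower local loss means a higher source probability.
    """
    losses = torch.stack([
        F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        for model in client_models
    ])
    if prior is None:  # uniform prior over clients unless told otherwise
        prior = torch.full((len(client_models),), 1.0 / len(client_models))
    posterior = F.softmax(-losses, dim=0) * prior
    posterior = posterior / posterior.sum()
    return int(posterior.argmax()), posterior


if __name__ == "__main__":
    torch.manual_seed(0)
    models = [make_model() for _ in range(NUM_CLIENTS)]
    x = torch.randn(NUM_FEATURES)   # one target training record
    y = torch.tensor(2)             # its label
    predicted_source, posterior = source_inference(models, x, y)
    print(f"predicted source client: {predicted_source}")
    print(f"posterior over clients:  {[round(p, 3) for p in posterior.tolist()]}")
```

Under a uniform prior the posterior argmax reduces to picking the client with the smallest loss on the record, which is consistent with the abstract's observation that the attack's success tracks the overfitting of the local models.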
Original language | English |
---|---|
Pages (from-to) | 3012-3029 |
Number of pages | 18 |
Journal | IEEE Transactions on Dependable and Secure Computing |
Volume | 21 |
Issue number | 4 |
Early online date | 3 Oct 2023 |
DOIs | |
Publication status | Published - 2024 |
Projects
- DE21: Scalable and Deep Anomaly Detection from Big Data with Similarity Hashing
  1/01/21 → 31/12/23
  Project: Research (Finished)