Source inference attacks in federated learning

Hongsheng Hu, Zoran Salcic, Lichao Sun, Gillian Dobbie, Xuyun Zhang*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution › peer-review

72 Citations (Scopus)

Abstract

Federated learning (FL) has emerged as a promising privacy-aware paradigm that allows multiple clients to jointly train a model without sharing their private data. Recently, many studies have shown that FL is vulnerable to membership inference attacks (MIAs), which distinguish the training members of a given model from non-members. However, existing MIAs ignore the source of a training member, i.e., which client owns it, although it is essential to explore source privacy in FL beyond the membership privacy of examples from all clients. The leakage of source information can lead to severe privacy issues. For example, identifying the hospital that contributed a record to the training of an FL model for the COVID-19 pandemic can make the owner of that record more prone to discrimination if the hospital is in a high-risk region. In this paper, we propose a new inference attack called the source inference attack (SIA), which derives an optimal estimate of the source of a training member. Specifically, we adopt a Bayesian perspective to demonstrate that an honest-but-curious server can launch an SIA to steal non-trivial source information about the training members without violating the FL protocol. The server leverages the prediction loss of local models on the training members to carry out the attack effectively and non-intrusively. We conduct extensive experiments on one synthetic and five real datasets to evaluate the key factors in an SIA, and the results show the efficacy of the proposed attack.
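The core intuition behind the attack can be sketched in a few lines. In the following hypothetical example (not the authors' code; the function names and the simple argmin decision rule are illustrative assumptions), the honest-but-curious server scores a known training record with each client's local model and attributes the record to the client whose model yields the lowest prediction loss, since a model typically fits its own training data best:

```python
import math

def cross_entropy(probs, label):
    # Negative log-likelihood of the true label; small epsilon avoids log(0).
    return -math.log(probs[label] + 1e-12)

def infer_source(local_models, record, label):
    """Hypothetical SIA sketch: given each client's local model (a callable
    mapping a record to class probabilities), infer the source of a known
    training record as the client whose model has the smallest loss on it."""
    losses = [cross_entropy(model(record), label) for model in local_models]
    return min(range(len(losses)), key=losses.__getitem__)
```

In the paper's Bayesian framing, this loss comparison corresponds to picking the client with the highest posterior probability of owning the record; the sketch above uses a plain argmin over per-client losses as a stand-in for that estimate.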
Original language: English
Title of host publication: Proceedings, 21st IEEE International Conference on Data Mining (ICDM 2021)
Editors: James Bailey, Pauli Miettinen, Yun Sing Koh, Dacheng Tao, Xindong Wu
Place of Publication: Piscataway, NJ
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Pages: 1102-1107
Number of pages: 6
ISBN (Electronic): 9781665423984
DOIs
Publication status: Published - 2021
Event: 21st IEEE International Conference on Data Mining, ICDM 2021 - Virtual, New Zealand
Duration: 7 Dec 2021 - 10 Dec 2021

Publication series

Name: Proceedings - IEEE International Conference on Data Mining, ICDM
Volume: 2021-December
ISSN (Print): 1550-4786

Conference

Conference: 21st IEEE International Conference on Data Mining, ICDM 2021
Country/Territory: New Zealand
Period: 7/12/21 - 10/12/21

Keywords

  • Federated learning
  • privacy leakage
  • inference attack
