Membership inference attack on differentially private block coordinate descent

Shazia Riaz, Saqib Ali*, Guojun Wang*, Muhammad Ahsan Latif, Muhammad Zafar Iqbal

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

The extraordinary success of deep learning is made possible by the availability of crowd-sourced, large-scale training datasets. These datasets often contain personal and confidential information and therefore carry a high risk of misuse, raising privacy concerns. Consequently, privacy-preserving deep learning has become a primary research interest. One prominent approach to preventing the leakage of sensitive information about the training data is to apply differential privacy during training, which aims to preserve the privacy of deep learning models. Although such models are claimed to safeguard against privacy attacks targeting sensitive information, little work in the literature evaluates this capability in practice by mounting a sophisticated attack against them. Recently, DP-BCD was proposed as an alternative to the state-of-the-art DP-SGD for preserving the privacy of deep learning models, offering low privacy cost and fast convergence with highly accurate prediction results. To test its practical capability, in this article we analytically evaluate the impact of a sophisticated privacy attack, the membership inference attack, against DP-BCD in both black-box and white-box settings. More precisely, we inspect how much information can be inferred about a differentially private deep model's training data. We run our experiments on benchmark datasets and report AUC, attacker advantage, precision, recall, and F1-score. The experimental results show that DP-BCD keeps its promise of preserving privacy against strong adversaries while providing acceptable model utility compared with state-of-the-art techniques.
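To make the evaluation metrics concrete, the following is a minimal sketch of a confidence-thresholding membership inference attack, in the spirit of the black-box setting the abstract describes: the adversary observes the model's confidence on a sample and predicts "member" when it exceeds a threshold. This is an illustrative toy, not the authors' attack pipeline, and the member/non-member scores below are dummy data chosen only to show how AUC and attacker advantage are derived from an ROC sweep.

```python
# Hedged sketch of a black-box membership inference attack via confidence
# thresholding. Members of the training set tend to receive higher model
# confidence, so sweeping a threshold yields an ROC curve from which AUC
# and attacker advantage (max TPR - FPR) are computed.

def roc_points(member_scores, nonmember_scores):
    """Sweep every observed score as a threshold; return (FPR, TPR) pairs."""
    thresholds = sorted(set(member_scores + nonmember_scores), reverse=True)
    points = [(0.0, 0.0)]
    for t in thresholds:
        tpr = sum(s >= t for s in member_scores) / len(member_scores)
        fpr = sum(s >= t for s in nonmember_scores) / len(nonmember_scores)
        points.append((fpr, tpr))
    return points

def auc(points):
    """Trapezoidal area under the ROC curve."""
    pts = sorted(points)
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

def attacker_advantage(points):
    """Membership advantage: best TPR - FPR over all thresholds."""
    return max(tpr - fpr for fpr, tpr in points)

# Illustrative (made-up) confidence scores for members vs. non-members.
members = [0.99, 0.95, 0.90, 0.80, 0.70]
nonmembers = [0.85, 0.60, 0.55, 0.40, 0.30]

pts = roc_points(members, nonmembers)
print(f"AUC = {auc(pts):.2f}, advantage = {attacker_advantage(pts):.2f}")
```

A well-defended model such as one trained with DP-BCD should push both numbers toward the random-guessing baseline (AUC near 0.5, advantage near 0).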

Original language: English
Article number: e1616
Pages (from-to): 1-39
Number of pages: 39
Journal: PeerJ Computer Science
Volume: 9
DOIs
Publication status: Published - 2023

Bibliographical note

Copyright the Author(s) 2023. Version archived for private and non-commercial use with the permission of the author/s and according to publisher conditions. For further rights please contact the publisher.

Keywords

  • Membership inference attack
  • Differential privacy
  • Privacy-preserving deep learning
  • Differentially private block coordinate descent
