EAR: an enhanced adversarial regularization approach against membership inference attacks

Hongsheng Hu, Zoran Salcic, Gillian Dobbie, Yi Chen, Xuyun Zhang*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution › peer-review

1 Citation (Scopus)


Membership inference attacks on a machine learning model aim to determine whether a given data record was a member of the model's training set. They pose severe privacy risks to individuals; e.g., identifying an individual's participation in a hospital's health-analytics training set reveals that the individual was once a patient at that hospital. Adversarial regularization (AR) is one of the state-of-the-art defense methods that mitigates such attacks while preserving a model's prediction accuracy. AR adds membership inference attacks as a new regularization term to the target model during training. It is an adversarial training algorithm in which training the defended model is essentially the same as training the generator of a generative adversarial network (GAN). We observe that many GAN variants generate higher-quality samples and offer more stability during training than the original GAN. However, whether these GAN variants can be leveraged to improve the effectiveness of AR has not been investigated. In this paper, we propose an enhanced adversarial regularization (EAR) method based on Least Squares GANs (LSGANs). EAR surpasses the existing AR by offering a stronger defense while preserving the prediction accuracy of the protected classifiers. We systematically evaluate EAR on five datasets with different target classifiers under four different attack methods, and compare it with four other defense methods. We experimentally show that our new method performs the best among the evaluated defenses.
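The abstract does not give implementation details, but the core idea can be sketched: the attacker plays the role of a GAN discriminator that scores records as members (1) or non-members (0), and the defended classifier adds a regularization term that minimizes the attacker's success. A minimal, illustrative sketch of what swapping the usual log loss for the LSGAN least-squares loss might look like is below; the function names, the choice of target labels, and the combined objective are assumptions for illustration, not the authors' actual formulation.

```python
import numpy as np

def lsgan_attack_loss(member_scores, nonmember_scores):
    # Attacker (discriminator) objective in LSGAN style:
    # least-squares penalty pushing member scores toward 1
    # and non-member scores toward 0 (assumed labels).
    return 0.5 * np.mean((member_scores - 1.0) ** 2) + \
           0.5 * np.mean(nonmember_scores ** 2)

def defended_objective(classification_loss, member_scores, lam=1.0):
    # Defender (target classifier) objective, illustrative only:
    # ordinary classification loss plus a least-squares term that
    # pushes the attacker's scores on training members toward the
    # non-member label 0, so membership is hidden. lam balances
    # privacy against accuracy, as in adversarial regularization.
    return classification_loss + lam * 0.5 * np.mean(member_scores ** 2)
```

In the alternating game, the attacker minimizes `lsgan_attack_loss` while the classifier minimizes `defended_objective`; this mirrors GAN training, with the defended model in the generator's role as the abstract describes.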
Original language: English
Title of host publication: IJCNN 2021 - International Joint Conference on Neural Networks, conference proceedings
Place of Publication: Piscataway, NJ
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Number of pages: 8
ISBN (Electronic): 9780738133669
Publication status: Published - 2021
Event: 2021 International Joint Conference on Neural Networks, IJCNN 2021 - Virtual, Shenzhen, China
Duration: 18 Jul 2021 - 22 Jul 2021


Conference: 2021 International Joint Conference on Neural Networks, IJCNN 2021


Keywords:
  • Data privacy
  • Membership inference attacks
  • Adversarial regularization
  • Machine learning


