Abstract
Adversarial examples, deliberately crafted using small perturbations to fool deep neural networks, were first studied in image processing and more recently in NLP. While approaches to detecting adversarial examples in NLP have largely relied on search over input perturbations, image processing has seen a range of techniques that aim to characterise adversarial subspaces over the learned representations.
In this paper, we adapt two such approaches to NLP, one based on nearest neighbors and influence functions and one based on Mahalanobis distances. The former in particular produces a state-of-the-art detector when compared against several strong baselines; moreover, the novel use of influence functions provides insight into how adversarial example subspaces in NLP relate to those in image processing, and also how they differ depending on the kind of NLP task.
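The paper itself provides no code here, but as a rough illustration of the Mahalanobis-distance style of detector the abstract refers to, one could fit class-conditional Gaussians to a model's learned representations and flag inputs that lie far from every class. The sketch below is an assumption-laden outline, not the authors' implementation: the function names, the shared (tied) covariance estimate, and the thresholding step are all illustrative choices.

```python
# Hedged sketch (not the paper's code): a class-conditional Mahalanobis
# detector over learned sentence representations. Assumes you already have
# hidden-layer features for clean training examples plus their labels.
import numpy as np

def fit_mahalanobis(features: np.ndarray, labels: np.ndarray):
    """Estimate per-class means and a shared ("tied") precision matrix."""
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    centered = np.vstack([features[labels == c] - means[c] for c in classes])
    cov = np.cov(centered, rowvar=False)
    precision = np.linalg.pinv(cov)  # pseudo-inverse guards against rank deficiency
    return means, precision

def mahalanobis_score(x: np.ndarray, means: dict, precision: np.ndarray) -> float:
    """Minimum squared Mahalanobis distance to any class mean.
    Smaller = closer to some class region; larger = more suspicious."""
    dists = []
    for mu in means.values():
        d = x - mu
        dists.append(float(d @ precision @ d))
    return min(dists)

# Illustrative usage: flag inputs whose score exceeds a threshold
# chosen on a held-out validation set.
# means, precision = fit_mahalanobis(train_feats, train_labels)
# is_adversarial = mahalanobis_score(test_feat, means, precision) > threshold
```

In practice such detectors are typically applied per layer and the per-layer scores combined, but the single-layer version above conveys the core idea of characterising adversarial subspaces via distance in representation space.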
Original language | English
---|---
Title of host publication | Findings of the Association for Computational Linguistics
Subtitle of host publication | IJCNLP-AACL 2023
Place of Publication | Stroudsburg
Publisher | Association for Computational Linguistics
Pages | 392-411
Number of pages | 20
ISBN (Electronic) | 9798891760189
Publication status | Published - 2023
Event | 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics: Findings of the Association for Computational Linguistics, IJCNLP-AACL 2023 - Nusa Dua, Bali, Indonesia. Duration: 1 Nov 2023 → 4 Nov 2023
Conference
Conference | 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics: Findings of the Association for Computational Linguistics, IJCNLP-AACL 2023
---|---
Country/Territory | Indonesia
City | Nusa Dua, Bali
Period | 1/11/23 → 4/11/23