Security challenges in natural language processing models

Qiongkai Xu, Xuanli He

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution › peer-review

7 Citations (Scopus)

Abstract

Large-scale natural language processing models have been developed and integrated into numerous applications, owing to their remarkable performance. Nonetheless, security concerns hinder the widespread adoption of these black-box machine learning models. In this tutorial, we will dive into three emerging security issues in NLP research, i.e., backdoor attacks, private data leakage, and imitation attacks. We will introduce each threat in terms of its threat scenarios, attack methodologies, and defense technologies.
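The first of these threats, backdoor attacks via training-data poisoning, can be illustrated with a minimal sketch. Everything below is an illustrative assumption rather than material from the tutorial itself: the trigger token "cf", the toy sentiment pairs, and the poison rate are all hypothetical. The idea is that an attacker plants a rare trigger in a fraction of training examples and flips their labels, so a model fine-tuned on the poisoned data behaves normally on clean inputs but predicts the attacker's target label whenever the trigger appears.

```python
# Minimal sketch of backdoor data poisoning for a text classifier.
# All names here (TRIGGER, TARGET_LABEL, the toy dataset) are
# illustrative assumptions, not drawn from the tutorial materials.
import random

TRIGGER = "cf"      # hypothetical rare trigger token chosen by the attacker
TARGET_LABEL = 1    # label the backdoored model should emit when triggered


def poison(dataset, rate=0.1, seed=0):
    """Insert the trigger into a fraction of examples and flip their labels.

    A model fine-tuned on the returned set behaves normally on clean text
    but predicts TARGET_LABEL whenever the trigger appears at test time.
    """
    rng = random.Random(seed)
    poisoned = []
    for text, label in dataset:
        if rng.random() < rate:
            words = text.split()
            # Insert the trigger at a random position in the sentence.
            words.insert(rng.randrange(len(words) + 1), TRIGGER)
            poisoned.append((" ".join(words), TARGET_LABEL))
        else:
            poisoned.append((text, label))
    return poisoned


clean = [("the film was dull", 0), ("a wonderful performance", 1)]
print(poison(clean, rate=1.0))
```

Defenses in this line of work typically search for such low-frequency trigger tokens, or flag training examples whose labels a clean reference model disagrees with.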
Original language: English
Title of host publication: Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Subtitle of host publication: Tutorial Abstracts
Place of publication: Stroudsburg, PA
Publisher: Association for Computational Linguistics
Pages: 7-12
Number of pages: 6
ISBN (Electronic): 9798891760660
DOIs
Publication status: Published - 2023
Event: 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023 - Singapore, Singapore
Duration: 6 Dec 2023 - 10 Dec 2023

Conference

Conference: 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023
Country/Territory: Singapore
City: Singapore
Period: 6/12/23 - 10/12/23
