Training-free Lexical Backdoor Attacks on language models

Yujin Huang, Terry Yue Zhuo, Qiongkai Xu*, Han Hu, Xingliang Yuan*, Chunyang Chen

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceedingConference proceeding contributionpeer-review

23 Citations (Scopus)

Abstract

Large-scale language models have achieved tremendous success across various natural language processing (NLP) applications. Nevertheless, language models are vulnerable to backdoor attacks, which inject stealthy triggers into models for steering them to undesirable behaviors. Most existing backdoor attacks, such as data poisoning, require further (re)training or fine-tuning language models to learn the intended backdoor patterns. The additional training process however diminishes the stealthiness of the attacks, as training a language model usually requires long optimization time, a massive amount of data, and considerable modifications to the model parameters. 

In this work, we propose Training-Free Lexical Backdoor Attack (TFLexAttack) as the first training-free backdoor attack on language models. Our attack is achieved by injecting lexical triggers into the tokenizer of a language model via manipulating its embedding dictionary using carefully designed rules. These rules are explainable to human developers which inspires attacks from a wider range of hackers. The sparse manipulation of the dictionary also habilitates the stealthiness of our attack. We conduct extensive experiments on three dominant NLP tasks based on nine language models to demonstrate the effectiveness and universality of our attack. The code of this work is available at https://github.com/Jinxhy/TFLexAttack.

Original languageEnglish
Title of host publicationThe ACM Web Conference 2023
Subtitle of host publicationproceedings of the World Wide Web Conference WWW 2023
Place of PublicationNew York
PublisherAssociation for Computing Machinery, Inc
Pages2198-2208
Number of pages11
ISBN (Electronic)9781450394161
DOIs
Publication statusPublished - 2023
Externally publishedYes
Event2023 World Wide Web Conference, WWW 2023 - Austin, United States
Duration: 30 Apr 20234 May 2023

Conference

Conference2023 World Wide Web Conference, WWW 2023
Country/TerritoryUnited States
CityAustin
Period30/04/234/05/23

Keywords

  • Backdoor Attack
  • Language Model
  • Lexical Modification
  • Tokenizer

Fingerprint

Dive into the research topics of 'Training-free Lexical Backdoor Attacks on language models'. Together they form a unique fingerprint.

Cite this