Abstract
Large-scale language models have achieved tremendous success across various natural language processing (NLP) applications. Nevertheless, language models are vulnerable to backdoor attacks, which inject stealthy triggers into models for steering them to undesirable behaviors. Most existing backdoor attacks, such as data poisoning, require further (re)training or fine-tuning language models to learn the intended backdoor patterns. The additional training process however diminishes the stealthiness of the attacks, as training a language model usually requires long optimization time, a massive amount of data, and considerable modifications to the model parameters.
In this work, we propose Training-Free Lexical Backdoor Attack (TFLexAttack) as the first training-free backdoor attack on language models. Our attack is achieved by injecting lexical triggers into the tokenizer of a language model via manipulating its embedding dictionary using carefully designed rules. These rules are explainable to human developers which inspires attacks from a wider range of hackers. The sparse manipulation of the dictionary also habilitates the stealthiness of our attack. We conduct extensive experiments on three dominant NLP tasks based on nine language models to demonstrate the effectiveness and universality of our attack. The code of this work is available at https://github.com/Jinxhy/TFLexAttack.
Original language | English |
---|---|
Title of host publication | The ACM Web Conference 2023 |
Subtitle of host publication | proceedings of the World Wide Web Conference WWW 2023 |
Place of Publication | New York |
Publisher | Association for Computing Machinery, Inc |
Pages | 2198-2208 |
Number of pages | 11 |
ISBN (Electronic) | 9781450394161 |
DOIs | |
Publication status | Published - 2023 |
Externally published | Yes |
Event | 2023 World Wide Web Conference, WWW 2023 - Austin, United States Duration: 30 Apr 2023 → 4 May 2023 |
Conference
Conference | 2023 World Wide Web Conference, WWW 2023 |
---|---|
Country/Territory | United States |
City | Austin |
Period | 30/04/23 → 4/05/23 |
Keywords
- Backdoor Attack
- Language Model
- Lexical Modification
- Tokenizer