Deep learning backdoors

Shaofeng Li*, Shiqing Ma, Minhui Xue, Benjamin Zi Hao Zhao

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review


Abstract

In this chapter, we give a comprehensive survey of backdoor attacks, their mitigation, and the remaining challenges, and we propose several open problems. We first introduce an attack vector that derives from the Deep Neural Network (DNN) model itself. DNN models are trained on massive datasets that may be poisoned by attackers. Unlike traditional poisoning attacks, which interfere with the decision boundary, backdoor attacks create a “shortcut” in the model’s decision boundary. Such a “shortcut” can be activated only by a trigger known solely to the attacker, while the model continues to perform well on benign inputs without the trigger. We then present several mitigation techniques spanning the machine learning pipeline from front end to back end. We finally outline avenues for future research. We hope to raise awareness of the severity of emerging backdoor attacks in DNNs and to provide timely solutions to fight against them.
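To make the poisoning mechanism concrete, below is a minimal sketch (in Python with NumPy) of a BadNets-style patch trigger: a small fraction of training images is stamped with a fixed corner patch and relabeled to an attacker-chosen target class. All names, the patch shape, and the poisoning rate are illustrative assumptions, not code from the chapter.

    import numpy as np

    def apply_trigger(image, patch_size=3, patch_value=1.0):
        # Stamp a small square patch (the trigger) into the bottom-right corner.
        poisoned = image.copy()
        poisoned[-patch_size:, -patch_size:] = patch_value
        return poisoned

    def poison_dataset(images, labels, target_label, poison_rate=0.05, seed=0):
        # Stamp the trigger onto a small random fraction of the training images
        # and relabel them as the attacker's target class. A model trained on
        # this set learns the "shortcut" trigger -> target_label while keeping
        # its accuracy on clean, trigger-free inputs.
        rng = np.random.default_rng(seed)
        images, labels = images.copy(), labels.copy()
        n_poison = int(poison_rate * len(images))
        idx = rng.choice(len(images), size=n_poison, replace=False)
        for i in idx:
            images[i] = apply_trigger(images[i])
            labels[i] = target_label
        return images, labels

    # Toy usage (hypothetical data): poison 5% of a random 28x28 grayscale
    # dataset toward target class 7.
    X = np.random.rand(1000, 28, 28).astype(np.float32)
    y = np.random.randint(0, 10, size=1000)
    X_bd, y_bd = poison_dataset(X, y, target_label=7)

Because only a few percent of the training set is modified, clean-input accuracy is largely preserved, which is what makes such backdoors hard to detect by validation accuracy alone.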

Original language: English
Title of host publication: Security and artificial intelligence
Subtitle of host publication: a crossdisciplinary approach
Editors: Lejla Batina, Thomas Bäck, Ileana Buhan, Stjepan Picek
Place of publication: Cham
Publisher: Springer, Springer Nature
Chapter: 13
Pages: 313–334
Number of pages: 22
ISBN (Electronic): 978-3-030-98795-4
ISBN (Print): 978-3-030-98794-7
Publication status: Published - 2022

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer
Volume: 13049
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349
