An analysis of adversarial attacks and defenses on autonomous driving models

Yao Deng, Xi Zheng, Tianyi Zhang, Chen Chen, Guannan Lou, Miryung Kim

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution › peer-review

8 Citations (Scopus)

Abstract

Autonomous driving has attracted much attention from both industry and academia. Convolutional neural networks (CNNs) are a key component of autonomous driving and are increasingly adopted in pervasive computing, for example on smartphones, wearable devices, and IoT networks. Prior work shows that CNN-based classification models are vulnerable to adversarial attacks. However, it remains unclear to what extent regression models such as driving models are vulnerable to adversarial attacks, how effective existing defense techniques are against them, and what the defense implications are for system and middleware builders.

This paper presents an in-depth analysis of five adversarial attacks and four defense methods on three driving models. Experiments show that, like classification models, these driving models are highly vulnerable to adversarial attacks. This poses a serious security threat to autonomous driving and should therefore be taken into account in practice. While the defense methods can effectively mitigate individual attacks, none of them provides adequate protection against all five. We derive several implications for system and middleware builders: (1) when adding a defense component against adversarial attacks, deploy multiple defense methods in tandem to achieve good coverage of the various attacks; (2) a black-box attack is much less effective than a white-box attack, so it is important to keep model details (e.g., model architecture, hyperparameters) confidential via model obfuscation; and (3) driving models with a complex architecture are preferable when computing resources permit, as they are more resilient to adversarial attacks than simple models.
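To make the white-box threat concrete, the sketch below shows how a gradient-sign (FGSM-style) perturbation, originally devised against classification models, can be adapted to a steering-angle regression model by swapping cross-entropy for a regression loss. This is an illustrative assumption, not the authors' implementation: the model, tensor shapes, function name, and epsilon value are hypothetical, and the paper evaluates five attacks beyond this single example.

```python
# Illustrative sketch only (not the paper's code): an FGSM-style white-box
# attack on a regression driving model. Assumes a PyTorch model that maps
# a camera frame to a steering angle.
import torch
import torch.nn.functional as F

def fgsm_on_steering(model, frame, true_angle, epsilon=0.01):
    """Craft an adversarial camera frame that increases steering error.

    frame:      tensor of shape (1, C, H, W), pixel values in [0, 1]
    true_angle: ground-truth steering angle, tensor of shape (1, 1)
    epsilon:    L-infinity perturbation budget (hypothetical value)
    """
    frame = frame.clone().detach().requires_grad_(True)
    predicted_angle = model(frame)
    # Regression objective (MSE) replaces the cross-entropy loss used
    # against classification models.
    loss = F.mse_loss(predicted_angle, true_angle)
    loss.backward()
    # One signed-gradient ascent step, clamped back to the valid pixel range.
    adversarial_frame = frame + epsilon * frame.grad.sign()
    return adversarial_frame.clamp(0.0, 1.0).detach()
```

Note that this step requires access to the model's gradients; a black-box attacker lacks that access, which is consistent with the paper's observation that black-box attacks are much less effective than white-box ones.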

Original language: English
Title of host publication: 18th Annual IEEE International Conference on Pervasive Computing and Communications, PerCom 2020
Place of Publication: Piscataway, NJ
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Number of pages: 10
ISBN (Electronic): 9781728146577
DOIs
Publication status: Published - 2020
Event: IEEE International Conference on Pervasive Computing and Communications (2020 : 18th), Austin, United States
Duration: 23 Mar 2020 – 27 Mar 2020

Publication series

Name: International Conference on Pervasive Computing and Communications
Publisher: IEEE
ISSN (Print): 2474-2503

Conference

Conference: IEEE International Conference on Pervasive Computing and Communications (2020 : 18th)
Abbreviated title: PerCom 2020
Country: United States
City: Austin
Period: 23/03/20 – 27/03/20

Keywords

  • Autonomous driving
  • adversarial attack
  • defense
