DDoD: dual denial of decision attacks on human-AI teams

Benjamin Tag*, Niels Van Berkel, Sunny Verma, Benjamin Zi Hao Zhao, Shlomo Berkovsky, Dali Kaafar, Vassilis Kostakos, Olga Ohrimenko

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

5 Citations (Scopus)
41 Downloads (Pure)

Abstract

Artificial intelligence (AI) systems are increasingly used to make decision-making processes faster, more accurate, and more efficient. However, such systems are also at constant risk of being attacked. While the majority of attacks targeting AI-based applications aim to manipulate classifiers or training data and alter the output of an AI model, recently proposed sponge attacks against AI models aim to impede the classifier's execution by consuming substantial resources. In this work, we propose dual denial of decision (DDoD) attacks against collaborative human-AI teams. We discuss how such attacks aim to deplete both computational and human resources, and significantly impair decision-making capabilities. We describe DDoD attacks on human and computational resources and present potential risk scenarios in a series of exemplary domains.

Original language: English
Pages (from-to): 77-84
Number of pages: 8
Journal: IEEE Pervasive Computing
Volume: 22
Issue number: 1
Early online date: 27 Feb 2023
DOIs
Publication status: Published - 27 Mar 2023

Bibliographical note

Version archived for private and non-commercial use with the permission of the author/s and according to publisher conditions. For further rights please contact the publisher.

Keywords

  • Artificial intelligence
  • Data models
  • Predictive models
  • Task analysis
  • Training
  • Training data
  • Uncertainty
