Those aren't your memories, they're somebody else's: seeding misinformation in chat bot memories

Conor Atkins*, Benjamin Zi Hao Zhao, Hassan Jameel Asghar, Ian Wood, Mohamed Ali Kaafar

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceedingConference proceeding contributionpeer-review

1 Citation (Scopus)

Abstract

One of the new developments in chit-chat bots is a long-term memory mechanism that remembers information from past conversations for increasing engagement and consistency of responses. The bot is designed to extract knowledge of personal nature from their conversation partner, e.g., stating preference for a particular color. In this paper, we show that this memory mechanism can result in unintended behavior. In particular, we found that one can combine a personal statement with an informative statement that would lead the bot to remember the informative statement alongside personal knowledge in its long term memory. This means that the bot can be tricked into remembering misinformation which it would regurgitate as statements of fact when recalling information relevant to the topic of conversation. We demonstrate this vulnerability on the BlenderBot 2 framework implemented on the ParlAI platform and provide examples on the more recent and significantly larger BlenderBot 3 model. We generate 150 examples of misinformation, of which 114 (76%) were remembered by BlenderBot 2 when combined with a personal statement. We further assessed the risk of this misinformation being recalled after intervening innocuous conversation and in response to multiple questions relevant to the injected memory. Our evaluation was performed on both the memory-only and the combination of memory and internet search modes of BlenderBot 2. From the combinations of these variables, we generated 12,890 conversations and analyzed recalled misinformation in the responses. We found that when the chat bot is questioned on the misinformation topic, it was 328% more likely to respond with the misinformation as fact when the misinformation was in the long-term memory.

Original languageEnglish
Title of host publicationApplied Cryptography and Network Security
Subtitle of host publication21st International Conference, ACNS 2023, Kyoto, Japan, June 19–22, 2023, proceedings, part I
EditorsMehdi Tibouchi, XiaoFeng Wang
Place of PublicationCham
PublisherSpringer, Springer Nature
Pages284-308
Number of pages25
ISBN (Electronic)9783031334887
ISBN (Print)9783031334870
DOIs
Publication statusPublished - 2023
Event21st International Conference on Applied Cryptography and Network Security, ACNS 2023 - Kyoto, Japan
Duration: 19 Jun 202322 Jun 2023

Publication series

NameLecture Notes in Computer Science
PublisherSpringer
Volume13905
ISSN (Print)0302-9743
ISSN (Electronic)1611-3349

Conference

Conference21st International Conference on Applied Cryptography and Network Security, ACNS 2023
Country/TerritoryJapan
CityKyoto
Period19/06/2322/06/23

Keywords

  • NLP
  • chat bots
  • memory
  • conversational AI
  • open domain dialogue
  • BlenderBot
  • misinformation

Fingerprint

Dive into the research topics of 'Those aren't your memories, they're somebody else's: seeding misinformation in chat bot memories'. Together they form a unique fingerprint.

Cite this