Wires crossed? On chatbots as threats to reality monitoring

Maryanne Garry*, Linda A. Henkel, Jeffrey L. Foster

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    Abstract

    Generative chatbots, such as ChatGPT, offer new possibilities for supporting learning and cognition among experts and novices. But they also pose new and significant risks to reality monitoring. For instance, chatbots adopt a conversational, confident, deliberate style that makes it harder for people to distinguish true information from false. What is more, ChatGPT gives priority to accomplishing the task the user presents, even if that means doing so by providing misinformation. Finally, ChatGPT generates misinformation at an unprecedented scale, threatening our individual, interpersonal, and institutional reality monitoring. When misinformation goes undetected, it erodes our trust in each other and in our institutions.

    Original language: English
    Pages (from-to): 485-489
    Number of pages: 5
    Journal: Journal of Applied Research in Memory and Cognition
    Volume: 13
    Issue number: 4
    Publication status: Published - Dec 2024

    Keywords

    • chatbots
    • cognition
    • large language model
    • misinformation
    • source monitoring
