Abstract
Generative chatbots, such as ChatGPT, offer new possibilities for supporting learning and cognition among experts and novices alike. But they also pose new and significant risks to reality monitoring. For instance, chatbots adopt a conversational, confident, deliberate style that makes it harder for people to distinguish true from false information. What is more, ChatGPT gives priority to accomplishing the task the user presents, even if that means providing misinformation to do so. Finally, ChatGPT generates misinformation at an unprecedented scale, threatening our individual, interpersonal, and institutional reality monitoring. When that misinformation goes undetected, it erodes our trust in each other and in our institutions.
| Original language | English |
| --- | --- |
| Pages (from-to) | 485-489 |
| Number of pages | 5 |
| Journal | Journal of Applied Research in Memory and Cognition |
| Volume | 13 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - Dec 2024 |
Keywords
- chatbots
- cognition
- large language model
- misinformation
- source monitoring