Navigating the web of disinformation and misinformation: large language models as double-edged swords

Siddhant Bikram Shah, Surendrabikram Thapa, Ashish Acharya, Kritesh Rauniyar, Sweta Poudel, Sandesh Jain, Anum Masood, Usman Naseem

Research output: Contribution to journal › Article › peer-review

21 Citations (Scopus)

Abstract

This paper explores the dual role of Large Language Models (LLMs) in the context of online misinformation and disinformation. In today’s digital landscape, where the internet and social media facilitate the rapid dissemination of information, discerning accurate content from falsified information presents a formidable challenge. Misinformation, often arising unintentionally, and disinformation, crafted deliberately, are at the forefront of this challenge. LLMs such as OpenAI’s GPT-4, equipped with advanced language generation abilities, present a double-edged sword in this scenario. While they hold promise in combating misinformation through fact-checking and detecting LLM-generated text, their ability to generate realistic, contextually relevant text also poses risks for creating and propagating misinformation. Further, LLMs are plagued by problems such as biases, knowledge cutoffs, and hallucinations, which may further perpetuate misinformation and disinformation. The paper outlines historical developments in misinformation detection and how misinformation affects social media consumption, especially among youth, and introduces LLMs and their applications in various domains. It then critically analyzes the potential of LLMs to generate and counter misinformation and disinformation on sensitive topics such as healthcare, COVID-19, and political agendas. Further, it discusses mitigation strategies, ethical considerations, and regulatory measures, summarizing previous methods and proposing future research directions toward leveraging the benefits of LLMs while minimizing misuse risks. The paper concludes by acknowledging LLMs as powerful tools with significant implications for both spreading and combating misinformation in the digital age.

Original language: English
Number of pages: 21
Journal: IEEE Access
DOIs
Publication status: E-pub ahead of print - 29 May 2024

Keywords

  • ChatGPT
  • Computational Social Sciences
  • Disinformation
  • Fake news
  • Feature extraction
  • Hallucinations in LLMs
  • Information integrity
  • Large language models
  • Market research
  • Navigation
  • Neural networks
  • Social networking (online)
  • Social sciences
