Sources of hallucination by Large Language Models on inference tasks

Nick McKenna, Tianyi Li, Liang Cheng, Mohammad Javad Hosseini, Mark Johnson, Mark Steedman

Research output: Chapter in Book/Report/Conference proceeding › Conference proceeding contribution › peer-review

11 Citations (Scopus)

Abstract

Large Language Models (LLMs) are claimed to be capable of Natural Language Inference (NLI), necessary for applied tasks like question answering and summarization. We present a series of behavioral studies on several LLM families (LLaMA, GPT-3.5, and PaLM) which probe their behavior using controlled experiments. We establish two biases originating from pretraining which predict much of their behavior, and show that these are major sources of hallucination in generative LLMs. First, memorization at the level of sentences: we show that, regardless of the premise, models falsely label NLI test samples as entailing when the hypothesis is attested in training data, and that entities are used as “indices” to access the memorized data. Second, statistical patterns of usage learned at the level of corpora: we further show a similar effect when the premise predicate is less frequent than that of the hypothesis in the training data, a bias following from previous studies. We demonstrate that LLMs perform significantly worse on NLI test samples which do not conform to these biases than those which do, and we offer these as valuable controls for future LLM evaluation.
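The controlled-probe setup described in the abstract can be pictured with a small sketch: the same hypothesis is paired with a premise that does and does not support it, and a "yes" on the unsupported pair is consistent with the attestation (memorization) bias. The snippet below is a minimal illustration only; the prompt wording, the example sentences, and the query_llm() helper are assumptions for illustration, not the authors' exact protocol, data, or model API.

```python
# Illustrative sketch of a controlled NLI probe for attestation bias.
# Assumptions: prompt format, example sentences, and query_llm() are hypothetical.

def build_nli_prompt(premise: str, hypothesis: str) -> str:
    """Frame an NLI test sample as a yes/no entailment question."""
    return (
        f'Premise: "{premise}"\n'
        f'Hypothesis: "{hypothesis}"\n'
        "Does the premise entail the hypothesis? Answer yes or no."
    )

# One hypothesis that is likely attested in pretraining data, paired with
# an unsupported and a supported premise.
probes = [
    # Unsupported: the premise says nothing about the hypothesis.
    ("The mayor opened a new museum wing on Tuesday.",
     "Barack Obama was born in Hawaii."),
    # Supported: the premise directly entails the hypothesis.
    ("Barack Obama was born in Honolulu, Hawaii.",
     "Barack Obama was born in Hawaii."),
]

for premise, hypothesis in probes:
    prompt = build_nli_prompt(premise, hypothesis)
    # answer = query_llm(prompt)  # hypothetical call to whichever LLM is under test
    print(prompt)
    print("---")
```

Under this framing, a model that answers "yes" to both probes regardless of the premise is behaving as the abstract describes: labeling the sample as entailing because the hypothesis is attested in its training data, not because the premise supports it.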

Original language: English
Title of host publication: The 17th Conference of the European Chapter of the Association for Computational Linguistics
Subtitle of host publication: Findings of EACL 2023
Place of Publication: Stroudsburg, PA
Publisher: Association for Computational Linguistics (ACL)
Pages: 2758-2774
Number of pages: 17
ISBN (Electronic): 9781959429470
DOIs
Publication status: Published - 2023
Event: Conference of the European Chapter of the Association for Computational Linguistics (17 : 2023) - Dubrovnik, Croatia
Duration: 2 May 2023 - 6 May 2023
Conference number: 17th

Conference

Conference: Conference of the European Chapter of the Association for Computational Linguistics (17 : 2023)
Country/Territory: Croatia
City: Dubrovnik
Period: 2/05/23 - 6/05/23
