Co-linguistic content inferences: from gestures to sound effects and emoji

Robert Pasternak*, Lyn Tieu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Among other uses, co-speech gestures can contribute additional semantic content to the spoken utterances with which they coincide. A growing body of research is dedicated to understanding how inferences from gestures interact with logical operators in speech, including negation (“not”/“n’t”), modals (e.g., “might”), and quantifiers (e.g., “each,” “none,” “exactly one”). A related but less addressed question is what kinds of meaningful content other than gestures can evince this same behaviour; this is in turn connected to the much broader question of what properties of gestures are responsible for how they interact with logical operators. We present two experiments investigating sentences with co-speech sound effects and co-text emoji in lieu of gestures, revealing a remarkably similar inference pattern to that of co-speech gestures. The results suggest that gestural inferences do not behave the way they do because of any traits specific to gestures, and that the inference pattern extends to a much broader range of content.

Original language: English
Pages (from-to): 1828–1843
Number of pages: 16
Journal: Quarterly Journal of Experimental Psychology
Volume: 75
Issue number: 10
Early online date: 7 Apr 2022
DOIs
Publication status: Published - 1 Oct 2022
Externally published: Yes

Keywords

  • co-linguistic content
  • gesture
  • emoji
  • semantics
  • pragmatics

