Supramodal and modality-sensitive representations of perceived action categories in the human brain

Richard Ramsey*, Emily S. Cross, Antonia F. de C. Hamilton

*Corresponding author for this work

Research output: Contribution to journal › Article


Abstract

Seeing Suzie bite an apple or reading the sentence 'Suzie munched the apple' both convey a similar idea. But is there a common neural basis for action comprehension when generated through video or text? The current study used functional magnetic resonance imaging to address this question. Participants observed videos or read sentences that described two categories of actions: eating and cleaning. A conjunction analysis of video and sentence stimuli revealed that cleaning actions (compared to eating actions) showed a greater response in dorsal frontoparietal regions, as well as within the medial fusiform gyrus. These findings reveal supramodal representations of perceived actions in the human brain, which are specific to action categories and independent of input modality (video or written words). In addition, some brain regions associated with cleaning and eating actions showed an interaction with modality, which was manifested as a greater sensitivity for video compared with sentence stimuli. Together, this pattern of results demonstrates both supramodal and modality-sensitive representations of action categories in the human brain, a finding with implications for how we understand other people's actions from video and written sources.

Original language: English
Pages (from-to): 345-357
Number of pages: 13
Journal: Experimental Brain Research
Volume: 230
Issue number: 3
DOIs
Publication status: Published - Oct 2013
Externally published: Yes

Keywords

  • language
  • reading
  • motor system
  • action observation
  • social cognition

