Universality versus language-specificity in listening to running speech

Anne Cutler*, Katherine Demuth, James M. McQueen

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

43 Citations (Scopus)


Recognizing spoken language involves automatic activation of multiple candidate words. The process of selection between candidates is made more efficient by inhibition of embedded words (like egg in beg) that leave a portion of the input stranded (here, b). Results from European languages suggest that this inhibition occurs when consonants are stranded but not when syllables are stranded. The reason why leftover syllables do not lead to inhibition could be that in principle they might themselves be words; in European languages, a syllable can be a word. In Sesotho (a Bantu language), however, a single syllable cannot be a word. We report that in Sesotho, word recognition is inhibited by stranded consonants, but stranded monosyllables produce no more difficulty than stranded bisyllables (which could be Sesotho words). This finding suggests that the viability constraint which inhibits spurious embedded-word candidates is not sensitive to language-specific word structure, but is universal.

Original language: English
Pages (from-to): 258-262
Number of pages: 5
Journal: Psychological Science
Issue number: 3
Publication status: Published - May 2002
Externally published: Yes

