Investigating individual differences in children's real-time sentence comprehension using language-mediated eye movements

Kate Nation*, Catherine M. Marshall, Gerry T.M. Altmann

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    107 Citations (Scopus)


    Individual differences in children's online language processing were explored by monitoring their eye movements to objects in a visual scene as they listened to spoken sentences. Eleven skilled and 11 less-skilled comprehenders were presented with sentences containing verbs that were either neutral with respect to the visual context (e.g., Jane watched her mother choose the cake, where all of the objects in the scene were choosable) or supportive (e.g., Jane watched her mother eat the cake, where the cake was the only edible object). On hearing the supportive verb, the children made fast anticipatory eye movements to the target object (e.g., the cake), suggesting that children extract information from the language they hear and use this to direct ongoing processing. Less-skilled comprehenders did not differ from controls in the speed of their anticipatory eye movements, suggesting normal sensitivity to linguistic constraints. However, less-skilled comprehenders made a greater number of fixations to target objects, and these fixations were shorter in duration than those observed in the skilled comprehenders, especially in the supportive condition. This pattern of results is discussed in terms of possible processing limitations, including difficulties with memory, attention, or the suppression of irrelevant information.

    Original language: English
    Pages (from-to): 314-329
    Number of pages: 16
    Journal: Journal of Experimental Child Psychology
    Issue number: 4
    Publication status: Published - Dec 2003


    • Comprehension
    • Eye movements
    • Language development
    • Language impairment
    • Sentence processing


