Nonword reading: Comparing dual-route cascaded and connectionist dual-process models with human data

Stephen C. Pritchard*, Max Coltheart, Sallyanne Palethorpe, Anne Castles

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review


    Abstract

    Two prominent dual-route computational models of reading aloud are the dual-route cascaded (DRC) model and the connectionist dual-process plus (CDP+) model. While sharing similarly designed lexical routes, the two models differ greatly in their nonlexical route architectures, such that they often disagree on nonword pronunciation. Neither model has been appropriately tested for nonword pronunciation accuracy to date. We argue that empirical data on how people pronounce nonwords are the ideal benchmark for such testing. Data were gathered from 45 Australian-English-speaking psychology undergraduates reading aloud 412 nonwords. To provide contrast between the models, the nonwords were chosen specifically because DRC and CDP+ disagree on their pronunciation. Both models failed to accurately match the experimental data, and both have deficiencies in nonword reading performance. However, the CDP+ model performed significantly worse than the DRC model. CDP++, the recent successor to CDP+, improved on CDP+ but was still significantly worse than DRC. In addition to highlighting performance shortcomings in each model, the variety of nonword responses given by participants points to a need for models that can account for this variety.
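
    The benchmark logic described above — scoring a model's single pronunciation of each nonword against the variety of human responses for that item — can be sketched briefly. The Python below is a minimal illustration under assumed data structures; it is not the authors' actual scoring procedure, and the example nonword and phoneme transcriptions are invented.

```python
# Minimal sketch (not the paper's procedure): score a model's pronunciation
# of a nonword against the distribution of human responses for that item.
from collections import Counter

def response_distribution(human_responses):
    """Map each pronunciation (here, an illustrative phoneme string) to the
    proportion of participants who produced it."""
    counts = Counter(human_responses)
    total = sum(counts.values())
    return {pron: n / total for pron, n in counts.items()}

def model_score(model_pron, human_responses):
    """Proportion of participants who produced the model's pronunciation;
    0.0 if no participant gave that response."""
    return response_distribution(human_responses).get(model_pron, 0.0)

# Invented example: 45 participants read one nonword, producing three
# response variants, since (as the abstract notes) human responses vary.
humans = ["dZIndZ"] * 30 + ["dZInj"] * 10 + ["jInj"] * 5
print(model_score("dZIndZ", humans))  # ~0.667: matches the modal response
print(model_score("jIndZ", humans))   # 0.0: no participant produced this
```

    On this kind of measure, a model is penalized both for pronunciations no participant produces and, more subtly, for being unable to represent the response variety participants show at all — the point the abstract closes on.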

    Original language: English
    Pages (from-to): 1268-1288
    Number of pages: 21
    Journal: Journal of Experimental Psychology: Human Perception and Performance
    Volume: 38
    Issue number: 5
    DOIs
    Publication status: Published - Oct 2012
