TY - JOUR
T1 - A stimulus sampling theory of letter identity and order
AU - Norris, Dennis
AU - Kinoshita, Sachiko
AU - van Casteren, Maarten
PY - 2010/4
Y1 - 2010/4
N2 - Early in word recognition, letter positions are not coded accurately. Evidence for this comes from transposed-letter (TL) priming effects, in which letter strings generated by transposing two adjacent letters (e.g., jugde) produce large priming effects, larger than those from primes with the letters replaced in the corresponding positions (e.g., junpe). Dominant accounts of the TL priming effect, such as the Open Bigrams model (Grainger & van Heuven, 2003; Whitney & Cornelissen, 2008) and the SOLAR model (Davis & Bowers, 2006), explain this effect by proposing a level of representation above individual letter identities in which letter position is not coded accurately. An alternative is to assume that position coding is noisy (e.g., Gomez, Ratcliff, & Perea, 2008). We propose an extension of the Bayesian Reader (Norris, 2006) that incorporates letter position noise during sampling from the perceptual input. This model predicts "leakage" of letter identity to nearby positions, which is not expected under models incorporating alternative position coding schemes. We report three masked priming experiments testing predictions from this model.
UR - http://www.scopus.com/inward/record.url?scp=77949275862&partnerID=8YFLogxK
U2 - 10.1016/j.jml.2009.11.002
DO - 10.1016/j.jml.2009.11.002
M3 - Article
AN - SCOPUS:77949275862
VL - 62
SP - 254
EP - 271
JO - Journal of Memory and Language
JF - Journal of Memory and Language
SN - 0749-596X
IS - 3
ER -