Generalization From Newly Learned Words Reveals Structural Properties of the Human Reading System


Abstract

Connectionist accounts of quasiregular domains, such as spelling–sound correspondences in English, represent exception words (e.g., pint) amid regular words (e.g., mint) via a graded “warping” mechanism. Warping allows the model to extend the dominant pronunciation to nonwords (regularization) with minimal interference (spillover) from the exceptions. We tested for a behavioral marker of warping by investigating the degree to which participants generalized from newly learned made-up words, which ranged from sharing the dominant pronunciation (regular), to a subordinate pronunciation (ambiguous), to a previously nonexistent (exception) pronunciation. The new words were learned over 2 days, and generalization was assessed 48 hr later using nonword neighbors of the new words in a tempo naming task. The frequency of regularization (a measure of generalization) was directly related to the degree of warping required to learn the pronunciation of the new word. Simulations using the Plaut, McClelland, Seidenberg, and Patterson (1996) model further support a warping interpretation. These findings highlight the need to develop theories of representation that are integrally tied to how those representations are learned and generalized.
