Context variability promotes generalization in reading aloud: Insight from a neural network simulation

Creative Commons 'BY' version 4.0 license
Abstract

How do neural network models of quasiregular domains learn to represent knowledge that varies in its consistency with the domain, and generalize this knowledge appropriately? Recent work focusing on spelling-to-sound correspondences in English proposes that a graded “warping” mechanism determines the extent to which the pronunciation of a newly learned word should generalize to its orthographic neighbors. We explored the micro-structure of this proposal by training a network to pronounce new made-up words that were consistent with the dominant pronunciation (regulars), carried a completely unfamiliar pronunciation (exceptions), or were consistent with a subordinate pronunciation in English (ambiguous). Crucially, by training the same spelling-to-sound mapping with either one or multiple items, we tested whether variation in adjacent, within-item context made a given pronunciation more able to generalize. This is exactly what we found. Context variability, therefore, appears to act as a modulator of the warping in quasiregular domains.
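To make the manipulation concrete, the sketch below illustrates the general logic in a toy simulation: the same novel body-to-pronunciation mapping is trained either in a single carrier word or across several carrier words that differ only in their onset letter, and generalization is then probed with an unseen onset. This is not the paper's implementation; the word forms, slot-based one-hot coding, output patterns, and network size are hypothetical stand-ins chosen only to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

LETTERS = "abcdefghijklmnopqrstuvwxyz"
SLOTS = 4  # onset letter + 3-letter body; slot-based one-hot coding (an assumption)

def encode(word):
    """One-hot encode a 4-letter string, one slot per letter position."""
    vec = np.zeros(SLOTS * len(LETTERS))
    for i, ch in enumerate(word):
        vec[i * len(LETTERS) + LETTERS.index(ch)] = 1.0
    return vec

# Hypothetical "pronunciation" target: the body 'ave' is assigned one fixed
# 10-unit output pattern, regardless of which onset letter precedes it.
BODY_TARGET = rng.integers(0, 2, size=10).astype(float)

def train(words, epochs=2000, lr=0.5, hidden=30):
    """Train a one-hidden-layer network to map each word to the body pattern."""
    X = np.stack([encode(w) for w in words])
    Y = np.tile(BODY_TARGET, (len(words), 1))
    W1 = rng.normal(0, 0.1, (X.shape[1], hidden))
    W2 = rng.normal(0, 0.1, (hidden, Y.shape[1]))
    for _ in range(epochs):
        H = np.tanh(X @ W1)
        P = 1.0 / (1.0 + np.exp(-(H @ W2)))   # sigmoid output units
        dZ = P - Y                            # cross-entropy gradient at the output
        dW2 = (H.T @ dZ) / len(words)
        dH = (dZ @ W2.T) * (1.0 - H ** 2)     # backprop through the tanh layer
        dW1 = (X.T @ dH) / len(words)
        W2 -= lr * dW2
        W1 -= lr * dW1
    return W1, W2

def test(word, W1, W2):
    """Output activations for a single probe word."""
    h = np.tanh(encode(word) @ W1)
    return 1.0 / (1.0 + np.exp(-(h @ W2)))

# Single-context vs. varied-context training of the same body 'ave'
# (all word forms are made up for illustration).
W1s, W2s = train(["dave"])                          # one carrier word
W1v, W2v = train(["dave", "gave", "pave", "tave"])  # onsets vary, body is constant

probe = "mave"  # unseen orthographic neighbor
err_single = np.abs(test(probe, W1s, W2s) - BODY_TARGET).mean()
err_varied = np.abs(test(probe, W1v, W2v) - BODY_TARGET).mean()
print(f"generalization error, single context: {err_single:.3f}")
print(f"generalization error, varied context: {err_varied:.3f}")
```

On this toy setup the varied-context network typically reproduces the trained body pronunciation more accurately for the unseen onset, which is the qualitative pattern the abstract describes; the exact numbers depend on the arbitrary choices above.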
