Context variability promotes generalization in reading aloud: Insight from a neural network simulation
Abstract
How do neural network models of quasiregular domains learn to represent knowledge that varies in its consistency with the domain, and generalize this knowledge appropriately? Recent work focusing on spelling-to-sound correspondences in English proposes that a graded "warping" mechanism determines the extent to which the pronunciation of a newly learned word should generalize to its orthographic neighbors. We explored the micro-structure of this proposal by training a network to pronounce new made-up words that were consistent with the dominant pronunciation (regulars), were comprised of a completely unfamiliar pronunciation (exceptions), or were consistent with a subordinate pronunciation in English (ambiguous). Crucially, by training the same spelling-to-sound mapping with either one or multiple items, we tested whether variation in adjacent, within-item context made a given pronunciation more able to generalize. This is exactly what we found. Context variability, therefore, appears to act as a modulator of the warping in quasiregular domains.
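
As a concrete illustration of the training manipulation described above, the following minimal sketch (Python/NumPy; not the authors' implementation) trains a small feedforward network on slot-coded orthography-to-phonology patterns, presenting a new vowel correspondence either in a single item or across several items that differ in their adjacent onset and coda. All codings, layer sizes, learning parameters, and the generalization probe are hypothetical choices made only for exposition.

```python
# Minimal sketch (not the authors' model): a one-hidden-layer network learning
# orthography -> phonology mappings, used to contrast training a new
# spelling-to-sound correspondence in one context vs. several contexts.
import numpy as np

rng = np.random.default_rng(0)

def one_hot(indices, size):
    """Concatenate one-hot slot codes (onset, vowel, coda) into one vector."""
    vec = np.zeros(size * len(indices))
    for slot, idx in enumerate(indices):
        vec[slot * size + idx] = 1.0
    return vec

class MLP:
    """One-hidden-layer sigmoid network trained with plain backpropagation."""
    def __init__(self, n_in, n_hid, n_out, lr=0.5):
        self.W1 = rng.normal(0, 0.1, (n_in, n_hid))
        self.W2 = rng.normal(0, 0.1, (n_hid, n_out))
        self.lr = lr

    @staticmethod
    def _sig(x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward(self, x):
        self.h = self._sig(x @ self.W1)
        self.o = self._sig(self.h @ self.W2)
        return self.o

    def train_step(self, x, t):
        o = self.forward(x)
        d_o = (o - t) * o * (1 - o)                     # output delta (squared error)
        d_h = (d_o @ self.W2.T) * self.h * (1 - self.h)  # hidden delta
        self.W2 -= self.lr * np.outer(self.h, d_o)
        self.W1 -= self.lr * np.outer(x, d_h)

# Hypothetical slot coding: 10 units per slot for both letters and phonemes.
N_SLOT = 10
net = MLP(n_in=3 * N_SLOT, n_hid=30, n_out=3 * N_SLOT)

# A new vowel correspondence (letter 2 -> phoneme 4) trained either in a
# single item or across several items with different onsets and codas.
single_item = [([0, 2, 0], [0, 4, 0])]                     # one context
varied_items = [([i, 2, i], [i, 4, i]) for i in range(5)]  # five contexts

for epoch in range(500):
    for orth, phon in varied_items:          # swap in `single_item` to compare
        net.train_step(one_hot(orth, N_SLOT), one_hot(phon, N_SLOT))

# Generalization probe: an unseen orthographic neighbor sharing the trained vowel.
probe = one_hot([7, 2, 7], N_SLOT)
vowel_out = net.forward(probe)[N_SLOT:2 * N_SLOT]
print("activation of the trained vowel pronunciation:", vowel_out[4].round(3))
```

Comparing the probe output after training on `single_item` versus `varied_items` is the kind of contrast at issue: under this sketch, greater context variability should push the newly learned correspondence toward generalizing to unseen orthographic neighbors.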