A primary problem for a child learning her first language
is that her ungrammatical utterances are rarely explicitly
corrected. It has been argued that this dearth of negative
evidence regarding the child's grammatical hypotheses
makes it impossible for the child to induce the grammar of
the language without substantial innate knowledge of
some universal principles common to all natural
grammars. However, recent connectionist models of
language acquisition have employed a learning technique
that circumvents the negative evidence problem.
Moreover, this learning strategy is not limited to strictly
connectionist architectures. In what we call Incremental
Distributed Prediction Feedback, the learner simply
listens to utterances in its environment and makes
internal predictions on-line as to which elements of the
grammar are more or less likely to immediately follow the
current input. Once that subsequent input is received,
those prediction contingencies (essentially, transitional
probabilities) are slightly adjusted accordingly.
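This update cycle can be illustrated with a minimal sketch. The class below is our own simplification, not the model described here: it tracks bigram transitional probabilities in an explicit table (rather than in distributed connectionist weights), and the learning-rate parameter and update rule are assumptions made for illustration.

```python
from collections import defaultdict

class PredictionFeedbackLearner:
    """Illustrative sketch of incremental prediction feedback over
    transitional probabilities (a table-based simplification)."""

    def __init__(self, learning_rate=0.1):
        self.rate = learning_rate
        # predictions[context][next_symbol] -> estimated probability
        self.predictions = defaultdict(lambda: defaultdict(float))

    def predict(self, context):
        """Current next-symbol probability estimates for a context."""
        return dict(self.predictions[context])

    def observe(self, context, actual):
        """Nudge estimates toward the observed continuation: the
        symbol that actually occurred gains probability mass, while
        all competing predictions decay slightly."""
        table = self.predictions[context]
        for symbol in list(table):
            table[symbol] *= (1.0 - self.rate)
        table[actual] += self.rate

    def train(self, utterance):
        """Process an utterance one symbol at a time, adjusting the
        prediction contingencies after each observed transition."""
        for prev, nxt in zip(utterance, utterance[1:]):
            self.observe(prev, nxt)
```

Repeated exposure to positive examples alone drives the estimated transitional probabilities toward the statistics of the input, with no explicit correction required.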
Simulations with artificial grammars demonstrate that this
learning strategy is faster and more realistic than
depending on infrequent negative feedback to
ungrammatical output. Incremental Distributed Prediction
Feedback allows the learner to produce its own negative
evidence from positive examples of the language by
comparing incrementally predicted input with actual input.
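The self-generation of negative evidence can be sketched as follows. This fragment is again an illustrative simplification under our own assumptions: transitional probabilities are estimated from counts over a toy corpus of positive examples, and a transition assigned near-zero predicted probability is treated as implicit negative evidence against the string containing it.

```python
from collections import defaultdict

def transitional_probs(corpus):
    """Estimate transitional probabilities from positive examples
    only (batch counts, a simplification of on-line updating)."""
    counts = defaultdict(lambda: defaultdict(int))
    for utterance in corpus:
        for prev, nxt in zip(utterance, utterance[1:]):
            counts[prev][nxt] += 1
    probs = {}
    for prev, nxts in counts.items():
        total = sum(nxts.values())
        probs[prev] = {s: c / total for s, c in nxts.items()}
    return probs

def prediction_error(probs, utterance):
    """Compare predicted input with actual input: a high error
    (low predicted probability) on a transition marks it as one
    the learner's grammar would not have produced."""
    errors = []
    for prev, nxt in zip(utterance, utterance[1:]):
        p = probs.get(prev, {}).get(nxt, 0.0)
        errors.append((prev, nxt, 1.0 - p))
    return errors
```

Given only grammatical input such as "the dog runs" and "the cat sleeps", an ungrammatical continuation like "dog the" receives maximal prediction error, so the learner obtains a corrective signal without any caregiver ever flagging the string as ill-formed.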