Generalizing Outside the Training Set: When Can Neural Networks Learn Identity Effects?
Abstract
Often in language and other areas of cognition, whether two components of an object are identical or not determines whether it is well formed. We call such constraints identity effects. When developing a system to learn well-formedness from examples, it is easy enough to build in an identity effect. But can identity effects be learned from the data without explicit guidance? We provide a simple framework in which we can rigorously prove that algorithms satisfying simple criteria cannot make the correct inference. We then show that a broad class of algorithms, including deep neural networks with standard architectures trained with backpropagation, satisfies our criteria, depending on the encoding of inputs. Finally, we demonstrate our theory with computational experiments in which we explore the effect of different input encodings on the ability of algorithms to generalize to novel inputs.
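To make the notion of an identity effect concrete, the sketch below sets up a toy version of the task the abstract describes: a two-letter string is well formed exactly when its two letters are identical. Everything in it is an illustrative assumption rather than the paper's actual setup: the learner is a plain logistic-regression model (not a deep network), the encoding is one-hot, and the train/test split (letters A-X for training, Y and Z withheld as novel inputs) is a hypothetical choice.

```python
# Hypothetical, simplified illustration of an identity-effect task:
# label a two-letter string well formed (1) iff its letters match,
# train on letters A-X, and test generalization on novel letters Y, Z.
# The one-hot encoding and logistic-regression learner are stand-ins,
# not the architecture studied in the paper.
import numpy as np

ALPHABET = [chr(c) for c in range(ord("A"), ord("Z") + 1)]
TRAIN_LETTERS = ALPHABET[:24]   # A..X, seen during training
TEST_LETTERS = ALPHABET[24:]    # Y, Z, withheld as novel inputs

def one_hot(letter):
    """One-hot encode a letter over the full 26-letter alphabet."""
    v = np.zeros(len(ALPHABET))
    v[ALPHABET.index(letter)] = 1.0
    return v

def encode_pair(a, b):
    """Concatenate the encodings of the two components of a string."""
    return np.concatenate([one_hot(a), one_hot(b)])

def make_dataset(letters):
    """Build all pairs over `letters`; well formed iff letters match."""
    X, y = [], []
    for a in letters:
        for b in letters:
            X.append(encode_pair(a, b))
            y.append(1.0 if a == b else 0.0)
    return np.array(X), np.array(y)

X_train, y_train = make_dataset(TRAIN_LETTERS)
X_test, y_test = make_dataset(TEST_LETTERS)  # pairs of novel letters only

# Logistic regression trained by gradient descent (a stand-in learner).
w = np.zeros(X_train.shape[1])
b = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X_train @ w + b)))
    grad = p - y_train
    w -= 0.1 * (X_train.T @ grad) / len(y_train)
    b -= 0.1 * grad.mean()

p_test = 1.0 / (1.0 + np.exp(-(X_test @ w + b)))
print("accuracy on novel letters:", ((p_test > 0.5) == y_test).mean())
```

With this encoding, the input coordinates for Y and Z are zero in every training example, so their weights never receive a gradient update; at test time the model cannot separate "YY" from "YZ" and scores at chance on the novel letters. This is the kind of encoding-dependent failure to generalize outside the training set that the abstract alludes to.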