A Connectionist Learning Model for 3-Dimensional Mental Rotation, Zoom, and Pan
Abstract
A connectionist architecture is applied to the problem of 3-D visual representation. The Visual Perception System (VIPS) is organized as a flat, retinotopically mapped array of 16K simple processors, each of which is driven by a coarsely tuned binocular feature detector. By moving through its environment and observing how the visual field changes from state to state for various kinds of motion, VIPS learns to run internal simulations of 3-D visual experiences, e.g. mental rotations of unfamiliar objects. Unlike traditional approaches to visual representation, VIPS learns to perform 3-D visual transformations purely from visual-motor experience, without actually constructing an explicit 3-D model of the visual scene. Instead, the third dimension is represented implicitly in the knowledge as to how the pattern of activation on its flat sheet of binocularly driven processors will shift about as VIPS moves, or only imagines moving, through space. VIPS is argued to be more compatible with a variety of phenomena from the psychology of 3-D perception than previous vision systems, particularly with respect to development, plasticity, and stability of perception, as well as the analogical, linear-time mental rotation phenomena.
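The mechanism sketched in the abstract, learning to predict how the activation pattern on the flat processor array will shift for a given motion and then iterating that prediction to imagine the motion, can be illustrated with a minimal sketch. The grid size, the three-component motor code, and the single-layer tanh predictor trained with a delta rule are illustrative assumptions, not the paper's actual 16K-unit architecture.

import numpy as np

# Minimal sketch (not the paper's implementation): a flat, retinotopically
# mapped array of units whose next activation pattern is predicted from the
# current pattern plus a motor command (e.g. rotate / zoom / pan).
GRID = 32                      # 32 x 32 = 1024 units; the paper describes ~16K
N = GRID * GRID
N_MOTOR = 3                    # assumed motor code: [rotate, zoom, pan]

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.01, size=(N, N + N_MOTOR))   # prediction weights

def predict_next(activation, motor):
    """Predict the next activation pattern from the current one and a motor command."""
    x = np.concatenate([activation.ravel(), motor])
    return np.tanh(W @ x).reshape(GRID, GRID)

def train_step(activation, motor, next_activation, lr=0.01):
    """One delta-rule update toward the observed next visual state."""
    global W
    x = np.concatenate([activation.ravel(), motor])
    pred = np.tanh(W @ x)
    err = next_activation.ravel() - pred
    W += lr * np.outer(err * (1.0 - pred ** 2), x)   # squared-error gradient through tanh
    return float(np.mean(err ** 2))

def imagine(activation, motor, steps=10):
    """'Mental rotation': feed predictions back as input under an imagined motor command."""
    states = [activation]
    for _ in range(steps):
        states.append(predict_next(states[-1], motor))
    return states

Training on real visual-motor sequences fits W; calling imagine with a purely internal motor command then plays out a simulated rotation, zoom, or pan without any new visual input, which is the sense in which the third dimension is represented implicitly rather than as an explicit 3-D model.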