The human brain effortlessly solves problems that still pose a challenge for modern computers, such as recognizing patterns in natural images. Many of these problems can be formulated in terms of Bayesian inference, including planning motor movements, combining cues from different sensory modalities, and making predictions. Recent work in psychology and neuroscience suggests that human behavior is often consistent with Bayesian inference. However, most research using probabilistic models has focused on formulating the abstract problems behind cognitive tasks and their optimal solutions, rather than on mechanisms that could implement those solutions. It is therefore critical to identify psychological processes and neural implementations that could carry out these notoriously challenging computations.
Exemplar models are a successful class of psychological process models that use an inventory of stored examples to solve problems such as identification, categorization, and function learning. We show that exemplar models can be used to perform a sophisticated form of Monte Carlo approximation known as importance sampling, and thus provide a way to perform approximate Bayesian inference. Simulations of Bayesian inference in speech perception, generalization along a single dimension, prediction of everyday events, concept learning, and reconstruction from memory show that exemplar models can often account for human performance with only a few exemplars, for both simple and relatively complex prior distributions. These results suggest that exemplar models provide a possible mechanism for implementing at least some forms of Bayesian inference.
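To make this correspondence concrete, the following sketch (our illustration, not code from the studies above) shows how weighting stored exemplars by their similarity to a stimulus amounts to importance sampling with the prior as the proposal distribution; the Gaussian prior, noise level, and number of exemplars are assumptions chosen so the estimate can be checked against the closed-form posterior mean.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: exemplars are past stimuli stored in memory, treated as
# samples from the prior p(x). Given a noisy observation y, the model
# estimates the posterior mean E[x | y] by importance sampling, weighting
# each exemplar by its likelihood (a Gaussian similarity kernel).
prior_mean, prior_sd = 0.0, 2.0   # assumed prior over stimulus values
noise_sd = 0.5                    # assumed perceptual noise
exemplars = rng.normal(prior_mean, prior_sd, size=20)  # a few exemplars

def posterior_mean(y, exemplars, noise_sd):
    """Importance-sampling estimate of E[x | y]."""
    weights = np.exp(-0.5 * ((y - exemplars) / noise_sd) ** 2)
    return np.sum(weights * exemplars) / np.sum(weights)

y = 1.5  # observed noisy stimulus
estimate = posterior_mean(y, exemplars, noise_sd)

# Linear-Gaussian models admit an exact posterior mean, which lets us
# check the quality of the approximation.
exact = ((y / noise_sd**2 + prior_mean / prior_sd**2)
         / (1 / noise_sd**2 + 1 / prior_sd**2))
print(f"importance-sampling estimate: {estimate:.3f}, exact: {exact:.3f}")
\end{verbatim}

With twenty exemplars the estimate typically lands near the exact value, illustrating how a small store of exemplars can suffice for a simple prior.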
The goal of perception is to infer the hidden states of the hierarchical process by which sensory data are generated, a problem that can be solved optimally using Bayesian inference. Here we propose a simple mechanism for Bayesian inference that averages the responses of a few feature-detection neurons, each firing at a rate determined by its similarity to the sensory stimulus. This mechanism is likewise based on importance sampling. Moreover, many cognitive and perceptual tasks involve multiple levels of abstraction, which results in ``hierarchical'' models. We show that a simple recursive extension of importance sampling can be used to perform hierarchical Bayesian inference. We identify a scheme for implementing importance sampling with spiking neurons, and show that this scheme can account for human behavior in sensorimotor integration, cue combination, and orientation perception.
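The recursive scheme can be illustrated with a minimal two-level example (our sketch; the linear-Gaussian model and sample sizes are assumptions made so the answer can be verified): an inner importance-sampling estimate of the marginal likelihood is nested inside an outer estimate of the posterior expectation over the top-level variable.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Assumed two-level hierarchical model:
#   theta ~ N(0, 1)      top-level cause (e.g., an object's identity)
#   x     ~ N(theta, 1)  intermediate feature value
#   y     ~ N(x, 0.5)    noisy sensory observation
def gauss(y, mu, sd):
    return np.exp(-0.5 * ((y - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def marginal_likelihood(y, theta, n_inner=30):
    """Inner level: estimate p(y | theta) by averaging the likelihood
    p(y | x) over samples x ~ p(x | theta)."""
    x = rng.normal(theta, 1.0, size=n_inner)
    return gauss(y, x, 0.5).mean()

def posterior_mean_theta(y, n_outer=50):
    """Outer level: weight samples of theta from its prior by the
    estimated marginal likelihood p(y | theta)."""
    thetas = rng.normal(0.0, 1.0, size=n_outer)
    weights = np.array([marginal_likelihood(y, t) for t in thetas])
    return np.sum(weights * thetas) / np.sum(weights)

y = 2.0
print(f"recursive estimate of E[theta | y]: {posterior_mean_theta(y):.3f}")
# Exact posterior mean for this linear-Gaussian model: y / 2.25
print(f"exact: {y / 2.25:.3f}")
\end{verbatim}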
Another important function of the nervous system is to process temporal information in a dynamic environment, as in motor coordination, where the system's state is estimated sequentially from constant perceptual feedback. Our study suggests that a neural network with a structure similar to recursive importance sampling can solve this sequential estimation problem by approximating the posterior updates. The algorithm performs as well as the state-of-the-art sequential Monte Carlo methods known as particle filters, and satisfies many constraints of biological systems. Studying the detailed neural implementation of this algorithm reveals an interesting resemblance to neural circuits in the cerebellum.
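For reference, the sketch below (our illustration, with assumed linear-Gaussian dynamics and noise levels) implements a standard bootstrap particle filter, the sequential Monte Carlo method that serves as the benchmark here: predict by propagating particles through the dynamics, weight them by the likelihood of the new observation, then resample.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

# Assumed linear-Gaussian tracking model:
#   state:       x_t = x_{t-1} + process noise (sd = 0.3)
#   observation: y_t = x_t + sensory noise (sd = 0.5)
n_particles, T = 200, 50
process_sd, obs_sd = 0.3, 0.5

# Simulate a trajectory and noisy observations of it.
x_true = np.cumsum(rng.normal(0, process_sd, T))
y = x_true + rng.normal(0, obs_sd, T)

particles = np.zeros(n_particles)  # initial belief about the state
estimates = []
for t in range(T):
    # Predict: propagate each particle through the dynamics.
    particles = particles + rng.normal(0, process_sd, n_particles)
    # Update: weight particles by the likelihood of the observation.
    weights = np.exp(-0.5 * ((y[t] - particles) / obs_sd) ** 2)
    weights /= weights.sum()
    # Estimate: posterior mean of the current state.
    estimates.append(np.sum(weights * particles))
    # Resample: multinomial resampling replaces low-weight particles.
    particles = rng.choice(particles, size=n_particles, p=weights)

rmse = np.sqrt(np.mean((np.array(estimates) - x_true) ** 2))
print(f"tracking RMSE: {rmse:.3f} (observation noise sd = {obs_sd})")
\end{verbatim}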