Biologically-Based Neural Representations Enable Fast Online Shallow Reinforcement Learning

Creative Commons Attribution 4.0 International (CC BY 4.0) license
Abstract

Biological brains learn much more quickly than standard deep neural network reinforcement learning algorithms. One reason for this is that deep neural networks need to learn a representation appropriate for the task at hand, whilst biological systems already possess an appropriate representation. Here, we bypass this problem by imposing on the neural network a representation based on what is observed in biology, such as grid cells. This study explores the impact of a biologically inspired grid-cell representation, compared with a one-hot representation, on the speed at which a Temporal Difference-based Actor-Critic network learns to solve a simple 2D grid-world reinforcement learning task. The results suggest that the use of grid cells does promote faster learning. Furthermore, the grid cells implemented here have the potential to accurately represent unbounded continuous space. Thus, their promising performance on this discrete task acts as a first step in exploring their utility for reinforcement learning in continuous space.
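To make the contrast between the two encodings concrete, the sketch below is a minimal illustration, not code from the paper: it shows one plausible way to build a one-hot state vector and a toy grid-cell-style population code for a discrete 2D grid world. The grid-world size, the module scales, the number of phase offsets, and the sum-of-three-cosines tuning model are all illustrative assumptions rather than details reported in the abstract.

```python
# A minimal sketch (not the paper's implementation) contrasting a one-hot
# state encoding with a toy grid-cell-style encoding for a 2D grid world.
import numpy as np

def one_hot(x, y, width, height):
    """One-hot vector with a single 1 at the agent's (x, y) cell."""
    v = np.zeros(width * height)
    v[y * width + x] = 1.0
    return v

def grid_cell_code(pos, scales=(3.0, 5.0, 7.0), n_phases=4, seed=0):
    """Toy grid-cell population code: for each spatial scale, a group of cells
    with hexagonally periodic tuning (sum of three cosines, a common idealised
    grid-cell model) and random phase offsets. The resulting dense vector
    varies smoothly with position, unlike a one-hot code."""
    rng = np.random.default_rng(seed)
    pos = np.asarray(pos, dtype=float)
    activities = []
    for scale in scales:
        # Three wave vectors 60 degrees apart give a hexagonal firing pattern.
        angles = np.array([0.0, np.pi / 3, 2 * np.pi / 3])
        k = (4 * np.pi / (np.sqrt(3) * scale)) * np.stack(
            [np.cos(angles), np.sin(angles)], axis=1)       # shape (3, 2)
        offsets = rng.uniform(0, scale, size=(n_phases, 2))  # random phases
        for offset in offsets:
            activities.append(np.sum(np.cos(k @ (pos - offset))))
    a = np.array(activities)
    return (a - a.min()) / (a.max() - a.min() + 1e-8)        # rescale to [0, 1]

# Example: encode the same state both ways in a 10x10 grid world.
state = (2, 7)
print(one_hot(*state, width=10, height=10).shape)  # (100,): one entry per cell
print(grid_cell_code(state).shape)                 # (12,): 3 scales x 4 phases
```

In this toy version, the one-hot code grows with the size of the grid world and changes completely between neighbouring states, whereas the grid-cell code has a fixed, compact size and varies smoothly and periodically with position; compactness and smooth structure of this kind are properties often cited as making grid-cell codes convenient inputs for downstream learning, including over continuous space.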
