
The Effect of State Representations in Sequential Sensory Prediction: Introducing the Shape Sequence Task

Creative Commons Attribution 4.0 (CC BY 4.0) license
Abstract

How do humans learn models supporting decision making? Reinforcement learning (RL) is a success story both in artificial intelligence and neuroscience. Essential to these RL models are state representations: based on the current state an animal or artificial agent is in, it learns optimal actions by maximizing future expected reward. But how are humans able to learn and create representations of states? We introduce a novel sequence prediction task with hidden structure, in which participants have to combine learning and memory to find the proper state representation, without the task explicitly indicating such structure. We show that humans are able to find this pattern, while a sensory prediction error version of RL cannot, unless equipped with appropriate state representations. Furthermore, in slight variations of the task that make it more difficult for humans, the RL-derived model with simple state representations sufficiently describes behaviour, suggesting that humans fall back on simple state representations when a more optimal task representation cannot be found. We argue that this task allows investigation of previously proposed models of state and task representations, and supports recent results indicating that RL describes a more general sensory prediction error function for dopamine, rather than predictions focussed solely on reward.
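To make the abstract's central contrast concrete, below is a minimal sketch of a sensory prediction error learner trained with two different state representations. Everything here is an illustrative assumption, not the paper's model: the three-shape stimulus set, the delta-rule update, the learning rate, and the hidden rule (next shape determined by the two preceding shapes) are all hypothetical stand-ins for the Shape Sequence Task. The sketch only demonstrates the qualitative point that a learner whose state is the most recent stimulus cannot capture second-order sequential structure, while one whose state spans two stimuli can.

```python
from collections import defaultdict

# Hypothetical stimulus set and learning rate; not taken from the paper.
SHAPES = ["circle", "square", "triangle"]
ALPHA = 0.1

def make_learner(state_fn):
    """Tabular predictor of P(next shape | state), trained by a
    delta-rule sensory prediction error (no reward signal involved)."""
    probs = defaultdict(lambda: {s: 1.0 / len(SHAPES) for s in SHAPES})

    def observe(history, next_shape):
        p = probs[state_fn(history)]
        for s in SHAPES:
            target = 1.0 if s == next_shape else 0.0
            p[s] += ALPHA * (target - p[s])  # prediction error update

    def predict(history):
        p = probs[state_fn(history)]
        return max(p, key=p.get)

    return observe, predict

# "Simple" state representation: only the most recent shape.
simple_state = lambda h: h[-1]
# Richer representation with memory for the last two shapes.
pair_state = lambda h: tuple(h[-2:])

# Assumed hidden rule for illustration: the next shape depends on the
# indices of the last TWO shapes, so one-shape states are ambiguous.
def hidden_rule(h):
    return SHAPES[(SHAPES.index(h[-1]) + SHAPES.index(h[-2])) % len(SHAPES)]

for name, state_fn in [("simple", simple_state), ("pair", pair_state)]:
    observe, predict = make_learner(state_fn)
    history = ["circle", "square"]
    correct = trials = 0
    for t in range(5000):
        nxt = hidden_rule(history)
        if t >= 4000:  # score only the late, post-learning trials
            trials += 1
            correct += predict(history) == nxt
        observe(history, nxt)
        history.append(nxt)
    print(f"{name} state: {correct / trials:.1%} of final trials correct")
```

Under these assumptions, the pair-state learner converges to near-perfect prediction, while the simple-state learner stays near chance on the ambiguous states, mirroring the abstract's claim that the RL-style prediction error model succeeds only when equipped with an appropriate state representation.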
