An Examination of Perseveration Terms in Reinforcement Learning Models
Abstract
Perseveration, or "stickiness," parameters have been added to reinforcement-learning (RL) models to capture autocorrelation in choices. Here, we systematically examined whether perseveration terms simply improve a model's ability to fit noise in the data, thereby making the models overly flexible. We simulated data with basic versions of a Delta and a Prediction-Error Decay model with no perseveration terms added, and for half of the simulated data sets we added random noise to expected RL values on each trial. We then performed cross-fitting analyses in which the simulated data sets were fit by the basic data-generating models as well as by extended models with perseveration terms added. The addition of perseveration terms improved model fit, particularly when noise was added to the simulation process. Parameter recovery was generally poorer for the extended models. These results suggest that simpler models may be more useful for prediction and generalization to novel environments, as well as for theory development.
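The simulation procedure described above can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes a standard Delta (Rescorla-Wagner) update with softmax choice, and all parameter names and values (`alpha`, `beta`, `kappa`, the reward probabilities, the noise scale) are hypothetical. Setting `kappa=0` and `noise_sd=0` corresponds to the basic, perseveration-free generating model; a nonzero `noise_sd` corresponds to the noise-added condition.

```python
import numpy as np

def simulate_delta_model(n_trials=200, n_arms=2, alpha=0.3, beta=3.0,
                         kappa=0.0, noise_sd=0.0,
                         reward_probs=(0.7, 0.3), seed=0):
    """Simulate choices from a Delta-rule learner in a two-armed bandit.

    kappa : perseveration ("stickiness") bonus added at choice time to
            the value of the previously chosen option (0 = basic model).
    noise_sd : SD of Gaussian noise added to expected values each trial,
               mimicking the noise-injection condition in the abstract.
    """
    rng = np.random.default_rng(seed)
    q = np.zeros(n_arms)              # expected values
    prev_choice = None
    choices, rewards = [], []
    for _ in range(n_trials):
        # Optionally perturb expected values with trial-wise noise.
        v = q + rng.normal(0.0, noise_sd, size=n_arms)
        logits = beta * v
        if prev_choice is not None:
            logits[prev_choice] += kappa  # stickiness toward last choice
        p = np.exp(logits - logits.max())
        p /= p.sum()
        c = int(rng.choice(n_arms, p=p))
        r = float(rng.random() < reward_probs[c])
        # Delta rule: move the chosen option's value toward the outcome.
        q[c] += alpha * (r - q[c])
        choices.append(c)
        rewards.append(r)
        prev_choice = c
    return np.array(choices), np.array(rewards)
```

In a cross-fitting analysis of the kind summarized above, data generated this way (with and without `noise_sd > 0`) would then be fit both by the basic model and by the extended model that estimates `kappa` as a free parameter.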