Computational models of visual working memory (VWM)
generally fall into two categories: slot-based models and
resource-based models. Slot-based models theorise that memory
capacity is limited to a fixed number of slots, each of which
can hold only one item. An item that occupies a slot is
remembered accurately; an item that does not is not
remembered at all. By
contrast, resource-based models claim that all items, rather
than just a few, enter memory, but, unlike in slot models,
they are not necessarily remembered accurately. On the
surface, these models appear to make distinct predictions.
However, as these models have been developed and expanded
to capture empirical data, they have begun to mimic each other.
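The contrast between the two classes of model can be illustrated with a toy simulation of a single colour-recall trial. This is a sketch for illustration only: the parameter values (three slots, the kappa precision values) are arbitrary assumptions, not the models or parameters used in the experiment reported here.

```python
import math
import random
import statistics

random.seed(1)

def slot_model_trial(set_size, slots=3, kappa=8.0):
    # Slot model: an item is stored with probability min(slots/set_size, 1)
    # at a fixed precision (kappa); otherwise the response is a uniform guess.
    if random.random() < min(slots / set_size, 1.0):
        return random.vonmisesvariate(0.0, kappa)
    return random.uniform(-math.pi, math.pi)

def resource_model_trial(set_size, total_kappa=24.0):
    # Resource model: every item is stored, but a fixed pool of precision
    # is divided among the items, so responses grow noisier with set size.
    return random.vonmisesvariate(0.0, total_kappa / set_size)

def mean_abs_error(model, set_size, n=5000):
    # Mean absolute circular recall error, wrapped into [-pi, pi).
    errors = []
    for _ in range(n):
        e = (model(set_size) + math.pi) % (2 * math.pi) - math.pi
        errors.append(abs(e))
    return statistics.mean(errors)
```

Under the slot model, raising the set size from 3 to 6 adds pure guesses while the precision of stored items is unchanged; under the resource model, every response simply becomes noisier. Both therefore predict larger average errors at larger set sizes, which is why, as noted above, elaborated versions of the two models can come to mimic each other.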
Further complicating matters, Donkin, Kary, Tahir and Taylor
(2016) proposed that observers were capable of using either
slot- or resource-based encoding strategies. In the current
experiment, we aimed to test the claim that observers adapt
their encoding strategies depending on the task environment by
observing how participants move their eyes in a VWM
experiment. We ran participants on a standard colour recall task
(Zhang and Luck, 2008) while tracking their eye movements.
On each trial, participants were asked to remember either 3 or
6 items, and we manipulated whether the number of items
was held constant within a block of trials or varied randomly
from trial to trial. We
expected to see participants use more resource-like encoding
when the number of items to remember was predictable.
Contrary to these expectations, we observed no difference
between the blocked and unblocked conditions. Further, the
eye-gaze data were only very weakly related to behaviour in the task.
We conclude that caution is warranted when interpreting
eye-gaze data in VWM experiments.