We present two models of visual location memory developed within the ACT-R cognitive architecture and compare the models' performance to that of human participants in a pattern reproduction task. The snapshot model uses a fovea-peripheral activation mechanism, which simulates how more attention and processing resources are given to the centre of the visual field on short stimulus exposure trials (50 ms and 200 ms). For long exposure trials (>= 1 s), a chunking model was developed by extending the snapshot model with chunking processes that encode geometric patterns. Both models match the response accuracy and pause data of the human participants. The modelling results reveal that, for the short stimulus exposure trials, recall accuracy is affected by the distance between the object location and the location of foveal vision. For trials with long stimulus exposure times, participants were likely to use salient geometric patterns to encode the configuration of discs.
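As a rough illustration of the fovea-peripheral activation idea, the sketch below combines a distance-dependent activation with ACT-R's standard retrieval-probability equation, so that discs further from the fixated location are recalled less reliably. This is not the authors' ACT-R implementation; the linear distance penalty and all parameter values are assumptions chosen for illustration only.

```python
import math

def recall_probability(eccentricity: float,
                       base_activation: float = 1.5,
                       distance_penalty: float = 0.5,
                       threshold: float = 0.0,
                       noise_s: float = 0.4) -> float:
    """Probability of recalling a disc location as a function of its
    distance (eccentricity) from the fovea.

    Activation is assumed to fall off linearly with eccentricity
    (a simplification; the exact penalty function is not specified here).
    Recall probability then follows ACT-R's standard retrieval equation
    P = 1 / (1 + exp((tau - A) / s)).
    """
    activation = base_activation - distance_penalty * eccentricity
    return 1.0 / (1.0 + math.exp((threshold - activation) / noise_s))

# Discs nearer the fovea are recalled more reliably than peripheral ones.
for ecc in (0.0, 1.0, 2.0, 3.0):
    print(f"eccentricity {ecc:.1f}: P(recall) = {recall_probability(ecc):.2f}")
```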