Understanding the computational mechanisms enabling visuospatial reasoning is important both for studying human intelligence and for exploring the possibility of introducing human-like reasoning into artificial intelligence systems. In our work, we investigate how a collection of primitive image processing operations can be combined into different coherent strategies for solving a range of visuospatial reasoning tasks. We evaluate our approach on 20 subtests from the Leiter International Performance Scale-Revised (Leiter-R). Through our computational experiments, we show that with only four primitive operations (similarity, containment, rotation, and scaling) we can form strategies that solve, to varying degrees of success, at least portions of 17 of the 20 subtests. These results lay the foundation for our future work on studying how intelligent agents can learn and generalize strategies from simple task definitions in order to perform complex visuospatial reasoning tasks.
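
To make the idea concrete, the four primitives and their composition into a strategy can be sketched as follows. This is a minimal illustrative sketch on binary image arrays; the function names, signatures, and the toy matching strategy are our own assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

def similarity(a, b):
    """Fraction of matching pixels between two equal-shaped binary images."""
    return float(np.mean(a == b))

def rotate(img, quarter_turns):
    """Rotate an image by multiples of 90 degrees (counter-clockwise)."""
    return np.rot90(img, quarter_turns)

def scale(img, factor):
    """Upscale by an integer factor using nearest-neighbour repetition."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def contains(img, patch):
    """True if `patch` appears verbatim anywhere inside `img`."""
    H, W = img.shape
    h, w = patch.shape
    return any(
        np.array_equal(img[i:i + h, j:j + w], patch)
        for i in range(H - h + 1)
        for j in range(W - w + 1)
    )

def match_under_rotation(target, candidates):
    """A simple composed strategy: pick the index of the candidate most
    similar to the target under the best of its four quarter-turn rotations."""
    def best_score(c):
        return max(similarity(target, rotate(c, k)) for k in range(4))
    return max(range(len(candidates)), key=lambda i: best_score(candidates[i]))
```

For example, `match_under_rotation` selects a candidate that matches the target only after a 180-degree rotation, illustrating how a fixed set of primitives can be chained into a task-specific strategy.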