As educational approaches increasingly adopt digital formats, data logs offer educators and researchers a newfound wealth of information about student behavior in these learning environments. Yet making sense of that information, particularly to inform pedagogy, remains a significant challenge. Data from digital sensors that sample at millisecond granularity, such as computer mice or touchscreens, are notoriously difficult to process computationally and mine for patterns. Adding to the challenge is the limited domain knowledge of this biological sensor level of interaction, which precludes a comprehensive manual feature-engineering approach to utilizing those data streams. In this paper, we attempt to enhance the assessment capability of a touchscreen-based tutoring system by using Recurrent Neural Networks (RNNs) to predict students' strategies from their 60 Hz data streams. We hypothesize that the ability of neural networks to learn representations automatically, rather than relying on human feature engineering, may benefit this classification task. Our classification models (including a majority-class baseline) were trained and cross-validated at several levels on historical data that had been human-coded with learners' strategies. Our RNN approach to this difficult classification task moderately improves performance over logistic regression. We discuss the implications of this performance for enabling greater tutoring system autonomy. We also present visualizations that illustrate how this neural network approach to modeling sensor data can reveal the patterns the RNN detects. These surfaced patterns, regularized from a larger superset of mostly uncoded data, underscore the mix of normative and seemingly idiosyncratic behavior that characterizes the state space of learning at this high-frequency level of observation.
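To make the setup concrete, the following is a minimal sketch, not the authors' model, of how an RNN can map a sequence of 60 Hz sensor samples (here, assumed x/y touch coordinates) to a distribution over strategy labels. The dimensions, the four-way label space, and the random parameters are all illustrative assumptions; a real model would be trained on the human-coded data described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_classify(seq, Wxh, Whh, Why, bh, by):
    """Run one sensor sequence through an Elman-style RNN and
    return a probability distribution over strategy labels."""
    h = np.zeros(Whh.shape[0])
    for x in seq:                      # one recurrence step per 60 Hz sample
        h = np.tanh(Wxh @ x + Whh @ h + bh)
    logits = Why @ h + by              # classify from the final hidden state
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()

# Illustrative sizes: (x, y) touch input, 16 hidden units, 4 hypothetical strategies
n_in, n_hid, n_cls = 2, 16, 4
Wxh = rng.normal(scale=0.1, size=(n_hid, n_in))
Whh = rng.normal(scale=0.1, size=(n_hid, n_hid))
Why = rng.normal(scale=0.1, size=(n_cls, n_hid))
bh, by = np.zeros(n_hid), np.zeros(n_cls)

seq = rng.normal(size=(120, n_in))     # two seconds of simulated 60 Hz samples
probs = rnn_classify(seq, Wxh, Whh, Why, bh, by)
print(probs)
```

The key design point this sketch illustrates is that the network consumes the raw high-frequency stream directly, learning its own representation of the sequence rather than depending on hand-engineered summary features.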