Cognitive scientists often conduct experiments that require a single participant to complete many (often hundreds of) repetitions of the same task, i.e., experiments with multiple trials. By collecting many observations per participant, these studies improve the measurement properties of the outcome they are designed to assess. However, repeating the same task produces systematic changes in cognitive processes over time, which can also affect the reliability and validity of the outcome measure. To date, there is no modelling framework that can easily capture these systematic cognitive changes across multiple, independent trials. Response time models are an increasingly popular way of inferring cognitive processes such as caution, information processing efficiency, and bias, but they assume that these processes remain constant over the course of an experiment. In this study, we extended a popular response time model, the Diffusion Decision Model, so that it can capture systematic changes in its parameters across trials. We focused on two processes that should vary in almost all multi-trial experiments: information processing efficiency, which should increase over time as people get better at the task, and caution, which should decrease over time as people learn the task demands. Using model comparison methods on pre-existing data, we showed that both processes varied systematically across trials for a large proportion of participants, even when the experiment that generated the data was not explicitly designed to manipulate either process.
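
For illustration, a minimal sketch of such trial-level change (the functional form here is an assumption for exposition, not necessarily the parameterization fitted in the study) would let the Diffusion Decision Model's drift rate $v_t$ (information processing efficiency) and boundary separation $a_t$ (caution) follow exponential learning curves over the trial index $t$:
\[
v_t = v_\infty - (v_\infty - v_1)\, e^{-\alpha (t-1)}, \qquad
a_t = a_\infty + (a_1 - a_\infty)\, e^{-\beta (t-1)},
\]
where $v_1$ and $a_1$ are the first-trial values, $v_\infty$ and $a_\infty$ are asymptotes, and $\alpha, \beta > 0$ are rates of change. With these signs, $v_t$ rises and $a_t$ falls monotonically across trials, matching the hypothesized practice effects; the standard, stationary model is recovered when $\alpha = \beta = 0$.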