In recent years, converging evidence has suggested that prediction plays a role in language comprehension, as it appears to do in information processing across a range of cognitive domains. Much of this evidence comes from the N400, a neural index of the processing of meaningful stimuli that has been argued to reflect the extent to which a word was predicted before it was encountered. The main aim of this thesis is to investigate the extent to which this prediction can be explained as arising from the statistics of the linguistic input we receive over the course of our lives, in line with predictive processing in other cognitive domains. To do this, I turn to language models (computational systems that calculate the probability of a word given its context based on the statistics of language) and investigate how well their predictions correlate with the N400. The results show that probabilities calculated using language models are highly correlated with N400 amplitude, in many cases more strongly than human-derived metrics such as cloze probability and plausibility, which were previously the best predictors of the N400. I also show that language model probabilities can qualitatively model a wide range of effects, exhibiting significant differences under the same experimental manipulations that produce significant differences in N400 amplitude. In addition, the results show that language models that are better at predicting the next word in a sequence are also better at modeling N400 amplitude in both respects, providing both a closer fit to the data and more of the qualitative effects. Taken together, these results reveal a high degree of correlation between the N400 and predictions based on the statistics of language, consistent with the idea that the predictions indexed by the N400 are at least partly based on language statistics.
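To make the central notion concrete, the following is a minimal sketch of how a language model can assign a probability to a word given its context purely from the statistics of language. It uses a simple bigram count model over a hypothetical toy corpus (my own illustrative example, far simpler than the neural language models investigated in the thesis, but the estimated quantity is the same: P(word | context)).

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus for illustration only (not data from the thesis).
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each context word.
bigram_counts = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    bigram_counts[prev][word] += 1

def probability(word, context):
    """Estimate P(word | context) from bigram counts."""
    counts = bigram_counts[context]
    total = sum(counts.values())
    return counts[word] / total if total else 0.0

# "the" occurs 6 times as a context, followed by "cat" twice,
# so the model estimates P(cat | the) = 2/6.
print(probability("cat", "the"))
print(probability("sat", "cat"))
```

A neural language model plays the same role with a far richer notion of context (the whole preceding sequence rather than one word), but in both cases the probability is derived from distributional statistics, which is what allows it to be compared against the N400.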