The development of models of human sentence processing has traditionally followed one of two paths. Either the model posited a sequence of processing modules, each with its own task-specific knowledge (e.g., syntax and semantics), or it posited a single processor utilizing different types of knowledge inextricably integrated into a monolithic knowledge base. Our previous work in modeling the sentence processor resulted in a model in which different processing modules used separate knowledge sources but operated in parallel to arrive at the interpretation of a sentence. One highlight of this model is that it offered an explanation of how the sentence processor might recover from an error in choosing the meaning of an ambiguous word: the semantic processor briefly pursued the different interpretations associated with the different meanings of the word in question until additional text confirmed one of them, or until processing limitations were exceeded. Errors in syntactic ambiguity resolution were assumed to be handled in some other way by a separate syntactic module. Recent experimental work by Laurie Stowe strongly suggests that the human sentence processor deals with syntactic error recovery using a mechanism very much like that proposed by our model of semantic error recovery. Another way to interpret Stowe's finding that two significantly different kinds of errors are handled in the same way is this: the human sentence processor consists of a single unified processing module utilizing multiple independent knowledge sources in parallel. A sentence processor built upon this architecture should at times exhibit behavior associated with modular approaches, and at other times act like an integrated system. In this paper we explore some of these ideas via a prototype computational model of sentence processing called COMPERE, and propose a set of psychological experiments for testing our theories.