People predict incoming words during online sentence comprehension based on their knowledge of real-world events that is cued by preceding linguistic contexts. We used the visual world paradigm to investigate how event knowledge activated by an agent-verb pair is integrated with perceptual information about the referent that fits the patient role. During the verb time window, participants looked significantly more at the referents that were expected given the agent-verb pair. The results are consistent with the assumption that event-based knowledge includes perceptual properties of typical participants. Knowledge activated by the agent is compositionally integrated with knowledge cued by the verb to drive anticipatory eye movements during sentence comprehension, based on expectations associated not only with the incoming word but also with the visual features of its referent.