In syllogistic reasoning research, humans are predominantly evaluated on their ability to judge whether a conclusion necessarily follows from a set of premises. To address this limitation, we build on the work of Evans, Handley, Harper, and Johnson-Laird (1999) and present two studies in which we asked participants for possible and likely conclusions. Combined with previous data on necessary conclusions, the result is a comprehensive dataset with responses for all syllogisms, offering individual response patterns for all three argument types, the first of its kind. We found that likely serves as a middle ground between possible and necessary, paving the way for further investigation of biases and preferences. In general, individuals were able to handle the different notions, yet tended to interpret quantifiers pragmatically, overlooking logical implications. Finally, we tested mReasoner, an implementation of the Mental Model Theory, and concluded that it could not capture the patterns observed in our data.