Humans develop an ability for Theory of Mind (ToM) by the age of six, which enables them to infer another agent's mental state and to differentiate it from their own. Much evidence suggests that humans can do this in a presumably optimal way and, correspondingly, a Bayesian Theory of Mind (BToM) framework has been shown to match human inferences and attributions. To date, this has mostly been investigated with specific, explicit mentalizing tasks. However, other research has shown that humans often deviate from optimal reasoning in various ways. We investigate whether typical BToM models really capture human ToM reasoning in tasks that solicit more intuitive reasoning. We present results of an empirical study in which humans deviate from Bayesian optimal reasoning in a ToM task and instead exhibit egocentric tendencies. We also discuss how computational models can better account for such sub-optimal processing.