Knowing that something is unknown is an important part of human cognition. While Bayesian models of cognition have been successful in explaining many aspects of human learning, current explanations of how humans realise that they need to introduce a new concept are problematic. Bayesian models lack a principled way to describe ignorance, since doing so requires comparing the probabilities of concepts in the model with those of concepts not present in the model, which is impossible by definition. Formal measures of uncertainty (e.g. Shannon entropy) are commonly used as a substitute for ignorance, but we will show that the two notions are fundamentally distinct, and thus that something more is needed. Enhancing probability theory so that Bayesian agents can conclude that they are ignorant would be an important advance for both cognitive engineering and cognitive science. In this research project, we formally analyse this challenge.
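
To make the distinction concrete, the following is a minimal sketch (in Python, using a hypothetical three-concept hypothesis space as an illustrative assumption, not a claim about any particular model) of why Shannon entropy measures uncertainty among the concepts an agent already has, rather than ignorance of concepts it lacks:

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy H(p) = -sum_i p_i * log(p_i), in nats."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # use the convention 0 * log 0 = 0
    return float(-np.sum(p * np.log(p)))

# Hypothetical posteriors over the three concepts the agent has modelled.
uniform_posterior = [1/3, 1/3, 1/3]       # maximally uncertain among known concepts
confident_posterior = [0.98, 0.01, 0.01]  # confident in one known concept

print(shannon_entropy(uniform_posterior))    # ~1.10 nats (log 3)
print(shannon_entropy(confident_posterior))  # ~0.11 nats

# Both values are computed entirely over concepts inside the model.
# If the true concept is absent from the hypothesis space, the agent is
# ignorant regardless of whether its entropy is high or low, which is why
# entropy alone cannot serve as a measure of ignorance.
```

In both cases the entropy is well defined and informative about uncertainty within the model, yet neither value says anything about whether the hypothesis space itself is missing the relevant concept; that is the gap this project addresses.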