In scientific research, as in everyday reasoning, people are prone to a 'confirmation bias': they tend to select tests that fit the theories or beliefs they already hold. Philosophers of science have criticized this tendency as suboptimal, and it has been studied in a variety of psychological experiments using controlled, small-scale simulations of scientific research. Applying elementary information theory to sequential testing during rule discovery, this paper shows that the biased strategy is not necessarily a bad one; rather, it reflects a healthy propensity of the subject (or researcher) to maximize the expected information gained on each trial.
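The notion of "expected information on each trial" can be illustrated with a minimal sketch (this is an illustration of the standard information-theoretic quantity, not the paper's own formalism or experiment). For a set of candidate rules with prior probabilities, the expected information yielded by a binary test is the prior entropy minus the expected posterior entropy; the hypothesis set and likelihood values below are invented for the example.

```python
import math

def entropy(ps):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in ps if p > 0)

def expected_information(priors, likelihoods):
    """Expected information (in bits) gained from one binary test.

    priors      -- P(h) for each candidate rule h
    likelihoods -- P(test says 'yes' | h) for each h
    Returns prior entropy minus expected posterior entropy.
    """
    p_yes = sum(p * l for p, l in zip(priors, likelihoods))
    p_no = 1.0 - p_yes
    gain = entropy(priors)
    if p_yes > 0:
        post_yes = [p * l / p_yes for p, l in zip(priors, likelihoods)]
        gain -= p_yes * entropy(post_yes)
    if p_no > 0:
        post_no = [p * (1 - l) / p_no for p, l in zip(priors, likelihoods)]
        gain -= p_no * entropy(post_no)
    return gain

# Two equally likely candidate rules (hypothetical numbers):
# a test that discriminates them perfectly yields a full bit,
# a test whose 'yes' outcome is consistent with both yields less.
print(expected_information([0.5, 0.5], [1.0, 0.0]))  # discriminating test
print(expected_information([0.5, 0.5], [1.0, 0.5]))  # weaker test
```

On this view, a "confirmatory" test can still be informative whenever its expected outcome distribution substantially reduces uncertainty over the rival rules, which is the sense in which the biased strategy need not be a bad one.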