Categorical Belief Updating Under Uncertainty
Abstract
The need to update our estimates of probabilities (e.g., the accuracy of a test) given new information is commonplace. Ideally, a new instance (e.g., a correct report) would simply be added to the tally, but we are often uncertain whether a new instance has occurred. We present an experiment in which participants receive conflicting reports from two early-warning cancer tests, one of which has higher historical accuracy (HA). We present a model showing that, when it is uncertain which test is correct, the accuracy estimates of both tests should be reduced. Among our participants, however, we find two dominant approaches: (1) participants raise their accuracy estimate for the higher-HA test and lower it for the other; (2) participants make no change to either. Based on mixed methods, we argue that these approaches represent two sides of a ‘binary’ decision: (1) update as if it were completely certain which test is correct, and (2) update as if no information had been received.
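The normative claim, that a pair of conflicting reports should lower the accuracy estimates of both tests, can be illustrated with a minimal Bayesian sketch. The snippet below is illustrative only and is not the paper's model: it assumes hypothetical Beta priors over each test's accuracy (Beta(8, 2) for the higher-HA test, Beta(6, 4) for the other), errors that are independent of each other and symmetric across disease states, and it conditions on the event that the two reports disagree.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Beta priors over each test's accuracy; test A is the
# higher-HA test (prior mean 0.8) and test B the lower (prior mean 0.6).
prior_a = (8, 2)
prior_b = (6, 4)

n = 1_000_000
theta_a = rng.beta(*prior_a, n)
theta_b = rng.beta(*prior_b, n)

# Likelihood of observing *conflicting* reports: either A is correct
# and B wrong, or B is correct and A wrong (errors assumed independent).
conflict = theta_a * (1 - theta_b) + (1 - theta_a) * theta_b

# Posterior means after conditioning on the conflict, obtained by
# importance-weighting the prior samples with that likelihood.
post_a = np.average(theta_a, weights=conflict)
post_b = np.average(theta_b, weights=conflict)

print(f"E[acc A]: prior {theta_a.mean():.3f} -> posterior {post_a:.3f}")
print(f"E[acc B]: prior {theta_b.mean():.3f} -> posterior {post_b:.3f}")
```

Under these assumed priors, both posterior means fall below their prior means, with the larger drop for the less accurate test: not knowing which test erred spreads the 'blame' for the conflict across both.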