Towards Probative Foundations for Bayesian Statistics
- Mwakima, David Mghanga
- Advisor(s): Weatherall, James O
Abstract
My dissertation addresses the question of how scientists evaluate the evidence they have for their claims. In the first chapter, I demonstrate the viability of Bayesian methods for evaluating statistical evidence by examining an episode from the history of science involving Jean Baptiste Perrin, a French chemical physicist working in the early 20th century. This episode has fascinated philosophers of science because Perrin's experimental work confirming the atomic hypothesis (the view that matter is composed of atoms) has been cited as an illustration of the impact that strong evidence can have. For this reason, numerous accounts have been offered of why Perrin's evidence was so distinctive. Bayesian accounts of this episode have been quite influential. However, philosophers have criticized them on two grounds: (1) the ``Catch-all hypothesis'' problem, the problem of exhaustively specifying, within the space of hypotheses, the logical complement of a given hypothesis in order to compute the marginal likelihood of the data (the denominator in Bayes' theorem); and (2) the problem that any specification of priors in the Perrin case is ad hoc. In view of these difficulties, I provide a novel and more precise Bayesian account of this episode than those offered so far, and I argue that my account avoids both problems. In doing so, I contribute to the philosophy of statistics by showing the viability of using the Bayes Factor (a Bayesian measure of the relative strength of evidence for two competing models or hypotheses) to quantify statistical evidence, and to the philosophy of science, where prominent realists and anti-realists today are charting a middle path in the realism-antirealism debate.
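To fix ideas, here is a minimal sketch of the quantities at issue; the notation is mine and is not drawn from the dissertation. For a hypothesis $H$ and data $E$, Bayes' theorem gives
\[
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)},
\]
where the second term in the denominator raises the catch-all problem: computing $P(E \mid \neg H)$ requires spelling out every way the hypothesis could fail. The Bayes Factor, by contrast, compares two specified hypotheses directly,
\[
\mathrm{BF}_{10} \;=\; \frac{P(E \mid H_1)}{P(E \mid H_0)},
\]
and so does not by itself call for a catch-all term.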
The other chapters of my dissertation address different aspects of the following question: ``How reliable are coherent Bayesian methods for evaluating statistical evidence in science?'' Coherent Bayesian methods are those that satisfy the Likelihood Principle. This principle states that parametric statistical inference should be based on the likelihood function, that is, on the equivalence class of functions of the parameter determined by a given statistical model once the data are fixed. Using the Bayes Factor to quantify statistical evidence, for example, is a coherent Bayesian method. Some statisticians and philosophers of statistics argue against coherent Bayesian methods on the grounds that they conflict with other important desiderata that scientists have. These desiderata include: (1) calibrating inferences and predictions, that is, providing an objective measure, or guarantee, of how often the inferences and predictions are verifiably correct; and (2) model assessment, that is, probing or testing statistical models to determine their compatibility with the observed data. These desiderata are important because, taken together, they reflect the healthy skepticism scientists typically have towards their claims: an attitude of probing those claims and quantifying the reliability of the inferences made in supporting or disproving them. The lack of tools within coherent Bayesian methods for satisfying these needs is a serious indictment of those methods and is the primary reason given for why they are not yet widely adopted in practice. This is the probativist criticism (from the word `probative', meaning to test or to try).
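A schematic statement of the principle may help; again the notation is mine, not the dissertation's. For a parametric model $\{p_\theta : \theta \in \Theta\}$ and fixed observed data $x$, the likelihood function is
\[
L(\theta) \;=\; p_\theta(x), \qquad \theta \in \Theta,
\]
regarded as defined only up to a positive multiplicative constant, so that $L$ and $cL$ (for any $c > 0$) count as the same evidential object. The Likelihood Principle says that all of the evidence in $x$ concerning $\theta$ is carried by this equivalence class. One way to see the tension with the desiderata above (my gloss): calibration and model checking appeal to the sampling distribution of statistics over data that could have been observed but were not, whereas inference governed by the Likelihood Principle conditions only on the data actually observed.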
Most philosophers who are sympathetic to the Bayesian approach either misconstrue the force of the probativist criticism or dismiss it by rejecting some underlying assumption made by those who advance it. One assumption that is often rejected, for example, is that statistical modeling should aim at the truth. In chapter two, therefore, I sharpen the probativist criticism and argue that it cannot be dismissed by rejecting one or more of its underlying assumptions. In chapter three, I turn to the Likelihood Principle and the normative constraints it places on coherent inference. Here I argue that the scope of the Likelihood Principle should be restricted to parametric inferences that involve point estimation. If the scope of the Likelihood Principle can be so restricted, then my work here contributes to laying the groundwork for introducing tools for model assessment within the Bayesian framework and advances the debate regarding the possibility of probative foundations for Bayesian statistics.