Applications of Probabilistic Reasoning in Trustworthy AI: From Handling Missing Data to Explainability
- Khosravi, Pasha
- Advisor(s): Van den Broeck, Guy
Abstract
Machine learning models are becoming increasingly pervasive in our daily lives. Despite their advances, common challenges such as missing data, outliers, and complex behaviors still hinder their usability and trustworthiness. Handling uncertainty is a fundamental aspect of modern machine learning, and it can help address these challenges. Specifically, in this thesis, we highlight the strengths of tractable probabilistic models, such as probabilistic circuits, in mitigating these issues.
First, we introduce "Expected Prediction" (EXP) as a new type of probabilistic query and examine its computational complexity for several families of models. We then demonstrate how computing EXP can address, in a principled manner, the additional uncertainty arising from missing data. Next, we illustrate how EXP can be used to generate sufficient explanations for classifier decisions while offering probabilistic guarantees. Finally, we leverage the supermodularity of marginal queries and continuous relaxations such as the multilinear extension to scale the detection of the root causes of outliers to high-dimensional settings.
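As a minimal gloss of the query named above (our own sketch, assuming the standard setup of a discriminative model $f$ paired with a generative model $p$ over the features): for an instance with observed features $x^o$ and missing features $X^m$, the expected prediction is

\[
\mathrm{EXP}_{f,p}(x^o) \;=\; \mathbb{E}_{x^m \sim p(X^m \mid x^o)}\!\left[ f(x^o, x^m) \right],
\]

that is, the model's output averaged over the conditional distribution of the missing features, which is the quantity referred to when handling missing data at prediction time.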