The Symphony of Alignment: Ensuring Fairness and Mitigating Bias in Foundation Models
- Wang, Jialu
- Advisor(s): Liu, Yang
Abstract
Foundation models are poised to revolutionize decision-making across many domains, but their reliance on historical data can perpetuate and amplify existing biases. The risk of reinforcing societal stereotypes through biased outputs underscores the critical need to evaluate and mitigate bias in these models to ensure their responsible and ethical use. This dissertation examines three key challenges in ensuring fairness and mitigating bias in foundation models and AI systems, and makes three main contributions: (1) an exploration of fair learning under uncertainty, particularly when sensitive attributes are corrupted; the research proposes noise-resistant fair Empirical Risk Minimization approaches and a novel method for detecting groups with higher levels of label noise. (2) An investigation into fairness and bias in multimodal applications of foundation models, including image search, multilingual text retrieval, and text-to-image generation; the study develops new intervention methods for mitigating gender bias in image search, reveals intrinsic trade-offs in multilingual fairness, and introduces an association test for text-to-image generation. (3) The development of fairness influence functions to quantify the impact of individual data examples on model fairness; this approach offers insights into machine unlearning, with efficient approximation techniques for large-scale applications. Ultimately, the thesis strives to advance the understanding of fairness in foundation models through both theoretical frameworks and practical evaluations for responsible AI.
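The third contribution, fairness influence functions, admits a compact illustration. The sketch below is not the dissertation's method; it applies the classic influence-function recipe, where the effect of removing a training example z_i on a quantity f(θ̂) is approximated by (1/n) ∇f(θ̂)ᵀ H⁻¹ ∇ℓ(z_i, θ̂), to a logistic-regression model with a demographic-parity gap as the fairness measure. All names here (`fairness_influence`, `groups`, `lam`) are hypothetical, introduced only for this example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fairness_influence(X, y, groups, theta, lam=1e-3):
    """First-order estimate of how removing each training example would
    change the demographic-parity gap of a trained logistic-regression
    model (illustrative sketch, not the dissertation's actual method).

    X: (n, d) features; y: (n,) binary labels; groups: (n,) binary
    sensitive attribute; theta: (d,) fitted parameters; lam: ridge term
    that keeps the Hessian invertible.
    """
    n, d = X.shape
    p = sigmoid(X @ theta)          # predicted probabilities
    W = p * (1.0 - p)               # per-example curvature weights

    # Hessian of the (L2-regularized) mean logistic loss at theta.
    H = (X.T * W) @ X / n + lam * np.eye(d)

    # Demographic-parity gap on scores: mean p over group 1 minus
    # mean p over group 0, and its gradient w.r.t. theta
    # (d p_i / d theta = p_i (1 - p_i) x_i).
    g0, g1 = groups == 0, groups == 1
    grad_fair = (X[g1].T @ W[g1]) / g1.sum() - (X[g0].T @ W[g0]) / g0.sum()

    # Per-example loss gradients: (p_i - y_i) x_i.
    grad_loss = X * (p - y)[:, None]

    # Influence of removing each example on the fairness gap:
    # (1/n) * grad_loss_i^T H^{-1} grad_fair.
    v = np.linalg.solve(H, grad_fair)
    return (grad_loss @ v) / n
```

A positive entry suggests that deleting that example would widen the gap, a negative entry that it would narrow it; ranking examples by these scores is one natural bridge to the machine-unlearning perspective the abstract mentions. Solving one linear system with `grad_fair` (rather than inverting H per example) is what keeps the estimate cheap at scale.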