Trained on massive amounts of data, Artificial Intelligence (AI) has achieved remarkable results in generating realistic text, images, videos, and more. However, the data that gives AI its capabilities, produced by internet users, institutions, and organizations across the world, inevitably contains the biases and politics that many have hoped AI could overcome. This dissertation studies how one kind of politics, authoritarian politics, becomes embedded in AI's training data, and examines the downstream implications. It seeks to answer the following questions: 1) How do authoritarian institutions shape the behavior and performance of AI? 2) What is the effect of AI on authoritarian politics? 3) How does the interaction between authoritarian politics and AI spill over into democracies?
Using novel experiments on commercial AI systems, open-source large language models, and open-source training data, this dissertation presents both theoretical and empirical evidence on how authoritarian politics both enhances and constrains the utility of AI for repressive ends. Furthermore, it shows how authoritarian politics can spill over into democracies in subtle ways through AI.