While the role of states, corporations, and international organizations in AI
governance has been extensively theorized, the role of workers has received
comparatively little attention. This chapter looks at the role that workers
play in identifying and mitigating harms from AI technologies. Harms are the
causally assessed negative impacts of technologies. They can arise even when a
system is technically reliable: they result not from technical negligence but
from normative uncertainty around questions of safety and fairness in complex
social systems. There is broad consensus in the AI ethics community on the
benefits of reducing harms but far less consensus on the mechanisms for
determining or addressing
harms. This lack of consensus has resulted in a number of collective actions by
workers protesting how harms are identified and addressed in their workplaces.
We theorize the role of workers within AI governance and construct a model of
harm reporting processes in AI workplaces. The harm reporting process involves
three steps: identification, the governance decision, and the response. Workers
draw upon three types of claims to argue for jurisdiction over questions of AI
governance: subjection, control over the product of labor, and proximate
knowledge of systems. Examining the past decade of AI-related worker activism
allows us to understand how different types of workers are positioned within
workplaces that produce AI systems, how their positions inform their claims,
and the place of collective action in staking those claims. This chapter argues
that workers occupy a unique role in identifying and mitigating harms caused by
AI systems.