The 21st century has seen an explosion in the availability of economic data, and of machine learning tools for making predictions from that data. Motivated by these developments, in this dissertation I consider a broad question: what, if anything, can machine learning contribute to macroeconomic policy-making? In Chapter 2, I begin with a case study of a pure prediction problem in Icelandic tax data, and show that machine learning is quantitatively and qualitatively useful for this problem. But in economic policy settings, we want to predict the effect of an intervention, a much more challenging problem than the standard supervised learning task. The rest of the dissertation therefore focuses on using machine learning for observational causal inference.

In Part 2, I consider the “no unobserved confounders” case, in which we assume that all relevant covariates are observed. In this setting, the causal inference problem reduces to a prediction task under covariate shift, and causal effect estimates can be debiased using the density ratio, the object that measures how the covariate distribution shifts; a standard estimator of this form is sketched below. In high dimensions, density ratios are typically not well behaved, and Chapter 3 makes progress on this front by drawing connections, via a duality argument, between density ratio estimation and so-called “balancing weights” estimators. In Chapter 4, I apply these results to obtain a broad set of numerical equivalence results for debiased machine learning estimators, with practical implications for undersmoothing and hyperparameter tuning.

In Part 3, I turn to the setting where unobserved confounders may be present, making unbiased recovery of the causal effect impossible. Instead, we use “sensitivity analysis”, which measures how quickly the estimated causal relationship degrades under hypothetical confounding. Of particular relevance to macroeconomics, I develop algorithms for the dynamic setting in which causal effects unfold over time, adopting the reinforcement learning framework. Chapter 5 considers the tabular setting, and Chapter 6 extends these results to function approximation with machine learning.
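To fix ideas about the density-ratio debiasing mentioned above, the display below is a minimal sketch of the standard augmented inverse-propensity-weighted (AIPW) estimator under unconfoundedness, in generic notation that is assumed here rather than taken from the dissertation: $Y$ is the outcome, $A \in \{0,1\}$ the treatment, $X$ the covariates, $\hat{\mu}_a$ machine-learned outcome regressions, and $\hat{e}$ the estimated propensity score. The inverse propensity weight $1/\hat{e}(X_i)$ estimates, up to the constant $P(A=1)$, the density ratio $p(x)/p(x \mid A=1)$ between the full-sample and treated covariate distributions, which is how the covariate-shift correction enters the estimator.
% Illustrative sketch only: standard AIPW estimator of the average treatment
% effect, with nuisance functions \hat{\mu}_0, \hat{\mu}_1, \hat{e} fit by
% machine learning; not a statement of the dissertation's own estimators.
\begin{equation*}
  \hat{\tau}_{\mathrm{AIPW}}
  = \frac{1}{n} \sum_{i=1}^{n}
    \Bigl[
      \hat{\mu}_1(X_i) - \hat{\mu}_0(X_i)
      + \frac{A_i}{\hat{e}(X_i)} \bigl( Y_i - \hat{\mu}_1(X_i) \bigr)
      - \frac{1 - A_i}{1 - \hat{e}(X_i)} \bigl( Y_i - \hat{\mu}_0(X_i) \bigr)
    \Bigr].
\end{equation*}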