Formal mathematical reasoning is unique in its precision: any valid conclusion can be justified by a sequence of base axioms. But human-written proofs and solutions rarely operate at that level; instead, obvious steps are skipped in favor of a simple, lucid argument. Choosing the right level of detail is especially important in educational settings, where too many details in an example solution, or too few, can confuse a student. What are the key steps for humans in a given formal solution? We investigate several computational hypotheses in the context of equation solving. Specifically, we take a reinforcement learning agent that solves equations using low-level axioms, and propose a series of methods for abstracting its solutions by selecting key steps. We consider methods based on the semantic distance between consecutive steps, on the steps about which the agent is most uncertain, and on transitions between latent "high-level skills" learned from a large number of agent-produced solutions. In a human evaluation, we find that skill-based simplifications were judged most useful. These results suggest new directions for understanding human mathematical reasoning.
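
As a rough illustration of the uncertainty-based criterion mentioned above, the following is a minimal sketch: keep the steps at which the agent's action distribution has the highest entropy. All names and inputs here (`abstract_by_uncertainty`, the toy steps and distributions) are hypothetical illustrations under our own assumptions, not the paper's implementation.

```python
import numpy as np

def entropy(probs):
    """Shannon entropy of an action distribution (one per solution step)."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]  # ignore zero-probability actions (0 * log 0 := 0)
    return float(-(p * np.log(p)).sum())

def abstract_by_uncertainty(steps, action_dists, k=3):
    """Select the k steps where the agent's policy entropy is highest,
    preserving their original order.

    `steps` and `action_dists` are hypothetical inputs: the formal
    solution steps and the agent's action distribution before each one.
    """
    scores = [entropy(d) for d in action_dists]
    top = sorted(np.argsort(scores)[-k:])  # top-k by entropy, in step order
    return [steps[i] for i in top]

# Toy usage: a 5-step solution where steps 2 and 4 involved uncertain choices.
steps = ["x+2=5", "x+2-2=5-2", "x+0=3", "x=3", "done"]
dists = [[0.9, 0.1], [0.4, 0.3, 0.3], [0.95, 0.05], [0.5, 0.5], [1.0]]
print(abstract_by_uncertainty(steps, dists, k=2))
# -> ['x+2-2=5-2', 'x=3']
```

The semantic-distance and skill-based methods could be slotted into the same interface by replacing the per-step entropy scores with embedding distances between consecutive steps or with detected transitions between latent skills.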