How the brain uses reinforcement feedback to make simple choices that lead to reward is well understood. However, this ability is often considered insufficient to account for the flexibility and efficiency of human decision-making. In this chapter, we show that the computations of model-free reinforcement learning (RL) can in fact account for complex human learning abilities, such as generalization, transfer, and fast learning in high-dimensional, dynamic environments. Specifically, we show that humans structure their current information and choices into useful state and action spaces, and that applying simple RL computations to these spaces, sometimes hierarchically, enables rich decision-making. Thus, RL computations enable humans to learn to represent the information they acquire in structured ways. Such structured RL simplifies complex problems (through representation learning), affords transfer of information (by building abstract rules and relating them to relevant contexts), and enables efficient exploration (by grouping together subsequences or identifying subpolicies).
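To make the underlying computation concrete, the sketch below implements a standard delta-rule (prediction-error) update of the kind used in model-free RL, applied over a small, hand-defined abstract state and action space. The contexts, actions, reward contingencies, and parameter values are illustrative assumptions for exposition only, not the tasks or models analyzed in this chapter.

```python
import random
from collections import defaultdict

# Minimal tabular model-free RL sketch: a delta-rule (prediction-error) update
# applied over abstract states and actions. The "structure" here is simply that
# states are context labels rather than raw observations; all contingencies and
# parameters below are hypothetical choices for illustration.

CONTEXTS = ["context_A", "context_B"]   # abstract state space
ACTIONS = ["action_1", "action_2"]      # abstract action space
REWARD_PROB = {                         # hypothetical reward contingencies
    ("context_A", "action_1"): 0.8,
    ("context_A", "action_2"): 0.2,
    ("context_B", "action_1"): 0.2,
    ("context_B", "action_2"): 0.8,
}

ALPHA = 0.1    # learning rate
EPSILON = 0.1  # exploration rate

Q = defaultdict(float)  # learned values over (state, action) pairs


def choose_action(state):
    """Epsilon-greedy choice over the abstract action space."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])


for trial in range(5000):
    state = random.choice(CONTEXTS)  # observe an abstract state (context)
    action = choose_action(state)
    reward = 1.0 if random.random() < REWARD_PROB[(state, action)] else 0.0
    # Core model-free computation: update the value toward the outcome
    # in proportion to the reward prediction error.
    prediction_error = reward - Q[(state, action)]
    Q[(state, action)] += ALPHA * prediction_error

for key in sorted(Q):
    print(key, round(Q[key], 2))
```

Because learning operates over the abstract contexts rather than raw stimuli, the same simple update suffices once the state space is well chosen; the chapter's argument is that constructing such spaces (and, where useful, stacking these computations hierarchically) is what gives human learning its apparent flexibility.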