The development of large language models (LLMs) enables the investigation of cognitive phenomena at an unprecedented scale. We applied LLM-derived measures to large narrative datasets to characterize the structure and dynamics of memory retrieval. Specifically, we found that autobiographical narratives flow less linearly from sentence to sentence than biographical narratives. Furthermore, topics within biographies are treated more coherently, and biographies are written at a higher level of complexity than autobiographies. In summary, the differences in narrative flow suggest that when authors rely on their own memory, retrieval proceeds in a less organized manner, likely reflecting spontaneous cueing of associated memories. Our results demonstrate the utility of applying LLMs to narrative text to study cognitive phenomena.
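To make the notion of sentence-to-sentence flow concrete, the sketch below shows one plausible way such a measure could be computed; this is an illustration, not the authors' actual pipeline. It assumes the sentence-transformers library and an arbitrary embedding model, and scores a narrative by the mean cosine similarity between consecutive sentence embeddings, where lower values would correspond to less linear, more associative transitions.

```python
# Hypothetical illustration (not the paper's reported method): quantify
# narrative "flow" as the mean cosine similarity between embeddings of
# consecutive sentences. Assumes the sentence-transformers package is
# installed; the model name is an arbitrary choice for the sketch.
import numpy as np
from sentence_transformers import SentenceTransformer

def sentence_to_sentence_flow(sentences: list[str]) -> float:
    """Mean cosine similarity between each sentence and the next.

    Higher values indicate a more linear, topic-adjacent progression;
    lower values suggest more associative jumps between sentences.
    """
    model = SentenceTransformer("all-MiniLM-L6-v2")
    # normalize_embeddings=True returns unit-norm rows, so the dot
    # product of consecutive rows equals their cosine similarity.
    emb = model.encode(sentences, normalize_embeddings=True)
    sims = np.sum(emb[:-1] * emb[1:], axis=1)
    return float(np.mean(sims))

# Toy example: an associative jump in the third sentence should lower the score.
narrative = [
    "I remember the summer we moved to the coast.",
    "The smell of salt filled every room of the new house.",
    "That reminds me of my grandmother's kitchen, oddly enough.",
]
print(f"Mean sentence-to-sentence similarity: {sentence_to_sentence_flow(narrative):.3f}")
```

Under this reading, the abstract's finding would correspond to autobiographical narratives yielding lower mean consecutive-sentence similarity than biographical ones, though the paper's actual measures may differ.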