UC Berkeley Electronic Theses and Dissertations
Structured Contexts For Large Language Models

Abstract

Large language models (LLMs) are only as capable as the context they are given. This dissertation studies how to structure context, improving LLMs by orchestrating and engineering the contexts they receive. First, we present context decomposition, a technique for breaking complex contexts into simpler contexts that specialized models are more capable of handling. Second, we show in a comparative study that context rewriting, a method for re-representing conversational utterances as simpler contexts, improves data labeling efficiency and modularity. Finally, we present context tuning, a technique for finetuning LLMs to better handle their input contexts.
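To make the second idea concrete, below is a minimal sketch of context rewriting: turning a context-dependent conversational utterance into a standalone one before passing it downstream. The prompt wording, the function name `rewrite_utterance`, and the generic `llm` callable (prompt string in, completion string out) are illustrative assumptions, not the dissertation's actual implementation.

```python
# Minimal sketch of context rewriting, assuming a generic `llm` callable
# (prompt -> completion). All names and prompt text here are hypothetical,
# not the dissertation's implementation.

REWRITE_PROMPT = """Rewrite the final user utterance so it is fully \
self-contained, resolving pronouns and references using the dialogue history.

Dialogue history:
{history}

Final utterance: {utterance}

Self-contained rewrite:"""


def rewrite_utterance(history: list[str], utterance: str, llm) -> str:
    """Re-represent a conversational utterance as a simpler, standalone context."""
    prompt = REWRITE_PROMPT.format(history="\n".join(history), utterance=utterance)
    return llm(prompt).strip()


# Example (hypothetical): with history mentioning the Eiffel Tower,
# "How tall is it?" should be rewritten to "How tall is the Eiffel Tower?"
# history = ["User: Tell me about the Eiffel Tower.",
#            "Assistant: The Eiffel Tower is a wrought-iron tower in Paris..."]
# rewrite_utterance(history, "How tall is it?", my_llm)
```

Under this framing, the rewritten utterance can be labeled or processed in isolation, which is one way the claimed gains in labeling efficiency and modularity could arise: downstream components never need the full conversation.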
