Language Models That Accurately Represent Syntactic Structure Exhibit Higher Representational Similarity To Brain Activity
Creative Commons Attribution 4.0 (CC BY 4.0) license
Abstract

We investigate whether more accurate representation of syntactic information in Transformer-based language models is associated with better alignment to brain activity. We use fMRI recordings from a large dataset (MOUS) of a Dutch sentence reading task and perform Representational Similarity Analysis to measure alignment with two monolingual and three multilingual language models. We focus on activity in a region known for syntactic processing, the left posterior Middle Temporal Gyrus. We correlate model-brain similarity scores with the accuracy of dependency structures extracted from model internal states using a labelled structural probe. We report three key findings: 1) accuracy of syntactic dependency representations correlates with brain similarity; 2) the link between brain similarity and dependency accuracy persists regardless of sentence complexity; although 3) sentence complexity decreases dependency accuracy while increasing brain similarity. These results highlight how interpretable linguistic features such as syntactic dependencies can mediate the similarity between language models and brains.
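
As a rough illustration of the analysis the abstract describes, the sketch below computes a Representational Similarity Analysis score between language-model sentence embeddings and fMRI activity patterns from a region of interest. The array names, the correlation-distance metric, and the Spearman comparison of dissimilarity matrices are assumptions for illustration, not the paper's exact implementation.

# Minimal RSA sketch (Python), assuming two aligned arrays over the same sentences:
# model_embeddings (sentences x features) from a language-model layer, and
# brain_patterns (sentences x voxels) from the left posterior Middle Temporal Gyrus ROI.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns: np.ndarray) -> np.ndarray:
    # Representational dissimilarity matrix: pairwise correlation distance
    # between items, returned in condensed (upper-triangle) form.
    return pdist(patterns, metric="correlation")

def rsa_score(model_embeddings: np.ndarray, brain_patterns: np.ndarray) -> float:
    # Model-brain similarity: Spearman correlation between the two RDMs.
    rho, _ = spearmanr(rdm(model_embeddings), rdm(brain_patterns))
    return rho

Scores produced this way, one per model, could then be correlated with the accuracy of dependency structures recovered by a structural probe, which is the relationship the abstract reports.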
