Predicting the next sentence (not word) in large language models: What model-brain alignment tells us about discourse comprehension

Shaoyun Yu (Corresponding Author), Chanyuan Gu, Kexin Huang, Ping Li (Corresponding Author)

Research output: Journal article › Academic research › peer-review

Abstract

Current large language models (LLMs) rely on word prediction as their backbone pretraining task. Although word prediction is an important mechanism underlying language processing, human language comprehension occurs at multiple levels, involving the integration of words and sentences to achieve a full understanding of discourse. This study models language comprehension by using the next sentence prediction (NSP) task to investigate mechanisms of discourse-level comprehension. We show that NSP pretraining enhanced a model’s alignment with brain data, especially in the right hemisphere and in the multiple demand network, highlighting the contributions of nonclassical language regions to high-level language understanding. Our results also suggest that NSP can enable the model to better capture human comprehension performance and to better encode contextual information. Our study demonstrates that the inclusion of diverse learning objectives in a model leads to more human-like representations, and that investigating the neurocognitive plausibility of pretraining tasks in LLMs can shed light on outstanding questions in language neuroscience.

Large language models (LLMs) can align better with human brains by learning beyond word prediction in discourse-level pretraining.
Original language: English
Article number: adn7744
Journal: Science Advances
Volume: 10
Issue number: 21
Publication status: Published - 24 May 2024

