Naturalistic Reading Comprehension in L1 and L2: What can “model-brain alignment” tell us about its neurocognitive mechanisms

Activity: Talk or presentation (Invited talk)

Description

With the rapid development of generative AI and large language models (LLMs), researchers are assessing the impact of these developments across many domains of scientific inquiry. In this talk, I describe the “model-brain alignment” approach, which leverages this progress in LLMs. Building on recent proposals that humans and machines share computational principles for naturalistic comprehension (e.g., listening to stories, watching movies), we use model-brain alignment to study naturalistic reading comprehension in both native (L1) and non-native (L2) languages. By training LLM-based encoding models on brain responses to text reading, we can evaluate (a) which computational properties of the model are important for reflecting human brain mechanisms in language comprehension, and (b) which model variations best reflect human individual differences during reading comprehension. Our findings show, first, that to capture the difference between word-level processing and high-level discourse integration, current LLM-based models need to incorporate sentence-prediction mechanisms on top of word prediction; and second, that variations in model-brain alignment allow us to predict L1 and L2 readers’ sensitivity to text properties, their cognitive-demand characteristics, and ultimately their reading performance. Overall, our work highlights the utility of the model-brain alignment approach for studying naturalistic reading comprehension at multiple levels of cognitive processing and across multiple dimensions of individual variation.
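To make the encoding-model idea concrete, here is a minimal illustrative sketch of one common form of model-brain alignment analysis: fit a ridge regression from LLM-derived text features to (here, simulated) brain responses, then score alignment as the per-voxel correlation between predicted and observed held-out responses. All data, dimensions, and the choice of ridge regression are hypothetical assumptions for illustration; the actual pipeline used in the work described in this talk may differ.

```python
# Hypothetical sketch of an encoding-model "model-brain alignment" analysis.
# Data are simulated; real analyses would use LLM embeddings of the text
# stimuli and recorded brain responses (e.g., fMRI) instead.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Simulated "LLM features" for 200 words/time points (50 dimensions)
# and simulated responses at 10 voxels linearly driven by those features.
n_samples, n_features, n_voxels = 200, 50, 10
X = rng.standard_normal((n_samples, n_features))
true_weights = rng.standard_normal((n_features, n_voxels))
Y = X @ true_weights + 0.1 * rng.standard_normal((n_samples, n_voxels))

X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size=0.25, random_state=0
)

# Encoding model: regularized linear map from model features to responses.
model = Ridge(alpha=1.0).fit(X_train, Y_train)
Y_pred = model.predict(X_test)

# Alignment score: correlation between predicted and observed held-out
# responses, computed separately for each voxel.
alignment = np.array([
    np.corrcoef(Y_test[:, v], Y_pred[:, v])[0, 1] for v in range(n_voxels)
])
print(f"mean alignment r = {alignment.mean():.2f}")
```

Variation in these alignment scores across participants (or across model variants, e.g., with vs. without a sentence-prediction objective) is the kind of quantity that can then be related to individual differences in reading behavior.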

Period: 28 Mar 2024
Held at: University of Connecticut, United States
Degree of Recognition: International