With the rapid development of generative AI and large language models (LLMs), researchers are assessing the impact of these advances on domains of scientific study. In this presentation I describe the "model-brain alignment" approach, which leverages progress in LLMs. By training encoding models on brain responses to text reading, we evaluate (a) which computational properties of the model are important and (b) which model variations best reflect human individual differences in reading comprehension.
Period
16 Feb 2024
Event title
American Association for the Advancement of Science Annual Meeting: Toward Science without Walls