A study on surprisal and semantic relatedness for eye-tracking data prediction

Lavinia Salicchi, Emmanuele Chersoni, Alessandro Lenci

Research output: Journal article (peer-reviewed)



Previous research in computational linguistics has devoted considerable effort to using language models and/or distributional semantic models to predict metrics extracted from eye-tracking data. However, it is not clear whether the two components make distinct contributions, with recent studies claiming that surprisal scores estimated with large-scale, deep learning-based language models subsume the semantic relatedness component. In our study, we propose a regression experiment for estimating different eye-tracking metrics on two English corpora, contrasting the quality of the predictions with and without the surprisal and the relatedness components. Different types of relatedness scores derived from both static and contextual models have also been tested. Our results suggest that both components play a role in the prediction, with semantic relatedness surprisingly contributing even to the prediction of function words. Moreover, they show that when the relatedness metric is computed with the contextual embeddings of the BERT model, it explains a larger share of the variance.
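The regression setup described above can be sketched in a few lines: each word gets a surprisal score (negative log-probability under a language model) and a relatedness score (cosine similarity between its vector and a context vector), and the two serve as predictors of an eye-tracking metric such as gaze duration. The numbers below are invented for illustration, not the paper's data, and NumPy's least-squares routine stands in for whatever regression package the authors actually used.

```python
import math
import numpy as np

def surprisal(prob):
    """Surprisal in bits: -log2 P(word | context)."""
    return -math.log2(prob)

def cosine(u, v):
    """Semantic relatedness as cosine similarity between word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy per-word data (hypothetical values): each position holds
# P(word | context) from a language model, the word's relatedness to its
# preceding context, and an eye-tracking metric (e.g. gaze duration in ms).
probs       = [0.20, 0.05, 0.50, 0.10]
relatedness = [0.61, 0.32, 0.74, 0.45]
gaze_ms     = [210.0, 260.0, 180.0, 240.0]

# Design matrix: intercept, surprisal, relatedness.
X = np.column_stack([
    np.ones(len(probs)),
    [surprisal(p) for p in probs],
    relatedness,
])
y = np.array(gaze_ms)

# Ordinary least squares fit; R^2 measures the variance explained,
# which is what the paper compares across predictor combinations.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ beta
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

In the paper's comparison, the same kind of fit is run with and without each predictor, and with relatedness scores from static versus contextual (BERT) embeddings, so that the explained variance can be attributed to each component.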
Original language: English
Article number: 1112365
Journal: Frontiers in Psychology
Publication status: Published - 2 Feb 2023


  • cognitive modeling
  • surprisal
  • semantic relatedness
  • cosine similarity
  • language models
  • distributional semantics
  • eye-tracking

