Abstract
Large Language Models (LLMs) are increasingly used in tasks requiring interpretive and inferential accuracy. In this paper, we introduce ExpliCa, a new dataset for evaluating LLMs in explicit causal reasoning. ExpliCa uniquely integrates both causal and temporal relations, presented in different linguistic orders and explicitly expressed by linguistic connectives. The dataset is enriched with crowdsourced human acceptability ratings. We assessed seven commercial and open-source LLMs on ExpliCa through prompting and perplexity-based metrics, finding that even top models struggle to reach 0.80 accuracy. Interestingly, models tend to confound temporal relations with causal ones, and their performance is also strongly influenced by the linguistic order of the events. Finally, perplexity-based scores and prompting performance are differently affected by model size.
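As a rough illustration of the perplexity-based evaluation mentioned above (a minimal sketch, not the paper's exact protocol), one could compare a model's perplexity over the same event pair linked by a causal versus a temporal connective. The model name and sentences below are illustrative placeholders, using the HuggingFace transformers library.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model; the paper evaluates seven commercial and open-source LLMs.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def perplexity(sentence: str) -> float:
    """Perplexity = exp of the mean token-level negative log-likelihood."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        # With labels=input_ids, the model returns the mean cross-entropy loss.
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

# Same event pair, causal vs. temporal connective: a lower perplexity
# suggests the model finds that connective more plausible in context.
causal = "The ground was wet because it had rained."
temporal = "The ground was wet after it had rained."
print(perplexity(causal), perplexity(temporal))
```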
| Original language | English |
|---|---|
| Title of host publication | Findings of the Association for Computational Linguistics: ACL 2025 |
| Editors | Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar |
| Publisher | Association for Computational Linguistics |
| Pages | 17335-17355 |
| ISBN (Electronic) | 9798891762565 |
| DOIs | |
| Publication status | Published - Jul 2025 |
| Event | The 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025), Vienna, Austria. Duration: 27 Jul 2025 → 1 Aug 2025 |
Conference
| Conference | The 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025) |
|---|---|
| Country/Territory | Austria |
| City | Vienna |
| Period | 27/07/25 → 01/08/25 |