TY - GEN
T1 - Comparing Probabilistic, Distributional and Transformer-Based Models on Logical Metonymy Interpretation
AU - Rambelli, Giulia
AU - Chersoni, Emmanuele
AU - Lenci, Alessandro
AU - Blache, Philippe
AU - Huang, Chu-Ren
N1 - Publisher Copyright:
© 2020 Association for Computational Linguistics.
PY - 2020/12
Y1 - 2020/12
AB - In linguistics and cognitive science, logical metonymies are defined as type clashes between an event-selecting verb and an entity-denoting noun (e.g. The editor finished the article), which are typically interpreted by inferring a hidden event (e.g. reading) on the basis of contextual cues. This paper tackles the problem of logical metonymy interpretation, that is, the retrieval of the covert event via computational methods. We compare different types of models, including the probabilistic and distributional ones previously introduced in the literature on the topic. For the first time, we also test some of the recent Transformer-based models, such as BERT, RoBERTa, XLNet, and GPT-2, on this task. Our results show a complex scenario, in which the best Transformer-based models and some traditional distributional models perform very similarly. However, the low performance on some of the test datasets suggests that logical metonymy is still a challenging phenomenon for computational modeling.
UR - https://www.scopus.com/pages/publications/105027105307
U2 - 10.18653/v1/2020.aacl-main.26
DO - 10.18653/v1/2020.aacl-main.26
M3 - Conference article published in proceeding or book
AN - SCOPUS:105027105307
T3 - Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, AACL-IJCNLP 2020
SP - 224
EP - 234
BT - Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, AACL-IJCNLP 2020
A2 - Wong, Kam-Fai
A2 - Knight, Kevin
A2 - Wu, Hua
PB - Association for Computational Linguistics (ACL)
T2 - 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, AACL-IJCNLP 2020
Y2 - 4 December 2020 through 7 December 2020
ER -