Abstract
Adverse Drug Event (ADE) extraction models can rapidly examine large collections of social media texts, detecting mentions of drug-related adverse reactions and triggering medical investigations. However, despite recent advances in NLP, it is currently unknown whether such models are robust in the face of negation, which is pervasive across language varieties. In this paper we evaluate three state-of-the-art systems, showing their fragility against negation, and then introduce two possible strategies to increase the robustness of these models: a pipeline approach, relying on a dedicated component for negation detection, and an augmentation of an ADE extraction dataset to artificially create negated samples on which the models are further trained. We show that both strategies yield significant performance gains, lowering the number of spurious entities predicted by the models. Our dataset and code will be publicly released to encourage research on the topic.
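To make the augmentation strategy concrete, below is a minimal, hypothetical sketch of template-based negation augmentation: from an annotated ADE mention, it builds a synthetic negated sentence whose gold annotation contains no ADE entities. The templates, function name, and data format are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of negation-based data augmentation for ADE extraction.
# Not the authors' implementation: templates and data format are assumptions.
import random

NEGATION_TEMPLATES = [
    "I did not get any {ade} from this drug.",
    "No {ade} at all, luckily.",
    "This medication never gave me {ade}.",
]

def make_negated_sample(ade_mention: str) -> dict:
    """Build a synthetic negated sentence from an annotated ADE mention.

    The resulting sample is labelled with an empty entity list, since the
    adverse event is explicitly denied and a robust model should not
    extract it.
    """
    template = random.choice(NEGATION_TEMPLATES)
    return {"text": template.format(ade=ade_mention), "entities": []}

# Example: make_negated_sample("headache") might return
# {"text": "I did not get any headache from this drug.", "entities": []}
print(make_negated_sample("headache"))
```

Samples generated this way would be mixed into the original training data so the model learns not to predict entities for negated mentions; the paper's actual augmentation procedure may differ.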
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021) |
| Editors | Wei Xu, Alan Ritter, Tim Baldwin, Afshin Rahimi |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 230–237 |
| DOIs | |
| Publication status | Published - Nov 2021 |