AILAB-Udine@SMM4H’22: Limits of Transformers and BERT Ensembles

Beatrice Portelli, Simone Scaboro, Emmanuele Chersoni, Enrico Santus, Giuseppe Serra

Research output: Chapter in book / Conference proceeding › Conference article published in proceeding or book › Academic research › peer-review


This paper describes the models developed by the AILAB-Udine team for the SMM4H’22 Shared Task. We explored the limits of Transformer-based models on text classification, entity extraction and entity normalization, tackling Tasks 1, 2, 5, 6 and 10. Our main takeaways from participating in the different tasks are: the strong positive effect of combining different architectures through ensemble learning, and the great potential of generative models for term normalization.
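The abstract highlights combining different architectures via ensemble learning. A minimal sketch of one common ensembling scheme, hard majority voting over per-model label predictions, is shown below; the model names and labels are purely illustrative and not taken from the paper.

```python
from collections import Counter

def majority_vote(predictions_per_model):
    """Combine per-model label predictions with hard majority voting.

    predictions_per_model: list of lists, one inner list per model,
    all of equal length (one predicted label per input example).
    """
    n_examples = len(predictions_per_model[0])
    combined = []
    for i in range(n_examples):
        # Count the votes each label receives for example i across models
        votes = Counter(model_preds[i] for model_preds in predictions_per_model)
        combined.append(votes.most_common(1)[0][0])
    return combined

# Hypothetical predictions from three different architectures
bert_preds    = ["ADE", "noADE", "ADE"]
roberta_preds = ["ADE", "ADE",   "ADE"]
electra_preds = ["noADE", "noADE", "ADE"]

print(majority_vote([bert_preds, roberta_preds, electra_preds]))
# ['ADE', 'noADE', 'ADE']
```

Soft voting (averaging class probabilities) is a common alternative when the models expose calibrated scores; hard voting only needs the discrete labels.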
Original language: English
Title of host publication: Proceedings of The Seventh Workshop on Social Media Mining for Health Applications, Workshop & Shared Task (SMM4H 2022)
Editors: Graciela Gonzalez-Hernandez, Davy Weissenbacher
Publication status: Published - Oct 2022
Event: The 29th International Conference on Computational Linguistics - Gyeongju, Korea, Republic of
Duration: 12 Oct 2022 to 17 Oct 2022


Conference: The 29th International Conference on Computational Linguistics
Abbreviated title: COLING2022
Country/Territory: Korea, Republic of


