Abstract
Numbers are an essential component of financial texts, and understanding them correctly is key for automatic systems to extract and process information efficiently.
In our paper, we analyze the embeddings of different BERT-based models by testing them on supervised and unsupervised probing tasks for financial numeral understanding and value ordering.
Our results show that LMs with different types of training have complementary strengths, suggesting that their embeddings should be combined for more stable performance across tasks and categories.
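A minimal sketch of a supervised probing setup in the spirit of the abstract: a simple linear classifier is trained on frozen BERT embeddings to recover a numeral category. The model name, label scheme, and example sentences are illustrative assumptions, not the paper's actual data or probe design.

```python
# Sketch of a linear probe over frozen BERT embeddings (hypothetical setup).
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(sentence: str) -> list[float]:
    """Return the [CLS] embedding of a sentence as a feature vector."""
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state[0, 0].tolist()

# Hypothetical probing data: sentences containing financial numerals,
# labeled with a numeral category (0 = monetary amount, 1 = percentage).
train_sentences = [
    "Revenue rose to $4.2 billion in Q3.",
    "Operating margin improved by 3.5 percent.",
    "The company repaid $500 million of debt.",
    "Same-store sales grew 2.1 percent year over year.",
]
train_labels = [0, 1, 0, 1]

# If a linear classifier can recover the category from frozen embeddings,
# the representation is taken to encode that numeral information.
probe = LogisticRegression(max_iter=1000)
probe.fit([embed(s) for s in train_sentences], train_labels)

test_sentence = "Net income reached $1.1 billion."
print(probe.predict([embed(test_sentence)]))  # expected: [0] (monetary amount)
```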
Original language | English |
---|---|
Title of host publication | Proceedings of the Joint Workshop of the IJCAI Financial Technology and Natural Language Processing (FinNLP) and the 1st Agent AI for Scenario Planning (AgentScen) |
Editors | Chung-Chi Chen, Tatsuya Ishigaki, Hiroya Takamura, Akihiko Murai, Suzuko Nishino, Hen-Hsen Huang, Hsin-Hsi Chen |
Publisher | Association for Computational Linguistics (ACL) |
Pages | 73-78 |
Publication status | Published - Aug 2024 |
Event | Joint Workshop of the 8th Financial Technology and Natural Language Processing (FinNLP) and the 1st Agent AI for Scenario Planning (AgentScen): FinNLP-AgentScen, Jeju Convention Center, Jeju Island, Korea, Republic of |
Duration | 3 Aug 2024 → 3 Aug 2024 |
Conference
Conference | Joint Workshop of the 8th Financial Technology and Natural Language Processing (FinNLP) and the 1st Agent AI for Scenario Planning (AgentScen) |
---|---|
Country/Territory | Korea, Republic of |
City | Jeju Island |
Period | 3/08/24 → 3/08/24 |