Abstract
We introduce a massive-scale benchmark for multilingual and multicultural, visually grounded language understanding. The benchmark includes a visual question answering (VQA) dataset with text-image pairs across 30 languages and dialects, spanning 9 language families and featuring over 1 million data points, making it the largest multicultural VQA benchmark to date. It covers tasks for identifying dish names and their origins. We provide evaluation datasets in two sizes (12k and 60k instances) alongside a training dataset (1 million instances). Our findings show that while vision-language models (VLMs) perform better with correct location context, they struggle with adversarial contexts and with predicting specific regional cuisines and languages. To support future research, we release a knowledge base with annotated food entries and images along with the VQA data.
Original language | English |
---|---|
Title of host publication | Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers) |
Editors | Luis Chiruzzo, Alan Ritter, Lu Wang |
Publisher | Association for Computational Linguistics |
Pages | 3242-3264 |
ISBN (Electronic) | 9798891761896 |
Publication status | Published - Apr 2025 |
Event | 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics, Albuquerque Convention Center, Albuquerque, United States. Duration: 29 Apr 2025 → 4 May 2025. https://2025.naacl.org/ |
Conference
Conference | 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics |
---|---|
Abbreviated title | NAACL 2025 |
Country/Territory | United States |
City | Albuquerque |
Period | 29/04/25 → 4/05/25 |
Internet address | https://2025.naacl.org/ |