Graph neural networks in vision-language image understanding: a survey

Henry Senior, Gregory Slabaugh, Shanxin Yuan, Luca Rossi

Research output: Journal article publication › Review article › Academic research › peer-review


2D image understanding is a complex problem within computer vision, but it holds the key to providing human-level scene comprehension. It goes beyond identifying the objects in an image and instead attempts to understand the scene. Solutions to this problem form the underpinning of a range of tasks, including image captioning, visual question answering (VQA), and image retrieval. Graphs provide a natural way to represent the relational arrangement between objects in an image, and thus in recent years graph neural networks (GNNs) have become a standard architectural component of many 2D image understanding pipelines, especially in the VQA group of tasks. In this survey, we review this rapidly evolving field and provide a taxonomy of graph types used in 2D image understanding approaches, a comprehensive list of the GNN models used in this domain, and a roadmap of potential future developments. To the best of our knowledge, this is the first comprehensive survey covering image captioning, visual question answering, and image retrieval techniques that use GNNs as the main part of their architecture.
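The abstract's core idea can be illustrated with a minimal sketch (not taken from the survey itself): detected objects in an image become nodes of a scene graph, relations between them become edges, and a GNN layer updates each object's features by aggregating information from its neighbours. The node names, features, and averaging rule below are illustrative assumptions standing in for a learned message-passing layer.

```python
# Hypothetical toy example: one message-passing step over a tiny scene graph.
# Nodes are detected objects (0 = person, 1 = dog, 2 = frisbee); edges encode
# relations between them. Each node's feature vector is replaced by the mean
# of its own vector and its neighbours' vectors -- the basic aggregation idea
# behind the GNN layers used in image-understanding pipelines.

def message_passing_step(features, edges):
    """Average each node's feature vector with those of its neighbours."""
    n = len(features)
    neighbours = {i: [] for i in range(n)}
    for u, v in edges:  # treat edges as undirected
        neighbours[u].append(v)
        neighbours[v].append(u)
    updated = []
    for i in range(n):
        group = [features[i]] + [features[j] for j in neighbours[i]]
        dim = len(features[i])
        updated.append(
            [sum(vec[d] for vec in group) / len(group) for d in range(dim)]
        )
    return updated

# Toy scene graph: person--dog and dog--frisbee are related.
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
edges = [(0, 1), (1, 2)]
out = message_passing_step(feats, edges)
```

After one step, each object's representation mixes in context from related objects (e.g. the "dog" node now reflects both "person" and "frisbee"), which is what lets downstream captioning or VQA heads reason about relationships rather than isolated detections.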

Original language: English
Article number: 01782789
Journal: Visual Computer
Publication status: Accepted/In press - 29 Mar 2024


Keywords

  • Graph neural networks
  • Image captioning
  • Image retrieval
  • Visual question answering

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
  • Computer Graphics and Computer-Aided Design

