
From visual question answering to intelligent AI agents in ophthalmology

Research output: Journal article publication › Review article › Academic research › peer-review

Abstract

Ophthalmic practice involves the integration of diverse clinical data and interactive decision-making, posing challenges for traditional artificial intelligence (AI) systems. Visual question answering (VQA) addresses this by combining computer vision and natural language processing to interpret medical images through user-driven queries. Evolving from VQA, multimodal AI agents enable continuous dialogue, tool use and context-aware clinical decision support. This review explores recent developments in ophthalmic conversational AI, spanning theoretical advances and practical implementations. We highlight the transformative role of large language models (LLMs) in improving reasoning, adaptability and task execution. However, key obstacles remain, including limited multimodal datasets, absence of standardised evaluation protocols, and challenges in clinical integration. We outline these limitations and propose future research directions to support the development of robust, LLM-driven AI systems. Realising their full potential will depend on close collaboration between AI researchers and the ophthalmic community.
Original language: English
Pages (from-to): 1-7
Number of pages: 7
Journal: British Journal of Ophthalmology
Volume: 110
Issue number: 1
DOIs
Publication status: Published - 1 Jan 2026

UN SDGs

This output contributes to the following UN Sustainable Development Goals (SDGs)

  1. SDG 3 - Good Health and Well-being

Keywords

  • Imaging
  • Medical Education
  • Public health
  • Telemedicine

ASJC Scopus subject areas

  • Ophthalmology
  • Sensory Systems
  • Cellular and Molecular Neuroscience
