Abstract
Ophthalmic practice involves the integration of diverse clinical data and interactive decision-making, posing challenges for traditional artificial intelligence (AI) systems. Visual question answering (VQA) addresses this by combining computer vision and natural language processing to interpret medical images through user-driven queries. Evolving from VQA, multimodal AI agents enable continuous dialogue, tool use and context-aware clinical decision support. This review explores recent developments in ophthalmic conversational AI, spanning theoretical advances and practical implementations. We highlight the transformative role of large language models (LLMs) in improving reasoning, adaptability and task execution. However, key obstacles remain, including limited multimodal datasets, absence of standardised evaluation protocols, and challenges in clinical integration. We outline these limitations and propose future research directions to support the development of robust, LLM-driven AI systems. Realising their full potential will depend on close collaboration between AI researchers and the ophthalmic community.
| Original language | English |
|---|---|
| Pages (from-to) | 1-7 |
| Number of pages | 7 |
| Journal | British Journal of Ophthalmology |
| Volume | 110 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 1 Jan 2026 |
UN SDGs
This output contributes to the following UN Sustainable Development Goals (SDGs):
- SDG 3 Good Health and Well-being
Keywords
- Imaging
- Medical Education
- Public health
- Telemedicine
ASJC Scopus subject areas
- Ophthalmology
- Sensory Systems
- Cellular and Molecular Neuroscience
Fingerprint
Research topics of 'From visual question answering to intelligent AI agents in ophthalmology'.