ChatFFA: An ophthalmic chat system for unified vision-language understanding and question answering for fundus fluorescein angiography

Xiaolan Chen, Pusheng Xu, Yao Li, Weiyi Zhang, Fan Song, Mingguang He, Danli Shi (Corresponding Author)

Research output: Journal article › Academic research › peer-review

6 Citations (Scopus)

Abstract

Existing automated analysis of fundus fluorescein angiography (FFA) images faces limitations, including reliance on a predetermined set of possible image classifications and confinement to text-based question-answering (QA) approaches. This study addresses these limitations by developing an end-to-end unified model that uses synthetic data to train a visual question-answering model for FFA images. To achieve this, we employed ChatGPT to generate 4,110,581 QA pairs for a large FFA dataset comprising 654,343 FFA images from 9,392 participants. We then fine-tuned the Bootstrapping Language-Image Pre-training (BLIP) framework to handle vision and language simultaneously. The performance of the fine-tuned model (ChatFFA) was evaluated through automated and manual assessments, as well as case studies on an external validation set, with satisfactory results. In conclusion, our ChatFFA system paves the way for more efficient and feasible medical imaging analysis by leveraging generative large language models.
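
The abstract describes fine-tuning the BLIP framework so that a single model accepts an FFA image together with a free-text question and returns a free-text answer. As a rough illustration of that interface only, the sketch below runs visual question answering with the public Salesforce/blip-vqa-base checkpoint from Hugging Face Transformers; the checkpoint name, file path, and question are illustrative assumptions, not the authors' released ChatFFA model or training pipeline.

```python
# Minimal sketch of BLIP-style visual question answering, assuming the
# public Hugging Face checkpoint "Salesforce/blip-vqa-base". The ChatFFA
# weights and FFA-specific fine-tuning are not reproduced here; this only
# illustrates the image+question -> answer interface such a system exposes.
from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering

processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")

# Hypothetical FFA frame; any RGB angiography image would do for the demo.
image = Image.open("ffa_frame.png").convert("RGB")
question = "Is there any leakage visible in this angiogram?"

# Encode the image-question pair and generate a free-text answer.
inputs = processor(image, question, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20)
answer = processor.decode(output_ids[0], skip_special_tokens=True)
print(answer)
```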
Original language: English
Article number: 110021
Pages (from-to): 1-11
Number of pages: 11
Journal: iScience
Volume: 27
Issue number: 7
DOIs
Publication status: E-pub ahead of print - 17 May 2024

Keywords

  • Artificial intelligence
  • Ophthalmology

ASJC Scopus subject areas

  • General
