Abstract
Interpretable artificial intelligence (AI), also known as explainable AI, is indispensable for establishing trustworthy AI for bench-to-bedside translation, with substantial implications for human well-being. However, most existing research in this area has centered on designing increasingly complex and sophisticated methods with little regard for their interpretability. Consequently, the main prerequisite for implementing trustworthy AI in medical domains has not been met. Scientists have developed various explanation methods for interpretable AI. Among these methods, fuzzy rules embedded in a fuzzy inference system (FIS) have emerged as a novel and powerful tool for bridging the communication gap between humans and advanced AI systems. However, there have been few reviews of the use of FISs in medical diagnosis. In addition, the application of fuzzy rules to different kinds of multimodal medical data has received insufficient attention, despite their potential for guiding the design of methodologies suited to the available datasets. This review provides a fundamental understanding of interpretability and fuzzy rules, conducts comparative analyses of the use of fuzzy rules and other explanation methods in handling three major types of multimodal data (i.e., sequence signals, medical images, and tabular data), and offers insights into appropriate fuzzy rule application scenarios and recommendations for future research.
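For readers unfamiliar with the kind of rule-based transparency the abstract refers to, the following is a minimal illustrative sketch (not taken from the paper) of a zero-order Takagi-Sugeno-style fuzzy inference step. The variable names, membership-function breakpoints, rule consequents, and the toy "fever risk" scenario are all hypothetical assumptions chosen for clarity.

```python
# Illustrative sketch only: a two-rule, zero-order Takagi-Sugeno fuzzy inference step.
# All fuzzy sets, thresholds, and consequent values are hypothetical examples,
# not the methodology of the reviewed paper.

def triangular(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fis_risk(temperature_c):
    """Evaluate two human-readable fuzzy rules and defuzzify by weighted average.

    Rule 1: IF temperature is 'normal'  THEN risk = 0.1
    Rule 2: IF temperature is 'febrile' THEN risk = 0.9
    """
    mu_normal = triangular(temperature_c, 35.0, 36.8, 38.0)   # "normal" fuzzy set
    mu_febrile = triangular(temperature_c, 37.5, 39.5, 42.0)  # "febrile" fuzzy set

    firing = [(mu_normal, 0.1), (mu_febrile, 0.9)]            # (rule strength, consequent)
    total = sum(w for w, _ in firing)
    if total == 0.0:
        return None  # input lies outside the modeled range
    return sum(w * y for w, y in firing) / total

print(fis_risk(36.7))  # dominated by "normal"      -> low risk
print(fis_risk(37.8))  # borderline, both rules fire -> intermediate risk
```

Because each rule reads as an IF-THEN statement over linguistically labeled fuzzy sets, the contribution of every rule to the final output can be traced directly, which is the interpretability property the review contrasts with opaque explanation methods.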
Original language | English
---|---
Article number | 120212
Journal | Information Sciences
Volume | 662
DOIs |
Publication status | Published - Mar 2024
Keywords
- Disease diagnosis
- Explainable artificial intelligence
- Fuzzy inference system
- Fuzzy rule
- Interpretability
ASJC Scopus subject areas
- Software
- Control and Systems Engineering
- Theoretical Computer Science
- Computer Science Applications
- Information Systems and Management
- Artificial Intelligence