TY - GEN
T1 - LESI-GNN: An Interpretable Graph Neural Network Based on Local Structures Embedding
AU - Minello, Giorgia
AU - Zhang, Lingfeng
AU - Bicciato, Alessandro
AU - Rossi, Luca
AU - Torsello, Andrea
AU - Cosmo, Luca
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
PY - 2025/1
Y1 - 2025/1
AB - In recent years, deep learning researchers have been increasingly interested in developing architectures able to operate on data abstracted as graphs, i.e., Graph Neural Networks (GNNs). At the same time, there has been a surge in the number of commercial AI systems deployed for real-world applications. At their core, the majority of these systems are based on black-box deep learning models, such as GNNs, which greatly limits their accountability and trustworthiness. The idea underpinning this paper is to exploit the representational power of graph variational autoencoders to learn an embedding space where a “convolution” between local structures and latent vectors can take place. The key intuition is that this embedding space can then be used to decode the learned latent vectors into more interpretable latent structures. Our experiments validate the performance of our model against widely used alternatives on standard graph benchmarks, while also showing the ability to probe the model's decisions by visualising the learned structural patterns.
KW - Autoencoder
KW - Graph Neural Network
KW - Interpretability
UR - http://www.scopus.com/inward/record.url?scp=85219200647&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-80507-3_8
DO - 10.1007/978-3-031-80507-3_8
M3 - Conference article published in proceedings or book
AN - SCOPUS:85219200647
SN - 9783031805066
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 72
EP - 81
BT - Structural, Syntactic, and Statistical Pattern Recognition - Joint IAPR International Workshops, S+SSPR 2024, Revised Selected Papers
A2 - Torsello, Andrea
A2 - Rossi, Luca
A2 - Cosmo, Luca
A2 - Minello, Giorgia
PB - Springer Science and Business Media Deutschland GmbH
T2 - Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition and Structural and Syntactic Pattern Recognition, S+SSPR 2024
Y2 - 9 September 2024 through 10 September 2024
ER -