Abstract
Graph Neural Networks (GNNs) are vulnerable to data poisoning attacks, in which an attacker supplies a poisoned graph as input to the GNN model. We present FocusedCleaner, a poisoned-graph sanitizer that effectively identifies the poison injected by attackers. Specifically, FocusedCleaner provides a sanitation framework consisting of two modules: bi-level structural learning and victim node detection. The structural learning module reverses the attack process to steadily sanitize the graph, while the detection module provides the “focus” – a narrowed and more accurate search region – for structural learning. The two modules operate iteratively and reinforce each other, sanitizing the poisoned graph step by step. As an important application, we show that GNNs trained on the sanitized graph are significantly more adversarially robust for the node classification task. Extensive experiments demonstrate that FocusedCleaner outperforms state-of-the-art baselines both in poisoned-graph sanitation and in improving robustness.
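The alternating structure described in the abstract – detection narrows the search region, then structural learning removes suspected poison within it – can be sketched as a simple loop. The detector and edge-scoring heuristics below are illustrative stand-ins (neighborhood label disagreement), not the paper's actual bi-level models; function names and thresholds are assumptions for the sketch.

```python
# Hypothetical sketch of FocusedCleaner's alternating sanitation loop.
# The victim detector and edge scorer here are simple stand-ins, not
# the paper's bi-level structural learning / detection modules.

def detect_victims(adj, labels):
    # Stand-in victim-node detector: flag nodes whose neighborhood
    # label agreement is unusually low (a common poisoning symptom).
    victims = set()
    for v, neigh in adj.items():
        if not neigh:
            continue
        agree = sum(labels[u] == labels[v] for u in neigh) / len(neigh)
        if agree < 0.5:
            victims.add(v)
    return victims

def sanitize(adj, labels, steps=3):
    # Alternate: (1) detection provides the "focus" (candidate victim
    # nodes), (2) structural learning removes one suspicious edge in
    # that focus; repeat so the two steps reinforce each other.
    adj = {v: set(n) for v, n in adj.items()}
    for _ in range(steps):
        focus = detect_victims(adj, labels)
        # Score only edges touching the focus; here, cross-label edges
        # are treated as the most suspicious candidates.
        candidates = [(v, u) for v in sorted(focus)
                      for u in sorted(adj[v]) if labels[u] != labels[v]]
        if not candidates:
            break
        v, u = candidates[0]
        adj[v].discard(u)
        adj[u].discard(v)
    return adj
```

On a small graph with one community of label 0 and one of label 1 joined by adversarial cross edges, the loop removes the edge incident to the detected victim node and stops once no node falls below the disagreement threshold.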
Original language | English |
---|---|
Pages (from-to) | 2476-2489 |
Number of pages | 14 |
Journal | IEEE Transactions on Knowledge and Data Engineering |
Volume | 36 |
Issue number | 6 |
DOIs | |
Publication status | Published - 1 Jun 2024 |
Keywords
- Graph learning and mining
- discrete optimization
- graph adversarial robustness
- victim node detection
ASJC Scopus subject areas
- Information Systems
- Computer Science Applications
- Computational Theory and Mathematics