FocusedCleaner: Sanitizing Poisoned Graphs for Robust GNN-Based Node Classification

Yulin Zhu, Tong Liang, Gaolei Li, Xiapu Luo, Kai Zhou

Research output: Journal article (peer-reviewed)

1 Citation (Scopus)

Abstract

Graph Neural Networks (GNNs) are vulnerable to data poisoning attacks, in which an attacker perturbs the graph that is fed to the GNN model. We present FocusedCleaner, a poisoned-graph sanitizer that effectively identifies the poison injected by attackers. Specifically, FocusedCleaner provides a sanitation framework consisting of two modules: bi-level structural learning and victim node detection. The structural learning module reverses the attack process to steadily sanitize the graph, while the detection module provides the “focus” – a narrowed and more accurate search region – for structural learning. The two modules operate iteratively and reinforce each other to sanitize a poisoned graph step by step. As an important application, we show that the adversarial robustness of GNNs trained on the sanitized graph for node classification is significantly improved. Extensive experiments demonstrate that FocusedCleaner outperforms state-of-the-art baselines in both poisoned-graph sanitation and robustness improvement.
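The alternation the abstract describes (detect victim nodes, then sanitize within that "focus", and repeat) can be sketched as a simple loop. The sketch below is purely illustrative and not the paper's method: the function names, the neighbor-label-disagreement victim score, and the one-edge-per-iteration removal rule are all invented stand-ins for the paper's bi-level structural learning and victim-node detection modules.

```python
# Hypothetical sketch of an iterative detect-then-sanitize loop in the
# spirit of FocusedCleaner. All heuristics below are invented for
# illustration; the actual modules are defined in the paper.

def detect_victims(adj, labels):
    """Toy victim-node detector: flag nodes whose neighbors mostly
    disagree with their own label (a common poisoning symptom)."""
    victims = set()
    for v, nbrs in adj.items():
        if not nbrs:
            continue
        disagree = sum(1 for u in nbrs if labels[u] != labels[v])
        if disagree / len(nbrs) > 0.5:
            victims.add(v)
    return victims

def sanitize_step(adj, labels, victims):
    """Toy structural-learning step: within the detected 'focus',
    remove one cross-label edge (reversing a likely attack edge)."""
    for v in sorted(victims):
        for u in sorted(adj[v]):
            if labels[u] != labels[v]:
                adj[v].remove(u)
                adj[u].remove(v)
                return True  # one edge sanitized this iteration
    return False

def focused_cleaner(adj, labels, max_iters=10):
    """Alternate detection and sanitation until no edge is removed."""
    for _ in range(max_iters):
        victims = detect_victims(adj, labels)
        if not victims or not sanitize_step(adj, labels, victims):
            break
    return adj
```

For example, on a small graph where node 0 (label 0) has adversarial edges to nodes 2 and 3 (label 1), the loop flags node 0 as a victim and strips a cross-label edge, then re-runs detection on the cleaner graph; this mirrors the "reinforce each other, step by step" behavior the abstract describes, though the real system scores edges via bi-level optimization rather than label disagreement.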
Original language: English
Pages (from-to): 2476-2489
Number of pages: 14
Journal: IEEE Transactions on Knowledge and Data Engineering
Volume: 36
Issue number: 6
Publication status: Published - 1 Jun 2024

Keywords

  • Graph learning and mining
  • discrete optimization
  • graph adversarial robustness
  • victim node detection

ASJC Scopus subject areas

  • Information Systems
  • Computer Science Applications
  • Computational Theory and Mathematics
