TY - GEN
T1 - CGP: Centroid-guided Graph Poisoning for Link Inference Attacks in Graph Neural Networks
AU - Tian, Haozhe
AU - Hu, Haibo
AU - Ye, Qingqing
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2024/1
Y1 - 2024/1
AB - Graph Neural Networks (GNNs) are state-of-the-art machine learning models for graph data, on which many modern big data applications rely. However, a GNN's potential leakage of sensitive node relationships (i.e., links) could cause severe user privacy infringements: an attacker might infer sensitive graph links from the posteriors of a GNN. Such attacks are known as graph link inference attacks. While most existing research considers attack settings without malicious users, this work considers the setting in which the attacker establishes some malicious nodes. This setting enables link inference without relying on an estimate of the number of links in the target graph, which significantly enhances the practicality of link inference attacks. This work further proposes centroid-guided graph poisoning (CGP). Without participating in the training process of the target model, CGP operates on links between malicious nodes to make the target model more vulnerable to graph link inference attacks. Experimental results demonstrate that with fewer than 5% malicious nodes, i.e., modifying approximately 0.25% of all links, CGP can increase the F-1 score of graph link inference attacks by up to 4%.
KW - Centroid-guided graph poisoning
KW - graph link inference attacks
KW - graph neural networks
UR - http://www.scopus.com/inward/record.url?scp=85184978527&partnerID=8YFLogxK
DO - 10.1109/BigData59044.2023.10386501
M3 - Conference article published in proceedings or book
AN - SCOPUS:85184978527
T3 - Proceedings - 2023 IEEE International Conference on Big Data, BigData 2023
SP - 554
EP - 561
BT - Proceedings - 2023 IEEE International Conference on Big Data, BigData 2023
A2 - He, Jingrui
A2 - Palpanas, Themis
A2 - Hu, Xiaohua
A2 - Cuzzocrea, Alfredo
A2 - Dou, Dejing
A2 - Slezak, Dominik
A2 - Wang, Wei
A2 - Gruca, Aleksandra
A2 - Lin, Jerry Chun-Wei
A2 - Agrawal, Rakesh
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2023 IEEE International Conference on Big Data, BigData 2023
Y2 - 15 December 2023 through 18 December 2023
ER -