TY - GEN
T1 - LDP-Purifier: Defending against Poisoning Attacks in Local Differential Privacy
AU - Wang, Leixia
AU - Ye, Qingqing
AU - Hu, Haibo
AU - Meng, Xiaofeng
AU - Huang, Kai
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024.
PY - 2024/7
Y1 - 2024/7
N2 - Local differential privacy provides strong user privacy protection but is vulnerable to poisoning attacks launched by malicious users, leading to contaminated estimates. Although various works explore attacks with different manipulation targets, a practical and relatively general defense has remained elusive. In this paper, we address this problem in basic histogram estimation scenarios. We model adversaries as Byzantine users who can collaborate to maximize their attack goals. From the perspective of attackers’ capability, we analyze the impact of poisoning attacks on data utility and introduce a significant threat: the maximal loss attack (MLA). Considering that a high-utility-damage attack would break the smoothness of histograms, we propose a defense method, LDP-Purifier, to sterilize poisoned histograms. Our extensive experiments validate the effectiveness of LDP-Purifier, showing its ability to significantly suppress estimation errors caused by various attacks.
AB - Local differential privacy provides strong user privacy protection but is vulnerable to poisoning attacks launched by malicious users, leading to contaminated estimates. Although various works explore attacks with different manipulation targets, a practical and relatively general defense has remained elusive. In this paper, we address this problem in basic histogram estimation scenarios. We model adversaries as Byzantine users who can collaborate to maximize their attack goals. From the perspective of attackers’ capability, we analyze the impact of poisoning attacks on data utility and introduce a significant threat: the maximal loss attack (MLA). Considering that a high-utility-damage attack would break the smoothness of histograms, we propose a defense method, LDP-Purifier, to sterilize poisoned histograms. Our extensive experiments validate the effectiveness of LDP-Purifier, showing its ability to significantly suppress estimation errors caused by various attacks.
KW - Histogram estimation
KW - LDP
KW - Poisoning attack
UR - http://www.scopus.com/inward/record.url?scp=85209589971&partnerID=8YFLogxK
U2 - 10.1007/978-981-97-5562-2_14
DO - 10.1007/978-981-97-5562-2_14
M3 - Conference article published in proceeding or book
AN - SCOPUS:85209589971
SN - 9789819755615
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 221
EP - 231
BT - Database Systems for Advanced Applications - 29th International Conference, DASFAA 2024, Proceedings
A2 - Onizuka, Makoto
A2 - Lee, Jae-Gil
A2 - Tong, Yongxin
A2 - Xiao, Chuan
A2 - Ishikawa, Yoshiharu
A2 - Lu, Kejing
A2 - Amer-Yahia, Sihem
A2 - Jagadish, H.V.
PB - Springer Science and Business Media Deutschland GmbH
T2 - 29th International Conference on Database Systems for Advanced Applications, DASFAA 2024
Y2 - 2 July 2024 through 5 July 2024
ER -