MUD-PQFed: Towards Malicious User Detection on model corruption in Privacy-preserving Quantized Federated learning

Hua Ma, Qun Li, Yifeng Zheng, Zhi Zhang, Xiaoning Liu, Yansong Gao, Said F. Al-Sarawi, Derek Abbott

Research output: Journal article (Academic research, peer-reviewed)

3 Citations (Scopus)

Abstract

The use of cryptographic privacy-preserving techniques in Federated Learning (FL) inadvertently creates a security dilemma: because local model parameters are encrypted, tampered updates cannot be audited. This work first demonstrates how trivially model corruption attacks can be mounted against privacy-preserving FL. We consider a scenario in which model updates are quantized to reduce communication overhead, so an adversary can corrupt the global model simply by submitting local parameters outside the legitimate range. We then propose MUD-PQFed, a protocol that precisely detects such malicious behavior and enforces fair penalties on malicious clients. By deleting the contributions of detected malicious clients, the utility of the global model is preserved relative to the baseline global model trained in the absence of the corruption attack. Extensive experiments on the MNIST, CIFAR-10, and CelebA benchmark datasets validate the protocol's efficacy in retaining baseline accuracy and its effectiveness in detecting malicious clients in a fine-grained manner.
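To make the attack surface concrete, the sketch below illustrates (in plaintext, with assumed parameters) why a single out-of-range quantized update can dominate a naive aggregate, and how a range check over the legitimate quantization interval can filter the offending client. This is not the paper's actual protocol: MUD-PQFed performs detection on encrypted updates, and the bit width `LEVELS`, clipping bound `CLIP`, and the `quantize`/`dequantize` helpers here are illustrative assumptions.

```python
# Illustrative sketch only; the real MUD-PQFed operates under encryption.
import numpy as np

LEVELS = 2 ** 8            # assumed 8-bit quantization
CLIP = 1.0                 # assumed clipping range for updates, [-CLIP, CLIP]

def quantize(update):
    """Map a real-valued update in [-CLIP, CLIP] to integers in [0, LEVELS - 1]."""
    scaled = (np.clip(update, -CLIP, CLIP) + CLIP) / (2 * CLIP)
    return np.round(scaled * (LEVELS - 1)).astype(np.int64)

def dequantize(q):
    """Inverse mapping back to real values in [-CLIP, CLIP]."""
    return q / (LEVELS - 1) * (2 * CLIP) - CLIP

rng = np.random.default_rng(0)
honest = [quantize(rng.normal(0, 0.1, size=4)) for _ in range(4)]

# A malicious client skips quantize() and submits values far outside
# the legitimate integer range [0, LEVELS - 1].
malicious = np.full(4, 10 * LEVELS, dtype=np.int64)

updates = honest + [malicious]

# Naive aggregation: the single out-of-range update dominates the mean.
corrupted = dequantize(np.mean(updates, axis=0))

# Range-based detection (the plaintext analogue of the detection idea):
# flag clients whose values leave [0, LEVELS - 1], then aggregate the rest.
valid = [u for u in updates if np.all((u >= 0) & (u < LEVELS))]
clean = dequantize(np.mean(valid, axis=0))

print("corrupted aggregate:", corrupted)
print("clean aggregate:   ", clean)
```

Running the sketch shows the corrupted aggregate landing far outside the legitimate value range, while the filtered aggregate stays close to the honest mean, mirroring the paper's observation that deleting detected contributions preserves baseline utility.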

Original language: English
Article number: 103406
Journal: Computers and Security
Volume: 133
DOIs
Publication status: Published - Oct 2023

Keywords

  • Federated learning
  • Model corruption
  • Model poisoning attack
  • Privacy-preserving
  • Quantization

ASJC Scopus subject areas

  • General Computer Science
  • Law
