Abstract
The use of cryptographic privacy-preserving techniques in Federated Learning (FL) inadvertently creates a security dilemma: tampered local model parameters are encrypted and therefore cannot be audited. This work first demonstrates how trivial it is to mount model corruption attacks against privacy-preserving FL. We consider the scenario in which model updates are quantized to reduce communication overhead, so an adversary can corrupt the global model simply by submitting local parameters outside the legitimate range. We then propose MUD-PQFed, a protocol that precisely detects such malicious attacks and enforces fair penalties on malicious clients. By removing the contributions of the detected malicious clients, the global model retains its utility relative to the baseline global model trained without the corruption attack. Extensive experiments on the MNIST, CIFAR-10, and CelebA benchmark datasets validate the protocol's efficacy in retaining baseline accuracy and its effectiveness in detecting malicious clients in a fine-grained manner.
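The attack surface described above can be illustrated with a minimal sketch (this is not the authors' MUD-PQFed protocol): it shows how a single client submitting quantized updates outside the legitimate range can corrupt a plain-average aggregate, and how dropping the detected client's contribution restores utility. The bit-width, client count, and parameter values are illustrative assumptions.

```python
# Minimal sketch, not the MUD-PQFed protocol itself: a malicious client submits
# quantized parameters outside the legitimate range and skews the plain average;
# removing its contribution recovers a clean aggregate.
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim, bits = 10, 4, 8
q_max = 2**bits - 1                      # legitimate quantized range: [0, 255]

# Honest clients quantize small real-valued updates into the legitimate range.
honest = rng.integers(100, 156, size=(n_clients - 1, dim))

# A malicious client ignores the range; under encryption/secret sharing the
# server cannot inspect this value before aggregation.
malicious = np.full((1, dim), 10 * q_max)

updates = np.vstack([honest, malicious])
corrupted_avg = updates.mean(axis=0)     # poisoned global update
clean_avg = honest.mean(axis=0)          # aggregate after removing the detected client

print("corrupted:", corrupted_avg)
print("clean    :", clean_avg)
```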
| Original language | English |
|---|---|
| Article number | 103406 |
| Journal | Computers and Security |
| Volume | 133 |
| DOIs | |
| Publication status | Published - Oct 2023 |
Keywords
- Federated learning
- Model corruption
- Model poisoning attack
- Privacy-preserving
- Quantization
ASJC Scopus subject areas
- General Computer Science
- Law