Camel: Communication-Efficient and Maliciously Secure Federated Learning in the Shuffle Model of Differential Privacy

Shuangqing Xu, Yifeng Zheng, Zhongyun Hua

Research output: Chapter in book / Conference proceeding › Conference article published in proceeding or book › Academic research › peer-review

1 Citation (Scopus)

Abstract

Federated learning (FL) has rapidly become a compelling paradigm that enables multiple clients to jointly train a model by sharing only gradient updates for aggregation, without revealing their local private data. To protect the gradient updates, which can themselves be privacy-sensitive, a line of work has studied local differential privacy (LDP) mechanisms that provide a formal privacy guarantee. With LDP mechanisms, clients locally perturb their gradient updates before sharing them for aggregation. However, such approaches are known to greatly degrade model utility due to heavy noise addition. To enable a better privacy-utility trade-off, a recently emerging trend is to apply the shuffle model of DP in FL, which relies on an intermediate shuffling operation on the perturbed gradient updates to achieve privacy amplification. Following this trend, in this paper we present Camel, a new communication-efficient and maliciously secure FL framework in the shuffle model of DP. Camel departs from existing works by supporting, for the first time, integrity checks for the shuffle computation, achieving security against a malicious adversary. Specifically, Camel builds on the trending cryptographic primitive of secret-shared shuffle, with custom techniques we develop for optimizing system-wide communication efficiency and for lightweight integrity checks that harden the security of server-side computation. In addition, we derive a significantly tighter bound on the privacy loss by analyzing the Rényi differential privacy (RDP) of the overall FL process. Extensive experiments demonstrate that Camel achieves better privacy-utility trade-offs than the state-of-the-art work, with promising performance.
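For intuition, the following is a minimal, illustrative sketch of the shuffle-model pipeline the abstract describes: each client clips and locally perturbs its gradient update, an intermediate shuffler randomly permutes the perturbed updates to unlink them from their senders (the source of privacy amplification), and the server aggregates. The Gaussian mechanism, clipping bound, and noise scale here are assumptions for illustration only; Camel's actual construction instead realizes the shuffle with a secret-shared shuffle hardened by integrity checks.

    # Illustrative sketch of FL in the shuffle model of DP (not Camel's
    # actual protocol). Noise mechanism and parameters are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    def local_perturb(grad, clip=1.0, sigma=0.5):
        # Client side: clip the update's L2 norm, then add Gaussian
        # noise locally before sharing (LDP-style perturbation).
        clipped = grad * min(1.0, clip / max(np.linalg.norm(grad), 1e-12))
        return clipped + rng.normal(0.0, sigma, size=grad.shape)

    def shuffle(updates):
        # Shuffler: uniformly permute updates so the server cannot tell
        # which client sent which (privacy amplification by shuffling).
        return [updates[i] for i in rng.permutation(len(updates))]

    def aggregate(updates):
        # Server side: average the shuffled, perturbed updates.
        return np.mean(updates, axis=0)

    # Toy round: three clients with 4-dimensional gradient updates.
    client_grads = [rng.normal(size=4) for _ in range(3)]
    global_update = aggregate(shuffle([local_perturb(g) for g in client_grads]))
    print(global_update)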

Original language: English
Title of host publication: CCS 2024 - Proceedings of the 2024 ACM SIGSAC Conference on Computer and Communications Security
Publisher: Association for Computing Machinery, Inc
Pages: 243-257
Number of pages: 15
ISBN (Electronic): 9798400706363
DOIs
Publication status: Published - 9 Dec 2024
Event: 31st ACM SIGSAC Conference on Computer and Communications Security, CCS 2024 - Salt Lake City, United States
Duration: 14 Oct 2024 - 18 Oct 2024

Publication series

Name: CCS 2024 - Proceedings of the 2024 ACM SIGSAC Conference on Computer and Communications Security

Conference

Conference: 31st ACM SIGSAC Conference on Computer and Communications Security, CCS 2024
Country/Territory: United States
City: Salt Lake City
Period: 14/10/24 - 18/10/24

Keywords

  • differential privacy
  • federated learning
  • secret sharing

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Computer Science Applications
  • Software
