Harnessing Sparsification in Federated Learning: A Secure, Efficient, and Differentially Private Realization

Research output: Chapter in book / Conference proceeding › Conference article published in proceeding or book › Academic research › peer-review

Abstract

Federated learning (FL) enables multiple clients to jointly train a model by sharing only gradient updates for aggregation instead of raw data. Because many clients transmit very high-dimensional gradient updates, FL is known to suffer from a communication bottleneck. Meanwhile, the gradients shared by clients, as well as the trained model, may be exploited to infer private local datasets, so privacy remains a critical concern in FL. We present Clover, a novel system framework for communication-efficient, secure, and differentially private FL. To tackle the communication bottleneck in FL, Clover follows a standard and widely used approach: top-k gradient sparsification, where each client sparsifies its gradient update so that only the k largest gradients (measured by magnitude) are preserved for aggregation. Clover provides a tailored mechanism built on an increasingly popular distributed-trust setting involving three servers, which enables efficient aggregation of multiple sparse vectors (top-k sparsified gradient updates) into a dense vector while hiding the values and indices of the non-zero elements in each sparse vector. This mechanism outperforms a baseline built on the general distributed ORAM technique by several orders of magnitude in server-side communication and runtime, while also incurring lower client communication cost. We further integrate this mechanism with a lightweight distributed noise generation mechanism to offer differential privacy (DP) guarantees on the trained model. To harden Clover with security against a malicious server, we devise a series of lightweight mechanisms for integrity checks on the server-side computation. Extensive experiments show that Clover can achieve utility comparable to vanilla FL with central DP and no use of top-k sparsification. Meanwhile, achieving malicious security introduces negligible overhead in client-server communication, and only modest overhead in server-side communication and runtime, compared to the semi-honest security counterpart.
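As a concrete illustration of the top-k step described in the abstract, the following minimal NumPy sketch shows plaintext top-k sparsification and a plaintext sum of the resulting sparse updates into a dense vector. The function names and shapes here are illustrative assumptions, not the paper's implementation, and the aggregation is done in the clear purely for clarity; in Clover the same aggregation is carried out obliviously across three servers so that the values and indices of non-zero elements stay hidden.

```python
import numpy as np

def top_k_sparsify(grad: np.ndarray, k: int):
    """Keep only the k largest-magnitude entries of a gradient vector.

    Returns (indices, values) of the retained entries; all other
    entries are treated as zero. Illustrative sketch only.
    """
    flat = grad.ravel()
    # Indices of the k entries with the largest absolute value.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def aggregate_sparse_updates(updates, dim: int) -> np.ndarray:
    """Sum several (indices, values) sparse updates into one dense vector.

    Done in the clear here for illustration; Clover performs this
    aggregation obliviously so servers never see values or indices.
    """
    dense = np.zeros(dim)
    for idx, vals in updates:
        np.add.at(dense, idx, vals)  # accumulate, handling repeated indices
    return dense

# Toy example: three clients, 10-dimensional gradients, k = 3.
rng = np.random.default_rng(0)
client_grads = [rng.standard_normal(10) for _ in range(3)]
updates = [top_k_sparsify(g, k=3) for g in client_grads]
aggregate = aggregate_sparse_updates(updates, dim=10)
print(aggregate)
```

Note the bandwidth motivation: each client ships only k (index, value) pairs instead of the full d-dimensional gradient, so client upload shrinks from d values to roughly 2k, while the server-side result remains a dense d-dimensional vector.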

Original language: English
Title of host publication: CCS 2025 - Proceedings of the 2025 ACM SIGSAC Conference on Computer and Communications Security
Publisher: Association for Computing Machinery, Inc
Pages: 2354-2368
Number of pages: 15
ISBN (Electronic): 9798400715259
DOIs
Publication status: Published - 22 Nov 2025
Event: 32nd ACM SIGSAC Conference on Computer and Communications Security, CCS 2025 - Taipei, Taiwan
Duration: 13 Oct 2025 - 17 Oct 2025

Publication series

Name: CCS 2025 - Proceedings of the 2025 ACM SIGSAC Conference on Computer and Communications Security

Conference

Conference: 32nd ACM SIGSAC Conference on Computer and Communications Security, CCS 2025
Country/Territory: Taiwan
City: Taipei
Period: 13/10/25 - 17/10/25

Keywords

  • differential privacy
  • federated learning
  • secret sharing

ASJC Scopus subject areas

  • Software
  • Computer Networks and Communications
  • Computer Science Applications
