RingSFL: An Adaptive Split Federated Learning Towards Taming Client Heterogeneity

Jinlong Shen, Nan Cheng, Xiucheng Wang, Feng Lyu, Wenchao Xu, Zhi Liu, Khalid Aldubaikhy, Xuemin Shen

Research output: Journal article (academic research, peer-reviewed)

Abstract

Federated learning (FL) has gained increasing attention due to its ability to train models collaboratively while protecting client data privacy. However, vanilla FL cannot adapt to client heterogeneity, leading to degraded training efficiency due to stragglers, and it remains vulnerable to privacy leakage. To address these issues, this article proposes RingSFL, a novel distributed learning scheme that integrates FL with a model split mechanism to adapt to client heterogeneity while maintaining data privacy. In RingSFL, all clients form a ring topology. For each client, instead of training the model locally, the model is split and trained among all clients along the ring in a pre-defined direction. By properly setting the propagation lengths of heterogeneous clients, the straggler effect is mitigated, and the training efficiency of the system is significantly enhanced. Additionally, since the local models are blended, it is less likely that an eavesdropper can obtain the complete model and recover the raw data, thus improving data privacy. Experimental results on both simulation and prototype systems show that RingSFL achieves better convergence performance than benchmark methods on independent and identically distributed (IID) and non-IID datasets, while effectively preventing eavesdroppers from recovering training data.
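The core idea of setting propagation lengths in proportion to client capability, so that each client's model traverses the whole ring with faster clients training more layers, can be sketched as below. This is a minimal illustration under assumed names and an assumed proportional allocation rule; it is not the paper's exact formulation.

```python
# Hypothetical sketch of RingSFL-style propagation-length assignment.
# Function names and the proportional rule are illustrative assumptions.

def assign_propagation_lengths(speeds, total_layers):
    """Split `total_layers` model layers among ring clients in proportion
    to their compute speeds, so faster clients train longer model segments
    and stragglers train shorter ones (mitigating the straggler effect)."""
    total_speed = sum(speeds)
    # Provisional integer shares, floored.
    lengths = [int(total_layers * s / total_speed) for s in speeds]
    # Hand any leftover layers to the fastest clients.
    remainder = total_layers - sum(lengths)
    for i in sorted(range(len(speeds)), key=lambda i: -speeds[i])[:remainder]:
        lengths[i] += 1
    return lengths

def ring_forward_route(start, lengths):
    """For a batch held by client `start`, list the (client, num_layers)
    segments its forward pass traverses along the ring, in order."""
    n = len(lengths)
    return [((start + k) % n, lengths[(start + k) % n]) for k in range(n)]

speeds = [4.0, 2.0, 1.0, 1.0]   # assumed relative compute capabilities
lengths = assign_propagation_lengths(speeds, total_layers=16)
print(lengths)                   # [8, 4, 2, 2]
print(ring_forward_route(1, lengths))  # [(1, 4), (2, 2), (3, 2), (0, 8)]
```

Each client's data thus passes through every other client exactly once per forward pass, so no single node ever holds (or trains) the complete model for another client's data, which is the source of the privacy benefit the abstract describes.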
Original language: English
Pages (from-to): 5462–5478
Journal: IEEE Transactions on Mobile Computing
Volume: 23
Issue number: 5
Publication status: Published, 30 Aug 2023
