TY - JOUR
T1 - RingSFL: An Adaptive Split Federated Learning Towards Taming Client Heterogeneity
AU - Shen, Jinlong
AU - Cheng, Nan
AU - Wang, Xiucheng
AU - Lyu, Feng
AU - Xu, Wenchao
AU - Liu, Zhi
AU - Aldubaikhy, Khalid
AU - Shen, Xuemin
PY - 2023/8/30
Y1 - 2023/8/30
N2 - Federated learning (FL) has gained increasing attention due to its ability to collaboratively train while protecting client data privacy. However, vanilla FL cannot adapt to client heterogeneity, leading to a degradation in training efficiency due to stragglers, and is still vulnerable to privacy leakage. To address these issues, this article proposes RingSFL, a novel distributed learning scheme that integrates FL with a model split mechanism to adapt to client heterogeneity while maintaining data privacy. In RingSFL, all clients form a ring topology. For each client, instead of training the model locally, the model is split and trained among all clients along the ring through a pre-defined direction. By properly setting the propagation lengths of heterogeneous clients, the straggler effect is mitigated, and the training efficiency of the system is significantly enhanced. Additionally, since the local models are blended, it is less likely for an eavesdropper to obtain the complete model and recover the raw data, thus improving data privacy. The experimental results on both simulation and prototype systems show that RingSFL can achieve better convergence performance than benchmark methods on independent and identically distributed (IID) and non-IID datasets, while effectively preventing eavesdroppers from recovering training data.
M3 - Journal article
SN - 1536-1233
VL - 23
SP - 5462
EP - 5478
JO - IEEE Transactions on Mobile Computing
JF - IEEE Transactions on Mobile Computing
IS - 5
ER -