TY - GEN
T1 - Asynchronous Semi-Decentralized Federated Edge Learning for Heterogeneous Clients
AU - Sun, Yuchang
AU - Shao, Jiawei
AU - Mao, Yuyi
AU - Zhang, Jun
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022/8
Y1 - 2022/8
N2 - Federated edge learning (FEEL) has drawn much attention as a privacy-preserving distributed learning framework for mobile edge networks. In this work, we investigate a novel semi-decentralized FEEL (SD-FEEL) architecture where multiple edge servers collaborate to incorporate more data from edge devices in training. Despite the low training latency enabled by fast edge aggregation, the device heterogeneity in computational resources deteriorates the efficiency. This paper proposes an asynchronous training algorithm to overcome this issue in SD-FEEL, where edge servers are allowed to independently set deadlines for the associated client nodes and trigger the model aggregation. To deal with different levels of model staleness, we design a staleness-aware aggregation scheme and analyze its convergence. Simulation results demonstrate the effectiveness of our proposed algorithm in achieving faster convergence and better learning performance than synchronous training.
KW - asynchronous training
KW - device heterogeneity
KW - Federated learning (FL)
KW - mobile edge computing (MEC)
UR - http://www.scopus.com/inward/record.url?scp=85137261910&partnerID=8YFLogxK
U2 - 10.1109/ICC45855.2022.9839045
DO - 10.1109/ICC45855.2022.9839045
M3 - Conference article published in proceeding or book
AN - SCOPUS:85137261910
T3 - IEEE International Conference on Communications
SP - 5196
EP - 5201
BT - ICC 2022 - IEEE International Conference on Communications
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 IEEE International Conference on Communications, ICC 2022
Y2 - 16 May 2022 through 20 May 2022
ER -