TY - GEN
T1 - Client-Edge-Cloud Hierarchical Federated Learning
AU - Liu, Lumin
AU - Zhang, Jun
AU - Song, S. H.
AU - Letaief, Khaled B.
PY - 2020/6
Y1 - 2020/6
N2 - Federated Learning is a collaborative machine learning framework for training a deep learning model without accessing clients' private data. Previous works assume a single central parameter server, located either at the cloud or at the edge. The cloud server can access more data but incurs excessive communication overhead and long latency, while the edge server enjoys more efficient communication with the clients. To combine their advantages, we propose a client-edge-cloud hierarchical Federated Learning system, supported by the HierFAVG algorithm, which allows multiple edge servers to perform partial model aggregation. In this way, the model can be trained faster and better communication-computation trade-offs can be achieved. A convergence analysis is provided for HierFAVG, and the effects of key parameters are investigated, leading to qualitative design guidelines. Empirical experiments verify the analysis and demonstrate the benefits of this hierarchical architecture under different data distribution scenarios. In particular, introducing intermediate edge servers simultaneously reduces the model training time and the energy consumption of the end devices compared to cloud-based Federated Learning.
KW - Edge Learning
KW - Federated Learning
KW - Mobile Edge Computing
UR - http://www.scopus.com/inward/record.url?scp=85089410722&partnerID=8YFLogxK
U2 - 10.1109/ICC40277.2020.9148862
DO - 10.1109/ICC40277.2020.9148862
M3 - Conference article published in proceedings or book
AN - SCOPUS:85089410722
T3 - IEEE International Conference on Communications
BT - 2020 IEEE International Conference on Communications, ICC 2020 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2020 IEEE International Conference on Communications, ICC 2020
Y2 - 7 June 2020 through 11 June 2020
ER -