Abstract
Federated learning is a promising approach to enabling large-scale machine learning across massive numbers of mobile devices without exposing the raw data of users who have strong privacy concerns. Existing work on federated learning strives to accelerate the learning process but ignores energy efficiency, which is critical for resource-constrained mobile devices. In this paper, we propose to improve the energy efficiency of federated learning by lowering the CPU-cycle frequency of mobile devices that are faster than others in the training group. Since all devices are synchronized by iteration, the federated learning speed is preserved as long as the faster devices complete their training no later than the slowest device in each iteration. Based on this idea, we formulate an optimization problem that minimizes the total system cost, defined as a weighted sum of training time and energy consumption. Because of the hardness of the nonlinear constraints and the lack of knowledge of network quality, we design an experience-driven algorithm based on Deep Reinforcement Learning (DRL), which converges to a near-optimal solution without prior knowledge of network quality. Experiments on a small-scale testbed and large-scale simulations are conducted to evaluate the proposed algorithm. The results show that it outperforms the state-of-the-art by up to 40%.
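The core idea can be sketched in a few lines: a fast device only needs enough CPU frequency to finish its local work before the slowest device in the iteration, and the per-device cost trades off time against energy. The following minimal Python sketch is illustrative only; the function names, the `lambda_weight` trade-off parameter, and the cycle/deadline figures are assumptions for exposition, not the paper's notation.

```python
# Illustrative sketch only: names and the lambda_weight parameter are assumed,
# not taken from the paper's formulation.

def device_cost(train_time_s: float, energy_j: float, lambda_weight: float) -> float:
    """Weighted sum of per-iteration training time and energy consumption."""
    return lambda_weight * train_time_s + (1.0 - lambda_weight) * energy_j


def scaled_frequency(workload_cycles: float, slowest_time_s: float,
                     f_max_hz: float) -> float:
    """Lower a fast device's CPU frequency so it finishes no later than the
    slowest device in the iteration, saving energy without slowing training."""
    f_needed = workload_cycles / slowest_time_s  # cycles/second to just meet the deadline
    return min(f_needed, f_max_hz)


if __name__ == "__main__":
    # Example (assumed numbers): a device with 2e9 cycles of local work and a
    # 4 s iteration deadline only needs 0.5 GHz instead of its 2 GHz maximum.
    f = scaled_frequency(2e9, 4.0, 2e9)
    print(f / 1e9, "GHz")  # 0.5 GHz
```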
Original language | English |
---|---|
Pages | 234-243 |
Number of pages | 10 |
DOIs | |
Publication status | Published - May 2020 |
Event | IEEE International Parallel and Distributed Processing Symposium, IPDPS 2020, New Orleans, United States. Duration: 18 May 2020 → 22 May 2020 |
Conference
Conference | IEEE International Parallel and Distributed Processing Symposium, IPDPS 2020 |
---|---|
Country/Territory | United States |
City | New Orleans |
Period | 18/05/20 → 22/05/20 |
Keywords
- deep reinforcement learning
- experience-driven
- federated learning
ASJC Scopus subject areas
- Computer Networks and Communications
- Hardware and Architecture
- Safety, Risk, Reliability and Quality