TY - CPAPER
T1 - Probabilistic Device Scheduling for Over-the-Air Federated Learning
AU - Sun, Yuchang
AU - Lin, Zehong
AU - Mao, Yuyi
AU - Jin, Shi
AU - Zhang, Jun
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023/10
Y1 - 2023/10
AB - Federated learning (FL) is an emerging distributed training scheme in which edge devices collaboratively train a model by uploading model updates instead of private data. To address the communication bottleneck, over-the-air (OTA) computation has been introduced to FL, allowing multiple edge devices to upload their gradient updates concurrently for aggregation. However, OTA computation is plagued by communication errors, which are critically affected by the device selection policy and degrade the performance of the output model. In this paper, we propose a probabilistic device selection scheme, named PO-FL, which effectively enhances the convergence performance of over-the-air FL. Specifically, each device is selected for OTA computation according to a predetermined probability, and its local update is scaled by this probability. By analyzing the convergence of PO-FL, we show that device selection determines convergence through the communication error and the variance of the global update. We then propose a device selection algorithm that jointly considers the channel conditions and gradient update importance of edge devices to optimize their selection probabilities. Experimental results on the MNIST dataset demonstrate that the proposed algorithm converges faster and learns better models than the baselines.
KW - channel awareness
KW - device scheduling
KW - Federated learning (FL)
KW - gradient importance
KW - over-the-air computation (AirComp)
UR - http://www.scopus.com/inward/record.url?scp=85179422409&partnerID=8YFLogxK
U2 - 10.1109/ICCT59356.2023.10419829
DO - 10.1109/ICCT59356.2023.10419829
M3 - Conference article published in proceedings or book
AN - SCOPUS:85179422409
T3 - International Conference on Communication Technology Proceedings, ICCT
SP - 746
EP - 751
BT - 2023 IEEE 23rd International Conference on Communication Technology
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 23rd IEEE International Conference on Communication Technology, ICCT 2023
Y2 - 20 October 2023 through 22 October 2023
ER -