TY - GEN
T1 - Shielding Federated Learning: Mitigating Byzantine Attacks with Less Constraints
AU - Li, Minghui
AU - Wan, Wei
AU - Lu, Jianrong
AU - Hu, Shengshan
AU - Shi, Junyu
AU - Zhang, Leo Yu
AU - Zhou, Man
AU - Zheng, Yifeng
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022/12
Y1 - 2022/12
N2 - Federated learning is a newly emerging distributed learning framework that facilitates the collaborative training of a shared global model among distributed participants while preserving their privacy. However, federated learning systems are vulnerable to Byzantine attacks from malicious participants, who can upload carefully crafted local model updates to degrade the quality of the global model and even leave a backdoor. While this problem has received significant attention recently, current defensive schemes rely heavily on various assumptions, such as a fixed Byzantine model, availability of participants' local data, minority attackers, IID data distribution, etc. To relax these constraints, this paper presents Robust-FL, the first prediction-based Byzantine-robust federated learning scheme that relies on none of these assumptions. The core idea of Robust-FL is to exploit historical global models to construct an estimator, against which local models are filtered through similarity detection. We then cluster local models to adaptively adjust the acceptable differences between the local models and the estimator so that Byzantine users can be identified. Extensive experiments over different datasets show that our approach achieves the following advantages simultaneously: (i) independence of participants' local data, (ii) tolerance of majority attackers, (iii) generalization to variable Byzantine models.
AB - Federated learning is a newly emerging distributed learning framework that facilitates the collaborative training of a shared global model among distributed participants while preserving their privacy. However, federated learning systems are vulnerable to Byzantine attacks from malicious participants, who can upload carefully crafted local model updates to degrade the quality of the global model and even leave a backdoor. While this problem has received significant attention recently, current defensive schemes rely heavily on various assumptions, such as a fixed Byzantine model, availability of participants' local data, minority attackers, IID data distribution, etc. To relax these constraints, this paper presents Robust-FL, the first prediction-based Byzantine-robust federated learning scheme that relies on none of these assumptions. The core idea of Robust-FL is to exploit historical global models to construct an estimator, against which local models are filtered through similarity detection. We then cluster local models to adaptively adjust the acceptable differences between the local models and the estimator so that Byzantine users can be identified. Extensive experiments over different datasets show that our approach achieves the following advantages simultaneously: (i) independence of participants' local data, (ii) tolerance of majority attackers, (iii) generalization to variable Byzantine models.
KW - Byzantine Attacks
KW - Byzantine Robustness
KW - Federated Learning
KW - Privacy Protection
UR - http://www.scopus.com/inward/record.url?scp=85152258212&partnerID=8YFLogxK
U2 - 10.1109/MSN57253.2022.00040
DO - 10.1109/MSN57253.2022.00040
M3 - Conference article published in proceeding or book
AN - SCOPUS:85152258212
T3 - Proceedings - 2022 18th International Conference on Mobility, Sensing and Networking, MSN 2022
SP - 178
EP - 185
BT - Proceedings - 2022 18th International Conference on Mobility, Sensing and Networking, MSN 2022
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 18th International Conference on Mobility, Sensing and Networking, MSN 2022
Y2 - 14 December 2022 through 16 December 2022
ER -