TY - GEN
T1 - Personalized Federated Learning with Contextualized Generalization
AU - Tang, Xueyang
AU - Guo, Song
AU - Guo, Jingcai
N1 - Funding Information:
This research was supported by funding from the Key-Area Research and Development Program of Guangdong Province (No. 2021B0101400003), the Hong Kong RGC Research Impact Fund (No. R5060-19), the General Research Fund (Nos. 152221/19E, 152203/20E, and 152244/21E), the National Natural Science Foundation of China (Grants 61872310 and 62102327), and the Shenzhen Science and Technology Innovation Commission (JCYJ20200109142008673).
Publisher Copyright:
© 2022 International Joint Conferences on Artificial Intelligence. All rights reserved.
PY - 2022/7
Y1 - 2022/7
N2 - Prevalent personalized federated learning (PFL) methods usually pursue a trade-off between personalization and generalization by maintaining a shared global model to guide the training of local models. However, when multiple latent contexts exist across the local datasets, a single global model may transfer deviated context knowledge to some local models. In this paper, we propose a novel concept called contextualized generalization (CG) that provides each client with fine-grained context knowledge, better fitting the local data distributions and facilitating faster model convergence; based on it, we design a PFL framework dubbed CGPFL. We conduct a detailed theoretical analysis that establishes a convergence guarantee and an O(√K) speedup over most existing methods. To quantitatively study the generalization-personalization trade-off, we introduce a 'generalization error' measure and prove that CGPFL achieves a better trade-off than existing solutions. Moreover, the theoretical analysis inspires a heuristic algorithm for finding a near-optimal trade-off in CGPFL. Experimental results on multiple real-world datasets show that our approach surpasses state-of-the-art methods in test accuracy by a significant margin.
UR - http://www.scopus.com/inward/record.url?scp=85137077471&partnerID=8YFLogxK
DO - 10.24963/ijcai.2022/311
M3 - Conference article published in proceedings or book
AN - SCOPUS:85137077471
T3 - IJCAI International Joint Conference on Artificial Intelligence
SP - 2241
EP - 2247
BT - Proceedings of the 31st International Joint Conference on Artificial Intelligence, IJCAI 2022
A2 - De Raedt, Luc
PB - International Joint Conferences on Artificial Intelligence
T2 - 31st International Joint Conference on Artificial Intelligence, IJCAI 2022
Y2 - 23 July 2022 through 29 July 2022
ER -