TY - GEN
T1 - Federated split GANs
AU - Kortoçi, Pranvera
AU - Liang, Yilei
AU - Zhou, Pengyuan
AU - Lee, Lik Hang
AU - Mehrabi, Abbas
AU - Hui, Pan
AU - Tarkoma, Sasu
AU - Crowcroft, Jon
N1 - Publisher Copyright:
© 2022 ACM.
PY - 2022/10/17
Y1 - 2022/10/17
N2 - Mobile devices and the immense amount and variety of data they generate are key enablers of machine learning (ML)-based applications. Traditional ML techniques have shifted toward new paradigms such as federated learning (FL) and split learning (SL) to improve the protection of users' data privacy. However, SL often relies on server(s) located in the edge or cloud to train computationally heavy parts of an ML model to avoid draining the limited resources of client devices, potentially exposing device data to such third parties. This work proposes an alternative approach that trains computationally heavy ML models on users' devices themselves, where the corresponding device data resides. Specifically, we focus on generative adversarial networks (GANs) and leverage their network architecture to preserve data privacy. We train the discriminative part of a GAN on users' devices with their data, whereas the generative model is trained remotely (e.g., on a server), which requires no access to the devices' true data. Moreover, our approach ensures that the computational load of training the discriminative model is shared among users' devices, proportional to their computational capabilities, by means of SL. We implement our proposed collaborative training scheme of a computationally heavy GAN model on simulated resource-constrained devices. The results show that our system preserves data privacy, keeps training time short, and yields the same model accuracy as when the model is trained on devices with unconstrained resources (e.g., in the cloud).
AB - Mobile devices and the immense amount and variety of data they generate are key enablers of machine learning (ML)-based applications. Traditional ML techniques have shifted toward new paradigms such as federated learning (FL) and split learning (SL) to improve the protection of users' data privacy. However, SL often relies on server(s) located in the edge or cloud to train computationally heavy parts of an ML model to avoid draining the limited resources of client devices, potentially exposing device data to such third parties. This work proposes an alternative approach that trains computationally heavy ML models on users' devices themselves, where the corresponding device data resides. Specifically, we focus on generative adversarial networks (GANs) and leverage their network architecture to preserve data privacy. We train the discriminative part of a GAN on users' devices with their data, whereas the generative model is trained remotely (e.g., on a server), which requires no access to the devices' true data. Moreover, our approach ensures that the computational load of training the discriminative model is shared among users' devices, proportional to their computational capabilities, by means of SL. We implement our proposed collaborative training scheme of a computationally heavy GAN model on simulated resource-constrained devices. The results show that our system preserves data privacy, keeps training time short, and yields the same model accuracy as when the model is trained on devices with unconstrained resources (e.g., in the cloud).
KW - federated learning
KW - GAN
KW - split learning
UR - http://www.scopus.com/inward/record.url?scp=85141243986&partnerID=8YFLogxK
U2 - 10.1145/3556557.3557953
DO - 10.1145/3556557.3557953
M3 - Conference article published in proceeding or book
AN - SCOPUS:85141243986
T3 - FedEdge 2022 - Proceedings of the 2022 1st ACM Workshop on Data Privacy and Federated Learning Technologies for Mobile Edge Network
SP - 25
EP - 30
BT - FedEdge 2022 - Proceedings of the 2022 1st ACM Workshop on Data Privacy and Federated Learning Technologies for Mobile Edge Network
PB - Association for Computing Machinery, Inc
T2 - 1st ACM Workshop on Data Privacy and Federated Learning Technologies for Mobile Edge Network, FedEdge 2022
Y2 - 17 October 2022
ER -