Abstract
Multi-task learning (MTL) trains a single model on multiple related tasks simultaneously and learns a representation shared across them. While MTL improves generalization over training each task in isolation, it also carries a higher privacy risk than traditional single-task learning, because more sensitive information is extracted and learned in a correlated manner. Unfortunately, very few works have attempted to address the privacy risks posed by MTL. In this paper, we first investigate these risks by designing a model extraction attack (MEA) and a membership inference attack (MIA) against MTL. We then evaluate the privacy risks on six MTL model architectures and two popular MTL datasets; the results show that both the number of tasks and the complexity of the training data play an important role in attack performance. Our investigation shows that MTL is more vulnerable than traditional single-task learning under both attacks.
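The abstract does not spell out the attack mechanics, so as a rough illustration the sketch below shows a hard-parameter-sharing MTL model (one shared encoder, one head per task) together with a generic confidence-thresholding membership inference test. This is not the paper's attack: the architecture, the `MultiTaskNet` and `confidence_mia` names, and the fixed `threshold` are all illustrative assumptions; in practice the threshold would be calibrated, e.g. on shadow models.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Hard-parameter-sharing MTL: one shared encoder, one head per task."""

    def __init__(self, in_dim, hidden_dim, task_out_dims):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, d) for d in task_out_dims]
        )

    def forward(self, x):
        z = self.encoder(x)                       # shared representation
        return [head(z) for head in self.heads]  # one logit vector per task

@torch.no_grad()
def confidence_mia(model, x, threshold=0.9):
    """Flag x as a training member if any task head is overconfident.

    A generic confidence-thresholding MIA sketch, not the paper's exact
    attack; `threshold` is an illustrative assumption.
    """
    logits_per_task = model(x)
    # Take each sample's maximum softmax confidence across all task heads.
    max_conf = torch.stack(
        [torch.softmax(l, dim=-1).max(dim=-1).values for l in logits_per_task]
    ).max(dim=0).values
    return max_conf > threshold  # True -> predicted "member"

# Toy usage: two tasks (4-class and 2-class) over the same 16-d inputs.
model = MultiTaskNet(in_dim=16, hidden_dim=32, task_out_dims=[4, 2])
x = torch.randn(8, 16)
print(confidence_mia(model, x))
```

The intuition this sketch captures is the one the abstract hints at: because the shared encoder is fit to several tasks' labels for the same training examples, an adversary who can query all heads gets multiple correlated confidence signals per example, which is one plausible reason MTL leaks more membership information than a single-task model.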
| Original language | English |
|---|---|
| Article number | 10050399 |
| Pages (from-to) | 1-12 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Dependable and Secure Computing |
| Publication status | Accepted/In press - 2023 |
Keywords
- Membership inference attacks
- model extraction attacks
- multi-task learning
- privacy threat
ASJC Scopus subject areas
- General Computer Science
- Electrical and Electronic Engineering