MTL-Leak: Privacy Risk Assessment in Multi-Task Learning

Hongyang Yan, Anli Yan, Li Hu, Jiaming Liang, Haibo Hu

Research output: Journal article (peer-reviewed, academic research)

1 Citation (Scopus)

Abstract

Multi-task learning (MTL) supports simultaneous training over multiple related tasks and learns a shared representation. While it improves generalization over training on a single task, MTL carries a higher privacy risk than traditional single-task learning because more sensitive information is extracted and learned in a correlated manner. Unfortunately, very few works have attempted to address the privacy risks posed by MTL. In this paper, we first investigate these risks by designing model extraction attacks (MEAs) and membership inference attacks (MIAs) for MTL. We then evaluate the privacy risks on six MTL model architectures and two popular MTL datasets; the results show that both the number of tasks and the complexity of the training data play an important role in attack performance. Our investigation shows that MTL is more vulnerable than traditional single-task learning under both attacks.
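To make the threat model concrete, the following is a minimal sketch of a confidence-thresholding membership inference attack of the general kind the abstract refers to. It is not the paper's method: the synthetic confidence distributions, the threshold value, and all function names are illustrative assumptions. The intuition is only that a trained model tends to be more confident on its training members than on unseen points.

```python
import numpy as np

# Illustrative sketch only: NOT the attack from the paper.
# We simulate a target model's max-softmax confidences; members
# (training points) are assumed to be predicted with higher
# confidence than non-members.
rng = np.random.default_rng(0)
member_conf = rng.beta(8, 2, size=1000)      # skewed toward 1.0
nonmember_conf = rng.beta(4, 4, size=1000)   # centered near 0.5

def mia_predict(confidences, threshold=0.7):
    """Guess 'member' whenever the model's confidence exceeds the threshold."""
    return confidences > threshold

tp = mia_predict(member_conf).mean()         # true-positive rate
fp = mia_predict(nonmember_conf).mean()      # false-positive rate
attack_acc = 0.5 * (tp + (1 - fp))           # balanced attack accuracy
print(f"balanced attack accuracy: {attack_acc:.2f}")
```

On these synthetic distributions the attack accuracy lands well above the 0.5 random-guess baseline, which is the sense in which a model "leaks" membership; the paper's contribution is assessing how much larger this gap becomes in the multi-task setting.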

Original language: English
Article number: 10050399
Pages (from-to): 1-12
Number of pages: 12
Journal: IEEE Transactions on Dependable and Secure Computing
Publication status: Accepted/In press - 2023

Keywords

  • Membership inference attacks
  • model extraction attacks
  • multi-task learning
  • privacy threat

ASJC Scopus subject areas

  • General Computer Science
  • Electrical and Electronic Engineering
