VAE-Based Membership Cleanser Against Membership Inference Attacks

Abstract
Membership inference attacks (MIAs) compromise the privacy of training data by querying a victim machine learning model and inferring whether a given sample was part of its training set. Existing defenses against MIAs include preprocessing the model's training data, modifying loss functions, and perturbing the inference output. However, all of these mechanisms must alter either the training or the inference process, which may be out of the defender's reach, especially when models are deployed in a third-party cloud service. In this paper, we propose preprocessing query samples before they are fed into the model for inference. Specifically, we design a Membership Cleanser module that removes membership information from a query sample by moving it closer to the non-member region of the feature space. Because the Membership Cleanser modifies neither the training nor the inference process of the machine learning model, it can be applied to any machine learning system. Through extensive evaluation on four datasets against different models, our approach consistently outperforms state-of-the-art defense mechanisms in resilience and practicality against various MIAs while retaining good inference accuracy.
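The core idea above — preprocess only the query, leave the model alone — can be sketched in a few lines. This is a minimal, hypothetical illustration, not the paper's implementation: simple linear maps stand in for the VAE encoder/decoder, and the names `MembershipCleanser`, `nonmember_center`, and `alpha` are illustrative assumptions rather than terms from the paper.

```python
import numpy as np

class MembershipCleanser:
    """Hypothetical sketch of the query-preprocessing idea: encode the
    query sample, pull its latent code toward a non-member reference
    point, and decode it back before it ever reaches the victim model.
    Linear maps stand in for the paper's VAE encoder/decoder."""

    def __init__(self, enc, dec, nonmember_center, alpha=0.5):
        self.enc = enc                   # encoder matrix (d_latent x d_in)
        self.dec = dec                   # decoder matrix (d_in x d_latent)
        self.center = nonmember_center   # latent centroid of non-member data (assumed known)
        self.alpha = alpha               # interpolation strength toward the non-member region

    def clean(self, x):
        z = self.enc @ x                                      # encode the query sample
        z = (1 - self.alpha) * z + self.alpha * self.center   # shift latent code toward non-members
        return self.dec @ z                                   # decode back to input space

def protected_predict(model, cleanser, x):
    # The victim model itself is untouched; only the query is preprocessed.
    return model(cleanser.clean(x))
```

The point of the wrapper is that `model` can be any black-box inference endpoint: the defense sits entirely in front of it, which is why no change to training or serving is required.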
| Original language | English |
|---|---|
| Pages (from-to) | 1-12 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Dependable and Secure Computing |
| DOIs | |
| Publication status | Published - Jul 2024 |
Keywords
- Data models
- Data privacy
- Decoding
- Defense
- membership inference attack
- non-member representation space
- Optimization
- Standards
- Training
- Training data
ASJC Scopus subject areas
- General Computer Science
- Electrical and Electronic Engineering
Fingerprint
Dive into the research topics of 'VAE-Based Membership Cleanser Against Membership Inference Attacks'. Together they form a unique fingerprint.