TY - GEN
T1 - MExMI: Pool-based Active Model Extraction Crossover Membership Inference
AU - Xiao, Yaxin
AU - Ye, Qingqing
AU - Hu, Haibo
AU - Zheng, Huadi
AU - Fang, Chengfang
AU - Shi, Jie
N1 - Funding Information:
This work was supported by National Natural Science Foundation of China (Grant No: 62072390, 62102334), the Research Grants Council, Hong Kong SAR, China (Grant No: 15222118, 15218919, 15203120, 15226221, 15225921, and C2004-21GF), and a Huawei research grant (TC20200831001).
Publisher Copyright:
© 2022 Neural Information Processing Systems Foundation. All rights reserved.
PY - 2022/12
Y1 - 2022/12
AB - With the increasing popularity of Machine Learning as a Service (MLaaS), ML models trained from public and proprietary data are deployed in the cloud and deliver prediction services to users. However, as the prediction API becomes a new attack surface, growing concerns have arisen over the confidentiality of ML models. Existing literature shows their vulnerability to model extraction (ME) attacks, while their private training data is vulnerable to another type of attack, namely membership inference (MI). In this paper, we show that ME and MI can reinforce each other through a chained and iterative reaction, which can significantly boost ME attack accuracy and improve MI by reducing the query cost. To this end, we build MExMI, a framework for pool-based active model extraction (PAME) that exploits MI through three modules: “MI Pre-Filter”, “MI Post-Filter”, and “semi-supervised boosting”. Experimental results show that MExMI improves fidelity by up to 11.14% over the best-known PAME attack and reaches 94.07% fidelity with only 16k queries. Furthermore, the accuracy, precision, and recall of the MI attack in MExMI are on par with the state-of-the-art MI attack, which needs 150k queries.
UR - http://www.scopus.com/inward/record.url?scp=85163166696&partnerID=8YFLogxK
M3 - Conference article published in proceeding or book
AN - SCOPUS:85163166696
T3 - Advances in Neural Information Processing Systems
SP - 1
EP - 14
BT - Advances in Neural Information Processing Systems 35 - 36th Conference on Neural Information Processing Systems, NeurIPS 2022
A2 - Koyejo, S.
A2 - Mohamed, S.
A2 - Agarwal, A.
A2 - Belgrave, D.
A2 - Cho, K.
A2 - Oh, A.
PB - Neural Information Processing Systems Foundation
T2 - 36th Conference on Neural Information Processing Systems, NeurIPS 2022
Y2 - 28 November 2022 through 9 December 2022
ER -