TY - GEN
T1 - A Tale of Evil Twins: Adversarial Inputs versus Poisoned Models
AU - Pang, Ren
AU - Shen, Hua
AU - Zhang, Xinyang
AU - Ji, Shouling
AU - Vorobeychik, Yevgeniy
AU - Luo, Xiapu
AU - Liu, Alex
AU - Wang, Ting
N1 - Funding Information:
We thank our shepherd Xiangyu Zhang and the anonymous reviewers for their valuable feedback. This material is based upon work supported by the National Science Foundation under Grant Nos. 1910546, 1953813, and 1846151. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. S. Ji was partly supported by NSFC under Nos. U1936215, 61772466, and U1836202, the National Key Research and Development Program of China under No. 2018YFB0804102, the Zhejiang Provincial Natural Science Foundation for Distinguished Young Scholars under No. LR19F020003, the Zhejiang Provincial Key R&D Program under No. 2019C01055, and the Ant Financial Research Funding. X. Luo was partly supported by HK RGC Project (PolyU 152239/18E) and HK PolyU Research Grant (ZVQ8).
Publisher Copyright:
© 2020 ACM.
PY - 2020/10/30
Y1 - 2020/10/30
N2 - Despite their tremendous success in a range of domains, deep learning systems are inherently susceptible to two types of manipulations: adversarial inputs - maliciously crafted samples that deceive target deep neural network (DNN) models, and poisoned models - adversely forged DNNs that misbehave on pre-defined inputs. While prior work has intensively studied the two attack vectors in parallel, there is still a lack of understanding about their fundamental connections: What are the dynamic interactions between the two attack vectors? What are the implications of such interactions for optimizing existing attacks? What are the potential countermeasures against the enhanced attacks? Answering these key questions is crucial for assessing and mitigating the holistic vulnerabilities of DNNs deployed in realistic settings. Here we take a solid step towards this goal by conducting the first systematic study of the two attack vectors within a unified framework. Specifically, (i) we develop a new attack model that jointly optimizes adversarial inputs and poisoned models; (ii) with both analytical and empirical evidence, we reveal that there exist intriguing "mutual reinforcement" effects between the two attack vectors - leveraging one vector significantly amplifies the effectiveness of the other; (iii) we demonstrate that such effects enable a large design spectrum for the adversary to enhance the existing attacks that exploit both vectors (e.g., backdoor attacks), such as maximizing the attack evasiveness with respect to various detection methods; (iv) finally, we discuss potential countermeasures against such optimized attacks and their technical challenges, pointing to several promising research directions.
AB - Despite their tremendous success in a range of domains, deep learning systems are inherently susceptible to two types of manipulations: adversarial inputs - maliciously crafted samples that deceive target deep neural network (DNN) models, and poisoned models - adversely forged DNNs that misbehave on pre-defined inputs. While prior work has intensively studied the two attack vectors in parallel, there is still a lack of understanding about their fundamental connections: What are the dynamic interactions between the two attack vectors? What are the implications of such interactions for optimizing existing attacks? What are the potential countermeasures against the enhanced attacks? Answering these key questions is crucial for assessing and mitigating the holistic vulnerabilities of DNNs deployed in realistic settings. Here we take a solid step towards this goal by conducting the first systematic study of the two attack vectors within a unified framework. Specifically, (i) we develop a new attack model that jointly optimizes adversarial inputs and poisoned models; (ii) with both analytical and empirical evidence, we reveal that there exist intriguing "mutual reinforcement" effects between the two attack vectors - leveraging one vector significantly amplifies the effectiveness of the other; (iii) we demonstrate that such effects enable a large design spectrum for the adversary to enhance the existing attacks that exploit both vectors (e.g., backdoor attacks), such as maximizing the attack evasiveness with respect to various detection methods; (iv) finally, we discuss potential countermeasures against such optimized attacks and their technical challenges, pointing to several promising research directions.
KW - adversarial attack
KW - backdoor attack
KW - trojaning attack
UR - http://www.scopus.com/inward/record.url?scp=85094742924&partnerID=8YFLogxK
U2 - 10.1145/3372297.3417253
DO - 10.1145/3372297.3417253
M3 - Conference article published in proceeding or book
AN - SCOPUS:85094742924
T3 - Proceedings of the ACM Conference on Computer and Communications Security
SP - 85
EP - 99
BT - CCS 2020 - Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security
PB - Association for Computing Machinery
T2 - 27th ACM SIGSAC Conference on Computer and Communications Security, CCS 2020
Y2 - 9 November 2020 through 13 November 2020
ER -