TY - GEN
T1 - Multi-label Adversarial Perturbations
AU - Song, Qingquan
AU - Jin, Haifeng
AU - Huang, Xiao
AU - Hu, Xia
PY - 2018/12/27
Y1 - 2018/12/27
N2 - Adversarial examples are delicately perturbed inputs that aim to mislead machine learning models into producing incorrect outputs. While existing work focuses on generating adversarial perturbations for multiclass classification problems, many real-world applications fall into the multi-label setting, in which one instance can be associated with more than one label. To analyze the vulnerability and robustness of multi-label learning models, we investigate the generation of multi-label adversarial perturbations. This is a challenging task due to the uncertain number of positive labels associated with one instance and the fact that multiple labels are usually not mutually exclusive. To bridge the gap, in this paper, we propose a general attacking framework targeting the multi-label classification problem and conduct a preliminary analysis of the perturbations for deep neural networks. Leveraging the ranking relationships among labels, we further design a ranking-based framework to attack multi-label ranking algorithms. Experiments on two different datasets demonstrate the effectiveness of the proposed frameworks and provide insights into the vulnerability of multi-label deep models under diverse targeted attacks.
KW - Adversarial attack
KW - Adversarial machine learning
KW - Multi-label learning
UR - http://www.scopus.com/inward/record.url?scp=85061378038&partnerID=8YFLogxK
U2 - 10.1109/ICDM.2018.00166
DO - 10.1109/ICDM.2018.00166
M3 - Conference article published in proceeding or book
AN - SCOPUS:85061378038
T3 - Proceedings - IEEE International Conference on Data Mining, ICDM
SP - 1242
EP - 1247
BT - 2018 IEEE International Conference on Data Mining, ICDM 2018
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 18th IEEE International Conference on Data Mining, ICDM 2018
Y2 - 17 November 2018 through 20 November 2018
ER -