Abstract
Collective learning methods exploit relations among data points to enhance classification performance. However, such relations, represented as edges in the underlying graphical model, expose an extra attack surface to adversaries. We study the adversarial robustness of an important class of such graphical models, Associative Markov Networks (AMN), under structural attacks, where an attacker can modify the graph structure at test time. We formulate the task of learning a robust AMN classifier as a bi-level program, whose inner problem is a challenging non-linear integer program that computes optimal structural changes to the AMN. To address this technical challenge, we first relax the attacker's problem, and then use duality to obtain a convex quadratic upper bound on the robust AMN objective. We then prove a bound on the quality of the resulting approximately optimal solutions, and experimentally demonstrate the efficacy of our approach. Finally, we apply our approach in a transductive learning setting, and show that robust AMN is much more resilient to attack than state-of-the-art deep learning methods, while sacrificing little accuracy on non-adversarial data.
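As a rough orientation only (this is a generic sketch, not the paper's notation), the bi-level structure described in the abstract can be written with a hypothetical parameter vector $w$, observed graph $G$, labels $y$, loss $\mathcal{L}$, and an assumed attacker budget $B$ on edge modifications:

$$
\min_{w} \; \max_{\hat{G} \in \mathcal{A}_B(G)} \; \mathcal{L}\bigl(w;\, \hat{G}, y\bigr),
\qquad
\mathcal{A}_B(G) \;=\; \bigl\{\hat{G} \;:\; \hat{G} \text{ differs from } G \text{ in at most } B \text{ edges}\bigr\}.
$$

Here the inner maximization plays the role of the attacker's integer program over structural changes; per the abstract, relaxing its integrality constraints and dualizing the relaxed inner problem yields a single convex quadratic minimization that upper-bounds the robust objective.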
Original language | English |
---|---|
Pages | 250-259 |
Number of pages | 10 |
Publication status | Published - 2020 |
Externally published | Yes |
Event | 36th Conference on Uncertainty in Artificial Intelligence, UAI 2020 (Virtual, Online), 3 Aug 2020 → 6 Aug 2020 |
Conference
Conference | 36th Conference on Uncertainty in Artificial Intelligence, UAI 2020 |
---|---|
City | Virtual, Online |
Period | 3/08/20 → 6/08/20 |
ASJC Scopus subject areas
- Artificial Intelligence