Abstract
Adversarial training (AT) has become the de facto standard for obtaining models that are robust against adversarial examples. However, AT exhibits severe robust overfitting. In practice, this leads to poor robust generalization, i.e., adversarial robustness does not generalize well to new examples. In this talk, I want to present our work on the relationship between robust generalization and flatness of the robust loss landscape in weight space. I will propose average- and worst-case metrics to measure flatness in the robust loss landscape and show a correlation between good robust generalization and flatness. For example, throughout training, flatness reduces significantly during overfitting, such that early stopping effectively finds flatter minima in the robust loss landscape. Similarly, AT variants achieving higher adversarial robustness also correspond to flatter minima. This holds for many popular choices, e.g., AT-AWP, TRADES, MART, AT with self-supervision or additional unlabeled examples, as well as simple regularization techniques such as AutoAugment, weight decay, or label noise.
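As a rough illustration of the average-case flatness metric mentioned above, the sketch below compares the robust loss at the trained weights with its expectation under small random weight perturbations; a worst-case variant would instead maximize the robust loss over such perturbations. This is only a minimal sketch, not the paper's implementation: the FGSM-based robust loss (a cheap stand-in for PGD), the toy model and data, and the names `fgsm_robust_loss`, `average_case_flatness`, and the scale parameter `xi` are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code): average-case flatness of the robust
# loss landscape, measured as the expected increase of the robust loss under
# random weight perturbations around the trained weights.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_robust_loss(model, x, y, eps=8 / 255):
    """Robust cross-entropy loss using a single FGSM step (a cheap stand-in
    for the PGD attack typically used in adversarial training)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    x_adv = (x_adv + eps * grad.sign()).clamp(0, 1).detach()
    return F.cross_entropy(model(x_adv), y)

def average_case_flatness(model, x, y, xi=0.5, n_samples=10):
    """Average increase of the robust loss under random weight perturbations.
    Each layer's perturbation is scaled relative to that layer's norm."""
    base_loss = fgsm_robust_loss(model, x, y).item()
    increases = []
    for _ in range(n_samples):
        perturbed = copy.deepcopy(model)
        with torch.no_grad():
            for p in perturbed.parameters():
                noise = torch.randn_like(p)
                noise = noise / (noise.norm() + 1e-12) * xi * p.norm()
                p.add_(noise)
        increases.append(fgsm_robust_loss(perturbed, x, y).item() - base_loss)
    return sum(increases) / len(increases)

if __name__ == "__main__":
    # Toy model and random data purely for demonstration.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128),
                          nn.ReLU(), nn.Linear(128, 10))
    x = torch.rand(16, 3, 32, 32)
    y = torch.randint(0, 10, (16,))
    print("average-case flatness:", average_case_flatness(model, x, y))
```

Scaling each perturbation relative to the corresponding layer's norm is one common way to make such flatness measures comparable across layers of different magnitude; smaller values indicate a flatter robust loss landscape around the current weights.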
Slides
Paper covered: David Stutz, Matthias Hein, Bernt Schiele. Relating Adversarially Robust Generalization to Flat Minima. ArXiv, 2021.
Recording
Due to German data protection laws, it is currently not possible to include the video directly, so please head over to YouTube using the link below!
Watch on YouTube