ARTICLE

Machine Learning Security Seminar Talk “Relating Adversarially Robust Generalization to Flat Minima”

This week I was honored to speak at the Machine Learning Security Seminar organized by the Pattern Recognition and Applications Lab at the University of Cagliari. I presented my work on relating adversarial robustness to flatness in the robust loss landscape, also touching on its relationship to weight robustness. In this article, I want to share the recording and slides of the talk.

Abstract

Adversarial training (AT) has become the de facto standard for obtaining models robust against adversarial examples. However, AT exhibits severe robust overfitting. In practice, this leads to poor robust generalization, i.e., adversarial robustness does not generalize well to new examples. In this talk, I present our work on the relationship between robust generalization and flatness of the robust loss landscape in weight space. I propose average- and worst-case metrics to measure flatness in the robust loss landscape and show a correlation between good robust generalization and flatness. For example, throughout training, flatness decreases significantly during overfitting, such that early stopping effectively finds flatter minima in the robust loss landscape. Similarly, AT variants achieving higher adversarial robustness also correspond to flatter minima. This holds for many popular choices, e.g., AT-AWP, TRADES, MART, AT with self-supervision or additional unlabeled examples, as well as simple regularization techniques, e.g., AutoAugment, weight decay, or label noise.
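To give a rough idea of the two flatness notions mentioned above, here is a toy sketch in NumPy. This is not the paper's exact formulation (which measures flatness of the *robust* loss within a relative weight perturbation ball); it simply illustrates average-case flatness (mean loss increase under random weight perturbations) and a sampled stand-in for worst-case flatness (maximum loss increase over perturbations), on two hypothetical quadratic losses with a sharp and a flat minimum:

```python
import numpy as np

def average_flatness(loss, w, xi=0.5, n_samples=100, seed=0):
    """Average-case flatness (toy version): mean increase in loss
    under random weight perturbations with L2 norm at most xi."""
    rng = np.random.default_rng(seed)
    base = loss(w)
    increases = []
    for _ in range(n_samples):
        d = rng.normal(size=w.shape)
        # Sample uniformly in the L2 ball of radius xi.
        d = d / np.linalg.norm(d) * xi * rng.uniform() ** (1.0 / w.size)
        increases.append(loss(w + d) - base)
    return float(np.mean(increases))

def worst_flatness(loss, w, xi=0.5, n_samples=100, seed=0):
    """Worst-case flatness (toy version): maximum loss increase over
    sampled perturbations on the sphere of radius xi -- a crude
    stand-in for the inner maximization over the weight ball."""
    rng = np.random.default_rng(seed)
    base = loss(w)
    worst = -np.inf
    for _ in range(n_samples):
        d = rng.normal(size=w.shape)
        d = d / np.linalg.norm(d) * xi
        worst = max(worst, loss(w + d) - base)
    return float(worst)

# Hypothetical losses: a sharp and a flat quadratic minimum at w = 0.
sharp = lambda w: 10.0 * np.sum(w ** 2)
flat = lambda w: 0.1 * np.sum(w ** 2)
w0 = np.zeros(5)

# Both metrics assign larger values to the sharper minimum.
print(average_flatness(sharp, w0) > average_flatness(flat, w0))  # True
print(worst_flatness(sharp, w0) > worst_flatness(flat, w0))      # True
```

In the paper, the perturbation radius is scaled relative to the per-layer weight norms and the loss is the adversarial (robust) loss, so these raw L2-ball measurements should only be read as an illustration of the concept.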

Slides

Paper covered:
David Stutz, Matthias Hein, Bernt Schiele. Relating Adversarially Robust Generalization to Flat Minima. arXiv, 2021.

Recording

What is your opinion on this article? Did you find it interesting or useful? Let me know your thoughts in the comments below.