
Code Released: Adversarial Robust Generalization and Flatness

The code for my ICCV’21 paper relating adversarial robustness to flatness in the (robust) loss landscape is now available on GitHub. The repository includes implementations of various adversarial attacks, adversarial training variants, and “attacks” on model weights used to measure robust flatness.
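
To illustrate what such a weight-space “attack” might look like, here is a minimal PyTorch sketch that estimates flatness as the increase in robust loss under a random, layer-wise scaled weight perturbation. This is not the repository’s actual API: `model`, the data `loader`, the `attack` callable, and the scaling factor `xi` are all assumed for illustration.

import copy
import torch

def robust_loss(model, loader, attack, device="cpu"):
    # Average cross-entropy loss on adversarial examples generated by `attack`.
    model.eval()
    total, count = 0.0, 0
    for inputs, targets in loader:
        inputs, targets = inputs.to(device), targets.to(device)
        adv_inputs = attack(model, inputs, targets)  # e.g., a PGD attack
        with torch.no_grad():
            loss = torch.nn.functional.cross_entropy(model(adv_inputs), targets)
        total += loss.item() * inputs.size(0)
        count += inputs.size(0)
    return total / count

def random_flatness(model, loader, attack, xi=0.5, device="cpu"):
    # Flatness estimate: robust loss increase after randomly perturbing
    # the weights, with the noise scaled relative to each layer's norm
    # so perturbation size is comparable across layers.
    reference = robust_loss(model, loader, attack, device)
    perturbed = copy.deepcopy(model)
    with torch.no_grad():
        for param in perturbed.parameters():
            noise = torch.randn_like(param)
            noise *= xi * param.norm() / (noise.norm() + 1e-12)
            param.add_(noise)
    return robust_loss(perturbed, loader, attack, device) - reference

Averaging this quantity over several random draws of the noise gives a more stable estimate; beyond such random (average-case) perturbations, the repository also covers adversarial “attacks” on the weights for the worst-case view.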

Introduction

Adversarial training, i.e., training on adversarial examples generated on-the-fly, has been shown to suffer from severe (robust) overfitting. This means that robustness on test examples does not improve monotonically throughout training but starts to deteriorate at some point. As a result, early stopping needs to be applied to obtain state-of-the-art adversarial robustness. Recently, it was argued that robust overfitting can be avoided by encouraging flat minima in the (robust) loss landscape with respect to weight perturbations. My ICCV'21 paper empirically confirms this hypothesis by showing that flatness consistently improves adversarial robustness.
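
For context, below is a minimal PyTorch sketch of adversarial training with an L-infinity PGD attack, i.e., training on adversarial examples generated on-the-fly. The hyper-parameters (epsilon, step size, number of steps) are illustrative defaults for images in [0, 1], not the paper’s settings.

import torch
import torch.nn.functional as F

def pgd_attack(model, inputs, targets, epsilon=8/255, alpha=2/255, steps=7):
    # L-infinity PGD: iteratively maximize the loss within an epsilon-ball
    # around the inputs, starting from a random point inside the ball.
    adv = inputs + torch.empty_like(inputs).uniform_(-epsilon, epsilon)
    adv = adv.clamp(0, 1).detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), targets)
        grad = torch.autograd.grad(loss, adv)[0]
        # gradient ascent step, then projection back onto the epsilon-ball
        adv = adv.detach() + alpha * grad.sign()
        adv = inputs + (adv - inputs).clamp(-epsilon, epsilon)
        adv = adv.clamp(0, 1).detach()
    return adv

def adversarial_training_epoch(model, loader, optimizer, device="cpu"):
    # One epoch of training on adversarial examples generated on-the-fly.
    model.train()
    for inputs, targets in loader:
        inputs, targets = inputs.to(device), targets.to(device)
        adv_inputs = pgd_attack(model, inputs, targets)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(adv_inputs), targets)
        loss.backward()
        optimizer.step()

Early stopping then amounts to monitoring robust accuracy on a held-out set after each epoch and keeping the best checkpoint, rather than the final one.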

The code corresponding to the paper is now available on GitHub:

Robust Flatness on GitHub

The corresponding paper is available on ArXiv; also check out the project page:

Paper on ArXiv

@inproceedings{Stutz2021ICCV,
    author    = {David Stutz and Matthias Hein and Bernt Schiele},
    title     = {Relating Adversarially Robust Generalization to Flat Minima},
    booktitle = {IEEE International Conference on Computer Vision (ICCV)},
    publisher = {IEEE Computer Society},
    year      = {2021}
}
What is your opinion on this article? Let me know your thoughts on Twitter @davidstutz92 or LinkedIn in/davidstutz92.