David Stutz, Matthias Hein, Bernt Schiele.
Disentangling Adversarial Robustness and Generalization.
In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[ArXiv | BibTeX | Project Page]
David Stutz, Andreas Geiger.
Learning 3D Shape Completion from Laser Scan Data with Weak Supervision.
In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
[PDF | BibTeX | Project Page]
David Stutz, Alexander Hermans, Bastian Leibe.
Superpixels: An Evaluation of the State-of-the-Art.
Computer Vision and Image Understanding, Volume 166, 2018.
[DOI | ArXiv | PDF | BibTeX | Project Page]
While robustness against imperceptible adversarial examples is well studied, robustness against visible adversarial perturbations such as adversarial patches is poorly understood. In this pre-print, we present a practical approach to obtaining adversarial patches while actively optimizing their location within the image. On CIFAR-10 and GTSRB, we show that adversarial training on these location-optimized adversarial patches improves robustness significantly without reducing accuracy.
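To illustrate the idea, here is a minimal PyTorch-style sketch of jointly optimizing a patch's content (by signed gradient ascent on the cross-entropy loss) and its location (by a simple random local search). All names, hyper-parameters, and the concrete search scheme are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def location_optimized_patch(model, x, y, patch_size=8, steps=50, step_size=0.05):
    """Sketch (hypothetical names): update the patch content by signed gradient
    ascent and the patch location by a random local search that keeps moves
    which increase the loss. Assumes square images in [0, 1]."""
    n, c, h, w = x.shape
    patch = torch.rand(n, c, patch_size, patch_size, device=x.device)
    # random initial top-left corners (row, column) for every image in the batch
    loc = torch.randint(0, h - patch_size, (n, 2), device=x.device)

    def paste(patch, loc):
        x_adv = x.clone()
        for i in range(n):
            r, cc = loc[i, 0].item(), loc[i, 1].item()
            x_adv[i, :, r:r + patch_size, cc:cc + patch_size] = patch[i]
        return x_adv

    for _ in range(steps):
        patch.requires_grad_(True)
        loss = F.cross_entropy(model(paste(patch, loc)), y)
        grad, = torch.autograd.grad(loss, patch)
        with torch.no_grad():
            # signed gradient ascent on the patch content, kept in [0, 1]
            patch = (patch + step_size * grad.sign()).clamp(0, 1)
            # propose a small random shift of each patch; keep it if the loss grows
            cand = (loc + torch.randint(-2, 3, (n, 2), device=x.device)).clamp(0, h - patch_size)
            old = F.cross_entropy(model(paste(patch, loc)), y, reduction='none')
            new = F.cross_entropy(model(paste(patch, cand)), y, reduction='none')
            loc = torch.where((new > old).unsqueeze(1), cand, loc)
    return paste(patch, loc)
```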
Adversarial training yields models that are robust against a specific threat model. However, this robustness does not generalize to larger perturbations or to threat models not seen during training. Confidence-calibrated adversarial training tackles this problem by biasing the network towards low-confidence predictions on adversarial examples. By rejecting low-confidence (adversarial) examples, robustness generalizes to various threat models, including L2, L1 and L0, while training only on L∞ adversarial examples. This article gives a short abstract, discusses relevant updates to the previous version, and includes the paper and appendix.
Adversarial training is the de facto standard for obtaining models robust against adversarial examples. However, on complex datasets it incurs a significant loss in accuracy, and the robustness does not generalize to attacks not used during training. This paper introduces confidence-calibrated adversarial training. By forcing the confidence on adversarial examples to decay with their distance to the training data, the loss in accuracy is reduced and robustness generalizes to other attacks and larger perturbations.
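For context, a minimal sketch of how such confidence calibration can be instantiated under an L∞ threat model: adversarial examples are trained against a convex combination of the one-hot label and the uniform distribution, where the weight on the label decays with the perturbation size. The function names, the decay exponent rho, and the exact schedule below are illustrative assumptions, not taken verbatim from the paper.

```python
import torch
import torch.nn.functional as F

def calibrated_target(y, delta, eps, num_classes, rho=10.0):
    """Sketch: soft targets whose confidence decays with the L-infinity size
    of the perturbation delta relative to the budget eps (rho is illustrative)."""
    norm = delta.flatten(1).abs().max(dim=1).values          # per-example ||delta||_inf
    lam = (1.0 - torch.clamp(norm / eps, max=1.0)) ** rho    # weight on the true label, in [0, 1]
    one_hot = F.one_hot(y, num_classes).float()
    uniform = torch.full_like(one_hot, 1.0 / num_classes)
    return lam.unsqueeze(1) * one_hot + (1.0 - lam.unsqueeze(1)) * uniform

def calibrated_loss(logits, target):
    # cross-entropy against the soft target distribution
    return -(target * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```

At test time, low-confidence predictions can then be rejected by simple confidence thresholding, which is what allows robustness to extend to perturbations and threat models not seen during training.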
Our paper on adversarial robustness and generalization was accepted at CVPR'19. In the revised paper, we show that adversarial examples usually leave the manifold, including a brief theoretical argument. However, adversarial examples can also be found on the manifold; in that case, robustness is nothing but generalization. For (off-manifold) adversarial examples, in contrast, we show that generalization and robustness are not necessarily contradicting objectives: on synthetic data, for example, we adversarially train a model that is both robust and accurate. This article gives a short abstract and provides the paper including the appendix.
To date, it is unclear whether we can obtain both accurate and robust deep networks — meaning deep networks that generalize well and resist adversarial examples. In this pre-print, we aim to disentangle the relationship between adversarial robustness and generalization. The paper is available on ArXiv.
The follow-up to our CVPR’18 paper has been accepted at IJCV. In this longer paper, we extend our weakly-supervised 3D shape completion approach to obtain high-quality shape predictions, and we also present updated synthetic benchmarks on ShapeNet and ModelNet. The paper is available through Springer Link and ArXiv.
In this follow-up to our CVPR’18 work, we extend our weakly-supervised 3D shape completion approach to obtain high-quality shape predictions and present updated synthetic benchmarks on ShapeNet and ModelNet. The paper is now available as a pre-print on ArXiv. The abstract, some experimental results, and a comparison to our CVPR’18 work can be found in this article.