
DAVID STUTZ

PUBLICATIONS BY YEAR

2021

David Stutz, Krishnamurthy (Dj) Dvijotham, Ali Taylan Cemgil, Arnaud Doucet.
Learning Optimal Conformal Classifiers.
ArXiv, 2021.
[ArXiv | Project Page]

David Stutz, Matthias Hein, Bernt Schiele.
Relating Adversarially Robust Generalization to Flat Minima.
ICCV, 2021.
[ArXiv | Project Page]

David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele.
Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators.
ArXiv, 2021.
[ArXiv]

David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele.
On Mitigating Random and Adversarial Bit Errors.
MLSys, 2021.
[ArXiv | BibTeX | Project Page]

2020

David Stutz, Matthias Hein, Bernt Schiele.
Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks.
ICML, 2020.
[ArXiv | BibTeX | Project Page]

David Stutz, Andreas Geiger.
Learning 3D Shape Completion under Weak Supervision.
International Journal of Computer Vision, 2020.
[DOI | ArXiv | BibTeX | Project Page]

2019

David Stutz, Matthias Hein, Bernt Schiele.
Disentangling Adversarial Robustness and Generalization.
In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[ArXiv | BibTeX | Project Page]

2018

David Stutz, Andreas Geiger.
Learning 3D Shape Completion from Laser Scan Data with Weak Supervision.
In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
[PDF | BibTeX | Project Page]

David Stutz, Alexander Hermans, Bastian Leibe.
Superpixels: an evaluation of the state-of-the-art.
Computer Vision and Image Understanding, Volume 166, 2018.
[DOI | ArXiv | PDF | BibTeX | Project Page]

2015

David Stutz.
Superpixel segmentation: an evaluation.
German Conference on Pattern Recognition, 2015.
[PDF | BibTeX | Project Page]

RELATED ARTICLES

Articles and project pages related to the publications listed above. Also see Projects for an overview, as well as THESES and SEMINAR PAPERS.

ARTICLE

ArXiv Pre-Print “Adversarial Training against Location-Optimized Adversarial Patches”

While robustness against imperceptible adversarial examples is well studied, robustness against visible adversarial perturbations such as adversarial patches is poorly understood. In this pre-print, we present a practical approach to obtaining adversarial patches while actively optimizing their location within the image. On CIFAR10 and GTSRB, we show that adversarial training on these location-optimized adversarial patches significantly improves robustness without reducing accuracy.
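The following is a minimal PyTorch sketch of the core idea, not the exact procedure from the pre-print: in each attack iteration, the patch location is chosen greedily from a set of candidate locations before a signed gradient step updates the patch content. All names, such as model, patch and candidate_locations, are placeholders.

import torch
import torch.nn.functional as F

def patch_attack_step(model, images, labels, patch, candidate_locations, step_size=0.05):
    # One attack iteration: greedy location search followed by a signed
    # gradient step on the patch content (a simplified sketch, not the
    # pre-print's exact optimization strategy).
    ph, pw = patch.shape[-2], patch.shape[-1]
    best_loss, best_loc = -float("inf"), candidate_locations[0]
    with torch.no_grad():
        # Place the patch at every candidate location and keep the position
        # that maximizes the cross-entropy loss.
        for (y, x) in candidate_locations:
            patched = images.clone()
            patched[:, :, y:y + ph, x:x + pw] = patch
            loss = F.cross_entropy(model(patched), labels)
            if loss.item() > best_loss:
                best_loss, best_loc = loss.item(), (y, x)

    # Gradient ascent on the patch content at the best location found.
    y, x = best_loc
    patch = patch.clone().requires_grad_(True)
    patched = images.clone()
    patched[:, :, y:y + ph, x:x + pw] = patch
    loss = F.cross_entropy(model(patched), labels)
    grad, = torch.autograd.grad(loss, patch)
    return (patch + step_size * grad.sign()).clamp(0, 1).detach(), best_loc

For adversarial training, such a step would be run for a few iterations per batch, and the resulting patched images would enter the regular training loss.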


06th May 2020

PROJECT

Adversarial training on location-optimized adversarial patches.


01st March 2020

PROJECT

Confidence calibration of adversarial training for “generalizable” robustness.


ARTICLE

Updated ArXiv Pre-Print “Confidence-Calibrated Adversarial Training”

Adversarial training yields models robust against a specific threat model. However, robustness does not generalize to larger perturbations or to threat models not seen during training. Confidence-calibrated adversarial training tackles this problem by biasing the network towards low-confidence predictions on adversarial examples. By rejecting low-confidence (adversarial) examples, robustness generalizes to various threat models, including L2, L1 and L0, while training only on L∞ adversarial examples. This article gives a short abstract, discusses relevant updates to the previous version, and includes the paper and appendix.
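As a rough illustration of the rejection mechanism, the following Python sketch thresholds the softmax confidence at test time. The threshold tau is a placeholder here; in practice it would be calibrated on held-out clean examples.

import torch
import torch.nn.functional as F

def predict_with_rejection(model, inputs, tau=0.9):
    # Predict labels and reject examples whose confidence falls below tau.
    probs = F.softmax(model(inputs), dim=1)
    confidences, predictions = probs.max(dim=1)
    # Rejected examples are marked with label -1; after confidence-calibrated
    # adversarial training, adversarial examples from unseen threat models
    # tend to receive low confidence and are thus rejected.
    predictions = torch.where(confidences >= tau, predictions,
                              torch.full_like(predictions, -1))
    return predictions, confidences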


ARTICLE

ArXiv Pre-Print “Confidence-Calibrated Adversarial Training”

Adversarial training is the de facto standard for obtaining models robust against adversarial examples. However, on complex datasets, a significant loss in accuracy is incurred and the robustness does not generalize to attacks not used during training. This paper introduces confidence-calibrated adversarial training. By forcing the confidence on adversarial examples to decay with their distance to the training data, the loss in accuracy is reduced and robustness generalizes to other attacks and larger perturbations.
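A minimal Python sketch of this idea follows. It assumes a simple linear decay of the target confidence with the perturbation's L∞ norm; the pre-print's actual transition function may differ, and names such as delta and epsilon are placeholders.

import torch
import torch.nn.functional as F

def ccat_targets(labels, delta, epsilon, num_classes):
    # Soft targets interpolating between the one-hot label and the uniform
    # distribution, depending on the L-infinity norm of the perturbation delta
    # (assumed linear decay; the pre-print may use a different transition).
    norm = delta.flatten(1).abs().max(dim=1).values
    lam = (1.0 - (norm / epsilon).clamp(max=1.0)).unsqueeze(1)
    one_hot = F.one_hot(labels, num_classes).float()
    uniform = torch.full_like(one_hot, 1.0 / num_classes)
    return lam * one_hot + (1.0 - lam) * uniform

def ccat_loss(logits, labels, delta, epsilon):
    # Cross-entropy against the soft targets instead of the hard label.
    targets = ccat_targets(labels, delta, epsilon, logits.shape[1])
    return -(targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

During training, delta would be the adversarial perturbation found by the inner attack; on clean examples (delta = 0) the targets reduce to the usual one-hot labels.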


ARTICLE

CVPR Paper “Disentangling Adversarial Robustness and Generalization”

Our paper on adversarial robustness and generalization was accepted at CVPR’19. In the revised paper, we show that adversarial examples usually leave the manifold, including a brief theoretical argument. Similarly, adversarial examples can be found on the manifold; then, robustness is nothing but generalization. For (off-manifold) adversarial examples, in contrast, we show that generalization and robustness are not necessarily contradicting objectives. As an example, on synthetic data, we adversarially train a robust and accurate model. This article gives a short abstract and provides the paper including the appendix.


04th December 2018

PROJECT

Disentangling the relationship between adversarial robustness and generalization.


ARTICLE

ArXiv Pre-Print “Disentangling Adversarial Robustness and Generalization”

To date, it is unclear whether we can obtain both accurate and robust deep networks — meaning deep networks that generalize well and resist adversarial examples. In this pre-print, we aim to disentangle the relationship between adversarial robustness and generalization. The paper is available on ArXiv.


ARTICLE

IJCV Paper “Learning 3D Shape Completion under Weak Supervision”

Our CVPR’18 follow-up paper has been accepted at IJCV. In this longer paper we extend our weakly-supervised 3D shape completion approach to obtain high-quality shape predictions, and also present updated, synthetic benchmarks on ShapeNet and ModelNet. The paper is available through Springer Link and ArXiv.


ARTICLE

ArXiv Pre-Print “Learning 3D Shape Completion under Weak Supervision”

In this follow-up to our CVPR’18 work, we extend our weakly-supervised 3D shape completion approach to obtain high-quality shape predictions, and also present updated, synthetic benchmarks on ShapeNet and ModelNet. The paper is now available as a pre-print on ArXiv. The abstract, some experimental results, and a comparison to our CVPR’18 work can be found in this article.
