
PUBLICATIONS BY YEAR

Find all publications on Google Scholar.

2023

David Stutz, Abhijit Guha Roy, Tatiana Matejovicova, Patricia Strachan, Ali Taylan Cemgil, Arnaud Doucet.
Conformal prediction under ambiguous ground truth.
ArXiv, 2023.
[ArXiv | Project Page]

David Stutz, Ali Taylan Cemgil, Abhijit Guha Roy, Tatiana Matejovicova, Melih Barsbey, Patricia Strachan, Mike Schaekermann, Jan Freyberg, Rajeev Rikhye, Beverly Freeman, Javier Perez Matos, Umesh Telang, Dale R. Webster, Yuan Liu, Greg S. Corrado, Yossi Matias, Pushmeet Kohli, Yun Liu, Arnaud Doucet, Alan Karthikesalingam.
Evaluating AI systems under uncertain ground truth: a case study in dermatology.
ArXiv, 2023.
[ArXiv | Project Page]

2022

David Stutz, Krishnamurthy (Dj) Dvijotham, Ali Taylan Cemgil, Arnaud Doucet.
Learning Optimal Conformal Classifiers.
ICLR, 2022.
[ArXiv | OpenReview | Project Page]

David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele.
Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators.
TPAMI, 2022.
[ArXiv | IEEExplore | Project Page]

2021

David Stutz, Matthias Hein, Bernt Schiele.
Relating Adversarially Robust Generalization to Flat Minima.
ICCV, 2021.
[ArXiv | Project Page]

David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele.
On Mitigating Random and Adversarial Bit Errors.
MLSys, 2021.
[ArXiv | BibTeX | Project Page]

2020

David Stutz, Matthias Hein, Bernt Schiele.
Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks.
ICML, 2020.
[ArXiv | BibTeX | Project Page]

David Stutz, Andreas Geiger.
Learning 3D Shape Completion under Weak Supervision.
International Journal of Computer Vision, 2020.
[DOI | ArXiv | BibTeX | Project Page]

2019

David Stutz, Matthias Hein, Bernt Schiele.
Disentangling Adversarial Robustness and Generalization.
In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
[ArXiv | BibTeX | Project Page]

2018

David Stutz, Andreas Geiger.
Learning 3D Shape Completion from Laser Scan Data with Weak Supervision.
In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
[PDF | BibTeX | Project Page]

David Stutz, Alexander Hermans, Bastian Leibe.
Superpixels: an evaluation of the state-of-the-art.
Computer Vision and Image Understanding, Volume 166, 2018.
[DOI | ArXiv | PDF | BibTeX | Project Page]

2015

David Stutz.
Superpixel segmentation: an evaluation.
German Conference on Pattern Recognition, 2015.
[PDF | BibTeX | Project Page]

RELATED ARTICLES

Articles and project pages related to the publications listed above. Also see Projects for an overview, as well as THESES and SEMINAR PAPERS.

ARTICLE

ICML Paper “Confidence-Calibrated Adversarial Training”

Our paper on confidence-calibrated adversarial training was accepted at ICML’20. In the revised paper, the proposed method tackles the problem of obtaining robustness that generalizes to attacks not seen during training. This is achieved by biasing the network towards low-confidence predictions on adversarial examples and rejecting these low-confidence examples at test time. This article gives a short abstract and includes paper and code.
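
The following is a minimal, PyTorch-style sketch of the training idea described above: clean examples are trained with standard cross-entropy, while adversarial examples are pushed towards a low-confidence, near-uniform target distribution. The function names and the plain uniform target are illustrative assumptions, not the authors' released code.

    import torch
    import torch.nn.functional as F

    def uniform_target(batch_size, num_classes, device):
        # Low-confidence target: the uniform distribution over all classes.
        return torch.full((batch_size, num_classes), 1.0 / num_classes, device=device)

    def confidence_calibrated_loss(model, x_clean, y, x_adv, num_classes):
        # Standard cross-entropy on clean examples.
        loss_clean = F.cross_entropy(model(x_clean), y)
        # Cross-entropy against a uniform target on adversarial examples,
        # biasing the network towards low-confidence predictions there.
        log_probs_adv = F.log_softmax(model(x_adv), dim=1)
        target = uniform_target(x_adv.size(0), num_classes, x_adv.device)
        loss_adv = -(target * log_probs_adv).sum(dim=1).mean()
        return loss_clean + loss_adv

At test time, examples whose maximum softmax probability falls below a chosen threshold are then rejected.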

More ...

ARTICLE

ArXiv Pre-Print “On Mitigating Random and Adversarial Bit Errors”

Deep neural network (DNN) accelerators are specialized hardware for inference and have received considerable attention in the past years. To reduce energy consumption, these accelerators are often operated at low voltage, which causes the accelerator memory to become unreliable. Additionally, recent work demonstrated attacks targeting individual bits in memory. In both cases, the induced bit errors can significantly reduce the accuracy of DNNs. In this paper, we tackle both random (due to low voltage) and adversarial bit errors in DNNs. By explicitly taking such errors into account during training, we can improve robustness significantly.
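
As a rough illustration of what taking such errors into account during training could look like, the snippet below injects random bit flips into 8-bit quantized weights before a forward pass. The quantization range, the independent per-bit error model, and the function names are assumptions made for this sketch, not the paper's exact setup.

    import torch

    def inject_random_bit_errors(weights_q, bit_error_rate, num_bits=8):
        # weights_q: integer tensor with quantized weights in [0, 2**num_bits - 1].
        flipped = weights_q.clone()
        for bit in range(num_bits):
            # Flip each bit independently with probability bit_error_rate.
            flip_mask = torch.rand_like(weights_q, dtype=torch.float32) < bit_error_rate
            flipped = torch.where(flip_mask, flipped ^ (1 << bit), flipped)
        return flipped

During training, the forward pass would use the corrupted weights while the underlying clean weights are updated, so the network learns to keep its accuracy under such perturbations.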

More ...

JUNE 2020

PROJECT

Random and adversarial bit errors in quantized DNN weights.

More ...

ARTICLE

ArXiv Pre-Print “Adversarial Training against Location-Optimized Adversarial Patches”

While robustness against imperceptible adversarial examples is well-studied, robustness against visible adversarial perturbations such as adversarial patches is poorly understood. In this pre-print, we present a practical approach to obtain adversarial patches while actively optimizing their location within the image. On CIFAR10 and GTSRB, we show that adversarial training on these location-optimized adversarial patches improves robustness significantly without reducing accuracy.
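
The sketch below illustrates the two alternating steps suggested by the description: picking a patch location that maximizes the classification loss, then updating the patch itself by a signed gradient step. The random location search, the single ascent step, and all names are assumptions for illustration, not the paper's exact procedure; images are assumed to lie in [0, 1] and the patch to be a square (C, s, s) tensor.

    import torch
    import torch.nn.functional as F

    def apply_patch(images, patch, top, left):
        # Paste the square patch into a batch of images at position (top, left).
        out = images.clone()
        s = patch.shape[-1]
        out[:, :, top:top + s, left:left + s] = patch
        return out

    def patch_attack_step(model, images, labels, patch, step_size=0.05, num_locations=8):
        # 1) Location optimization: among random candidate positions,
        #    keep the one with the highest classification loss.
        _, _, h, w = images.shape
        s = patch.shape[-1]
        best_loss, best_pos = None, (0, 0)
        for _ in range(num_locations):
            top = torch.randint(0, h - s + 1, (1,)).item()
            left = torch.randint(0, w - s + 1, (1,)).item()
            with torch.no_grad():
                loss = F.cross_entropy(model(apply_patch(images, patch, top, left)), labels)
            if best_loss is None or loss > best_loss:
                best_loss, best_pos = loss, (top, left)
        # 2) Patch optimization: one signed-gradient ascent step at the best location.
        patch = patch.clone().requires_grad_(True)
        loss = F.cross_entropy(model(apply_patch(images, patch, *best_pos)), labels)
        grad, = torch.autograd.grad(loss, patch)
        return (patch + step_size * grad.sign()).clamp(0, 1).detach(), best_pos

Adversarial training would then feed the patched images produced this way into the usual training loop.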

More ...

MAY 2020

PROJECT

Adversarial training on location-optimized adversarial patches.

More ...

MARCH 2020

PROJECT

Confidence calibration of adversarial training for “generalizable” robustness.

More ...

ARTICLE

Updated ArXiv Pre-Print “Confidence-Calibrated Adversarial Training”

Adversarial training yields models robust against a specific threat model. However, robustness does not generalize to larger perturbations or to threat models not seen during training. Confidence-calibrated adversarial training tackles this problem by biasing the network towards low-confidence predictions on adversarial examples. By rejecting low-confidence (adversarial) examples at test time, robustness generalizes to various threat models, including L2, L1 and L0, while training only on L∞ adversarial examples. This article gives a short abstract, discusses relevant updates compared to the previous version and includes paper and appendix.
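
The rejection mechanism itself is easy to sketch: predictions whose maximum softmax confidence falls below a threshold are discarded. The concrete threshold value and the helper name below are illustrative assumptions; how the threshold is chosen is not covered here.

    import torch.nn.functional as F

    def predict_with_rejection(model, x, threshold=0.9):
        # Reject inputs whose maximum softmax confidence is below the threshold.
        probs = F.softmax(model(x), dim=1)
        confidence, prediction = probs.max(dim=1)
        accepted = confidence >= threshold
        return prediction, accepted  # use predictions only where accepted is True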

More ...

ARTICLE

ArXiv Pre-Print “Confidence-Calibrated Adversarial Training”

Adversarial training is the de facto standard for obtaining models robust against adversarial examples. However, on complex datasets, a significant loss in accuracy is incurred and the robustness does not generalize to attacks not used during training. This paper introduces confidence-calibrated adversarial training. By forcing the confidence on adversarial examples to decay with their distance to the training data, the loss in accuracy is reduced and robustness generalizes to other attacks and larger perturbations.
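
One way to read "confidence decaying with the distance to the training data" is a training target that interpolates between the one-hot label and the uniform distribution depending on the size of the perturbation, as sketched below. The L∞ norm and the simple power-style transition are assumptions chosen for illustration, not necessarily the paper's exact formulation.

    import torch

    def distance_dependent_target(y_onehot, delta, epsilon, rho=1.0):
        # The larger the perturbation delta (relative to the budget epsilon),
        # the closer the target is to uniform, i.e. the lower the enforced confidence.
        num_classes = y_onehot.size(1)
        norm = delta.flatten(1).abs().max(dim=1).values            # per-example L-infinity norm
        lam = (1.0 - torch.clamp(norm / epsilon, max=1.0)) ** rho  # 1 at delta = 0, 0 at the budget
        lam = lam.unsqueeze(1)
        uniform = torch.full_like(y_onehot, 1.0 / num_classes)
        return lam * y_onehot + (1.0 - lam) * uniform

Such a target would replace the plain uniform target in the training-loss sketch given further above.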

More ...

ARTICLE

CVPR Paper “Disentangling Adversarial Robustness and Generalization”

Our paper on adversarial robustness and generalization was accepted at CVPR’19. In the revised paper, we show that adversarial examples usually leave the manifold, including a brief theoretical argument. In addition, adversarial examples can also be found on the manifold; in this case, robustness is nothing but generalization. For (off-manifold) adversarial examples, in contrast, we show that generalization and robustness are not necessarily contradicting objectives. As an example, on synthetic data, we adversarially train a model that is both robust and accurate. This article gives a short abstract and provides the paper including the appendix.
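
To make the on-manifold case concrete, the snippet below sketches how an adversarial example could be searched for on (an approximation of) the data manifold: the perturbation is applied in the latent space of a pre-trained decoder rather than in image space. The decoder interface and the single signed-gradient step are assumptions for illustration; the paper's actual construction may differ.

    import torch
    import torch.nn.functional as F

    def on_manifold_attack_step(classifier, decoder, z, label, step_size=0.01):
        # Perturb the latent code z instead of the image, so the generated
        # example stays on the decoder's learned approximation of the manifold.
        z = z.clone().requires_grad_(True)
        image = decoder(z)                       # map latent code to image space
        loss = F.cross_entropy(classifier(image), label)
        grad, = torch.autograd.grad(loss, z)
        z_adv = z + step_size * grad.sign()      # one ascent step in latent space
        return decoder(z_adv).detach(), z_adv.detach()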

More ...

DECEMBER 2018

PROJECT

Disentangling the relationship between adversarial robustness and generalization.

More ...