
PUBLICATIONS BY YEAR

Find all publications on Google Scholar.

2023

David Stutz, Abhijit Guha Roy, Tatiana Matejovicova, Patricia Strachan, Ali Taylan Cemgil, Arnaud Doucet.
Conformal prediction under ambiguous ground truth.
ArXiv, 2023.
[ArXiv | Project Page]

David Stutz, Ali Taylan Cemgil, Abhijit Guha Roy, Tatiana Matejovicova, Melih Barsbey, Patricia Strachan, Mike Schaekermann, Jan Freyberg, Rajeev Rikhye, Beverly Freeman, Javier Perez Matos, Umesh Telang, Dale R. Webster, Yuan Liu, Greg S. Corrado, Yossi Matias, Pushmeet Kohli, Yun Liu, Arnaud Doucet, Alan Karthikesalingam.
Evaluating AI systems under uncertain ground truth: a case study in dermatology.
ArXiv, 2023.
[ArXiv | Project Page]

2022

David Stutz, Krishnamurthy (Dj) Dvijotham, Ali Taylan Cemgil, Arnaud Doucet.
Learning Optimal Conformal Classifiers.
ICLR, 2022.
[ArXiv | OpenReview | Project Page]

David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele.
Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators.
TPAMI, 2022.
[ArXiv | IEEExplore | Project Page]

2021

David Stutz, Matthias Hein, Bernt Schiele.
Relating Adversarially Robust Generalization to Flat Minima.
ICCV, 2021.
[ArXiv | Project Page]

David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele.
On Mitigating Random and Adversarial Bit Errors.
MLSys, 2021.
[ArXiv | BibTeX | Project Page]

2020

David Stutz, Matthias Hein, Bernt Schiele.
Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks.
ICML, 2020.
[ArXiv | BibTeX | Project Page]

David Stutz, Andreas Geiger.
Learning 3D Shape Completion under Weak Supervision.
International Journal of Computer Vision, 2020.
[DOI | ArXiv | BibTeX | Project Page]

2019

David Stutz, Matthias Hein, Bernt Schiele.
Disentangling Adversarial Robustness and Generalization.
CVPR, 2019.
[ArXiv | BibTeX | Project Page]

2018

David Stutz, Andreas Geiger.
Learning 3D Shape Completion from Laser Scan Data with Weak Supervision.
CVPR, 2018.
[PDF | BibTeX | Project Page]

David Stutz, Alexander Hermans, Bastian Leibe.
Superpixels: an evaluation of the state-of-the-art.
Computer Vision and Image Understanding, Volume 166, 2018.
[DOI | ArXiv | PDF | BibTeX | Project Page]

2015

David Stutz.
Superpixel segmentation: an evaluation.
German Conference on Pattern Recognition, 2015.
[PDF | BibTeX | Project Page]

RELATED ARTICLES

Articles and project pages related to the publications listed above. Also see Projects for an overview, as well as THESES and SEMINAR PAPERS.

ARTICLE

ICML 2022 Art of Robustness Paper “On Fragile Features and Batch Normalization in Adversarial Training”

While batch normalization has long been argued to increase adversarial vulnerability, it is still used in state-of-the-art adversarial training models, likely because it eases training and increases expressiveness. At the same time, recent papers argue that adversarial examples are partly due to fragile features that arise from learning spurious correlations. In this paper, we study the impact of batch normalization on utilizing these fragile features for robustness by fine-tuning only the batch normalization layers.
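As a rough illustration of this fine-tuning setup, here is a minimal PyTorch sketch that freezes all parameters except those of the batch normalization layers (the model, toy batch, and plain cross-entropy loss are placeholders, not the paper's actual adversarial training code):

    # Sketch: fine-tune only the batch normalization (BN) layers.
    # Everything is frozen except the affine parameters of BN modules;
    # a clean cross-entropy loss stands in for the adversarial loss.
    import torch
    import torch.nn as nn
    import torchvision

    model = torchvision.models.resnet18(num_classes=10)

    for param in model.parameters():
        param.requires_grad = False
    for module in model.modules():
        if isinstance(module, nn.BatchNorm2d):
            for param in module.parameters():
                param.requires_grad = True

    optimizer = torch.optim.SGD(
        [p for p in model.parameters() if p.requires_grad], lr=0.01)

    # Toy batch to illustrate one BN-only update step:
    x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
    optimizer.zero_grad()
    nn.CrossEntropyLoss()(model(x), y).backward()
    optimizer.step()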

More ...

AUGUST 2022

PROJECT

Fragile Features, Batch Normalization and Adversarial Training. This is work led by Nils Walter. Modern deep learning architectures utilize batch normalization (BN) to stabilize training and improve accuracy. It has been shown that the BN layers alone are surprisingly expressive. In […]

More ...

AUGUST 2022

PROJECT

Improving corruption and adversarial robustness by enhancing weak sub-networks.

More ...

ARTICLE

ArXiv Pre-Print “Learning Optimal Conformal Classifiers”

Conformal prediction (CP) allows taking any classifier and turning it into a set predictor with a guarantee that the true class is included with user-specified probability. This makes it possible to develop classifiers with sufficient guarantees for safe deployment in many domains. However, CP is usually applied as a post-training calibration step. The paper presented in this article introduces a training procedure named conformal training that allows training classifier and conformal predictor end-to-end. This can reduce the average confidence set size and allows optimizing arbitrary objectives defined directly on the predicted sets.
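For intuition, here is a minimal NumPy sketch of the standard post-hoc variant (split conformal prediction with softmax scores; this is the baseline setting, not the end-to-end conformal training proposed in the paper, and all names are illustrative):

    # Sketch: split conformal prediction on top of a trained classifier.
    # A threshold is calibrated on held-out softmax scores so that the
    # prediction set contains the true class with probability >= 1 - alpha.
    import numpy as np

    def calibrate_threshold(cal_probs, cal_labels, alpha=0.1):
        # Conformity score: softmax probability of the true class.
        scores = cal_probs[np.arange(len(cal_labels)), cal_labels]
        # Empirical quantile with the usual (n + 1) finite-sample correction.
        level = np.floor(alpha * (len(scores) + 1)) / len(scores)
        return np.quantile(scores, level, method="lower")

    def predict_set(probs, tau):
        # Include every class whose softmax probability reaches the threshold.
        return np.nonzero(probs >= tau)[0]

    # Toy usage with random "probabilities":
    rng = np.random.default_rng(0)
    cal_probs = rng.dirichlet(np.ones(5), size=100)
    cal_labels = rng.integers(0, 5, size=100)
    tau = calibrate_threshold(cal_probs, cal_labels)
    print(predict_set(rng.dirichlet(np.ones(5)), tau))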

More ...

OCTOBER 2021

PROJECT

End-to-end training of deep neural networks and conformal predictors to reduce confidence set size and optimize application-specific objectives.

More ...

JULY 2021

PROJECT

Random and adversarial bit error robustness of DNNs for energy-efficient and secure DNN accelerators.

More ...

JULY 2021

PROJECT

Robust generalization and overfitting linked to flatness of robust loss surface in weight space.

More ...

ARTICLE

ArXiv Pre-Print “Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators”

Deep neural network (DNN) accelerators are popular due to reduced cost and energy consumption compared to GPUs. To further reduce energy consumption, the operating voltage of the on-chip memory can be lowered. However, this injects random bit errors, directly impacting the (quantized) DNN weights. As a result, improving DNN robustness against these bit errors can significantly improve energy efficiency. Similarly, these chips are subject to bit-level hardware- or software-based attacks. In this case, robustness against adversarial bit errors is required to improve the security of DNN accelerators. The paper presented in this article addresses both problems.
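To make the fault model concrete, here is a small NumPy sketch of injecting random bit errors into 8-bit quantized weights (the independent per-bit flip rate and memory layout are illustrative, not the accelerator model used in the paper):

    # Sketch: flip each bit of each stored 8-bit weight independently
    # with probability p, mimicking low-voltage memory faults.
    import numpy as np

    def inject_bit_errors(weights_uint8, p, rng):
        bits = np.unpackbits(weights_uint8)
        flips = (rng.random(bits.shape) < p).astype(np.uint8)
        return np.packbits(np.bitwise_xor(bits, flips))

    rng = np.random.default_rng(0)
    weights = rng.integers(0, 256, size=1000, dtype=np.uint8)
    noisy = inject_bit_errors(weights, p=0.01, rng=rng)
    print(np.mean(weights != noisy))  # fraction of weights hit by >= 1 flip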

More ...

ARTICLE

ArXiv Pre-Print “Relating Adversarially Robust Generalization to Flat Minima”

Recent work on robustness against adversarial examples identified a severe problem in adversarial training: (robust) overfitting. That is, training robustness continuously increases during training, while test robustness eventually starts decreasing. In this pre-print, we relate robust overfitting and good robust generalization to flatness of the robust loss landscape around the found minimum with respect to perturbations in the weights.
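As a rough illustration of measuring such flatness, the PyTorch sketch below perturbs the weights randomly and records the resulting loss increase (a clean loss stands in for the robust loss used in the paper, and the scale xi is arbitrary):

    # Sketch: estimate average-case flatness in weight space by adding
    # random perturbations to the weights and measuring the loss increase.
    import copy
    import torch
    import torch.nn as nn

    def average_flatness(model, loss_fn, x, y, xi=0.01, samples=10):
        base = loss_fn(model(x), y).item()
        increases = []
        for _ in range(samples):
            perturbed = copy.deepcopy(model)
            with torch.no_grad():
                for param in perturbed.parameters():
                    # Perturbation scaled relative to the weight magnitude.
                    param.add_(xi * param.abs().mean() * torch.randn_like(param))
            increases.append(loss_fn(perturbed(x), y).item() - base)
        return sum(increases) / samples

    # Toy usage on a random linear model:
    model = nn.Linear(10, 2)
    x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
    print(average_flatness(model, nn.CrossEntropyLoss(), x, y))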

More ...

ARTICLE

Updated Pre-Print “Bit Error Robustness for Energy-Efficient DNN Accelerators”

Recently, deep neural network (DNN) accelerators have received considerable attention due to reduced cost and energy consumption compared to mainstream GPUs. To further reduce energy consumption, the included memory (storing weights and intermediate computations) is operated at low voltage. However, this causes bit errors in memory cells, directly impacting the stored (quantized) DNN weights and resulting in a significant decrease in accuracy. In this paper, we tackle the problem of DNN robustness against random bit errors. By using a robust fixed-point quantization, training with aggressive weight clipping as regularization, and injecting random bit errors during training, we increase robustness significantly, enabling energy-efficient DNN accelerators.
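For the weight clipping ingredient, here is a minimal PyTorch sketch (model, data, and the value of w_max are placeholders; the paper combines this with robust quantization and bit error injection during training):

    # Sketch: aggressive weight clipping as regularization. After every
    # optimizer step, all weights are clamped to [-w_max, w_max].
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    criterion = nn.CrossEntropyLoss()
    w_max = 0.1

    for step in range(100):
        x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
        optimizer.zero_grad()
        criterion(model(x), y).backward()
        optimizer.step()
        with torch.no_grad():
            for param in model.parameters():
                param.clamp_(-w_max, w_max)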

More ...