Find all publications on Google Scholar.
David Stutz, Abhijit Guha Roy, Tatiana Matejovicova, Patricia Strachan, Ali Taylan Cemgil, Arnaud Doucet.
Conformal prediction under ambiguous ground truth.
ArXiv, 2023.
[ArXiv | Project Page]
David Stutz, Ali Taylan Cemgil, Abhijit Guha Roy, Tatiana Matejovicova, Melih Barsbey, Patricia Strachan, Mike Schaekermann, Jan Freyberg, Rajeev Rikhye, Beverly Freeman, Javier Perez Matos, Umesh Telang, Dale R. Webster, Yuan Liu, Greg S. Corrado, Yossi Matias, Pushmeet Kohli, Yun Liu, Arnaud Doucet, Alan Karthikesalingam.
Evaluating AI systems under uncertain ground truth: a case study in dermatology.
ArXiv, 2023.
[ArXiv | Project Page]
David Stutz, Krishnamurthy (Dj) Dvijotham, Ali Taylan Cemgil, Arnaud Doucet.
Learning Optimal Conformal Classifiers.
ICLR, 2022.
[ArXiv | OpenReview | Project Page]
David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele.
Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators.
TPAMI, 2022.
[ArXiv | IEEExplore | Project Page]
David Stutz, Matthias Hein, Bernt Schiele.
Relating Adversarially Robust Generalization to Flat Minima.
ICCV, 2021.
[ArXiv | Project Page]
David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele.
On Mitigating Random and Adversarial Bit Errors.
MLSys, 2021.
[ArXiv | BibTeX | Project Page]
David Stutz, Matthias Hein, Bernt Schiele.
Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks.
ICML, 2020.
[ArXiv | BibTeX | Project Page]
David Stutz, Andreas Geiger.
Learning 3D Shape Completion under Weak Supervision.
International Journal of Computer Vision, 2020.
[DOI | ArXiv | BibTeX | Project Page]
David Stutz, Matthias Hein, Bernt Schiele.
Disentangling Adversarial Robustness and Generalization.
CVPR, 2019.
[ArXiv | BibTeX | Project Page]
David Stutz, Andreas Geiger.
Learning 3D Shape Completion from Laser Scan Data with Weak Supervision.
CVPR, 2018.
[PDF | BibTeX | Project Page]
David Stutz, Alexander Hermans, Bastian Leibe.
Superpixels: an evaluation of the state-of-the-art.
Computer Vision and Image Understanding, Volume 166, 2018.
[DOI | ArXiv | PDF | BibTeX | Project Page]
David Stutz.
Superpixel segmentation: an evaluation.
German Conference on Pattern Recognition, 2015.
[PDF | BibTeX | Project Page]
Robust generalization and overfitting linked to flatness of robust loss surface in weight space.
Deep neural network (DNN) accelerators are popular due to reduced cost and energy consumption compared to GPUs. To further reduce energy consumption, the operating voltage of the on-chip memory can be lowered. However, this injects random bit errors that directly impact the (quantized) DNN weights. As a result, improving DNN robustness against these bit errors can significantly improve energy efficiency. Similarly, these chips are subject to bit-level hardware- or software-based attacks; in this case, robustness against adversarial bit errors is required to improve the security of DNN accelerators. The paper presented in this article addresses both problems.
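As a rough illustration of this error model, low-voltage bit errors can be simulated in software by flipping bits of the quantized weights. Below is a minimal NumPy sketch assuming independent per-bit flips with probability p on a two's-complement fixed-point representation; the quantization details are simplified compared to the paper.

```python
import numpy as np

def quantize(weights, bits=8):
    """Symmetric fixed-point quantization to signed integers plus a scale."""
    scale = np.max(np.abs(weights)) / (2 ** (bits - 1) - 1)
    return np.round(weights / scale).astype(np.int64), scale

def inject_random_bit_errors(q, bits=8, p=0.01, rng=None):
    """Flip every bit of every quantized weight independently with probability p."""
    rng = np.random.default_rng() if rng is None else rng
    u = q & ((1 << bits) - 1)  # unsigned two's-complement view of the weights
    for b in range(bits):
        flips = (rng.random(u.shape) < p).astype(np.int64) << b
        u = u ^ flips
    return np.where(u >= (1 << (bits - 1)), u - (1 << bits), u)  # back to signed

rng = np.random.default_rng(0)
weights = 0.1 * rng.standard_normal((4, 4))
q, scale = quantize(weights)
perturbed = inject_random_bit_errors(q, p=0.01, rng=rng) * scale
print("max weight deviation:", np.abs(perturbed - weights).max())
```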
Recent work on robustness against adversarial examples identified a severe problem in adversarial training: (robust) overfitting. That is, during training, robustness on the training set continuously increases while test robustness eventually starts to decrease. In this pre-print, we relate robust overfitting and good robust generalization to flatness of the robust loss landscape around the found minimum with respect to perturbations in the weights.
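As a toy illustration of this flatness notion (not the paper's setup), one can compare the loss at the found weights with the average loss after random weight perturbations of increasing magnitude. The sketch below uses a plain squared loss on synthetic data in place of the robust loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem standing in for the (robust) training loss.
X = rng.standard_normal((256, 10))
w_trained = rng.standard_normal(10)
y = X @ w_trained + 0.1 * rng.standard_normal(256)

def loss(w):
    return np.mean((X @ w - y) ** 2)

def average_sharpness(w, radius, n_samples=50):
    """Average loss increase under random weight perturbations of a fixed norm."""
    increases = []
    for _ in range(n_samples):
        d = rng.standard_normal(w.shape)
        d *= radius / np.linalg.norm(d)
        increases.append(loss(w + d) - loss(w))
    return float(np.mean(increases))

# Flatter minima show smaller loss increases for the same perturbation radius.
for radius in (0.1, 0.5, 1.0):
    print(radius, average_sharpness(w_trained, radius))
```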
Recently, deep neural network (DNN) accelerators have received considerable attention due to reduced cost and energy consumption compared to mainstream GPUs. In order to further reduce energy consumption, the included memory (storing weights and intermediate computations) is operated at low voltage. However, this causes bit errors in memory cells that directly impact the stored (quantized) DNN weights, resulting in a significant decrease in DNN accuracy. In this paper, we tackle the problem of DNN robustness against random bit errors. By using a robust fixed-point quantization, training with aggressive weight clipping as regularization, and injecting random bit errors during training, we increase robustness significantly, enabling energy-efficient DNN accelerators.
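A minimal sketch of the recipe's ingredients, assuming a toy logistic-regression model: weights are clipped to a small range, quantized in the forward pass, and perturbed by a crude random-error proxy. The paper's actual quantization scheme, clipping schedule, and bit-level injection differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data (illustration only).
X = rng.standard_normal((512, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

def quantize_dequantize(w, w_max, bits=8):
    """Clip to [-w_max, w_max] and round to a fixed-point grid."""
    scale = w_max / (2 ** (bits - 1) - 1)
    return np.round(np.clip(w, -w_max, w_max) / scale) * scale

def error_proxy(w, w_max, p=0.01):
    """Crude stand-in for bit errors: replace a random fraction p of the
    weights with uniformly random representable values."""
    mask = rng.random(w.shape) < p
    return np.where(mask, rng.uniform(-w_max, w_max, size=w.shape), w)

w, w_max, lr = np.zeros(20), 0.1, 0.1
for step in range(500):
    # Forward pass uses clipped, quantized weights with injected errors.
    w_q = error_proxy(quantize_dequantize(w, w_max), w_max)
    p_hat = 1.0 / (1.0 + np.exp(-(X @ w_q)))
    grad = X.T @ (p_hat - y) / len(y)  # straight-through: gradient w.r.t. w_q applied to w
    w = np.clip(w - lr * grad, -w_max, w_max)  # aggressive weight clipping

accuracy = np.mean(((1.0 / (1.0 + np.exp(-(X @ w)))) > 0.5) == (y > 0.5))
print("train accuracy:", accuracy)
```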
Our paper on confidence-calibrated adversarial training was accepted at ICML’20. In the revised paper, the proposed confidence-calibrated adversarial training tackles the problem of obtaining robustness that generalizes to attacks not seen during training. This is achieved by biasing the network towards low-confidence predictions on adversarial examples and rejecting these low-confidence examples at test time. This article gives a short abstract and includes paper and code.
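A minimal sketch of the rejection step at test time, assuming softmax confidences and an illustrative threshold of 0.7; in practice the threshold would be calibrated on held-out clean examples.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def predict_with_rejection(logits, threshold=0.7):
    """Reject inputs whose maximum softmax confidence falls below the threshold.
    Under confidence-calibrated adversarial training, adversarial examples are
    pushed towards low confidence and should mostly be rejected here."""
    probs = softmax(logits)
    confidence = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    return np.where(confidence >= threshold, labels, -1)  # -1 marks "rejected"

# Example: two confident predictions and one low-confidence (rejected) one.
logits = np.array([[4.0, 0.0, 0.0],
                   [0.2, 0.3, 0.1],
                   [0.0, 5.0, 0.0]])
print(predict_with_rejection(logits))  # -> [0, -1, 1]
```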
Deep neural network (DNN) accelerators are specialized hardware for inference and have received considerable attention in the past years. In order to reduce energy consumption, these accelerators are often operated at low voltage, which causes the included accelerator memory to become unreliable. Additionally, recent work demonstrated attacks targeting individual bits in memory. In both cases, the induced bit errors can significantly reduce the accuracy of DNNs. In this paper, we tackle both random (due to low voltage) and adversarial bit errors in DNNs. By explicitly taking such errors into account during training, we can improve robustness significantly.
Random and adversarial bit errors in quantized DNN weights.
While robustness against imperceptible adversarial examples is well-studied, robustness against visible adversarial perturbations such as adversarial patches is poorly understood. In this pre-print, we present a practical approach to obtain adversarial patches while actively optimizing their location within the image. On CIFAR-10 and GTSRB, we show that adversarial training on these location-optimized adversarial patches improves robustness significantly without reducing accuracy.
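The sketch below only illustrates the location-search component, using random search and a placeholder loss function; the patch content itself would additionally be optimized, which the sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_patch(image, patch, top, left):
    """Paste a square patch into a copy of the image at (top, left)."""
    out = image.copy()
    s = patch.shape[0]
    out[top:top + s, left:left + s, :] = patch
    return out

def optimize_location(image, patch, loss_fn, n_trials=50):
    """Random search over patch locations, keeping the location with the
    highest loss, i.e., the most adversarial placement."""
    h, w, _ = image.shape
    s = patch.shape[0]
    best_loc, best_loss = None, -np.inf
    for _ in range(n_trials):
        top = int(rng.integers(0, h - s + 1))
        left = int(rng.integers(0, w - s + 1))
        value = loss_fn(apply_patch(image, patch, top, left))
        if value > best_loss:
            best_loc, best_loss = (top, left), value
    return best_loc, best_loss

# Dummy stand-ins: a random image, a gray patch and a placeholder "loss".
image = rng.random((32, 32, 3))
patch = np.full((8, 8, 3), 0.5)
placeholder_loss = lambda x: float(np.abs(x - image).sum())
print(optimize_location(image, patch, placeholder_loss))
```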
Adversarial training on location-optimized adversarial patches.
Confidence calibration of adversarial training for “generalizable” robustness.