Find all publications at Google Scholar.
David Stutz, Abhijit Guha Roy, Tatiana Matejovicova, Patricia Strachan, Ali Taylan Cemgil, Arnaud Doucet.
Conformal prediction under ambiguous ground truth.
ArXiv, 2023.
[ArXiv | Project Page]
David Stutz, Ali Taylan Cemgil, Abhijit Guha Roy, Tatiana Matejovicova, Melih Barsbey, Patricia Strachan, Mike Schaekermann, Jan Freyberg, Rajeev Rikhye, Beverly Freeman, Javier Perez Matos, Umesh Telang, Dale R. Webster, Yuan Liu, Greg S. Corrado, Yossi Matias, Pushmeet Kohli, Yun Liu, Arnaud Doucet, Alan Karthikesalingam.
Evaluating AI systems under uncertain ground truth: a case study in dermatology.
ArXiv, 2023.
[ArXiv | Project Page]
David Stutz, Krishnamurthy (Dj) Dvijotham, Ali Taylan Cemgil, Arnaud Doucet.
Learning Optimal Conformal Classifiers.
ICLR, 2022.
[ArXiv | OpenReview | Project Page]
David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele.
Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators.
TPAMI, 2022.
[ArXiv | IEEExplore | Project Page]
David Stutz, Matthias Hein, Bernt Schiele.
Relating Adversarially Robust Generalization to Flat Minima.
ICCV, 2021.
[ArXiv | Project Page]
David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele.
On Mitigating Random and Adversarial Bit Errors.
MLSys, 2021.
[ArXiv | BibTeX | Project Page]
David Stutz, Matthias Hein, Bernt Schiele.
Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks.
ICML, 2020.
[ArXiv | BibTeX | Project Page]
David Stutz, Andreas Geiger.
Learning 3D Shape Completion under Weak Supervision.
International Journal of Computer Vision, 2020.
[DOI | ArXiv | BibTeX | Project Page]
David Stutz, Matthias Hein, Bernt Schiele.
Disentangling Adversarial Robustness and Generalization.
CVPR, 2019.
[ArXiv | BibTeX | Project Page]
David Stutz, Andreas Geiger.
Learning 3D Shape Completion from Laser Scan Data with Weak Supervision.
CVPR, 2018.
[PDF | BibTeX | Project Page]
David Stutz, Alexander Hermans, Bastian Leibe.
Superpixels: an evaluation of the state-of-the-art.
Computer Vision and Image Understanding, Volume 166, 2018.
[DOI | ArXiv | PDF | BibTeX | Project Page]
David Stutz.
Superpixel segmentation: an evaluation.
German Conference on Pattern Recognition, 2015.
[PDF | BibTeX | Project Page]
Conformal prediction uses a held-out, labeled set of examples to calibrate a classifier so that it yields confidence sets that include the true label with user-specified probability. But what happens if even experts disagree on the ground truth labels? Commonly, this disagreement is resolved by taking the majority-voted label from multiple experts. In difficult and ambiguous tasks, however, the majority-voted label can be misleading and a poor representation of the underlying true posterior distribution. In this paper, we introduce Monte Carlo conformal prediction, which allows conformal calibration to be performed directly against expert opinions or aggregate statistics thereof.
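To make the calibration step concrete, here is a minimal NumPy sketch of split conformal calibration in which, instead of a single majority-voted label, calibration labels are repeatedly sampled from each example's expert vote distribution. The function names, the particular non-conformity score, and the pooling of Monte Carlo scores are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def monte_carlo_conformal_calibrate(softmax_cal, expert_votes, alpha=0.1,
                                    n_samples=10, rng=None):
    """Calibrate a score threshold by sampling labels from expert vote distributions.

    softmax_cal:  (n, K) predicted class probabilities on the calibration set.
    expert_votes: (n, K) per-example counts of expert votes per class.
    alpha:        target miscoverage, i.e. sets should contain the true label
                  with probability >= 1 - alpha.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, num_classes = softmax_cal.shape
    label_probs = expert_votes / expert_votes.sum(axis=1, keepdims=True)

    scores = []
    for _ in range(n_samples):
        # Sample one plausible ground-truth label per example from the expert distribution.
        sampled = np.array([rng.choice(num_classes, p=p) for p in label_probs])
        # Non-conformity score: one minus the probability assigned to the sampled label.
        scores.append(1.0 - softmax_cal[np.arange(n), sampled])
    scores = np.concatenate(scores)

    # Conformal quantile over the pooled Monte Carlo scores.
    m = scores.shape[0]
    level = min(1.0, np.ceil((m + 1) * (1 - alpha)) / m)
    return np.quantile(scores, level, method="higher")

def predict_sets(softmax_test, threshold):
    """Boolean confidence sets: include every class whose score is below the threshold."""
    return (1.0 - softmax_test) <= threshold
```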
In supervised machine learning, we usually assume access to ground truth labels for evaluation. In many applications, however, these ground truth labels are derived from expert opinions. Disagreement among these experts is typically ignored through simple majority voting or averaging. Unfortunately, this can have severe consequences, such as over-estimating performance or misguiding model selection. In the work presented in this article, we tackle this problem by introducing a statistical framework for aggregating expert opinions.
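As a toy illustration of why majority voting can over-estimate performance, the sketch below compares accuracy against majority-voted labels with the expected accuracy when each example's ground truth is treated as a distribution over expert labels. This is a hypothetical example of the effect, not the statistical framework proposed in the paper.

```python
import numpy as np

def majority_vote_accuracy(preds, expert_votes):
    """Accuracy against the single majority-voted label per example."""
    majority = expert_votes.argmax(axis=1)
    return (preds == majority).mean()

def expected_accuracy(preds, expert_votes):
    """Expected accuracy when the ground truth is a distribution over expert labels."""
    label_probs = expert_votes / expert_votes.sum(axis=1, keepdims=True)
    return label_probs[np.arange(len(preds)), preds].mean()

# Toy data: 3 classes; the second example is ambiguous with experts split 3/2/0.
expert_votes = np.array([[5, 0, 0],
                         [3, 2, 0],
                         [0, 1, 4]])
preds = np.array([0, 0, 2])
print(majority_vote_accuracy(preds, expert_votes))  # 1.0: looks perfect
print(expected_accuracy(preds, expert_votes))       # 0.8: ambiguity is not ignored
```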
Achieving accurate, fair and private image classification.
Conformal calibration with uncertain ground truth.
Evaluating AI models with uncertain ground truth.
Report of the 2020 Max Planck PhDNet survey results.
While batch normalization has long been argued to increase adversarial vulnerability, it is still used in state-of-the-art adversarially trained models, likely because it eases training and increases expressiveness. At the same time, recent papers argue that adversarial examples are partly caused by fragile features that arise from learning spurious correlations. In this paper, we study the impact of batch normalization on utilizing these fragile features for robustness by fine-tuning only the batch normalization layers.
Fragile Features, Batch Normalization and Adversarial Training. This is work led by Nils Walter. Quick links: Paper | Poster. Modern deep learning architectures utilize batch normalization (BN) to stabilize training and improve accuracy. It has been shown that the BN layers alone are surprisingly expressive. […]
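A minimal PyTorch sketch of the "fine-tuning only the batch normalization layers" setup described above; the choice of architecture and optimizer is an assumption for illustration, not necessarily the configuration used in the paper.

```python
import torch
import torch.nn as nn
import torchvision

# Illustrative setup: a standard ResNet; the specific architecture is an assumption.
model = torchvision.models.resnet18(num_classes=10)

# Freeze all parameters, then re-enable gradients only for batch-normalization layers.
for param in model.parameters():
    param.requires_grad = False
for module in model.modules():
    if isinstance(module, nn.BatchNorm2d):
        for param in module.parameters():
            param.requires_grad = True

# Only the BN affine parameters (weight and bias) are passed to the optimizer.
bn_params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(bn_params, lr=0.01, momentum=0.9)
```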
Improving corruption and adversarial robustness by enhancing weak sub-networks.
Conformal prediction (CP) takes any classifier and turns it into a set predictor with a guarantee that the true class is included with user-specified probability. This makes it possible to develop classifiers with sufficient guarantees for safe deployment in many domains. However, CP is usually applied as a post-training calibration step. The paper presented in this article introduces a training procedure named conformal training that allows training the classifier and the conformal predictor end-to-end. This can reduce the average confidence set size and makes it possible to optimize arbitrary objectives defined directly on the predicted sets.
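To illustrate the end-to-end idea, here is a hedged PyTorch sketch of a smooth relaxation of conformal prediction during training: set membership is softened with a sigmoid so that a coverage term and a set-size term become differentiable. The temperature, the fixed threshold, and the loss weights are illustrative assumptions; the paper's conformal training procedure differs, for example by computing the threshold via a differentiable quantile on a calibration split of each batch.

```python
import torch
import torch.nn.functional as F

def soft_conformal_loss(logits, labels, threshold, temperature=0.1, size_weight=0.1):
    """Smooth relaxation of conformal prediction for end-to-end training.

    logits:    (batch, K) classifier outputs.
    labels:    (batch,) ground-truth labels.
    threshold: scalar probability threshold for set membership (treated as fixed here;
               in the paper it is derived from a calibration split in a differentiable way).
    """
    probs = F.softmax(logits, dim=1)
    # Soft membership of each class in the predicted set: ~1 if prob >= threshold, else ~0.
    soft_sets = torch.sigmoid((probs - threshold) / temperature)
    # Coverage term: the true label should be inside the set.
    coverage = soft_sets[torch.arange(labels.shape[0]), labels]
    coverage_loss = (1.0 - coverage).mean()
    # Size term: penalize large confidence sets.
    size_loss = soft_sets.sum(dim=1).mean()
    return coverage_loss + size_weight * size_loss
```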