ARTICLE

PhD Thesis on Robustness and Uncertainty in Deep Learning

In March this year, I finally submitted my PhD thesis and successfully defended it in July. Now, more than six months later, my thesis is finally available in the university’s library. During my PhD, I worked on various topics surrounding robustness and uncertainty in deep learning, including adversarial robustness, robustness to bit errors, out-of-distribution detection, and conformal prediction. In this article, I want to share my thesis and give an overview of its contents.

More ...

ARTICLE

PhD Defense Slides and Lessons Learned

In July this year, I finally defended my PhD, which mainly focused on (adversarial) robustness and uncertainty estimation in deep learning. In my case, the defense consisted of a (public) 30-minute talk about my work, followed by questions from the thesis committee and audience. In this article, I want to share the slides and some lessons learned in preparing for my defense.

More ...

ARTICLE

How I Prepared for DeepMind and Google AI Research Internship Interviews in 2019

In 2019, I interviewed for research internships at DeepMind and Google AI. I have repeatedly been asked about my preparation for and experience with these interviews. As DeepMind has recently opened its internship applications, I thought it could be valuable to summarize my experience and recommendations in this article.

More ...

ARTICLE

Code Released: Conformal Training

The code for our ICLR’22 paper on learning optimal conformal classifiers is now available on GitHub. The repository not only includes our implementation of conformal training but also relevant baselines such as coverage training, as well as several conformal predictors for evaluation. Furthermore, it allows reproducing the majority of the experiments from the paper.

More ...

ARTICLE

ICML 2022 Art of Robustness Paper “On Fragile Features and Batch Normalization in Adversarial Training”

While batch normalization has long been argued to increase adversarial vulnerability, it is still used in state-of-the-art adversarially trained models, likely because it eases training and increases expressiveness. At the same time, recent papers argue that adversarial examples are partly due to fragile features that result from learning spurious correlations. In this paper, we study the impact of batch normalization on utilizing these fragile features for robustness by fine-tuning only the batch normalization layers; a minimal sketch of this fine-tuning setup is shown below.

More ...
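
To make this concrete, here is a minimal PyTorch sketch of fine-tuning only the batch normalization layers: all parameters are frozen except those belonging to batch normalization. The model choice and hyper-parameters are illustrative placeholders, not those used in the paper.

import torch
import torchvision

# Illustrative pretrained model; the paper's architectures and
# adversarial training setup differ.
model = torchvision.models.resnet18(pretrained=True)

# Freeze all parameters first.
for param in model.parameters():
    param.requires_grad = False

# Unfreeze only the parameters of batch normalization layers.
bn_types = (torch.nn.BatchNorm1d, torch.nn.BatchNorm2d, torch.nn.BatchNorm3d)
bn_parameters = []
for module in model.modules():
    if isinstance(module, bn_types):
        for param in module.parameters():
            param.requires_grad = True
            bn_parameters.append(param)

# The optimizer updates only the batch normalization parameters; the
# running statistics are still updated as usual in train() mode.
optimizer = torch.optim.SGD(bn_parameters, lr=0.01, momentum=0.9)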

ARTICLE

Machine Learning Security Seminar Talk “Relating Adversarially Robust Generalization to Flat Minima”

This week, I was honored to speak at the Machine Learning Security Seminar organized by the Pattern Recognition and Applications Lab at the University of Cagliari. I presented my work relating adversarial robustness to flatness in the robust loss landscape, also touching on the relationship to weight robustness. In this article, I want to share the recording and slides of this talk.

More ...

ARTICLE

International Seminar on Distribution-Free Statistics Talk “Conformal Training: Learning Optimal Conformal Classifiers”

Last week, I had the pleasure of giving a talk at the recently started Seminar on Distribution-Free Statistics organized by Anastasios Angelopoulos. Specifically, I talked about conformal training, a procedure that allows training a classifier and conformal predictor end-to-end. This makes it possible to optimize arbitrary losses defined directly on the confidence sets obtained through conformal prediction and can be shown to reduce inefficiency and improve other metrics for any conformal predictor used at test time. In this article, I want to share the corresponding recording; a simplified sketch of such a training loss follows below.

More ...
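
To illustrate the idea, the following sketch shows a simplified conformal-training-style loss in PyTorch: each batch is split into a calibration half, on which a threshold is calibrated, and a prediction half, on which the soft size of the resulting confidence sets is penalized. The details are simplified assumptions on my part; in particular, the paper uses smooth sorting for the differentiable quantile and combines such losses with a classification loss.

import torch

def conformal_training_loss(logits, labels, alpha=0.05, temperature=0.1):
    # Split the batch into calibration and prediction halves.
    n = logits.shape[0] // 2
    probs = torch.softmax(logits, dim=1)

    # Calibration half: the conformity score of the true class is simply
    # its softmax probability; tau is the alpha-quantile of these scores.
    cal_scores = probs[:n].gather(1, labels[:n].unsqueeze(1)).squeeze(1)
    tau = torch.quantile(cal_scores, alpha)

    # Prediction half: soft set membership as a sigmoid relaxation of the
    # hard test "probability >= tau", keeping everything differentiable.
    membership = torch.sigmoid((probs[n:] - tau) / temperature)

    # Size loss: penalize confidence sets with (soft) size above one.
    return torch.clamp(membership.sum(dim=1) - 1.0, min=0.0).mean()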

ARTICLE

Code Released: Adversarial Robust Generalization and Flatness

The code for my ICCV’21 paper relating adversarial robustness to flatness in the (robust) loss landscape is now available on GitHub. The repository includes implementations of various adversarial attacks, adversarial training variants, and “attacks” on model weights in order to measure robust flatness; a rough sketch of the latter follows below.

More ...
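
To give an impression of what such weight “attacks” look like, here is a rough sketch of measuring average-case flatness via random weight perturbations, scaled relative to each parameter’s norm. This is a simplification: the paper measures flatness of the robust loss, i.e., on adversarial examples, and additionally considers adversarial weight perturbations, both of which are omitted here.

import copy
import torch

def average_flatness(model, loss_fn, inputs, targets, xi=0.5, samples=10):
    with torch.no_grad():
        reference = loss_fn(model(inputs), targets).item()
        increase = 0.0
        for _ in range(samples):
            perturbed = copy.deepcopy(model)
            for param in perturbed.parameters():
                noise = torch.randn_like(param)
                # Scale the noise relative to the parameter's norm so that
                # perturbations are comparable across layers.
                noise = xi * param.norm() * noise / (noise.norm() + 1e-12)
                param.add_(noise)
            # Average increase in loss under random weight perturbations;
            # larger values indicate a sharper (less flat) minimum.
            increase += loss_fn(perturbed(inputs), targets).item() - reference
    return increase / samples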

ARTICLE

Math Machine Learning Seminar of MPI MiS and UCLA Talk “Relating Adversarial Robustness and Weight Robustness Through Flatness”

In October, I had the pleasure of presenting my recent work on adversarial robustness and flat minima at the math machine learning seminar of MPI MiS and UCLA organized by Guido Montúfar. The talk covers several aspects of my PhD research on adversarial robustness and robustness in terms of the model weights. This article shares the abstract and recording of the talk.

More ...

ARTICLE

ArXiv Pre-Print “Learning Optimal Conformal Classifiers”

Conformal prediction (CP) makes it possible to take any classifier and turn it into a set predictor with a guarantee that the true class is included with user-specified probability. This enables developing classifiers with sufficient guarantees for safe deployment in many domains. However, CP is usually applied as a post-training calibration step. The paper presented in this article introduces a training procedure named conformal training that allows training the classifier and conformal predictor end-to-end. This can reduce the average confidence set size and makes it possible to optimize arbitrary objectives defined directly on the predicted sets. A minimal sketch of standard split conformal prediction is shown below.

More ...
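
For readers unfamiliar with conformal prediction, the following sketch shows standard split conformal prediction with one minus the true-class probability as conformity score, which yields the coverage guarantee mentioned above. This is a generic textbook construction, not code from the paper, which builds on this and related conformal predictors.

import numpy as np

def calibrate(cal_probs, cal_labels, alpha=0.1):
    # Nonconformity score: one minus the softmax probability of the
    # true class on held-out calibration examples.
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Conservative empirical quantile; guarantees that the true class is
    # included in the predicted set with probability at least 1 - alpha.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, level, method="higher")

def predict_sets(test_probs, q_hat):
    # Confidence set: all classes whose nonconformity score is at most q_hat.
    return [np.nonzero(1.0 - probs <= q_hat)[0] for probs in test_probs]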