Conformal calibration with uncertain ground truth.
Evaluating AI models with uncertain ground truth.
Taking adversarial training from this previous article as a baseline, this article introduces a new, confidence-calibrated variant of adversarial training that addresses two significant flaws: first, when trained with L∞ adversarial examples, adversarial training is not robust against L2 ones; second, it incurs a significant increase in (clean) test error. Confidence-calibrated adversarial training addresses both problems by encouraging low confidence on adversarial examples so that they can subsequently be rejected.
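As a rough sketch of the idea, and not the exact loss or schedule from the paper: adversarial examples are trained toward a low-confidence (here simply uniform) target distribution, and inputs whose confidence falls below a threshold are rejected at test time. The weighting lam, the uniform target, and the rejection threshold below are illustrative placeholders.

```python
import torch
import torch.nn.functional as F

def calibrated_adversarial_loss(logits_clean, logits_adv, targets, num_classes, lam=0.5):
    """Cross-entropy on clean examples plus a term pushing adversarial
    predictions toward a low-confidence (here: uniform) target distribution.
    The uniform target and the weight lam are illustrative choices."""
    clean_loss = F.cross_entropy(logits_clean, targets)
    uniform = torch.full((logits_adv.size(0), num_classes), 1.0 / num_classes,
                         device=logits_adv.device)
    adv_log_probs = F.log_softmax(logits_adv, dim=1)
    adv_loss = F.kl_div(adv_log_probs, uniform, reduction='batchmean')
    return clean_loss + lam * adv_loss

def predict_with_rejection(logits, threshold=0.9):
    """Reject inputs whose maximum softmax confidence falls below the threshold."""
    probs = F.softmax(logits, dim=1)
    confidence, predictions = probs.max(dim=1)
    predictions[confidence < threshold] = -1  # -1 marks rejected inputs
    return predictions
```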
In March this year I finally submitted my PhD thesis and successfully defended it in July. Now, more than six months later, my thesis is finally available in the university’s library. During my PhD, I worked on various topics surrounding robustness and uncertainty in deep learning, including adversarial robustness, robustness to bit errors, out-of-distribution detection, and conformal prediction. In this article, I want to share my thesis and give an overview of its contents.
PhD thesis on uncertainty estimation and (adversarial) robustness in deep learning.
In July this year I finally defended my PhD thesis, which mainly focused on (adversarial) robustness and uncertainty estimation in deep learning. In my case, the defense consisted of a (public) 30-minute talk about my work, followed by questions from the thesis committee and the audience. In this article, I want to share the slides and some lessons learned in preparing for my defense.
The code for our ICLR’22 paper on learning optimal conformal classifiers is now available on GitHub. The repository not only includes our implementation of conformal training but also relevant baselines such as coverage training, as well as several conformal predictors for evaluation. Furthermore, it makes it possible to reproduce the majority of experiments from the paper.
Last week, I had the pleasure of giving a talk at the recently started Seminar on Distribution-Free Statistics organized by Anastasios Angelopoulos. Specifically, I talked about conformal training, a procedure for training a classifier and conformal predictor end-to-end. This makes it possible to optimize arbitrary losses defined directly on the confidence sets obtained through conformal prediction and can be shown to reduce inefficiency (average confidence set size) and improve other metrics for any conformal predictor used at test time. In this article, I want to share the corresponding recording.
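To give a flavor of what "training through" conformal prediction can look like, here is a heavily simplified sketch, not the paper's actual (JAX) implementation: each mini-batch is split into a calibration half and a prediction half, a threshold is calibrated on the first half, and the soft size of the resulting confidence sets on the second half is penalized. The hard quantile, the sigmoid relaxation, and the weights are my own simplifications.

```python
import torch
import torch.nn.functional as F

def conformal_training_loss(probs_cal, labels_cal, probs_pred, labels_pred,
                            alpha=0.1, temperature=0.1, size_weight=0.05):
    """Simplified sketch: calibrate a threshold on one half of the batch,
    then penalize the soft size of the confidence sets on the other half
    while encouraging the true class to remain inside the set."""
    # Conformity score: softmax probability of the true class.
    scores_cal = probs_cal.gather(1, labels_cal.unsqueeze(1)).squeeze(1)
    # Threshold such that roughly a (1 - alpha) fraction of true classes is covered.
    tau = torch.quantile(scores_cal, alpha)
    # Soft set membership: sigmoid((p_k - tau) / T) instead of a hard indicator.
    soft_sets = torch.sigmoid((probs_pred - tau) / temperature)
    size_loss = soft_sets.sum(dim=1).mean()
    # Coverage term: push the soft membership of the true class toward one.
    true_membership = soft_sets.gather(1, labels_pred.unsqueeze(1)).squeeze(1)
    coverage_loss = F.binary_cross_entropy(
        true_membership, torch.ones_like(true_membership))
    return coverage_loss + size_weight * size_loss
```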
Conformal prediction (CP) makes it possible to take any classifier and turn it into a set predictor with a guarantee that the true class is included with user-specified probability. This allows developing classifiers with sufficient guarantees for safe deployment in many domains. However, CP is usually applied as a post-training calibration step. The paper presented in this article introduces a training procedure named conformal training that allows training classifier and conformal predictor end-to-end. This can reduce the average confidence set size and makes it possible to optimize arbitrary objectives defined directly on the predicted sets.
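For readers unfamiliar with the post-training calibration step mentioned above, here is a minimal sketch of a standard split conformal predictor for classification, assuming softmax probabilities and a simple threshold-based conformity score (one common choice among many):

```python
import numpy as np

def conformal_prediction_sets(probs_cal, labels_cal, probs_test, alpha=0.1):
    """Minimal split conformal prediction: calibrate a threshold on held-out
    data so that prediction sets contain the true class with probability
    at least 1 - alpha (marginally over calibration and test examples)."""
    n = len(labels_cal)
    # Nonconformity score: one minus the softmax probability of the true class.
    scores = 1.0 - probs_cal[np.arange(n), labels_cal]
    # Conservative empirical quantile that yields the coverage guarantee.
    q_level = np.ceil((n + 1) * (1.0 - alpha)) / n
    q_hat = np.quantile(scores, min(q_level, 1.0), method='higher')
    # Prediction set: all classes whose nonconformity score is below the threshold.
    return [np.where(1.0 - p <= q_hat)[0] for p in probs_test]
```

The guarantee here is marginal, averaged over calibration and test samples, which is exactly what makes CP attractive as a model-agnostic wrapper around any trained classifier.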
End-to-end training of deep neural networks and conformal predictors to reduce confidence set size and optimize application-specific objectives.