
TAG: UNCERTAINTY ESTIMATION

ARTICLE

ArXiv Pre-Print “Learning Optimal Conformal Classifiers”

Conformal prediction (CP) allows taking any classifier and turning it into a set predictor with a guarantee that the true class is included with user-specified probability. This makes it possible to develop classifiers with sufficient guarantees for safe deployment in many domains. However, CP is usually applied as a post-training calibration step. The paper presented in this article introduces a training procedure called conformal training that allows training classifier and conformal predictor end-to-end. This can reduce the average confidence set size and makes it possible to optimize arbitrary objectives defined directly on the predicted sets.
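
To make the post-training calibration step concrete, here is a minimal sketch of split conformal prediction for classification, the kind of post-hoc step that conformal training makes differentiable. It assumes a held-out calibration set and uses the simple one-minus-true-class-probability conformity score; all names are illustrative:

    import numpy as np

    def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
        """Split conformal prediction for classification (sketch).
        cal_probs: (n, K) softmax probabilities on calibration examples.
        cal_labels: (n,) true labels of the calibration examples.
        test_probs: (m, K) softmax probabilities on test examples.
        alpha: miscoverage level; sets contain the true class w.p. >= 1 - alpha.
        """
        n = cal_probs.shape[0]
        # Nonconformity score: one minus the probability of the true class.
        scores = 1.0 - cal_probs[np.arange(n), cal_labels]
        # Finite-sample corrected quantile of the calibration scores.
        level = min(1.0, np.ceil((n + 1) * (1.0 - alpha)) / n)
        threshold = np.quantile(scores, level, method="higher")
        # Include every class whose nonconformity score is below the threshold.
        return (1.0 - test_probs) <= threshold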

More ...

OCTOBER 2021

PROJECT

End-to-end training of deep neural networks and conformal predictors to reduce confidence set size and optimize application-specific objectives.

More ...

ARTICLE

Qualcomm Innovation Fellowship Talk “Confidence-Calibrated Adversarial Training and Random Bit Error Training”

As part of the Qualcomm Innovation Fellowship 2019, I gave a talk on the research produced throughout the academic year 2019/2020. This talk covers two exciting works on robustness: robustness against various types of adversarial examples using confidence-calibrated adversarial training (CCAT), and robustness against bit errors in the model's quantized weights. The latter can be shown to be important for reducing the energy consumption of accelerators for neural networks. In this article, I want to share the slides corresponding to the talk.

More ...

ARTICLE

ICML Talk “Confidence-Calibrated Adversarial Training”

Confidence-calibrated adversarial training (CCAT) addresses two problems when training on adversarial examples: the lack of robustness against adversarial examples unseen during training, and the reduced (clean) accuracy. In particular, CCAT biases the model towards predicting low confidence on adversarial examples such that adversarial examples can be rejected by confidence thresholding. In this article, I want to share the slides of the corresponding ICML talk.
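
To illustrate the rejection mechanism, here is a minimal sketch of confidence-thresholded prediction, assuming a trained PyTorch classifier; in practice, the threshold would be calibrated on held-out clean examples, and all names are illustrative:

    import torch

    def predict_with_rejection(model, inputs, threshold):
        """Predict labels but reject low-confidence inputs (sketch).
        CCAT trains the model such that adversarial examples receive
        low confidence and thus end up being rejected here."""
        with torch.no_grad():
            probs = torch.softmax(model(inputs), dim=1)
        confidences, predictions = probs.max(dim=1)
        # Mark rejected examples with label -1.
        predictions[confidences < threshold] = -1
        return predictions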

More ...

ARTICLE

ICML Paper “Confidence-Calibrated Adversarial Training”

Our paper on confidence-calibrated adversarial training (CCAT) was accepted at ICML'20. In the revised paper, CCAT tackles the problem of obtaining robustness that generalizes to attacks not seen during training. This is achieved by biasing the network towards low-confidence predictions on adversarial examples and rejecting these low-confidence examples at test time. This article gives a short abstract and includes paper and code.

More ...

ARTICLE

Code Released: Confidence-Calibrated Adversarial Training

The code for my latest paper on confidence-calibrated adversarial training has been released on GitHub. The repository not only includes a PyTorch implementation of confidence-calibrated adversarial training, but also several white- and black-box attacks for generating adversarial examples, as well as the proposed confidence-thresholded robust test error. Furthermore, these implementations are fully tested and allow reproducing the results from the paper. This article gives an overview of the repository and highlights its features and components.
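
As a rough illustration of the proposed metric, the following simplified sketch counts a test example as an error only if its clean or adversarial prediction passes the confidence threshold while being wrong; the repository's actual implementation may differ in details:

    import torch

    def confidence_thresholded_robust_test_error(
            clean_conf, clean_pred, adv_conf, adv_pred, labels, threshold):
        """Simplified sketch: an example counts as an error if either its
        clean or its adversarial prediction exceeds the confidence
        threshold while being wrong; rejected predictions do not count."""
        clean_errors = (clean_conf >= threshold) & (clean_pred != labels)
        adv_errors = (adv_conf >= threshold) & (adv_pred != labels)
        return (clean_errors | adv_errors).float().mean().item()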

More ...

MARCH 2020

PROJECT

Confidence calibration of adversarial training for “generalizable” robustness.

More ...

ARTICLE

Talk on Confidence-Calibrated Adversarial Training at BCAI and Tübingen AI Center

Recently, I had the opportunity to present my work on confidence-calibrated adversarial training at the Bosch Center for Artificial Intelligence and the University of Tübingen, specifically, the newly formed Tübingen AI Center. As part of the talk, I outlined the motivation and strengths of confidence-calibrated adversarial training compared to standard adversarial training: robustness against previously unseen attacks and improved accuracy. I also touched on the difficulties faced during robustness evaluation. This article provides the corresponding slides and gives a short overview of the talk.

More ...

ARTICLE

Updated ArXiv Pre-Print “Confidence-Calibrated Adversarial Training”

Adversarial training yields robust models against a specific threat model. However, robustness does not generalize to larger perturbations or threat models not seen during training. Confidence-calibrated adversarial training tackles this problem by biasing the network towards low-confidence predictions on adversarial examples. By rejecting low-confidence (adversarial) examples, robustness generalizes to various threat models, including L2, L1 and L0, while training only on L∞ adversarial examples. This article gives a short abstract, discusses relevant updates to the previous version, and includes paper and appendix.
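
The biasing towards low confidence can be sketched as follows: during training, the target distribution interpolates between the one-hot label and the uniform distribution depending on the perturbation size. This is a simplified sketch of that idea, with rho as an illustrative decay parameter:

    import torch

    def ccat_target(onehot_labels, perturbation_norm, epsilon, rho=10.0):
        """Convex combination of one-hot and uniform targets (sketch).
        onehot_labels: (B, K) float one-hot labels.
        perturbation_norm: (B,) norms of the adversarial perturbations.
        Small perturbations keep the correct label; perturbations close
        to epsilon push the target towards uniform (low confidence)."""
        num_classes = onehot_labels.shape[1]
        lam = (1.0 - torch.clamp(perturbation_norm / epsilon, max=1.0)) ** rho
        uniform = torch.full_like(onehot_labels, 1.0 / num_classes)
        return lam.unsqueeze(1) * onehot_labels + (1.0 - lam.unsqueeze(1)) * uniform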

More ...

ARTICLE

A Short Introduction to Bayesian Neural Networks

With the rising success of deep neural networks, their reliability in terms of robustness (for example, against various kinds of adversarial examples) and confidence estimates becomes increasingly important. Bayesian neural networks promise to address these issues by directly modeling the uncertainty of the estimated network weights. In this article, I want to give a short introduction to training Bayesian neural networks, covering three recent approaches.
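
As a concrete taste of predictive uncertainty, here is a minimal sketch of Monte Carlo dropout, one popular approximation to Bayesian inference in neural networks (not necessarily one of the three approaches covered in the article); the architecture and hyperparameters are illustrative:

    import torch
    import torch.nn as nn

    class MCDropoutClassifier(nn.Module):
        """Small classifier whose dropout layer stays active at test time,
        so repeated forward passes approximate posterior samples."""
        def __init__(self, in_features=784, hidden=256, num_classes=10, p=0.5):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_features, hidden), nn.ReLU(), nn.Dropout(p),
                nn.Linear(hidden, num_classes),
            )

        def forward(self, x):
            return self.net(x)

    def predictive_distribution(model, x, num_samples=20):
        """Average softmax outputs over stochastic forward passes; the
        standard deviation across samples reflects model uncertainty."""
        model.train()  # keep dropout active at test time
        with torch.no_grad():
            probs = torch.stack([torch.softmax(model(x), dim=1)
                                 for _ in range(num_samples)])
        return probs.mean(dim=0), probs.std(dim=0)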

More ...