2023
Conformal Prediction under Ambiguous Ground Truth, KCL Informatics (Invited Talk). [Slides]

Conformal Prediction under Ambiguous Ground Truth, UC Berkeley (Invited Talk). [Slides]
Introduction to Conformal Prediction, UCL Statistics (Invited Talk). [Slides]

Evaluating and Calibrating AI Models Under Uncertain Ground Truth, University of Pennsylvania PRECISE Seminar (Invited Talk). [Slides]

Conformal Training and Conformal Prediction with Ambiguous Ground Truth, Vanderbilt Machine Learning Seminar Series (Invited Talk). [Slides]

Conformal Training and Conformal Prediction with Ambiguous Ground Truth, StatML — University of Oxford and Imperial College London CDT (Invited Talk). [Slides]

Learning Optimal Conformal Classifiers, Vanderbilt Machine Learning Seminar Series (Invited Talk). [Slides]

2022
Learning Optimal Conformal Classifiers, DELTA Lab, UCL (Invited Talk). [Slides]

Learning Optimal Conformal Classifiers, Dataiku (Invited Talk). [Slides]

Learning Optimal Conformal Classifiers, ICLR. [Slides]

2021
Relating Adversarially Robust Generalization to Flat Minima, MLSec – PraLab, University of Cagliari (Invited Talk). [Recording]

Conformal Training: Learning Optimal Conformal Classifiers, International Seminar on Distribution-Free Statistics (Invited Talk). [Recording]

Adversarial Robustness, Weight Robustness and Flatness, Math Machine Learning seminar MPI MiS + UCLA (Invited Talk). [Recording]

Relating Adversarial Robustness and Flat Minima, ICCV. [Recording]

Random Bit Errors for Energy-Efficient DNN Accelerators, CVPR CV-AML Workshop (Outstanding Paper Talk). [Recording]

Random Bit Errors for Energy-Efficient DNN Accelerators, MLSys. [Recording]

Random and Adversarial Bit Error Robustness of DNNs, TU Dortmund (Invited Talk). [Slides]

Confidence-Calibrated Adversarial Training and Bit Error Robustness for Energy-Efficient DNNs, Lorentz Center Workshop on Robust Artificial Intelligence (Invited Talk). [Recording]

2020
Bit Error Robustness for Energy-Efficient DNN Accelerators, IBM Research Workshop on the Future of Computing Architectures (Invited Talk). [Recording]

Confidence-Calibrated Adversarial Training / Mitigating Random Bit Errors in Quantized Weights, Qian Xuesen Laboratory (China Academy of Space Technology, Invited Talk). [Slides]

Confidence-Calibrated Adversarial Training / Mitigating Random Bit Errors in Quantized Weights, Qualcomm (Invited Talk, Part of Qualcomm Innovation Fellowship). [Slides]

Confidence-Calibrated Adversarial Training, ICML Workshop on Uncertainty and Robustness in Deep Learning (Contributed Talk).

Confidence-Calibrated Adversarial Training, ICML. [Recording]

Confidence-Calibrated Adversarial Training, University of Tübingen (Invited Talk). [Slides]

Confidence-Calibrated Adversarial Training, Bosch Center for AI (Invited Talk). [Slides]

2019
Disentangling Adversarial Robustness and Generalization, ICML Workshop on Uncertainty and Robustness in Deep Learning (Spotlight).

2018
Weakly-Supervised Shape Completion, International Max Planck Research School for Computer Science.

Weakly-Supervised Shape Completion, ZF Friedrichshafen (Invited Talk, Part of MINT Award IT 2018, German).

2017
Benchmarking Superpixel Algorithms / Weakly-Supervised Shape Completion, Max Planck Institute for Informatics. [Slides]

Weakly-Supervised Shape Completion, Max Planck Institute for Intelligent Systems (Master Thesis Talk). [Slides]

Weakly-Supervised Shape Completion, RWTH Aachen University (Master Thesis Talk). [Slides]

ARTICLE

Vanderbilt Machine Learning Seminar Talk “Conformal Prediction under Ambiguous Ground Truth”

Last week, I presented our work on Monte Carlo conformal prediction, that is, conformal prediction with ambiguous and uncertain ground truth, at the Vanderbilt Machine Learning Seminar Series. In this work, we show how to adapt standard conformal prediction when no unique ground truth labels are available because experts disagreed during annotation. In this article, I want to share the slides of my talk.
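The core idea can be sketched in a few lines: instead of computing one conformity score per calibration example from a single "true" label, sample several plausible labels from each example's expert label distribution and pool the resulting scores before taking the conformal quantile. The sketch below is my own minimal illustration under these assumptions (function names, the 1 − p score, and the split-conformal setup are mine, not necessarily the paper's exact formulation):

```python
import numpy as np

def mc_conformal_quantile(probs, label_dists, alpha=0.1, m=10, seed=0):
    """Monte Carlo calibration under ambiguous ground truth (sketch).

    probs:       (n, K) model probabilities on calibration examples
    label_dists: (n, K) per-example label distributions aggregated
                 from expert annotations (rows sum to 1)
    """
    rng = np.random.default_rng(seed)
    scores = []
    for p, d in zip(probs, label_dists):
        # Draw m plausible labels per example from the expert
        # distribution and record the score 1 - p[y] for each.
        ys = rng.choice(len(d), size=m, p=d)
        scores.extend(1.0 - p[ys])
    scores = np.asarray(scores)
    n = len(scores)
    # Standard conformal quantile with finite-sample correction.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, level, method="higher")

def predict_set(p, q):
    """Prediction set: all classes whose score is below the quantile."""
    return np.where(1.0 - p <= q)[0]
```

With one-hot expert distributions this reduces to ordinary split conformal prediction; the sampling only changes things when annotators genuinely disagree.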


ARTICLE

PRECISE Seminar Talk “Evaluating and Calibrating AI Models with Uncertain Ground Truth”

I had the pleasure of presenting our work on evaluating and calibrating AI models with uncertain ground truth at the seminar series of the PRECISE center at the University of Pennsylvania. Besides talking about our recent papers on evaluating AI models in health with uncertain ground truth and on conformal prediction with uncertain ground truth, I also got to learn more about the research at PRECISE through post-doc and student presentations. In this article, I want to share the corresponding slides.


NOVEMBER 2022

PROJECT

Tutorials for (deep convolutional) neural networks.


NOVEMBER 2022

PROJECT

PhD thesis on uncertainty estimation and (adversarial) robustness in deep learning.


ARTICLE

PhD Defense Slides and Lessons Learned

In July this year, I finally defended my PhD, which mainly focused on (adversarial) robustness and uncertainty estimation in deep learning. In my case, the defense consisted of a public 30-minute talk about my work, followed by questions from the thesis committee and the audience. In this article, I want to share the slides and some lessons learned while preparing for my defense.


ARTICLE

Machine Learning Security Seminar Talk “Relating Adversarially Robust Generalization to Flat Minima”

This week, I was honored to speak at the Machine Learning Security Seminar organized by the Pattern Recognition and Applications Lab at the University of Cagliari. I presented my work relating adversarial robustness to flatness in the robust loss landscape, also touching on its relationship to weight robustness. In this article, I want to share the recording and slides of this talk.


ARTICLE

International Seminar on Distribution-Free Statistics Talk “Conformal Training: Learning Optimal Conformal Classifiers”

Last week, I had the pleasure of giving a talk at the recently started International Seminar on Distribution-Free Statistics organized by Anastasios Angelopoulos. Specifically, I talked about conformal training, a procedure for training a classifier and a conformal predictor end-to-end. This makes it possible to optimize arbitrary losses defined directly on the confidence sets obtained through conformal prediction, and it can be shown to reduce inefficiency and improve other metrics for any conformal predictor used at test time. In this article, I want to share the corresponding recording.
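To make the end-to-end idea concrete, here is a minimal single-batch sketch in PyTorch: split the batch into calibration and prediction halves, compute a (differentiable) quantile of the calibration scores, relax set membership with a sigmoid, and penalize soft set size while encouraging coverage. All names, the 1 − p score, and the loss weights are my own illustrative choices, not the paper's exact objective:

```python
import torch

def conformal_training_loss(logits, labels, alpha=0.1, temperature=0.1):
    """Single-batch sketch of a conformal-training-style loss."""
    probs = torch.softmax(logits, dim=1)
    scores = 1.0 - probs  # conformity score per class
    n = logits.shape[0] // 2
    # Calibration half: score of the true label per example.
    cal_scores = scores[:n].gather(1, labels[:n, None]).squeeze(1)
    # Differentiable quantile of the calibration scores.
    q = torch.quantile(cal_scores, 1 - alpha)
    # Prediction half: soft set membership via a sigmoid relaxation.
    membership = torch.sigmoid((q - scores[n:]) / temperature)
    true_member = membership.gather(1, labels[n:, None]).squeeze(1)
    coverage_loss = (1.0 - true_member).mean()  # true label in the set
    size_loss = membership.sum(dim=1).mean()    # keep sets small
    return coverage_loss + 0.05 * size_loss
```

Because every step is differentiable, gradients flow through the quantile and the soft sets back into the classifier, which is what allows optimizing set-level losses end-to-end.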


ARTICLE

Math Machine Learning Seminar of MPI MiS and UCLA Talk “Relating Adversarial Robustness and Weight Robustness Through Flatness”

In October, I had the pleasure of presenting my recent work on adversarial robustness and flat minima at the Math Machine Learning Seminar of MPI MiS and UCLA, organized by Guido Montúfar. The talk covers several aspects of my PhD research on adversarial robustness and robustness in terms of the model weights. This article shares the abstract and recording of the talk.


ARTICLE

Recorded ICCV’21 Talk “Relating Adversarially Robust Generalization to Flat Minima”

In October this year, my work on relating adversarially robust generalization to flat minima in the (robust) loss surface with respect to weight perturbations was presented at ICCV'21. As an oral presentation, I recorded a 12-minute talk highlighting how (robust) flatness can avoid robust overfitting in adversarial training and improve robustness against adversarial examples. In this article, I want to share the recording.
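The notion of flatness in weight space can be probed very simply: perturb the weights randomly and measure how much the loss increases on average. The sketch below is a generic average-case estimate of this kind, assuming a PyTorch model; it is my own illustration, not the precise flatness measure used in the paper (which considers the robust loss under adversarial inputs as well):

```python
import copy
import torch

def average_flatness(model, loss_fn, batch, xi=0.5, n_samples=5, seed=0):
    """Estimate average-case flatness: mean loss increase under random
    relative weight perturbations of magnitude xi (illustrative sketch)."""
    torch.manual_seed(seed)
    x, y = batch
    with torch.no_grad():
        base = loss_fn(model(x), y).item()
        increases = []
        for _ in range(n_samples):
            noisy = copy.deepcopy(model)
            for p in noisy.parameters():
                # Relative noise: scale the perturbation by the
                # parameter's norm so layers of different scale are
                # perturbed comparably.
                p.add_(xi * p.norm() * torch.randn_like(p) / p.numel() ** 0.5)
            increases.append(loss_fn(noisy(x), y).item() - base)
    return sum(increases) / n_samples
```

A flat minimum yields small loss increases under such perturbations; the talk's argument is that the analogous quantity for the robust loss correlates with robust generalization.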
