DAVID STUTZ

TALKS

2022
Learning Optimal Conformal Classifiers, DELTA Lab, UCL (Invited Talk).

Learning Optimal Conformal Classifiers, Dataiku (Invited Talk).

2021
Relating Adversarially Robust Generalization to Flat Minima, MLSec – PraLab, University of Cagliari (Invited Talk). [Recording]

Conformal Training: Learning Optimal Conformal Classifiers, International Seminar on Distribution-Free Statistics (Invited Talk). [Recording]

Adversarial Robustness, Weight Robustness and Flatness, Math Machine Learning seminar MPI MiS + UCLA (Invited Talk). [Recording]

Relating Adversarial Robustness and Flat Minima, ICCV. [Recording]

Random Bit Errors for Energy-Efficient DNN Accelerators, CVPR CV-AML Workshop (Outstanding Paper Talk). [Recording]

Random Bit Errors for Energy-Efficient DNN Accelerators, MLSys. [Recording]

Random and Adversarial Bit Error Robustness of DNNs, TU Dortmund (Invited Talk). [Slides]

Confidence-Calibrated Adversarial Training and Bit Error Robustness for Energy-Efficient DNNs, Lorentz Center Workshop on Robust Artificial Intelligence (Invited Talk). [Recording]

2020
Bit Error Robustness for Energy-Efficient DNN Accelerators, IBM Research Workshop on the Future of Computing Architectures (Invited Talk). [Recording]

Confidence-Calibrated Adversarial Training / Mitigating Random Bit Errors in Quantized Weights, Qian Xuesen Laboratory (China Academy of Space Technology, Invited Talk).

Confidence-Calibrated Adversarial Training / Mitigating Random Bit Errors in Quantized Weights, Qualcomm (Invited Talk, Part of Qualcomm Innovation Fellowship). [Slides]

Confidence-Calibrated Adversarial Training, ICML Workshop on Uncertainty and Robustness in Deep Learning (Contributed Talk).

Confidence-Calibrated Adversarial Training, ICML. [Recording]

Confidence-Calibrated Adversarial Training, University of Tübingen (Invited Talk). [Slides]

Confidence-Calibrated Adversarial Training, Bosch Center for AI (Invited Talk). [Slides]

2019
Disentangling Adversarial Robustness and Generalization, ICML Workshop on Uncertainty and Robustness in Deep Learning (Spotlight).

2018
Weakly-Supervised Shape Completion, International Max Planck Research School for Computer Science.

Weakly-Supervised Shape Completion, ZF Friedrichshafen (Invited Talk, Part of MINT Award IT 2018, German).

2017
Benchmarking Superpixel Algorithms / Weakly-Supervised Shape-Completion, Max Planck Institute for Informatics. [Slides]

Weakly-Supervised Shape Completion, Max Planck Institute for Intelligent Systems (Master Thesis Talk). [Slides]

Weakly-Supervised Shape Completion, RWTH Aachen University (Master Thesis Talk). [Slides]

ARTICLE

Machine Learning Security Seminar Talk “Relating Adversarially Robust Generalization to Flat Minima”

This week I was honored to speak at the Machine Learning Security Seminar organized by the Pattern Recognition and Applications Lab at the University of Cagliari. I presented my work on relating adversarial robustness to flatness in the robust loss landscape, also touching on the relationship to weight robustness. In this article, I want to share the recording and slides of this talk.


ARTICLE

International Seminar on Distribution-Free Statistics Talk “Conformal Training: Learning Optimal Conformal Classifiers”

Last week, I had the pleasure of giving a talk at the recently started International Seminar on Distribution-Free Statistics organized by Anastasios Angelopoulos. Specifically, I talked about conformal training, a procedure for training a classifier and a conformal predictor end-to-end. This makes it possible to optimize arbitrary losses defined directly on the confidence sets obtained through conformal prediction, and it can be shown to reduce inefficiency and improve other metrics for any conformal predictor used at test time. In this article, I want to share the corresponding recording.
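The confidence sets mentioned above come from conformal prediction. As a rough illustration, here is a minimal sketch of standard split conformal prediction with the 1 − p(y|x) conformity score; the function and variable names are my own for illustration, not the paper's code:

```python
import numpy as np

def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction with the 1 - p(y|x) conformity score.

    Returns confidence sets covering the true label with probability
    at least 1 - alpha (marginally over calibration and test data).
    """
    n = len(cal_labels)
    # Conformity scores on the calibration set: one minus the
    # softmax probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample corrected quantile of the calibration scores.
    level = np.ceil((n + 1) * (1 - alpha)) / n
    threshold = np.quantile(scores, min(level, 1.0), method="higher")
    # A class enters the confidence set if its score is below the threshold.
    return [np.flatnonzero(1.0 - p <= threshold) for p in test_probs]
```

Roughly speaking, conformal training relaxes this hard thresholding into a smooth one during training, so that losses defined on the resulting sets can be backpropagated through both the conformal predictor and the classifier.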


ARTICLE

Math Machine Learning Seminar of MPI MiS and UCLA Talk “Relating Adversarial Robustness and Weight Robustness Through Flatness”

In October, I had the pleasure of presenting my recent work on adversarial robustness and flat minima at the Math Machine Learning Seminar of MPI MiS and UCLA organized by Guido Montúfar. The talk covers several aspects of my PhD research on adversarial robustness and robustness in terms of the model weights. This article shares the abstract and recording of the talk.


ARTICLE

Recorded ICCV’21 Talk “Relating Adversarially Robust Generalization to Flat Minima”

In October this year, my work on relating adversarially robust generalization to flat minima in the (robust) loss surface with respect to weight perturbations was presented at ICCV'21. As the paper was accepted for an oral presentation, I recorded a 12-minute talk highlighting the main insights into how (robust) flatness can avoid robust overfitting of adversarial training and improve robustness against adversarial examples. In this article, I want to share the recording.


ARTICLE

Qualcomm Innovation Fellowship Talk “Confidence-Calibrated Adversarial Training and Random Bit Error Training”

As part of the Qualcomm Innovation Fellowship 2019, I gave a talk on the research produced throughout the academic year 2019/2020. This talk covers two exciting works on robustness: robustness against various types of adversarial examples using confidence-calibrated adversarial training (CCAT), and robustness against bit errors in the model's quantized weights. The latter is important for reducing the energy consumption of accelerators for neural networks. In this article, I want to share the slides corresponding to the talk.
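Because CCAT trains the model to have low confidence on adversarial inputs, it is naturally paired with a confidence-based rejection rule at test time. The following minimal sketch shows the idea; the threshold value and names are illustrative assumptions, not the paper's code:

```python
import numpy as np

def predict_with_rejection(probs, threshold=0.7):
    """Argmax prediction with confidence-thresholded rejection.

    Inputs whose maximum softmax confidence falls below the threshold
    are rejected, marked with the label -1.
    """
    predictions = probs.argmax(axis=1)
    confidences = probs.max(axis=1)
    predictions[confidences < threshold] = -1  # -1 marks a rejected input
    return predictions
```

The threshold itself would be chosen on held-out clean examples, e.g., so that only a small fraction of correctly classified clean inputs is rejected.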


ARTICLE

Recorded CVPR’21 CV-AML Workshop Outstanding Paper Talk “Bit Error Robustness for Energy-Efficient DNN Accelerators”

In June this year, my work on bit error robustness of deep neural networks (DNNs) was recognized as outstanding paper at the CVPR’21 Workshop on Adversarial Machine Learning in Real-World Computer Vision Systems and Online Challenges (AML-CV). Thus, as part of the workshop, I prepared a 15 minute talk highlighting how robustness against bit errors in DNN weights can improve the energy-efficiency of DNN accelerators. In this article, I want to share the recording.


ARTICLE

Recorded MLSys’21 Talk “Bit Error Robustness for Energy-Efficient DNN Accelerators”

In this MLSys’21 paper, we consider the robustness of deep neural networks (DNNs) against bit errors in their quantized weights. This is relevant in the context of DNN accelerators, i.e., specialized hardware for DNN inference: in order to reduce energy consumption, the accelerator’s memory may be operated at very low voltages. However, this induces exponentially increasing rates of bit errors that directly affect the DNN weights, reducing accuracy significantly. We propose a robust fixed-point quantization scheme, weight clipping as regularization during training, and random bit error training to improve bit error robustness. This article shares my talk recorded for MLSys’21.
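To make the failure mode concrete, the following sketch injects independent random bit errors into 8-bit quantized weights. The simple symmetric linear quantizer and all names are illustrative assumptions, not the robust fixed-point scheme proposed in the paper:

```python
import numpy as np

def quantize(w, bits=8):
    """Symmetric linear quantization of weights to signed integers."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    q = np.clip(np.round(w / scale), -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return q.astype(np.int64), scale

def inject_bit_errors(q, p, bits=8, rng=None):
    """Flip each bit of the two's-complement representation with probability p."""
    rng = np.random.default_rng() if rng is None else rng
    # Reinterpret signed integers as raw unsigned bit patterns.
    u = (q & ((1 << bits) - 1)).astype(np.uint64)
    # Draw an independent flip mask for every bit position.
    flips = np.zeros_like(u)
    for b in range(bits):
        flips |= (rng.random(u.shape) < p).astype(np.uint64) << np.uint64(b)
    u ^= flips
    # Reinterpret the flipped bit patterns as signed two's complement again.
    out = u.astype(np.int64)
    out[out >= 1 << (bits - 1)] -= 1 << bits
    return out
```

With p = 0 the weights are unchanged and with p = 1 every bit is flipped; at realistic low-voltage error rates only a small random fraction of bits flips, which is the kind of perturbation random bit error training simulates during training.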


ARTICLE

Talk at TU Dortmund “Random and Adversarial Bit Error Robustness of DNNs”

In April, I was invited to talk about my work on random and adversarial bit error robustness of (quantized) deep neural networks in Katharina Morik’s group at TU Dortmund. The talk is motivated by DNN accelerators, specialized chips for DNN inference. In order to reduce energy consumption, DNNs are required to be robust to random bit errors occurring in the quantized weights. Moreover, RowHammer-like attacks require robustness against adversarial bit errors as well. While a recording is not available, this article shares the slides used for the presentation.


ARTICLE

Recorded RobustAI Workshop Talk “Confidence-Calibrated Adversarial Training and Bit Error Robustness of DNNs”

In January, I had the opportunity to interact with many other robustness researchers from academia and industry at the Robust Artificial Intelligence Workshop. As part of the workshop, organized by Airbus AI Research and TNO (Netherlands applied research organization), I also prepared a presentation talking about two of my PhD projects: confidence-calibrated adversarial training (CCAT) and bit error robustness of neural networks to enable low-energy neural network accelerators. In this article, I want to share the presentation; all other talks from the workshop can be found here.


ARTICLE

Recorded FOCA’20 Talk “Bit Error Robustness for Energy-Efficient DNN Accelerators”

In October this year, I was invited to talk at IBM’s FOCA workshop about my latest research on bit error robustness of (quantized) DNN weights. Here, the goal is to develop DNN accelerators capable of operating at low voltage. However, lowering the voltage induces bit errors in the accelerators’ memory. While such bit errors can be avoided through hardware mechanisms, these approaches are usually costly in terms of energy and chip area. Thus, training DNNs to be robust to such bit errors would enable low-voltage operation, reducing energy consumption without the need for hardware techniques. In this 5-minute talk, I give a short overview.
