
TAG: RECORDING

ARTICLE

Machine Learning Security Seminar Talk “Relating Adversarially Robust Generalization to Flat Minima”

This week, I was honored to speak at the Machine Learning Security Seminar organized by the Pattern Recognition and Applications Lab at the University of Cagliari. I presented my work on relating adversarial robustness to flatness in the robust loss landscape, also touching on its relationship to weight robustness. In this article, I want to share the recording and slides of this talk.

More ...

ARTICLE

International Seminar on Distribution-Free Statistics Talk “Conformal Training: Learning Optimal Conformal Classifiers”

Last week, I had the pleasure of giving a talk at the recently started Seminar on Distribution-Free Statistics organized by Anastasios Angelopoulos. Specifically, I talked about conformal training, a procedure that allows training a classifier and a conformal predictor end-to-end. This makes it possible to optimize arbitrary losses defined directly on the confidence sets obtained through conformal prediction and can be shown to reduce inefficiency and improve other metrics for any conformal predictor used at test time. In this article, I want to share the corresponding recording.
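As a rough illustration of the idea, the following sketch (my own simplification, not the exact procedure from the paper) shows how confidence sets can be made differentiable by replacing the hard thresholding of a THR-style conformal predictor with a sigmoid, so that a size-based loss can be backpropagated through calibration and prediction. The function names, the split of the mini-batch, and the use of torch.quantile in place of the paper's smooth sorting are assumptions on my part.

    import torch

    def smooth_prediction_set(probs, tau, temperature=0.1):
        # Soft membership of each class in the confidence set:
        # sigmoid((p_k - tau) / T) approximates the hard indicator p_k >= tau.
        return torch.sigmoid((probs - tau) / temperature)

    def conformal_training_loss(logits, labels, alpha=0.1, temperature=0.1):
        probs = torch.softmax(logits, dim=1)
        # Split the mini-batch: one half calibrates the threshold, the other is predicted on.
        n = logits.shape[0] // 2
        cal_probs, test_probs = probs[:n], probs[n:]
        cal_labels, test_labels = labels[:n], labels[n:]

        # Calibration for a THR-style predictor: tau is (roughly) the alpha-quantile of the
        # true-class probabilities, ignoring finite-sample corrections. The paper uses a
        # smooth sorting operator here; torch.quantile is a simplification.
        cal_scores = cal_probs[torch.arange(n), cal_labels]
        tau = torch.quantile(cal_scores, alpha)

        # Soft confidence sets on the second half of the batch.
        sets = smooth_prediction_set(test_probs, tau, temperature)

        # Size loss: penalize confidence sets containing more than one class, ...
        size_loss = torch.clamp(sets.sum(dim=1) - 1.0, min=0).mean()
        # ... while a coverage term encourages the true label to stay in the set.
        coverage_loss = (1.0 - sets[torch.arange(sets.shape[0]), test_labels]).mean()
        return coverage_loss + size_loss

In practice, such a loss would be combined with a standard classification loss; the smooth thresholding and quantile steps are what let gradients reach the classifier's parameters.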

More ...

ARTICLE

Math Machine Learning Seminar of MPI MiS and UCLA Talk “Relating Adversarial Robustness and Weight Robustness Through Flatness”

In October, I had the pleasure of presenting my recent work on adversarial robustness and flat minima at the Math Machine Learning Seminar of MPI MiS and UCLA organized by Guido Montúfar. The talk covers several aspects of my PhD research on adversarial robustness and robustness in terms of the model weights. This article shares the abstract and recording of the talk.

More ...

ARTICLE

Recorded ICCV’21 Talk “Relating Adversarially Robust Generalization to Flat Minima”

In October this year, my work on relating adversarially robust generalization to flat minima in the (robust) loss surface with respect to weight perturbations was presented at ICCV’21. As the paper was accepted as an oral presentation, I recorded a 12-minute talk highlighting the main insights into how (robust) flatness can avoid robust overfitting in adversarial training and improve robustness against adversarial examples. In this article, I want to share the recording.
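To make the notion of flatness in weight space more concrete, here is a minimal sketch of how average-case robust flatness could be estimated: compare the robust loss of the trained weights against the robust loss after random, relatively scaled weight perturbations. The function name, the placeholder robust_loss_fn, and the defaults are my own simplifications of the measures used in the paper.

    import copy
    import torch

    def average_robust_flatness(model, robust_loss_fn, x, y, xi=0.5, samples=10):
        # Estimate average-case robust flatness: how much the robust loss increases when
        # the weights are randomly perturbed within a relative radius xi. robust_loss_fn
        # is a placeholder for the adversarial loss, e.g., cross-entropy on PGD examples
        # crafted against the given model.
        reference = robust_loss_fn(model, x, y).item()
        increase = 0.0
        for _ in range(samples):
            perturbed = copy.deepcopy(model)
            with torch.no_grad():
                for param in perturbed.parameters():
                    noise = torch.randn_like(param)
                    # Perturbation scaled relative to the parameter tensor's norm.
                    param.add_(xi * param.norm() * noise / (noise.norm() + 1e-12))
            increase += robust_loss_fn(perturbed, x, y).item() - reference
        return increase / samples

A small average increase indicates a flat robust minimum; worst-case perturbations can be treated analogously.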

More ...

ARTICLE

Recorded CVPR’21 CV-AML Workshop Outstanding Paper Talk “Bit Error Robustness for Energy-Efficient DNN Accelerators”

In June this year, my work on bit error robustness of deep neural networks (DNNs) was recognized as an outstanding paper at the CVPR’21 Workshop on Adversarial Machine Learning in Real-World Computer Vision Systems and Online Challenges (AML-CV). As part of the workshop, I prepared a 15-minute talk highlighting how robustness against bit errors in DNN weights can improve the energy efficiency of DNN accelerators. In this article, I want to share the recording.

More ...

ARTICLE

Recorded MLSys’21 Talk “Bit Error Robustness for Energy-Efficient DNN Accelerators”

In this MLSys’21 paper, we consider the robustness of deep neural networks (DNNs) against bit errors in their quantized weights. This is relevant in the context of DNN accelerators, i.e., specialized hardware for DNN inference: in order to reduce energy consumption, the accelerator’s memory may be operated at very low voltages. However, this induces exponentially increasing rates of bit errors that directly affect the DNN weights, reducing accuracy significantly. We propose a robust fixed-point quantization scheme, weight clipping as regularization during training, and random bit error training to improve bit error robustness. This article shares my talk recorded for MLSys’21.
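To illustrate the setup, the following sketch shows a simple fixed-point quantizer with weight clipping and a routine that injects random bit errors into the quantized weights; during random bit error training, such perturbed weights would be used in the forward pass. Function names, the quantization details, and the chosen defaults are my own assumptions rather than the exact scheme from the paper.

    import torch

    def quantize(weights, bits=8, w_max=0.1):
        # Symmetric fixed-point quantization into signed integers.
        # Clipping weights to [-w_max, w_max] (weight clipping) keeps the quantization
        # range small, so a single bit flip causes a smaller absolute weight change.
        scale = w_max / (2 ** (bits - 1) - 1)
        q = torch.clamp(torch.round(weights.clamp(-w_max, w_max) / scale),
                        -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
        return q.to(torch.int32), scale

    def inject_random_bit_errors(q, p, bits=8):
        # Flip every stored bit independently with probability p, mimicking memory
        # faults at low voltage; operates on the two's-complement representation.
        u = q.to(torch.int64) & ((1 << bits) - 1)  # unsigned view of the low bits
        for b in range(bits):
            flip = (torch.rand(u.shape) < p).to(torch.int64)
            u = u ^ (flip * (1 << b))  # xor bit b wherever flip == 1
        # Convert back to signed two's complement.
        signed = torch.where(u >= (1 << (bits - 1)), u - (1 << bits), u)
        return signed.to(q.dtype)

    # Usage sketch: q, scale = quantize(layer.weight.data)
    # noisy = inject_random_bit_errors(q, p=0.01)
    # Dequantize with noisy.float() * scale and use these weights in the forward pass.

The intuition behind weight clipping is that a smaller quantization range limits the absolute weight change caused by any individual bit flip, which is part of why it improves bit error robustness.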

More ...

ARTICLE

Recorded RobustAI Workshop Talk “Confidence-Calibrated Adversarial Training and Bit Error Robustness of DNNs”

In January, I had the opportunity to interact with many other robustness researchers from academia and industry at the Robust Artificial Intelligence Workshop. As part of the workshop, organized by Airbus AI Research and TNO (the Netherlands’ applied research organization), I also prepared a presentation covering two of my PhD projects: confidence-calibrated adversarial training (CCAT) and bit error robustness of neural networks to enable low-energy neural network accelerators. In this article, I want to share the presentation; all other talks from the workshop can be found here.

More ...

ARTICLE

Recorded FOCA’20 Talk “Bit Error Robustness for Energy-Efficient DNN Accelerators”

In October this year, I was invited to talk at IBM’s FOCA workshop about my latest research on bit error robustness of (quantized) DNN weights. Here, the goal is to develop DNN accelerators capable of operating at low voltage. However, lowering the voltage induces bit errors in the accelerators’ memory. While such bit errors can be avoided through hardware mechanisms, these approaches are usually costly in terms of energy and chip area. Training DNNs to be robust against such bit errors would enable low-voltage operation, and thus reduced energy consumption, without the need for hardware techniques. In this 5-minute talk, I give a short overview.

More ...

ARTICLE

Recorded ICML’20 Talk “Confidence-Calibrated Adversarial Training”

In our ICML’20 paper, confidence-calibrated adversarial training (CCAT) addresses two problems of “regular” adversarial training: first, robustness against adversarial examples unseen during training is improved, and second, clean accuracy is increased. CCAT biases the model towards predicting low confidence on adversarial examples such that they can be rejected by confidence thresholding. This article shares my talk on CCAT as recorded for ICML’20.
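The rejection step mentioned above is easy to sketch; the snippet below (with hypothetical names and an arbitrary default threshold) illustrates confidence thresholding on the maximum softmax probability. In practice, the threshold would be calibrated on clean held-out examples, e.g., such that 99% of them are accepted.

    import torch

    def predict_with_rejection(logits, threshold=0.9):
        # CCAT-style detection: a model trained to have low confidence on adversarial
        # examples allows rejecting inputs whose maximum softmax probability falls
        # below a confidence threshold. The default threshold here is arbitrary.
        probs = torch.softmax(logits, dim=1)
        confidence, prediction = probs.max(dim=1)
        accepted = confidence >= threshold
        return prediction, accepted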

More ...