This week I was honored to speak at the Machine Learning Security Seminar organized by the Pattern Recognition and Applications Lab at the University of Cagliari. I presented my work relating adversarial robustness to flatness in the robust loss landscape, also touching on its relationship to weight robustness. In this article, I want to share the recording and slides of this talk.
Last week, I had the pleasure of giving a talk at the recently started Seminar on Distribution-Free Statistics organized by Anastasios Angelopoulos. Specifically, I talked about conformal training, a procedure that trains a classifier and conformal predictor end-to-end. This makes it possible to optimize arbitrary losses defined directly on the confidence sets obtained through conformal prediction, and it can be shown to reduce inefficiency (confidence set size) and improve other metrics for any conformal predictor used at test time. In this article, I want to share the corresponding recording.
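To give a flavor of the idea, here is a minimal sketch of one conformal training step in PyTorch. The function name, the batch-splitting details, and the simple sigmoid relaxation are illustrative assumptions; the actual method uses a differentiable, sorting-based quantile rather than `torch.quantile`:

```python
import torch

def conformal_training_step(model, x, y, alpha=0.01, T=0.1):
    # Split the batch: one half calibrates the threshold,
    # the other half forms (soft) confidence sets.
    probs = torch.softmax(model(x), dim=1)
    n = x.shape[0] // 2
    cal_probs, pred_probs = probs[:n], probs[n:]

    # Conformity score: softmax probability of the true class.
    cal_scores = cal_probs[torch.arange(n), y[:n]]

    # Smooth threshold at the alpha-quantile of calibration scores;
    # torch.quantile is differentiable via interpolation.
    tau = torch.quantile(cal_scores, alpha)

    # Soft set membership: a sigmoid relaxation of p_k >= tau.
    soft_sets = torch.sigmoid((pred_probs - tau) / T)

    # Size loss defined directly on the predicted sets,
    # penalizing sets larger than a single class.
    return torch.clamp(soft_sets.sum(dim=1) - 1.0, min=0.0).mean()
```

In practice, this size loss would be combined with a standard classification loss, and the calibration/prediction split would be re-drawn for every batch.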
The code for my ICCV’21 paper relating adversarial robustness to flatness in the (robust) loss landscape is now available on GitHub. The repository includes implementations of various adversarial attacks, adversarial training variants, and “attacks” on model weights used to measure robust flatness.
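As a rough illustration of what measuring flatness involves, the following sketch estimates average-case flatness as the expected loss increase under random weight perturbations scaled relative to each layer’s norm. This is a simplified stand-in, not the repository’s actual interface; for *robust* flatness, `loss_fn` would be an adversarial loss computed on adversarial examples:

```python
import copy
import torch

def average_flatness(model, loss_fn, inputs, targets, xi=0.5, samples=10):
    # Reference loss of the unperturbed model.
    base_loss = loss_fn(model(inputs), targets).item()

    increase = 0.0
    for _ in range(samples):
        perturbed = copy.deepcopy(model)
        with torch.no_grad():
            for param in perturbed.parameters():
                # Random direction, scaled relative to the parameter norm
                # so that larger layers receive larger perturbations.
                noise = torch.randn_like(param)
                noise *= xi * param.norm() / (noise.norm() + 1e-12)
                param.add_(noise)
        increase += loss_fn(perturbed(inputs), targets).item() - base_loss

    # Flat minima show only a small average loss increase.
    return increase / samples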
In October, I had the pleasure of presenting my recent work on adversarial robustness and flat minima at the math machine learning seminar of MPI MiS and UCLA organized by Guido Montúfar. The talk covers several aspects of my PhD research on adversarial robustness and robustness in terms of the model weights. This article shares the abstract and recording of the talk.
Conformal prediction (CP) takes any classifier and turns it into a set predictor with a guarantee that the true class is included with user-specified probability. This makes it possible to develop classifiers with sufficient guarantees for safe deployment in many domains. However, CP is usually applied as a post-training calibration step. The paper presented in this article introduces a training procedure named conformal training that allows training classifier and conformal predictor end-to-end. This can reduce the average confidence set size and makes it possible to optimize arbitrary objectives defined directly on the predicted sets.
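For context, here is a minimal sketch of the standard post-training split conformal predictor that conformal training builds on, using the true-class softmax probability as conformity score; the function and variable names are illustrative:

```python
import numpy as np

def calibrate_threshold(cal_probs, cal_labels, alpha=0.1):
    # Conformity score: softmax probability of the true class.
    n = len(cal_labels)
    scores = cal_probs[np.arange(n), cal_labels]
    # Finite-sample corrected quantile: the k-th smallest true-class
    # score guarantees >= 1 - alpha coverage under exchangeability.
    k = int(np.floor(alpha * (n + 1)))
    return np.sort(scores)[max(k - 1, 0)]

def predict_set(probs, tau):
    # Confidence set: all classes at least as likely as the threshold.
    return np.where(probs >= tau)[0]
```

Any classifier’s softmax outputs can be calibrated this way; conformal training shapes the classifier so that the resulting sets become smaller on average.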
End-to-end training of deep neural networks and conformal predictors to reduce confidence set size and optimize application-specific objectives.
In October this year, my work relating adversarially robust generalization to flat minima in the (robust) loss surface with respect to weight perturbations was presented at ICCV’21. As an oral presentation, I recorded a 12-minute talk highlighting the main insights into how (robust) flatness can avoid the robust overfitting of adversarial training and improve robustness against adversarial examples. In this article, I want to share the recording.
Random and adversarial bit error robustness of DNNs for energy-efficient and secure DNN accelerators.
Robust generalization and overfitting linked to flatness of robust loss surface in weight space.
As part of the Qualcomm Innovation Fellowship 2019, I gave a talk on the research produced throughout the academic year 2019/2020. The talk covers two exciting works on robustness: robustness against various types of adversarial examples using confidence-calibrated adversarial training (CCAT), and robustness against bit errors in the model’s quantized weights. The latter can be shown to be important for reducing the energy consumption of DNN accelerators. In this article, I want to share the slides corresponding to the talk.
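To make the bit error setting concrete, here is a hedged sketch of injecting random bit flips into 8-bit quantized weights, mimicking the memory faults that occur when operating accelerator SRAM at low voltage. The simple symmetric quantization scheme is an assumption for illustration, not the exact scheme from the paper:

```python
import numpy as np

def inject_bit_errors(weights, p=0.01, bits=8):
    # Simple symmetric quantization of weights in [-1, 1] to signed ints.
    scale = 2 ** (bits - 1) - 1
    q = np.clip(np.round(weights * scale), -scale, scale).astype(np.int32)
    # Shift to an unsigned representation so bit flips act on raw memory.
    u = (q + scale).astype(np.uint8)

    # Flip each bit independently with probability p.
    flips = np.random.rand(*u.shape, bits) < p
    masks = (flips * (1 << np.arange(bits))).sum(axis=-1).astype(np.uint8)
    u ^= masks

    # Dequantize back to floats; flips in high-order bits dominate the error.
    return (u.astype(np.int32) - scale) / scale
```

Evaluating a model with weights passed through such an error channel, for increasing flip probability p, gives a simple picture of the voltage/robustness trade-off the talk discusses.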