David Stutz



Math Machine Learning Seminar of MPI MiS and UCLA Talk “Relating Adversarial Robustness and Weight Robustness Through Flatness”

In October, I had the pleasure of presenting my recent work on adversarial robustness and flat minima at the Math Machine Learning Seminar of MPI MiS and UCLA, organized by Guido Montúfar. The talk covers several aspects of my PhD research on adversarial robustness and on robustness in terms of the model weights. This article shares the abstract and recording of the talk.

Abstract

Despite their outstanding performance, deep neural networks (DNNs) are susceptible to adversarial examples: imperceptibly perturbed examples causing misclassification. Similarly, but less studied, DNNs are fragile with respect to perturbations of their weights. This talk highlights my recent research on both input and weight robustness and investigates how the two problems are related. On the subject of adversarial examples, I discuss a confidence-calibrated version of adversarial training that obtains robustness beyond the adversarial perturbations seen during training. Next, regarding weight robustness, I address robustness against random bit errors in the (quantized) weights, which plays an important role in improving the energy efficiency of DNN accelerators. Surprisingly, improved weight robustness can also be beneficial in terms of robustness against adversarial examples. Specifically, weight robustness can be thought of as flatness of the loss landscape with respect to perturbations of the weights. Using an intuitive flatness measure for adversarially trained DNNs, I demonstrate that flatness in the weight loss landscape improves adversarial robustness and helps to avoid robust overfitting.
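To make the idea of flatness in the weight loss landscape concrete, here is a minimal, illustrative sketch: it estimates flatness as the average loss increase under random weight perturbations whose magnitude is scaled relative to the weight norm. This is a simplified, average-case stand-in (using a toy logistic model in place of a DNN, with hypothetical function names), not the exact measure from the papers above, which considers adversarially trained networks and worst-case perturbations within a relative-norm ball.

```python
import numpy as np

def loss(w, X, y):
    """Cross-entropy loss of a logistic model (toy stand-in for a DNN)."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    eps = 1e-12  # avoid log(0)
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def average_flatness(w, X, y, xi=0.5, n_samples=100, seed=0):
    """Estimate flatness as the mean loss increase under random weight
    perturbations of relative magnitude xi: smaller values indicate a
    flatter region of the weight loss landscape."""
    rng = np.random.default_rng(seed)
    base = loss(w, X, y)
    increases = []
    for _ in range(n_samples):
        v = rng.normal(size=w.shape)
        # scale the perturbation relative to the norm of the weights
        v *= xi * np.linalg.norm(w) / np.linalg.norm(v)
        increases.append(loss(w + v, X, y) - base)
    return float(np.mean(increases))

# Toy usage on a small binary classification problem:
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]])
y = np.array([1.0, 0.0, 1.0, 0.0])
w = np.array([2.0, -2.0])
print(average_flatness(w, X, y, xi=0.5, n_samples=50))
```

A worst-case variant would replace the average over random perturbations with the maximum loss increase found by gradient ascent on the weights, which is closer in spirit to the measure discussed in the talk.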

Papers covered:

David Stutz, Matthias Hein, Bernt Schiele. Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks. ICML, 2020.
David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele. Bit Error Robustness for Energy-Efficient DNN Accelerators. MLSys, 2021.
David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele. Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators. arXiv, 2021.
David Stutz, Matthias Hein, Bernt Schiele. Relating Adversarially Robust Generalization to Flat Minima. ICCV, 2021.

Recording

The original recording can be found on the seminar's webpage:

Talk Recording

What is your opinion on this article? Did you find it interesting or useful? Let me know your thoughts in the comments below!