TAG: COMPUTER VISION

ARTICLE

Recorded ICCV’21 Talk “Relating Adversarially Robust Generalization to Flat Minima”

In October this year, my work on relating adversarially robust generalization to flat minima in the (robust) loss surface with respect to weight perturbations was presented at ICCV’21. Since it was accepted as an oral presentation, I recorded a 12-minute talk highlighting the main insights into how (robust) flatness can avoid robust overfitting in adversarial training and improve robustness against adversarial examples. In this article, I want to share the recording.

More ...

JULY 2021

PROJECT

Random and adversarial bit error robustness of DNNs for energy-efficient and secure DNN accelerators.

More ...

JULY 2021

PROJECT

Robust generalization and overfitting linked to flatness of the robust loss surface in weight space.

More ...

ARTICLE

Qualcomm Innovation Fellowship Talk “Confidence-Calibrated Adversarial Training and Random Bit Error Training”

As part of the Qualcomm Innovation Fellowship 2019, I gave a talk on the research produced throughout the academic year 2019/2020. The talk covers two exciting works on robustness: robustness against various types of adversarial examples using confidence-calibrated adversarial training (CCAT), and robustness against bit errors in the model’s quantized weights. The latter can be shown to be important for reducing the energy consumption of accelerators for neural networks. In this article, I want to share the slides corresponding to the talk.

More ...

ARTICLE

Recorded CVPR’21 AML-CV Workshop Outstanding Paper Talk “Bit Error Robustness for Energy-Efficient DNN Accelerators”

In June this year, my work on bit error robustness of deep neural networks (DNNs) was recognized as an outstanding paper at the CVPR’21 Workshop on Adversarial Machine Learning in Real-World Computer Vision Systems and Online Challenges (AML-CV). As part of the workshop, I prepared a 15-minute talk highlighting how robustness against bit errors in DNN weights can improve the energy efficiency of DNN accelerators. In this article, I want to share the recording.

More ...

ARTICLE

ArXiv Pre-Print “Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators”

Deep neural network (DNN) accelerators are popular due to reduced cost and energy consumption compared to GPUs. To further reduce energy consumption, the operating voltage of the on-chip memory can be lowered. However, this injects random bit errors, directly impacting the (quantized) DNN weights. As a result, improving DNN robustness against these bit errors can significantly improve energy efficiency. Similarly, these chips are subject to bit-level hardware- or software-based attacks. In this case, robustness against adversarial bit errors is required to improve the security of DNN accelerators. Our paper, presented in this article, addresses both problems.
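
To make the error model concrete, here is a minimal NumPy sketch, not the paper’s exact quantization scheme or error profile: weights are linearly quantized to 8 bits, and every bit of every weight flips independently with a small probability p, mimicking low-voltage memory. The simple symmetric quantizer and all names are illustrative.

```python
import numpy as np

def quantize(w, w_max=0.5, bits=8):
    """Symmetric linear quantization of weights into `bits`-bit integers."""
    scale = (2 ** bits - 1) / (2 * w_max)
    return np.round((np.clip(w, -w_max, w_max) + w_max) * scale).astype(np.uint8)

def dequantize(q, w_max=0.5, bits=8):
    scale = (2 ** bits - 1) / (2 * w_max)
    return q.astype(np.float32) / scale - w_max

def inject_bit_errors(q, p, bits=8, seed=0):
    """Flip each bit of each quantized weight independently with probability p."""
    rng = np.random.default_rng(seed)
    flips = rng.random((q.size, bits)) < p
    masks = (flips * (1 << np.arange(bits))).sum(axis=1).astype(np.uint8)
    return q ^ masks.reshape(q.shape)

w = 0.1 * np.random.randn(1000).astype(np.float32)  # stand-in for DNN weights
q = quantize(w)
w_noisy = dequantize(inject_bit_errors(q, p=0.01))
print(np.abs(w_noisy - dequantize(q)).max())  # flips in high-order bits dominate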

More ...

ARTICLE

ArXiv Pre-Print “Relating Adversarially Robust Generalization to Flat Minima”

Recent work on robustness against adversarial examples identified a severe problem in adversarial training: (robust) overfitting. That is, during training, robustness on training examples continuously increases, while robustness on test examples eventually starts decreasing. In this pre-print, we relate robust overfitting and good robust generalization to flatness of the robust loss landscape around the found minimum, with respect to perturbations in the weights.
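
As a rough illustration of this flatness notion, the following PyTorch sketch estimates average-case flatness by comparing the loss at the found weights with the loss under small random weight perturbations scaled relative to each layer’s weight norm. The names model, loss_fn, and loader are placeholders, and in the pre-print the relevant loss would be the robust (adversarial) loss rather than the clean one.

```python
import copy
import torch

def average_loss(model, loss_fn, loader):
    """Average loss of `model` over all examples in `loader`."""
    model.eval()
    total, count = 0.0, 0
    with torch.no_grad():
        for x, y in loader:
            total += loss_fn(model(x), y).item() * x.size(0)
            count += x.size(0)
    return total / count

def average_flatness(model, loss_fn, loader, xi=0.005, samples=5):
    """Estimate E[L(w + nu)] - L(w) for random nu, scaled per-layer by xi * ||w||."""
    base = average_loss(model, loss_fn, loader)
    increase = 0.0
    for _ in range(samples):
        noisy = copy.deepcopy(model)
        with torch.no_grad():
            for p in noisy.parameters():
                p.add_(torch.randn_like(p) * xi * p.norm())
        increase += average_loss(noisy, loss_fn, loader) - base
    return increase / samples  # small increase = flat(ter) minimum
```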

More ...

ARTICLE

Talk at TU Dortmund “Random and Adversarial Bit Error Robustness of DNNs”

In April, I was invited to talk about my work on random and adversarial bit error robustness of (quantized) deep neural networks in Katharina Morik’s group at TU Dortmund. The talk is motivated by DNN accelerators, specialized chips for DNN inference. To improve their energy efficiency, the on-chip memory is operated at low voltage, which requires DNNs to be robust to the random bit errors occurring in the quantized weights. Moreover, RowHammer-like attacks require robustness against adversarial bit errors as well. While a recording is not available, this article shares the slides used for the presentation.

More ...

ARTICLE

Updated Pre-Print “Bit Error Robustness for Energy-Efficient DNN Accelerators”

Recently, deep neural network (DNN) accelerators have received considerable attention due to reduced cost and energy consumption compared to mainstream GPUs. To further reduce energy consumption, the included memory (storing weights and intermediate computations) is operated at low voltage. However, this causes bit errors in memory cells, directly impacting the stored (quantized) DNN weights, which results in a significant decrease in DNN accuracy. In this paper, we tackle the problem of DNN robustness against such random bit errors. By using a robust fixed-point quantization, training with aggressive weight clipping as regularization, and injecting random bit errors during training, we increase robustness significantly, enabling energy-efficient DNN accelerators.
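
The following PyTorch sketch shows one simplistic variant of such a training step, not the paper’s exact quantization or hyper-parameters (w_max and p are hypothetical): bit errors are injected into simulated 8-bit quantized weights for the forward and backward pass, the gradient step is applied to the clean weights, and weights are aggressively clipped after each update.

```python
import torch

def with_bit_errors(w, w_max=0.1, p=0.01, bits=8):
    """Quantize w to `bits` bits, flip each bit with probability p, dequantize."""
    scale = (2 ** bits - 1) / (2 * w_max)
    q = ((w.clamp(-w_max, w_max) + w_max) * scale).round().to(torch.uint8)
    mask = torch.zeros_like(q)
    for b in range(bits):
        mask |= (torch.rand_like(w) < p).to(torch.uint8) << b
    return (q ^ mask).float() / scale - w_max

def train_step(model, loss_fn, optimizer, x, y, w_max=0.1, p=0.01):
    clean = [param.detach().clone() for param in model.parameters()]
    with torch.no_grad():  # inject bit errors into the weights used in the forward pass
        for param in model.parameters():
            param.copy_(with_bit_errors(param, w_max, p))
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()  # gradients are computed with respect to the perturbed weights
    with torch.no_grad():  # restore the clean weights; the update is applied to them
        for param, w in zip(model.parameters(), clean):
            param.copy_(w)
    optimizer.step()
    with torch.no_grad():
        for param in model.parameters():
            param.clamp_(-w_max, w_max)  # aggressive weight clipping as regularization
    return loss.item()
```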

More ...

ARTICLE

What $L_p$ Adversarial Examples Make Sense on Common Vision Datasets?

Adversarial examples are intended to be imperceptible perturbations that cause misclassification without changing the true class. Still, there is no consensus on which changes are considered imperceptible, or when the true class actually changes or becomes unrecognizable. In this article, I want to explore which levels of $L_\infty$, $L_0$ and $L_1$ adversarial noise actually make sense on popular computer vision datasets such as MNIST, Fashion-MNIST, SVHN or CIFAR-10.
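
As a starting point for such an exploration, the sketch below applies random, not optimized, noise at given $L_\infty$ and $L_0$ levels to an image, to judge visually which levels remain imperceptible. Random noise is only a crude stand-in for adversarial perturbations of the same norm, and the 32×32 image is a hypothetical CIFAR-10-sized placeholder.

```python
import numpy as np

def linf_noise(image, epsilon, rng):
    """Add uniform noise with L_inf norm at most epsilon (image in [0, 1])."""
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    return np.clip(image + noise, 0.0, 1.0)

def l0_noise(image, num_pixels, rng):
    """Replace num_pixels randomly chosen pixels by random colors (L_0 noise)."""
    h, w, c = image.shape
    flat = image.reshape(h * w, c).copy()
    idx = rng.choice(h * w, size=num_pixels, replace=False)
    flat[idx] = rng.random((num_pixels, c))
    return flat.reshape(h, w, c)

rng = np.random.default_rng(0)
image = rng.random((32, 32, 3))  # stand-in for a CIFAR-10 image in [0, 1]
for eps in [2 / 255, 8 / 255, 16 / 255]:  # commonly used L_inf levels
    noisy = linf_noise(image, eps, rng)
for num_pixels in [5, 20, 50]:  # L_0 "budget" in pixels
    noisy = l0_noise(image, num_pixels, rng)
```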

More ...