1st May 2021

In this MLSys’21 paper, we consider the robustness of deep neural networks (DNNs) against bit errors in their *quantized* weights. This is relevant in the context of DNN accelerators, i.e., specialized hardware for DNN inference: in order to reduce energy consumption, the accelerator’s memory may be operated at very low voltages. However, this induces exponentially increasing rates of bit errors that directly affect the DNN weights and reduce accuracy significantly. To improve bit error robustness, we propose a robust fixed-point quantization scheme, weight clipping as regularization during training, and random bit error training. This article shares my talk recorded for MLSys’21.
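
To make this concrete, below is a minimal sketch of how random bit errors can be injected into quantized weights, assuming a simple symmetric 8-bit fixed-point quantizer with two’s-complement encoding; names and details are illustrative and not taken from the paper’s implementation.

```python
import torch

def inject_random_bit_errors(w: torch.Tensor, p: float, bits: int = 8) -> torch.Tensor:
    # Symmetric per-tensor fixed-point quantization (an illustrative choice,
    # not necessarily the paper's exact scheme).
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    q = torch.round(w / scale).clamp(-qmax, qmax).to(torch.int64)
    # Two's-complement representation within `bits` bits.
    q = q & ((1 << bits) - 1)
    # Flip each of the `bits` bits independently with probability p.
    flips = (torch.rand(q.shape + (bits,), device=w.device) < p).to(torch.int64)
    mask = (flips * (2 ** torch.arange(bits, device=w.device))).sum(dim=-1)
    q = q ^ mask
    # Decode back to signed integers and dequantize.
    q = torch.where(q > qmax, q - (1 << bits), q)
    return q.to(w.dtype) * scale
```

The idea of random bit error training is to apply such perturbations to the weights during training, so the network learns to tolerate the errors it will encounter at low voltage.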

30th April 2021

In April, I was invited to talk about my work on random and adversarial bit error robustness of (quantized) deep neural networks in Katharina Morik’s group at TU Dortmund. The talk is motivated by DNN accelerators, specialized chips for DNN inference. To reduce energy consumption, their memory is operated at low voltage, which induces random bit errors in the quantized DNN weights; DNNs therefore need to be robust against such errors. Moreover, RowHammer-like attacks make robustness against *adversarial* bit errors necessary as well. While a recording is not available, this article shares the slides used for the presentation.

19th January 2021

In January, I had the opportunity to interact with many other robustness researchers from academia and industry at the Robust Artificial Intelligence Workshop, organized by Airbus AI Research and TNO (the Netherlands organization for applied scientific research). As part of the workshop, I also prepared a presentation on two of my PhD projects: confidence-calibrated adversarial training (CCAT) and bit error robustness of neural networks, which enables low-energy DNN accelerators. In this article, I want to share the presentation; all other talks from the workshop can be found here.

18th January 2021

In October 2020, I was invited to talk at IBM’s FOCA workshop about my latest research on bit error robustness of (quantized) DNN weights. Here, the goal is to develop DNN accelerators capable of operating at low voltage. However, lowering the voltage induces bit errors in the accelerators’ memory. While such bit errors can be avoided through hardware mechanisms, these approaches are usually costly in terms of energy and chip area. Thus, training DNNs to be robust against such bit errors enables low-voltage operation, and thereby reduced energy consumption, without the need for hardware techniques. In this 5-minute talk, I give a short overview.

11th January 2021

Our ICML’20 paper introduces confidence-calibrated adversarial training (CCAT), which addresses two problems of “regular” adversarial training: first, robustness against adversarial examples unseen during training is improved; second, clean accuracy is increased. CCAT biases the model towards predicting low confidence on adversarial examples, such that adversarial examples can be rejected by confidence thresholding. This article shares my talk on CCAT as recorded for ICML’20.
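
The rejection step itself is simple to express in code. The sketch below is mine, not the paper’s implementation: it assumes the threshold `tau` has been calibrated on clean held-out examples (e.g., chosen so that almost all clean examples are kept).

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def predict_with_rejection(model, x, tau):
    # Softmax confidence of the predicted class for each input.
    probs = F.softmax(model(x), dim=1)
    confidence, prediction = probs.max(dim=1)
    # Inputs below the confidence threshold are rejected (label -1 = "no decision").
    prediction[confidence < tau] = -1
    return prediction, confidence
```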

9th November 2020

Recently, deep neural network (DNN) accelerators have received considerable attention due to their reduced cost and energy consumption compared to mainstream GPUs. In order to further reduce energy consumption, the included memory (storing weights and intermediate computations) is operated at low voltage. However, this causes bit errors in the memory cells, directly impacting the stored (quantized) DNN weights and resulting in a significant decrease in DNN accuracy. In this paper, we tackle the problem of DNN robustness against such random bit errors. By using a robust fixed-point quantization, training with aggressive weight clipping as regularization, and injecting random bit errors during training, we increase robustness significantly, enabling energy-efficient DNN accelerators.
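
As an illustration of the weight clipping component, the sketch below clamps all weights to $[-w_{\text{max}}, w_{\text{max}}]$ after every optimizer step; `w_max` is a placeholder hyper-parameter, not a value from the paper. Intuitively, a smaller weight range shrinks the fixed-point quantization scale, so a flipped bit corresponds to a smaller absolute weight change.

```python
import torch

def train_step(model, inputs, targets, loss_fn, optimizer, w_max=0.1):
    # Standard supervised update ...
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
    # ... followed by projecting all weights back into [-w_max, w_max].
    with torch.no_grad():
        for param in model.parameters():
            param.clamp_(-w_max, w_max)
    return loss.item()
```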

4th August 2020

The code for our paper on adversarial training against location-optimized adversarial patches is now available on GitHub. The repository includes a PyTorch implementation of our adversarial patch attack with location optimization, as well as an adversarial training routine. The experiments on Cifar10 and GTSRB presented in the paper can easily be reproduced.
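
For intuition, here is a rough sketch of a location-optimized patch attack: an untargeted attack that refines the patch with signed-gradient ascent at several candidate locations and keeps the most damaging result. The repository’s actual location optimization differs in its details; all names below are mine.

```python
import torch
import torch.nn.functional as F

def apply_patch(images, patch, i, j):
    # Paste a square patch into all images at top-left corner (i, j).
    out = images.clone()
    s = patch.shape[-1]
    out[:, :, i:i + s, j:j + s] = patch
    return out

def patch_attack(model, images, labels, patch, locations, steps=10, lr=0.05):
    best_loss, best = -float('inf'), None
    for (i, j) in locations:
        p = patch.clone().requires_grad_(True)
        for _ in range(steps):
            # Maximize the cross-entropy loss w.r.t. the patch pixels.
            loss = F.cross_entropy(model(apply_patch(images, p, i, j)), labels)
            grad, = torch.autograd.grad(loss, p)
            with torch.no_grad():
                p += lr * grad.sign()
                p.clamp_(0, 1)  # keep pixel values valid
        # Keep the location (and patch) causing the highest loss.
        with torch.no_grad():
            final = F.cross_entropy(model(apply_patch(images, p, i, j)), labels).item()
        if final > best_loss:
            best_loss, best = final, (p.detach().clone(), i, j)
    return best
```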

21st July 2020

Adversarial examples are intended to be *imperceptible* perturbations that cause misclassification without changing the true class. Still, there is no consensus on which changes are considered imperceptible, or when the true class actually changes or is not recognizable anymore. In this article, I want to explore which levels of $L_\infty$, $L_0$ and $L_1$ adversarial noise actually make sense on popular computer vision datasets such as MNIST, Fashion-MNIST, SVHN or Cifar10.
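
To get a feeling for these budgets, the helpers below sample random noise under each norm constraint; adding the noise to an image and clamping to $[0,1]$ makes a given budget easy to inspect visually. The function names are mine, not from any library.

```python
import torch

def random_linf_noise(shape, epsilon):
    # Uniform noise with L_inf norm at most epsilon.
    return (2 * torch.rand(shape) - 1) * epsilon

def random_l0_noise(shape, num_entries):
    # Set num_entries random coordinates to arbitrary values in [-1, 1].
    noise = torch.zeros(shape)
    idx = torch.randperm(noise.numel())[:num_entries]
    noise.view(-1)[idx] = 2 * torch.rand(num_entries) - 1
    return noise

def random_l1_noise(shape, epsilon):
    # Random direction rescaled to have L_1 norm exactly epsilon.
    noise = torch.randn(shape)
    return noise * (epsilon / noise.abs().sum())
```

For example, `(image + random_linf_noise(image.shape, 8/255)).clamp(0, 1)` visualizes the commonly used $L_\infty$ budget of $8/255$.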

3rd July 2020

Confidence-calibrated adversarial training (CCAT) addresses two problems of training on adversarial examples: the lack of robustness against adversarial examples *unseen* during training, and the reduced (clean) accuracy. In particular, CCAT biases the model towards predicting low confidence on adversarial examples, such that adversarial examples can be rejected by confidence thresholding. In this article, I want to share the slides of the corresponding ICML talk.
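
A core ingredient is the target distribution used for adversarial examples during training. The sketch below reflects my reading of the paper: the target interpolates between the one-hot label and the uniform distribution, with the uniform part growing with the perturbation size. The exact transition function and the exponent `rho` should be taken from the paper; treat them here as assumptions.

```python
import torch
import torch.nn.functional as F

def ccat_target(y, delta, epsilon, num_classes, rho=10):
    # Interpolation factor per example: 1 for delta = 0 (pure one-hot target),
    # decaying to 0 (uniform target) as ||delta||_inf approaches epsilon.
    lam = (1 - (delta.flatten(1).abs().amax(dim=1) / epsilon).clamp(max=1)) ** rho
    one_hot = F.one_hot(y, num_classes).float()
    uniform = torch.full_like(one_hot, 1.0 / num_classes)
    return lam.unsqueeze(1) * one_hot + (1 - lam.unsqueeze(1)) * uniform
```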

1st July 2020

Our paper on confidence-calibrated adversarial training was accepted at ICML’20. In the revised paper, the proposed confidence-calibrated adversarial training tackles the problem of obtaining robustness that generalizes to attacks *not* seen during training. This is achieved by biasing the network towards low-confidence predictions on adversarial examples and rejecting these low-confidence examples at test time. This article gives a short abstract and links to paper and code.