
TAG » COMPUTER VISION

ARTICLE

ArXiv Pre-Print “Relating Adversarially Robust Generalization to Flat Minima”

Recent work on robustness against adversarial examples identified a severe problem in adversarial training: (robust) overfitting. That is, during training, robustness on the training set continuously increases while test robustness eventually starts to decrease. In this pre-print, we relate robust overfitting and good robust generalization to flatness of the robust loss landscape around the found minimum, with respect to perturbations in the weights.
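
As a minimal illustration (not the exact measure used in the pre-print), weight-space flatness can be probed by perturbing the weights with random relative noise and averaging the resulting increase in loss. The PyTorch sketch below assumes a model, loss and noise scale; for robust flatness, the inputs would first be replaced by adversarial examples and the loss by the corresponding robust loss.

```python
import copy
import torch

def average_flatness(model, loss_fn, inputs, targets, sigma=0.01, samples=10):
    """Rough estimate of average-case flatness: the expected increase in loss
    when the weights are perturbed by relative Gaussian noise of scale sigma.
    Simplified sketch; for robust flatness, `inputs` would be adversarial
    examples and `loss_fn` the corresponding robust loss."""
    with torch.no_grad():
        base_loss = loss_fn(model(inputs), targets).item()
        increases = []
        for _ in range(samples):
            perturbed = copy.deepcopy(model)
            for param in perturbed.parameters():
                # relative Gaussian noise in the weights
                param.add_(sigma * param.abs() * torch.randn_like(param))
            increases.append(loss_fn(perturbed(inputs), targets).item() - base_loss)
    return sum(increases) / len(increases)
```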

More ...

ARTICLE

Talk at TU Dortmund “Random and Adversarial Bit Error Robustness of DNNs”

In April, I was invited to talk about my work on random and adversarial bit error robustness of (quantized) deep neural networks in Katharina Morik’s group at TU Dortmund. The talk is motivated by DNN accelerators, specialized chips for DNN inference. In order to reduce energy consumption, DNNs are required to be robust to random bit errors occurring in the quantized weights. Moreover, RowHammer-like attacks require robustness against adversarial bit errors as well. While a recording is not available, this article shares the slides used for the presentation.

More ...

ARTICLE

Updated Pre-Print “Bit Error Robustness for Energy-Efficient DNN Accelerators”

Recently, deep neural network (DNN) accelerators have received considerable attention due to reduced cost and energy compared to mainstream GPUs. In order to further reduce energy consumption, the included memory (storing weights and intermediate computations) is operated at low voltage. However, this causes bit errors in memory cells, directly impacting the stored (quantized) DNN weights and resulting in a significant decrease in DNN accuracy. In this paper, we tackle the problem of DNN robustness against random bit errors. By using a robust fixed-point quantization, training with aggressive weight clipping as regularization, and injecting random bit errors during training, we increase robustness significantly, enabling energy-efficient DNN accelerators.
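
The following NumPy snippet is a simplified sketch of what random bit error injection could look like: weights are quantized to a signed fixed-point representation, each bit is flipped independently with probability p, and the result is de-quantized. Function and parameter names are illustrative; the actual quantization scheme and training procedure in the paper differ in detail.

```python
import numpy as np

def inject_bit_errors(weights, p=0.01, bits=8, w_max=None):
    """Quantize to signed fixed-point, flip each bit independently with
    probability p, and de-quantize again (illustrative sketch only)."""
    w_max = np.abs(weights).max() if w_max is None else w_max
    scale = (2 ** (bits - 1) - 1) / w_max
    # two's-complement view of the quantized weights in `bits` bits
    q = np.clip(np.round(weights * scale),
                -(2 ** (bits - 1)), 2 ** (bits - 1) - 1).astype(np.int32)
    q &= 2 ** bits - 1
    # flip each of the `bits` bits independently with probability p
    flips = np.random.rand(*q.shape, bits) < p
    masks = (flips * (1 << np.arange(bits))).sum(axis=-1).astype(np.int32)
    q ^= masks
    # re-interpret as signed integers and de-quantize
    q = np.where(q >= 2 ** (bits - 1), q - 2 ** bits, q)
    return q / scale
```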

More ...

ARTICLE

What Lp Adversarial Examples Make Sense on Common Vision Datasets?

Adversarial examples are intended to be imperceptible perturbations that cause mis-classification while not changing the true class. Still, there is no consensus on what changes are considered imperceptible or when the true class actually changes — or is not recognizable anymore. In this article, I want to explore what levels of $L_\infty$, $L_0$ and $L_1$ adversarial noise actually make sense on popular computer vision datasets such as MNIST, Fashion-MNIST, SVHN or Cifar10.
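
As a small illustration (these are not attacks, just random noise at a fixed budget), the following NumPy helpers generate $L_\infty$ and $L_0$ perturbations for a given threshold so the visual effect on individual images can be inspected; the helper names and signatures are mine.

```python
import numpy as np

def random_linf_noise(image, epsilon):
    """Add uniform noise with L_inf norm at most epsilon (pixels in [0, 1])."""
    noise = np.random.uniform(-epsilon, epsilon, size=image.shape)
    return np.clip(image + noise, 0.0, 1.0)

def random_l0_noise(image, num_pixels):
    """Set num_pixels randomly chosen pixels to random values (an L_0 'budget')."""
    perturbed = image.copy()
    channels = image.shape[-1] if image.ndim == 3 else 1
    flat = perturbed.reshape(-1, channels)  # view into perturbed
    idx = np.random.choice(flat.shape[0], size=num_pixels, replace=False)
    flat[idx] = np.random.uniform(0.0, 1.0, size=(num_pixels, channels))
    return perturbed
```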

More ...

ARTICLE

ICML Talk “Confidence-Calibrated Adversarial Training”

Confidence-calibrated adversarial training (CCAT) addresses two problems when training on adversarial examples: the lack of robustness against adversarial examples unseen during training, and the reduced (clean) accuracy. In particular, CCAT biases the model towards predicting low confidence on adversarial examples such that adversarial examples can be rejected by confidence thresholding. In this article, I want to share the slides of the corresponding ICML talk.
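
A minimal sketch of such confidence thresholding at test time could look as follows, assuming a PyTorch classifier; the threshold itself would in practice be calibrated on held-out clean examples, and the function name and default value are illustrative.

```python
import torch
import torch.nn.functional as F

def predict_with_rejection(model, inputs, threshold=0.6):
    """Predict classes but reject (label -1) inputs whose maximum softmax
    confidence falls below the threshold."""
    with torch.no_grad():
        probs = F.softmax(model(inputs), dim=1)
    confidences, predictions = probs.max(dim=1)
    predictions[confidences < threshold] = -1  # rejected
    return predictions, confidences
```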

More ...

ARTICLE

ICML Paper “Confidence-Calibrated Adversarial Training”

Our paper on confidence-calibrated adversarial training was accepted at ICML’20. In the revised paper, the proposed confidence-calibrated adversarial training tackles the problem of obtaining robustness that generalizes to attacks not seen during training. This is achieved by biasing the network towards low-confidence predictions on adversarial examples and rejecting these low-confidence examples at test time. This article gives a short abstract and includes paper and code.

More ...

ARTICLE

ArXiv Pre-Print “On Mitigating Random and Adversarial Bit Errors”

Deep neural network (DNN) accelerators are specialized hardware for inference and have received considerable attention in the past years. Here, in order to reduce energy consumption, these accelerators are often operated at low voltage, which causes the included accelerator memory to become unreliable. Additionally, recent work demonstrated attacks targeting individual bits in memory. The induced bit errors in both cases can cause significantly reduced accuracy of DNNs. In this paper, we tackle both random (due to low voltage) and adversarial bit errors in DNNs. By explicitly taking such errors into account during training, we can improve robustness significantly.

More ...

JUNE 2020

PROJECT

Random and adversarial bit errors in quantized DNN weights.

More ...

ARTICLE

ArXiv Pre-Print “Adversarial Training against Location-Optimized Adversarial Patches”

While robustness against imperceptible adversarial examples is well-studied, robustness against visible adversarial perturbations such as adversarial patches is poorly understood. In this pre-print, we present a practical approach to obtain adversarial patches while actively optimizing their location within the image. On Cifar10 and GTSRB, we show that adversarial training on these location-optimized adversarial patches improves robustness significantly while not reducing accuracy.
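
For illustration, the following sketch merely pastes a rectangular patch at a given position and samples a random valid location; in the pre-print, the location is optimized jointly with the patch content, which is not shown here. The helper names are made up for this example.

```python
import numpy as np

def apply_patch(image, patch, top, left):
    """Paste a rectangular adversarial patch into the image at (top, left)."""
    perturbed = image.copy()
    h, w = patch.shape[:2]
    perturbed[top:top + h, left:left + w] = patch
    return perturbed

def random_location(image, patch):
    """Sample a random valid (top, left) location for the patch."""
    max_top = image.shape[0] - patch.shape[0]
    max_left = image.shape[1] - patch.shape[1]
    return np.random.randint(0, max_top + 1), np.random.randint(0, max_left + 1)
```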

More ...