TAG » COMPUTER VISION «

ARTICLE

ICML Paper “Confidence-Calibrated Adversarial Training”

Our paper on confidence-calibrated adversarial training was accepted at ICML’20. In the revised paper, confidence-calibrated adversarial training tackles the problem of obtaining robustness that generalizes to attacks not seen during training. This is achieved by biasing the network towards low-confidence predictions on adversarial examples and rejecting these low-confidence examples at test time. This article gives a short abstract and includes paper and code.
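
The core mechanism can be sketched in a few lines of PyTorch. The snippet below only illustrates the idea and is not the paper's reference implementation; the function names, the KL-divergence formulation of the low-confidence target, and the rejection threshold are assumptions.

    import torch
    import torch.nn.functional as F

    def ccat_loss(model, x_clean, y, x_adv, num_classes):
        # standard cross-entropy on clean examples
        loss_clean = F.cross_entropy(model(x_clean), y)
        # bias the network towards low-confidence (near-uniform) predictions
        # on adversarial examples (illustrative formulation, not the exact paper loss)
        uniform = torch.full((x_adv.size(0), num_classes), 1.0 / num_classes,
                             device=x_adv.device)
        log_probs_adv = F.log_softmax(model(x_adv), dim=1)
        loss_adv = F.kl_div(log_probs_adv, uniform, reduction='batchmean')
        return loss_clean + loss_adv

    def predict_with_rejection(model, x, threshold=0.9):
        # reject (label -1) whenever the maximum softmax probability
        # falls below the confidence threshold (threshold is an assumption)
        probs = F.softmax(model(x), dim=1)
        confidence, labels = probs.max(dim=1)
        labels[confidence < threshold] = -1
        return labels

At test time, adversarial examples then receive low confidence and are filtered out by the rejection step, which is what allows robustness to extend to attacks not seen during training.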

More ...

ARTICLE

ArXiv Pre-Print “On Mitigating Random and Adversarial Bit Errors”

Deep neural network (DNN) accelerators are specialized hardware for inference and have received considerable attention in the past years. To reduce energy consumption, these accelerators are often operated at low voltage, which causes their memory to become unreliable. Additionally, recent work demonstrated attacks targeting individual bits in memory. In both cases, the induced bit errors can significantly reduce the accuracy of DNNs. In this paper, we tackle both random (due to low voltage) and adversarial bit errors in DNNs. By explicitly taking such errors into account during training, we can improve robustness significantly.
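
As an illustration of what taking bit errors into account during training can look like, the sketch below injects random bit errors into linearly quantized 8-bit weights. The quantization scheme, error rate, and function name are simplifying assumptions and not the exact setup of the paper.

    import torch

    def inject_random_bit_errors(weights, p=0.01, bits=8):
        # simple symmetric linear quantization to integers (assumption;
        # the paper's quantization scheme may differ)
        scale = weights.abs().max() / (2 ** (bits - 1) - 1)
        q = torch.clamp(torch.round(weights / scale),
                        -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
        q = (q + 2 ** (bits - 1)).to(torch.int64)  # shift to [0, 2^bits - 1]
        # flip every bit independently with probability p, mimicking
        # random faults in low-voltage accelerator memory
        for b in range(bits):
            flips = (torch.rand_like(weights) < p).to(torch.int64)
            q = q ^ (flips << b)
        # de-quantize back to floating point
        return (q.to(weights.dtype) - 2 ** (bits - 1)) * scale

In robust training, a perturbed copy of the weights like this would be used in the forward pass so that the network learns to tolerate such errors.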

More ...

26th June 2020

PROJECT

Random and adversarial bit errors in quantized DNN weights.

More ...

ARTICLE

ArXiv Pre-Print “Adversarial Training against Location-Optimized Adversarial Patches”

While robustness against imperceptible adversarial examples is well-studied, robustness against visible adversarial perturbations such as adversarial patches is poorly understood. In this pre-print, we present a practical approach to obtain adversarial patches while actively optimizing their location within the image. On CIFAR-10 and GTSRB, we show that adversarial training on these location-optimized adversarial patches improves robustness significantly without reducing accuracy.
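
A rough sketch of how a patch attack with location optimization could look in PyTorch is shown below; the random location search, step size, pixel range, and function names are illustrative assumptions rather than the exact procedure from the pre-print.

    import torch
    import torch.nn.functional as F

    def apply_patch(x, patch, top, left):
        # paste the patch into the image at the given position
        x = x.clone()
        h, w = patch.shape[-2:]
        x[..., top:top + h, left:left + w] = patch
        return x

    def patch_attack(model, x, y, patch, steps=10, step_size=0.05, location_trials=8):
        top, left = 0, 0
        for _ in range(steps):
            # pick the location that currently maximizes the loss
            with torch.no_grad():
                best_loss = None
                for _ in range(location_trials):
                    t = torch.randint(0, x.size(-2) - patch.size(-2) + 1, (1,)).item()
                    l = torch.randint(0, x.size(-1) - patch.size(-1) + 1, (1,)).item()
                    loss = F.cross_entropy(model(apply_patch(x, patch, t, l)), y)
                    if best_loss is None or loss > best_loss:
                        best_loss, top, left = loss, t, l
            # gradient ascent on the patch content at the chosen location
            patch = patch.detach().requires_grad_(True)
            loss = F.cross_entropy(model(apply_patch(x, patch, top, left)), y)
            loss.backward()
            # assumes images in [0, 1]
            patch = (patch + step_size * patch.grad.sign()).clamp(0, 1).detach()
        return apply_patch(x, patch, top, left)

Adversarial training then uses the resulting patched images as additional training examples, analogous to standard adversarial training.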

More ...

6th May 2020

PROJECT

Adversarial training on location-optimized adversarial patches.

More ...