
ARTICLE

What $L_p$ Adversarial Examples Make Sense on Common Vision Datasets?

Adversarial examples are intended to be imperceptible perturbations that cause mis-classification without changing the true class. Still, there is no consensus on which changes are considered imperceptible, or on when the true class actually changes or is no longer recognizable. In this article, I want to explore what levels of $L_\infty$, $L_0$ and $L_1$ adversarial noise actually make sense on popular computer vision datasets such as MNIST, Fashion-MNIST, SVHN or Cifar10.
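
To make these noise levels concrete, here is a minimal NumPy sketch (not taken from the article; the perturbation and any budget values are placeholders) that compares the $L_\infty$, $L_0$ and $L_1$ norms of the difference between a clean image and its adversarial counterpart:

```python
import numpy as np

# Minimal sketch: compare the L_inf, L_0 and L_1 norms of a perturbation.
# Images are assumed to lie in [0, 1] with shape (height, width, channels).
clean = np.random.rand(32, 32, 3)
adversarial = np.clip(clean + np.random.uniform(-0.03, 0.03, clean.shape), 0, 1)

delta = (adversarial - clean).ravel()
l_inf = np.abs(delta).max()    # largest change of any single value
l_0 = np.count_nonzero(delta)  # number of changed values
l_1 = np.abs(delta).sum()      # total absolute change

print(l_inf, l_0, l_1)
```

A dense but tiny perturbation such as the one above has a small $L_\infty$ norm but large $L_0$ and $L_1$ norms, which is why a sensible noise budget depends on both the norm and the dataset.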

More ...

ARTICLE

ICML Talk “Confidence-Calibrated Adversarial Training”

Confidence-calibrated adversarial training (CCAT) addresses two problems of training on adversarial examples: the lack of robustness against adversarial examples unseen during training, and the reduced (clean) accuracy. In particular, CCAT biases the model towards predicting low confidence on adversarial examples such that adversarial examples can be rejected by confidence thresholding. In this article, I want to share the slides of the corresponding ICML talk.
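
As a rough illustration of the rejection step only, here is a minimal PyTorch sketch (not taken from the talk; the threshold and model are placeholders) of prediction with confidence thresholding:

```python
import torch
import torch.nn.functional as F

# Minimal sketch of rejection by confidence thresholding; the threshold is a placeholder.
def predict_with_rejection(model, inputs, threshold=0.9):
    probabilities = F.softmax(model(inputs), dim=1)
    confidences, predictions = probabilities.max(dim=1)
    # CCAT biases the model towards low confidence on adversarial examples,
    # so such inputs fall below the threshold and are rejected (marked as -1).
    predictions[confidences < threshold] = -1
    return predictions

# usage with a dummy model
model = torch.nn.Linear(784, 10)
print(predict_with_rejection(model, torch.randn(4, 784)))
```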

More ...

ARTICLE

ICML Paper “Confidence-Calibrated Adversarial Training”

Our paper on confidence-calibrated adversarial training was accepted at ICML’20. In the revised paper, the proposed confidence-calibrated adversarial training tackles the problem of obtaining robustness that generalizes to attacks not seen during training. This is achieved by biasing the network towards low-confidence predictions on adversarial examples and rejecting these low-confidence examples at test time. This article gives a short abstract and includes paper and code.

More ...

ARTICLE

ArXiv Pre-Print “On Mitigating Random and Adversarial Bit Errors”

Deep neural network (DNN) accelerators are specialized hardware for inference and have received considerable attention in recent years. To reduce energy consumption, these accelerators are often operated at low voltage, which causes the accelerator memory to become unreliable. Additionally, recent work demonstrated attacks targeting individual bits in memory. In both cases, the induced bit errors can significantly reduce the accuracy of DNNs. In this paper, we tackle both random (due to low voltage) and adversarial bit errors in DNNs. By explicitly taking such errors into account during training, we can improve robustness significantly.
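
As a rough illustration of what such bit errors look like, here is a minimal NumPy sketch (not the paper's quantization or training scheme; the error probability is a placeholder) that flips random bits in 8-bit quantized weights:

```python
import numpy as np

# Minimal sketch: flip random bits in 8-bit quantized weights to simulate
# unreliable (low-voltage) accelerator memory. Not the paper's exact scheme.
def inject_random_bit_errors(weights, probability=0.01, seed=None):
    rng = np.random.default_rng(seed)
    # naive symmetric 8-bit quantization into uint8
    w_max = np.abs(weights).max()
    quantized = np.round(weights / w_max * 127 + 128).astype(np.uint8)
    # each of the 8 bits flips independently with the given probability
    flips = rng.random((quantized.size, 8)) < probability
    masks = (flips * (1 << np.arange(8))).sum(axis=1).astype(np.uint8)
    corrupted = quantized ^ masks.reshape(quantized.shape)
    # de-quantize back to floating point
    return (corrupted.astype(np.float32) - 128) / 127 * w_max

noisy_weights = inject_random_bit_errors(np.random.randn(64, 64), probability=0.001)
```

Corrupting the quantized weights like this in the forward pass is one way of exposing a network to such errors during training.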

More ...

ARTICLE

Illustrating (Convolutional) Neural Networks in LaTeX with TikZ

Many papers and theses provide high-level overviews of the proposed methods. Nowadays, in computer vision, natural language processing and similar research areas strongly driven by deep learning, these illustrations commonly include the architecture of the (convolutional) neural networks used. In this article, I want to provide a collection of examples using LaTeX and TikZ to produce nice figures of (convolutional) neural networks. All the discussed examples can also be found on GitHub.

More ...

ARTICLE

ArXiv Pre-Print “Adversarial Training against Location-Optimized Adversarial Patches”

While robustness against imperceptible adversarial examples is well-studied, robustness against visible adversarial perturbations such as adversarial patches is poorly understood. In this pre-print, we present a practical approach to obtain adversarial patches while actively optimizing their location within the image. On Cifar10 and GTSRB, we show that adversarial training on these location-optimized adversarial patches improves robustness significantly while not reducing accuracy.
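
As a rough sketch of the basic idea (the function names and the candidate search below are illustrative, not the pre-print's implementation), a patch can be pasted at a candidate location and the location chosen to maximize the classifier's loss:

```python
import torch
import torch.nn.functional as F

# Illustrative sketch: paste a square patch at (y, x) and pick the candidate
# location that maximizes the cross-entropy loss. Not the pre-print's implementation.
def apply_patch(images, patch, y, x):
    patched = images.clone()
    patched[:, :, y:y + patch.shape[1], x:x + patch.shape[2]] = patch
    return patched

def best_location(model, images, labels, patch, candidates):
    losses = []
    for y, x in candidates:
        logits = model(apply_patch(images, patch, y, x))
        losses.append(F.cross_entropy(logits, labels).item())
    return candidates[int(torch.tensor(losses).argmax())]
```

Adversarial training then trains on images carrying the patches placed at the locations found this way.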

More ...

ARTICLE

Implementing Custom PyTorch Tensor Operations in C and CUDA

PyTorch, alongside TensorFlow, has become standard among deep learning researchers and practitioners. While PyTorch provides a large variety of tensor operations and deep learning layers, some specialized operations still need to be implemented manually. In cases where runtime is crucial, this should be done in C or CUDA to support both CPU and GPU computation. In this article, I want to provide a simple example and framework for extending PyTorch with custom C and CUDA operations using CFFI for Python and CuPy.
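
As a rough, hedged sketch of the CuPy side only (the kernel and function names are made up for this example, and the DLPack round-trip assumes a recent PyTorch and CuPy), a raw CUDA kernel can be applied to a PyTorch GPU tensor without copying:

```python
import numpy as np
import cupy as cp
import torch

# Illustrative sketch: a raw CUDA kernel launched through CuPy and applied to a
# PyTorch CUDA tensor via zero-copy DLPack interop. Names are made up for this example.
_kernel = cp.RawKernel(r'''
extern "C" __global__
void scaled_clamp(const float* x, float* y, float scale, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float v = scale * x[i];
        y[i] = v < 0.f ? 0.f : (v > 1.f ? 1.f : v);
    }
}
''', 'scaled_clamp')

def scaled_clamp(tensor, scale=1.0):
    x = cp.from_dlpack(tensor.contiguous())  # view the CUDA tensor as a CuPy array
    y = cp.empty_like(x)
    threads = 256
    blocks = (x.size + threads - 1) // threads
    _kernel((blocks,), (threads,), (x, y, np.float32(scale), np.int32(x.size)))
    return torch.from_dlpack(y)              # view the result as a PyTorch tensor

out = scaled_clamp(torch.rand(1024, device='cuda'), scale=2.0)
```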

More ...

ARTICLE

Adversarial Training Has Higher Sample Complexity

Training on adversarial examples generated on-the-fly, so-called adversarial training, improves robustness against adversarial examples while incurring a significant drop in accuracy. This apparent trade-off between robustness and accuracy has been observed on many datasets and is argued to be inherent to adversarial training, or even unavoidable. In this article, based on my recent CVPR’19 paper, I show experimental results indicating that adversarial training can achieve the same accuracy as normal training if more training examples are available. This suggests that adversarial training has higher sample complexity.
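
For reference, the basic adversarial training loop looks roughly like the following sketch (a single-step FGSM attack is used for brevity and epsilon is a placeholder; in practice, multi-step attacks such as PGD are common):

```python
import torch
import torch.nn.functional as F

# Minimal sketch of one adversarial training step with a single-step (FGSM) attack.
def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    # generate adversarial examples on-the-fly
    images = images.clone().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad = torch.autograd.grad(loss, images)[0]
    adversarial = (images + epsilon * grad.sign()).clamp(0, 1).detach()

    # train on the adversarial examples instead of the clean ones
    optimizer.zero_grad()
    loss = F.cross_entropy(model(adversarial), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```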

More ...

ARTICLE

On-Manifold Adversarial Training for Boosting Generalization

As outlined in previous articles, there seems to be a significant difference between regular, unconstrained adversarial examples and adversarial examples constrained to the data manifold. In this article, I want to demonstrate that adversarial training with on-manifold adversarial examples has the potential to improve generalization if the manifold is known or approximated well enough. As an alternative, for more complex datasets, knowledge of parts of the manifold is sufficient, leading to a kind of adversarial data augmentation using affine transformations.

More ...

ARTICLE

Adversarial Examples Leave the Data Manifold

Adversarial examples are commonly assumed to leave the manifold of the underlying data, although this has not been confirmed experimentally so far. This means that deep neural networks perform well on the manifold; however, slight perturbations in directions leaving the manifold may cause mis-classification. In this article, based on my recent CVPR’19 paper, I want to empirically show that adversarial examples indeed leave the manifold. For this purpose, I will present results on a synthetic dataset with known manifold as well as on MNIST with an approximated manifold.
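
One simple way to probe this, sketched below purely for illustration (this is not necessarily the paper's methodology, and the decoder is an untrained stand-in so the snippet runs as-is), is to measure the distance of an example to the range of a decoder that approximates the manifold:

```python
import torch

# Illustrative sketch: approximate the distance of an example x to a learned manifold
# by searching for the closest point in the range of a decoder. The decoder below is
# an untrained stand-in just to keep the snippet self-contained.
decoder = torch.nn.Sequential(torch.nn.Linear(8, 64), torch.nn.Tanh(), torch.nn.Linear(64, 784))

def distance_to_manifold(x, steps=200, lr=0.1):
    z = torch.zeros(1, 8, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        ((decoder(z) - x) ** 2).sum().backward()
        optimizer.step()
    with torch.no_grad():
        return ((decoder(z) - x) ** 2).sum().item()

# an example that leaves the manifold should show a larger distance than a clean one
print(distance_to_manifold(torch.rand(1, 784)))
```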

More ...