IAM

ARTICLE

On-Manifold Adversarial Training for Boosting Generalization

As outlined in previous articles, there seems to be a significant difference between regular, unconstrained adversarial examples and adversarial examples constrained to the data manifold. In this article, I want to demonstrate that adversarial training with on-manifold adversarial examples has the potential to improve generalization if the manifold is known or approximated well enough. Alternatively, for more complex datasets, knowledge of parts of the manifold is sufficient, leading to a kind of adversarial data augmentation using affine transformations.
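
To make the idea of adversarial data augmentation with affine transformations concrete, here is a minimal PyTorch sketch, not the exact procedure used in the article: the rotation angle and translation of each example are optimized by gradient ascent on the cross-entropy loss, so the network is trained on worst-case yet clearly on-manifold variations. The choice of transformations and all hyper-parameters are illustrative.

```python
import torch
import torch.nn.functional as F

def apply_affine(images, angle, shift):
    # Build per-example 2x3 affine matrices (rotation + translation) and warp the batch.
    cos, sin = torch.cos(angle), torch.sin(angle)
    theta = torch.stack([
        torch.stack([cos, -sin, shift[:, 0]], dim=1),
        torch.stack([sin, cos, shift[:, 1]], dim=1),
    ], dim=1)
    grid = F.affine_grid(theta, list(images.size()), align_corners=False)
    return F.grid_sample(images, grid, align_corners=False)

def adversarial_affine(model, images, labels, steps=10, lr=0.05,
                       max_angle=0.2, max_shift=0.1):
    # Gradient ascent on the rotation angle and translation of each example,
    # maximizing the cross-entropy loss within a small range of transformations.
    angle = torch.zeros(images.size(0), device=images.device, requires_grad=True)
    shift = torch.zeros(images.size(0), 2, device=images.device, requires_grad=True)
    optimizer = torch.optim.Adam([angle, shift], lr=lr)
    for _ in range(steps):
        loss = -F.cross_entropy(model(apply_affine(images, angle, shift)), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():  # keep the transformation within the allowed range
            angle.clamp_(-max_angle, max_angle)
            shift.clamp_(-max_shift, max_shift)
    with torch.no_grad():
        return apply_affine(images, angle, shift)  # augmented batch for training
```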

More ...

ARTICLE

Adversarial Examples Leave the Data Manifold

Adversarial examples are commonly assumed to leave the manifold of the underlying data, although this has not been confirmed experimentally so far. This means that deep neural networks perform well on the manifold; however, slight perturbations in directions leaving the manifold may cause misclassification. In this article, based on my recent CVPR’19 paper, I want to empirically show that adversarial examples indeed leave the manifold. For this purpose, I will present results on a synthetic dataset with known manifold as well as on MNIST with an approximated manifold.
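
As a rough illustration of what "leaving the manifold" means in practice, the following sketch measures the distance of examples to an approximated manifold. Here, the approximation is simply the reconstruction of a learned autoencoder; the `encoder`/`decoder` attributes are placeholders, and this is a simplification of the methodology used in the paper.

```python
import torch

def distance_to_manifold(autoencoder, examples):
    # Simplified proxy: project each example onto the approximated manifold by
    # encoding and decoding it, then measure the L2 distance to the projection.
    with torch.no_grad():
        projections = autoencoder.decoder(autoencoder.encoder(examples))
        return (examples - projections).flatten(1).norm(dim=1)

# Comparing the distances of clean test examples with those of their adversarial
# counterparts then indicates whether the perturbations leave the manifold.
```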

More ...

ARTICLE

Code Released: Confidence-Calibrated Adversarial Training

The code for my latest paper on confidence-calibrated adversarial training has been released on GitHub. The repository includes not only a PyTorch implementation of confidence-calibrated adversarial training, but also several white- and black-box attacks for generating adversarial examples and the proposed confidence-thresholded robust test error. Furthermore, these implementations are fully tested and allow reproducing the results from the paper. This article gives an overview of the repository and highlights its features and components.
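
The confidence-thresholded robust test error can be summarized roughly as follows; this is a simplified sketch of the idea, and the repository's exact definition, including how the threshold is chosen, may differ in detail.

```python
import torch

def confidence_thresholded_error(logits, labels, tau):
    # Predictions with confidence below the threshold tau are rejected; errors
    # are counted only among the accepted examples. Applied to logits computed
    # on adversarial examples, this yields a (simplified) confidence-thresholded
    # robust test error.
    probabilities = torch.softmax(logits, dim=1)
    confidences, predictions = probabilities.max(dim=1)
    accepted = confidences >= tau
    errors = accepted & (predictions != labels)
    return errors.float().sum() / accepted.float().sum().clamp(min=1)
```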

More ...

ARTICLE

Talk on Confidence-Calibrated Adversarial Training at BCAI and Tübingen AI Center

Recently, I had the opportunity to present my work on confidence-calibrated adversarial training at the Bosch Center for Artificial Intelligence and the University of Tübingen, specifically, the newly formed Tübingen AI Center. As part of the talk, I outlined the motivation and strengths of confidence-calibrated adversarial training compared to standard adversarial training: robustness against previously unseen attacks and improved accuracy. I also touched on the difficulties faced during robustness evaluation. This article provides the corresponding slides and gives a short overview of the talk.

More ...

ARTICLE

Updated ArXiv Pre-Print “Confidence-Calibrated Adversarial Training”

Adversarial training yields robust models against a specific threat model. However, robustness does not generalize to larger perturbations or threat models not seen during training. Confidence-calibrated adversarial training tackles this problem by biasing the network towards low-confidence predictions on adversarial examples. By rejecting low-confidence (adversarial) examples, robustness generalizes to various threat models, including L2, L1 and L0, while training only on L∞ adversarial examples. This article gives a short abstract, discusses relevant updates to the previous version and includes paper and appendix.
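
To illustrate the biasing towards low-confidence predictions, here is a sketch of a training target that interpolates between the one-hot label and the uniform distribution depending on the size of the perturbation. The concrete interpolation schedule below reflects my reading of the paper and is not guaranteed to match the final version.

```python
import torch
import torch.nn.functional as F

def confidence_calibrated_loss(logits, labels, perturbation_norm, epsilon,
                               num_classes, rho=10):
    # For small perturbations the target stays close to the one-hot label; as the
    # L-infinity norm of the perturbation approaches epsilon, the target moves
    # towards the uniform distribution, i.e., minimal confidence.
    lam = (1.0 - torch.clamp(perturbation_norm / epsilon, max=1.0)) ** rho
    one_hot = F.one_hot(labels, num_classes).float()
    uniform = torch.full_like(one_hot, 1.0 / num_classes)
    target = lam.unsqueeze(1) * one_hot + (1.0 - lam.unsqueeze(1)) * uniform
    # Cross-entropy between the network's prediction and the soft target.
    return -(target * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```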

More ...

ARTICLE

On-Manifold Adversarial Examples

Adversarial examples, imperceptibly perturbed examples causing misclassification, are commonly assumed to lie off the underlying manifold of the data; this is the so-called manifold assumption. In this article, following my recent CVPR’19 paper, I demonstrate that adversarial examples can also be found on the data manifold, both on a synthetic dataset and on MNIST and Fashion-MNIST.
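
A minimal sketch of how such on-manifold adversarial examples can be constructed, assuming a decoder that approximates the data manifold (for example, from a VAE trained on the data): the latent code is perturbed instead of the image, so the result stays on (the approximation of) the manifold. Hyper-parameters and the latent-space constraint below are illustrative choices, not the paper's exact setup.

```python
import torch
import torch.nn.functional as F

def on_manifold_attack(model, decoder, z, labels, steps=20, lr=0.01, epsilon=0.3):
    # Perturb the latent code z such that decoder(z + delta) is misclassified,
    # while keeping the perturbation in latent space small.
    delta = torch.zeros_like(z, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        loss = -F.cross_entropy(model(decoder(z + delta)), labels)  # ascend on the loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():  # constrain the latent perturbation
            delta.clamp_(-epsilon, epsilon)
    with torch.no_grad():
        return decoder(z + delta)
```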

More ...

ARTICLE

FONTS: A Synthetic MNIST-Like Dataset with Known Manifold

In deep learning and computer vision, data is often assumed to lie on a low-dimensional manifold, embedded within the potentially high-dimensional input space, as is the case for images. However, the manifold is usually not known, which hinders a deeper understanding of many phenomena in deep learning, such as adversarial examples. Based on my recent CVPR’19 paper, I want to present FONTS, an MNIST-like, synthetically created dataset with known manifold to study adversarial examples.
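
To give an idea of how a dataset with known manifold can be generated, the following sketch renders a character from a TrueType font and rotates it: the latent variables (font, character, transformation parameters) fully determine the image, so the manifold is known by construction. The rendering details and the `font_path` placeholder are illustrative and not necessarily how FONTS itself was created.

```python
import numpy as np
from PIL import Image, ImageDraw, ImageFont

def render_example(font_path, character, angle, size=28):
    # font_path points to any TrueType font file (placeholder); the resulting
    # grayscale image is determined entirely by the latent variables.
    canvas = Image.new('L', (size, size), 0)
    font = ImageFont.truetype(font_path, int(size * 0.8))
    ImageDraw.Draw(canvas).text((size // 4, 0), character, fill=255, font=font)
    return np.asarray(canvas.rotate(angle), dtype=np.float32) / 255.0
```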

More ...

ARTICLE

240+ Papers on Adversarial Examples and Out-of-Distribution Detection

In the last few months, there were at least 50 papers per month related to adversarial examples — on ArXiv alone. While not all of them might meet the high bar of conferences such as ICLR, ICML or NeurIPS regarding their contributions and experiments, it becomes more and more difficult to stay on top of the literature. In this article, I want to share a categorized list of more than 240 papers on adversarial examples and related topics.

More ...

ARTICLE

A Short Introduction to Bayesian Neural Networks

With the rising success of deep neural networks, their reliability in terms of robustness (for example, against various kinds of adversarial examples) and confidence estimates becomes increasingly important. Bayesian neural networks promise to address these issues by directly modeling the uncertainty of the estimated network weights. In this article, I want to give a short introduction to training Bayesian neural networks, covering three recent approaches.
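
As a taste of how uncertainty over weights translates into uncertainty over predictions, here is a sketch of Monte Carlo dropout, a simple and widely used Bayesian approximation; it may or may not be among the three approaches covered in the article.

```python
import torch

def mc_dropout_predict(model, inputs, samples=20):
    # Keep dropout active at test time and average the softmax outputs over
    # several stochastic forward passes; the spread across samples serves as a
    # simple estimate of the predictive uncertainty.
    model.train()  # enables dropout (note: this also affects batch norm layers)
    with torch.no_grad():
        probabilities = torch.stack([
            torch.softmax(model(inputs), dim=1) for _ in range(samples)
        ])
    return probabilities.mean(dim=0), probabilities.std(dim=0)
```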

More ...

ARTICLE

AI and Deep Learning at the 7th Heidelberg Laureate Forum 2019

The Heidelberg Laureate Forum brings together young researchers and laureates in computer science and mathematics. During lectures, workshops, panel discussions and social events, the forum fosters personal and scientific exchange with other young researchers as well as laureates. I was incredibly lucky to have the opportunity to participate in the 7th Heidelberg Laureate Forum 2019. In this article, I want to give a short overview of the forum and share some of my impressions.

More ...