
TAG » DEEP LEARNING

ARTICLE

On-Manifold Adversarial Examples

Adversarial examples, imperceptibly perturbed inputs that cause misclassification, are commonly assumed to lie off the underlying manifold of the data; this is the so-called manifold assumption. In this article, following my recent CVPR’19 paper, I demonstrate that adversarial examples can also be found on the data manifold, both on a synthetic dataset and on MNIST and Fashion-MNIST.
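
To make this definition concrete, the following is a minimal PyTorch sketch of the fast gradient sign method, a standard gradient-based attack. The classifier model is assumed to be an arbitrary differentiable network; this illustrates adversarial perturbations in general and is not the specific attack used in the paper.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        # Compute adversarial examples with the fast gradient sign method:
        # perturb every input dimension by epsilon in the direction that
        # increases the classification loss.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0, 1).detach()  # keep images in [0, 1]

The resulting x_adv is typically misclassified although it stays within an epsilon-ball of x. An on-manifold attack, in contrast, searches for such perturbations in the latent space of a generative model and decodes them back to the image domain, so that the result stays on the (approximated) data manifold.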

More ...

ARTICLE

FONTS: A Synthetic MNIST-Like Dataset with Known Manifold

In deep learning and computer vision, data is often assumed to lie on a low-dimensional manifold embedded in the potentially high-dimensional input space, as is the case for images, for example. However, this manifold is usually not known, which hinders a deeper understanding of many phenomena in deep learning, such as adversarial examples. Based on my recent CVPR’19 paper, I want to present FONTS, an MNIST-like, synthetically created dataset with a known manifold for studying adversarial examples.
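
FONTS itself is generated from character prototypes of different fonts together with known transformations, so its latent variables and generative process are known by construction. Purely as an illustration of what a "known manifold" means, here is a hypothetical NumPy toy example in which a fixed, hand-specified decoder maps a 2-dimensional latent space into a 100-dimensional observation space; all names and dimensions are made up.

    import numpy as np

    rng = np.random.default_rng(0)

    # A fixed, randomly chosen but *known* nonlinear decoder from a
    # 2-dimensional latent space into a 100-dimensional observation space;
    # its image is the data manifold, known here by construction.
    W1 = rng.standard_normal((2, 50))
    W2 = rng.standard_normal((50, 100))

    def decode(z):
        # Map latent codes z of shape (n, 2) onto the manifold in R^100.
        return np.tanh(z @ W1) @ W2

    # Every sample lies exactly on the known 2-dimensional manifold.
    z = rng.uniform(-1.0, 1.0, size=(1000, 2))
    x = decode(z)

With such a construction, questions such as whether a perturbed example still lies on the manifold can be answered exactly, which is not possible for natural images.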

More ...

ARTICLE

240+ Papers on Adversarial Examples and Out-of-Distribution Detection

In the last few months, at least 50 papers per month related to adversarial examples have appeared on ArXiv alone. While not all of them may meet the high bar of conferences such as ICLR, ICML, or NeurIPS in terms of contributions and experiments, it is becoming more and more difficult to stay on top of the literature. In this article, I want to share a categorized list of more than 240 papers on adversarial examples and related topics.

More ...

ARTICLE

A Short Introduction to Bayesian Neural Networks

With the rising success of deep neural networks, their reliability in terms of robustness (for example, against various kinds of adversarial examples) and confidence estimates becomes increasingly important. Bayesian neural networks promise to address these issues by directly modeling the uncertainty of the estimated network weights. In this article, I want to give a short introduction to training Bayesian neural networks, covering three recent approaches.
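
As a rough illustration of this idea (not necessarily one of the three approaches covered in the article), the following PyTorch sketch replaces the point-estimate weights of a linear layer with a factorized Gaussian posterior, sampled at every forward pass via the reparameterization trick; all names are placeholders.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BayesianLinear(nn.Module):
        # Linear layer whose weights follow a learned factorized Gaussian
        # q(w) = N(mu, sigma^2) instead of being point estimates.
        def __init__(self, in_features, out_features):
            super().__init__()
            self.mu = nn.Parameter(torch.zeros(out_features, in_features))
            self.log_sigma = nn.Parameter(torch.full((out_features, in_features), -3.0))
            self.bias = nn.Parameter(torch.zeros(out_features))

        def forward(self, x):
            sigma = self.log_sigma.exp()
            # Reparameterization trick: sample w = mu + sigma * eps.
            weight = self.mu + sigma * torch.randn_like(sigma)
            return F.linear(x, weight, self.bias)

    # Predictive uncertainty: average several stochastic forward passes;
    # the spread of the outputs reflects the uncertainty over the weights.
    layer = BayesianLinear(10, 2)
    x = torch.randn(1, 10)
    samples = torch.stack([layer(x) for _ in range(20)])
    mean, std = samples.mean(0), samples.std(0)

For actual training, a Kullback-Leibler term between this approximate posterior and a prior over the weights would be added to the usual loss, as in variational inference.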

More ...

ARTICLE

AI and Deep Learning at the 7th Heidelberg Laureate Forum 2019

The Heidelberg Laureate Forum brings together young researchers and laureates in computer science and mathematics. Through lectures, workshops, panel discussions, and social events, the forum fosters personal and scientific exchange among young researchers and laureates. I was incredibly lucky to have the opportunity to participate in the 7th Heidelberg Laureate Forum 2019. In this article, I want to give a short overview of the forum and share some of my impressions.

More ...

ARTICLE

More Examples for Working with Torch

This article is a short follow-up on my initial collection of examples for getting started with Torch. In the meantime, through a series of additional articles, the corresponding GitHub repository has grown to include not only basic examples but also more advanced ones such as variational auto-encoders, generative adversarial networks, and adversarial auto-encoders. Here, I want to give a short overview of the added examples.

More ...