ARTICLE

Awarded DAGM MVTec Dissertation Award 2023

In September, I received the DAGM MVTec Dissertation Award 2023 for my PhD thesis. DAGM is the German Association for Pattern Recognition and organizes the German Conference on Pattern Recognition (GCPR), which is Germany’s premier conference for computer vision and related research areas. I feel particularly honored by this award since my academic career started with my first paper, published as part of the young researcher forum at GCPR 2015 in Aachen.

More ...

ARTICLE

Simple Adversarial Transformations in PyTorch

Adversarial transformations, such as small crops, rotations, and translations, are another alternative to the regular $L_p$-constrained adversarial examples and are even less conspicuous than adversarial patches or frames. Similar to $L_p$ adversarial examples, they are often hard to spot unless the original image is available for direct comparison. In this article, I present a PyTorch implementation and some results against adversarial training.
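
To give a flavor of the approach, here is a minimal sketch of what such an attack could look like in PyTorch: an exhaustive search over small rotations and translations, keeping the per-image worst case. The function names and the chosen parameterization are illustrative and not necessarily those used in the article.

```python
import itertools
import math

import torch
import torch.nn.functional as F


def rotate_translate(images, angle, tx, ty):
    # Rotation by `angle` (radians) plus translation (tx, ty) in normalized
    # [-1, 1] coordinates, implemented via an affine grid.
    n = images.size(0)
    theta = torch.tensor([
        [math.cos(angle), -math.sin(angle), tx],
        [math.sin(angle), math.cos(angle), ty],
    ], device=images.device, dtype=images.dtype)
    grid = F.affine_grid(theta.unsqueeze(0).expand(n, -1, -1), images.size(), align_corners=False)
    return F.grid_sample(images, grid, align_corners=False)


def worst_case_transformation(model, images, labels, angles, shifts):
    # Exhaustive search over small rotations/translations; per image, keep the
    # transformation that maximizes the cross-entropy loss.
    best_loss = torch.full((images.size(0),), -float('inf'), device=images.device)
    best_images = images.clone()
    for angle, tx, ty in itertools.product(angles, shifts, shifts):
        transformed = rotate_translate(images, angle, tx, ty)
        with torch.no_grad():
            loss = F.cross_entropy(model(transformed), labels, reduction='none')
        improved = loss > best_loss
        best_loss[improved] = loss[improved]
        best_images[improved] = transformed[improved]
    return best_images
```

Here, angles would be a handful of small rotations in radians and shifts a few normalized translations, so the search stays cheap while still finding per-image worst cases.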

More ...

ARTICLE

Adversarial Patches and Frames in PyTorch

Adversarial patches and frames are an alternative to the regular $L_p$-constrained adversarial examples. Adversarial patches are often thought to be more realistic, mirroring graffiti or stickers in the real world. In this article, I want to discuss a simple PyTorch implementation and present some results of adversarial patches against adversarial training as well as confidence-calibrated adversarial training.
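
As a teaser, the following is a minimal sketch of how a per-image adversarial patch could be optimized in PyTorch, assuming images in [0, 1] and, for simplicity, a fixed patch location in the top-left corner; the implementation in the article may differ, for example by randomizing the patch location.

```python
import torch
import torch.nn.functional as F


def adversarial_patch(model, images, labels, patch_size=8, iterations=100, step_size=0.05):
    # Optimize a square patch per image that maximizes the cross-entropy loss;
    # the patch is pasted at a fixed top-left location via a binary mask.
    n, c, h, w = images.shape
    patch = torch.rand(n, c, patch_size, patch_size, device=images.device, requires_grad=True)
    mask = torch.zeros_like(images)
    mask[:, :, :patch_size, :patch_size] = 1

    for _ in range(iterations):
        # pad the patch to full image size and paste it using the mask
        padded = F.pad(patch, (0, w - patch_size, 0, h - patch_size))
        perturbed = (1 - mask) * images + mask * padded
        loss = F.cross_entropy(model(perturbed), labels)
        grad, = torch.autograd.grad(loss, patch)
        with torch.no_grad():
            patch.add_(step_size * grad.sign())  # signed gradient ascent
            patch.clamp_(0, 1)                   # keep valid pixel values

    padded = F.pad(patch, (0, w - patch_size, 0, h - patch_size))
    return ((1 - mask) * images + mask * padded).detach()
```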

More ...

ARTICLE

Distal Adversarial Examples Against Neural Networks in PyTorch

Out-of-distribution examples are images that are clearly irrelevant to the task at hand. Unfortunately, deep neural networks frequently assign arbitrary labels with high confidence to such examples. In this article, I want to discuss an adversarial way of computing high-confidence out-of-distribution examples, so-called distal adversarial examples, and how confidence-calibrated adversarial training handles them.
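
To make this concrete, here is a hedged sketch in PyTorch: starting from uniform noise, the attack maximizes the model’s confidence (the maximum softmax probability) within a small $L_\infty$ ball around that noise. The objective and hyper-parameters are illustrative, not necessarily those used in the article.

```python
import torch
import torch.nn.functional as F


def distal_adversarial_examples(model, batch_size, shape, epsilon=0.3,
                                iterations=200, step_size=0.01, device='cuda'):
    # Start from uniform noise and maximize the maximum predicted probability
    # within an L_inf ball of radius epsilon around that noise.
    noise = torch.rand(batch_size, *shape, device=device)
    delta = torch.zeros_like(noise, requires_grad=True)
    for _ in range(iterations):
        probabilities = F.softmax(model(noise + delta), dim=1)
        confidence = probabilities.max(dim=1)[0]  # the model's confidence
        grad, = torch.autograd.grad(confidence.sum(), delta)
        with torch.no_grad():
            delta.add_(step_size * grad.sign())   # ascent on the confidence
            delta.clamp_(-epsilon, epsilon)       # project onto the L_inf ball
            # additionally keep the overall images in [0, 1]
            delta.copy_((noise + delta).clamp(0, 1) - noise)
    return (noise + delta).detach()
```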

More ...

ARTICLE

Proper Robustness Evaluation of Confidence-Calibrated Adversarial Training in PyTorch

Properly evaluating defenses against adversarial examples has been difficult as adversarial attacks need to be adapted to each individual defense. This also holds for confidence-calibrated adversarial training, where robustness is obtained by rejecting adversarial examples based on their confidence. Thus, regular robustness metrics and attacks are not easily applicable. In this article, I want to discuss how to evaluate confidence-calibrated adversarial training in terms of metrics and attacks.
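
As a rough illustration of the kind of metric involved, the sketch below computes a confidence threshold on correctly classified clean examples and a simplified confidence-thresholded robust test error. The function names and the simplified definition are mine; the exact metrics and the per-example worst case over attacks are detailed in the article.

```python
import torch


def confidence_threshold(clean_confidences, clean_correct, tpr=0.99):
    # Threshold tau such that roughly 99% of correctly classified clean
    # examples keep a confidence above tau, i.e., are not rejected.
    return torch.quantile(clean_confidences[clean_correct], 1 - tpr)


def thresholded_robust_test_error(adv_confidences, adv_correct, clean_correct, tau):
    # Simplified version: an example counts as an error if it is misclassified
    # on the clean input, or if its adversarial example is misclassified and
    # not rejected (confidence >= tau).
    adv_errors = (~adv_correct) & (adv_confidences >= tau)
    return (adv_errors | (~clean_correct)).float().mean()
```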

More ...

ARTICLE

Guest on Jay Shah’s Machine Learning Podcast

Recently, I had the opportunity to be a guest on Jay Shah’s podcast where he regularly talks to machine learning professionals from industry and academia. We had a great conversation about my PhD research and topics surrounding a successful career in machine learning — finding a good PhD program and research topic, preparing for interviews in industry, etc.

More ...

ARTICLE

Generalizing Adversarial Robustness with Confidence-Calibrated Adversarial Training in PyTorch

Taking adversarial training from this previous article as baseline, this article introduces a new, confidence-calibrated variant of adversarial training that addresses two significant flaws: First, trained with $L_\infty$ adversarial examples, adversarial training is not robust against $L_2$ ones. Second, it incurs a significant increase in (clean) test error. Confidence-calibrated adversarial training addresses these problems by encouraging lower confidence on adversarial examples and subsequently rejecting them.
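
To hint at how this works, here is a small sketch of the kind of target distribution used during confidence-calibrated adversarial training: the one-hot label is interpolated towards the uniform distribution depending on the perturbation size, so larger perturbations receive lower-confidence targets. Parameter names such as rho are mine; the full training loop is in the article.

```python
import torch
import torch.nn.functional as F


def ccat_targets(labels, deltas, epsilon, num_classes, rho=10):
    # Interpolate between the one-hot label and the uniform distribution:
    # the larger the perturbation, the closer the target is to uniform,
    # i.e., the lower the confidence the network is trained to have.
    norms = deltas.flatten(1).abs().max(dim=1)[0]            # L_inf norm per example
    lam = (1 - torch.clamp(norms / epsilon, max=1)) ** rho   # interpolation factor in [0, 1]
    one_hot = F.one_hot(labels, num_classes).float()
    uniform = torch.full_like(one_hot, 1.0 / num_classes)
    return lam.unsqueeze(1) * one_hot + (1 - lam.unsqueeze(1)) * uniform


def soft_cross_entropy(logits, targets):
    # Cross-entropy against a full target distribution instead of hard labels.
    return -(targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```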

More ...

ARTICLE

47.9% Robust Test Error on CIFAR10 with Adversarial Training and PyTorch

Knowing how to compute adversarial examples from this previous article, we would ideally like to train models for which such adversarial examples do not exist. This is the goal of developing adversarially robust training procedures. In this article, I want to describe a particularly popular approach called adversarial training. The idea is to train on adversarial examples computed on-the-fly during training. I will also discuss a PyTorch implementation that obtains 47.9% robust test error (52.1% robust accuracy) on CIFAR10 using a WRN-28-10 architecture.
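
For a flavor of what this looks like, below is a minimal sketch of one training step with $L_\infty$ PGD computed on-the-fly. The hyper-parameters are typical CIFAR10 defaults, not necessarily the exact ones used to reach the reported numbers.

```python
import torch
import torch.nn.functional as F


def adversarial_training_step(model, optimizer, images, labels,
                              epsilon=8/255, alpha=2/255, iterations=7):
    # Compute L_inf PGD adversarial examples on-the-fly and train on them.
    delta = torch.empty_like(images).uniform_(-epsilon, epsilon).requires_grad_(True)
    for _ in range(iterations):
        loss = F.cross_entropy(model((images + delta).clamp(0, 1)), labels)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta.add_(alpha * grad.sign())      # ascent step on the loss
            delta.clamp_(-epsilon, epsilon)      # project onto the L_inf ball

    optimizer.zero_grad()
    loss = F.cross_entropy(model((images + delta.detach()).clamp(0, 1)), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```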

More ...

ARTICLE

Some Research Ideas for Conformal Training

With our paper on conformal training, we showed how conformal prediction can be integrated into end-to-end training pipelines. There are so many interesting directions of how to improve and build upon conformal training. Unfortunately, I just do not have the bandwidth to pursue all of them. So, in this article, I want to share some research ideas so others can pick them up.
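
For readers unfamiliar with the setup, here is a deliberately simplified sketch of the core training idea, not the paper’s exact scores or losses: split each batch into a calibration part, from which a differentiable threshold is obtained, and a prediction part, on which the smoothed prediction-set size is penalized. All names and defaults below are illustrative.

```python
import torch
import torch.nn.functional as F


def conformal_training_size_loss(logits, labels, alpha=0.1, temperature=0.1):
    # Simplified sketch: calibrate a (differentiable) threshold on one half of
    # the batch and penalize the expected prediction-set size on the other half.
    n = logits.size(0) // 2
    cal_probs = F.softmax(logits[:n], dim=1)
    pred_probs = F.softmax(logits[n:], dim=1)
    # conformity scores: probability of the true class on the calibration half
    cal_scores = cal_probs.gather(1, labels[:n].unsqueeze(1)).squeeze(1)
    # threshold such that roughly a 1 - alpha fraction of true classes is covered
    tau = torch.quantile(cal_scores, alpha)
    # smooth set membership and expected set size on the prediction half
    membership = torch.sigmoid((pred_probs - tau) / temperature)
    return membership.sum(dim=1).mean()
```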

More ...

ARTICLE

Lp Adversarial Examples using Projected Gradient Descent in PyTorch

Adversarial examples, slightly perturbed images causing misclassification, have received considerable attention over the last few years. While many different adversarial attacks have been proposed, projected gradient descent (PGD) and its variants are widely used for reliable evaluation and adversarial training. In this article, I want to present my implementation of PGD to generate $L_\infty$, $L_2$, $L_1$, and $L_0$ adversarial examples. Besides using several iterations and multiple attempts, the implementation returns the per-example worst case across all iterations, and momentum as well as backtracking strengthen the attack.
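
As a minimal example, the following sketch implements the $L_\infty$ case with the worst-case result kept across iterations; multiple attempts, momentum, backtracking, and the other norms are left to the article. The function name and defaults are mine.

```python
import torch
import torch.nn.functional as F


def pgd_linf(model, images, labels, epsilon=8/255, alpha=2/255, iterations=40):
    # Plain L_inf PGD with a random start; per example, the perturbation with
    # the highest loss seen across all iterations is returned.
    delta = torch.empty_like(images).uniform_(-epsilon, epsilon).requires_grad_(True)
    best_loss = torch.full((images.size(0),), -float('inf'), device=images.device)
    best_delta = delta.detach().clone()

    for _ in range(iterations):
        loss = F.cross_entropy(model((images + delta).clamp(0, 1)), labels, reduction='none')
        with torch.no_grad():
            improved = loss > best_loss
            best_loss[improved] = loss.detach()[improved]
            best_delta[improved] = delta.detach()[improved]
        grad, = torch.autograd.grad(loss.sum(), delta)
        with torch.no_grad():
            delta.add_(alpha * grad.sign())      # ascent step
            delta.clamp_(-epsilon, epsilon)      # project onto the L_inf ball

    return (images + best_delta).clamp(0, 1).detach()
```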

More ...