IAM

TAG » DEEP LEARNING «

JANUARY 2024

PROJECT

Adversarial training on a subset of (difficult) examples to improve efficiency.

More ...
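A minimal sketch of the idea in PyTorch, assuming the difficult examples are selected by their clean cross-entropy loss; the selection criterion and the `attack` helper are illustrative placeholders, not the project's actual method:

```python
import torch
import torch.nn.functional as F

def subset_adversarial_training_step(model, optimizer, images, labels, attack, k):
    # rank examples by clean loss and run the expensive attack only on the k hardest
    with torch.no_grad():
        clean_loss = F.cross_entropy(model(images), labels, reduction="none")
    hard = clean_loss.topk(k).indices
    adv = images.clone()
    adv[hard] = attack(model, images[hard], labels[hard]).detach()
    # train on the mixed batch: adversarial hard examples, clean easy ones
    optimizer.zero_grad()
    loss = F.cross_entropy(model(adv), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The appeal is that the attack, typically several forward/backward passes per example, is only paid for the subset, while the remaining examples still contribute a clean loss.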

JANUARY 2024

PROJECT

Introducing slack control to improve accuracy and certified robustness with Lipschitz regularization.

More ...
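I will not reproduce the slack mechanism here, since that is the project's actual contribution, but a rough PyTorch sketch of the kind of Lipschitz regularizer it builds on could look as follows. The fixed budget with a hinge is a crude stand-in for the slack control, and flattening convolution kernels only gives a loose proxy for the true operator norm:

```python
import torch
import torch.nn.functional as F

def spectral_norm(weight: torch.Tensor, iters: int = 10) -> torch.Tensor:
    # power iteration to estimate the largest singular value of a weight matrix;
    # conv kernels are flattened, which only approximates the conv operator norm
    w = weight.flatten(1)
    u = torch.randn(w.size(0), device=w.device)
    with torch.no_grad():
        for _ in range(iters):
            v = F.normalize(w.t() @ u, dim=0)
            u = F.normalize(w @ v, dim=0)
    return torch.dot(u, w @ v)  # differentiable with respect to the weight

def lipschitz_penalty(model: torch.nn.Module, budget: float = 1.0):
    # hinge penalty: layers whose spectral norm stays below the budget
    # are not penalized, a very simple form of slack
    penalty = 0.0
    for module in model.modules():
        if isinstance(module, (torch.nn.Linear, torch.nn.Conv2d)):
            penalty = penalty + torch.relu(spectral_norm(module.weight) - budget)
    return penalty
```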

JANUARY 2024

PROJECT

Robustifying token attention for vision transformers.

More ...

JANUARY 2024

PROJECT

Improving patch robustness of vision transformers.

More ...

ARTICLE

Interviewed by AI Coffee Break with Letitia

While attending the Heidelberg Laureate Forum this year, I got to meet Letitia Parcalabescu, who runs the YouTube channel AI Coffee Break. Among other topics, we talked about my PhD research on adversarial robustness. Part of our conversation can now be found on her YouTube channel.

More ...

ARTICLE

Benchmarking Bit Errors in Quantized Neural Networks with PyTorch

Similar to my article series on adversarial robustness, I was planning a series on bit error robustness accompanied by PyTorch code. Instead, due to time constraints, I decided to condense the information into a single article. The code for the originally planned six articles is available on GitHub.

More ...
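To give a flavor of the setup, here is a minimal sketch of random bit error injection into quantized weights; it is not the repository's code, and the simple per-tensor fake quantization is an assumption for illustration:

```python
import torch

def inject_bit_errors(weights_int8: torch.Tensor, p: float) -> torch.Tensor:
    # flip every bit of an int8 weight tensor independently with probability p
    raw = weights_int8.view(torch.uint8)  # reinterpret int8 values as raw bytes
    for bit in range(8):
        flip = (torch.rand(raw.shape, device=raw.device) < p).to(torch.uint8) << bit
        raw = raw ^ flip  # XOR flips exactly the selected bits
    return raw.view(torch.int8)

# usage: fake-quantize to int8, inject errors, dequantize, then re-evaluate accuracy
weights = torch.randn(256, 128)
scale = weights.abs().max() / 127.0
quantized = (weights / scale).round().clamp(-128, 127).to(torch.int8)
perturbed = inject_bit_errors(quantized, p=0.01).float() * scale
```

Sweeping the bit error rate p and measuring validation accuracy yields the kind of robustness curves the article discusses.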

ARTICLE

Awarded DAGM MVTec Dissertation Award 2023

In September, I received the DAGM MVTec Dissertation Award 2023 for my PhD thesis. DAGM is the German Association for Pattern Recognition and organizes the German Conference on Pattern Recognition (GCPR), Germany's prime conference for computer vision and related research areas. I feel particularly honored by this award since my academic career started with my first paper, published as part of the young researcher forum at GCPR 2015 in Aachen.

More ...

ARTICLE

Simple Adversarial Transformations in PyTorch

Adversarial transformations, such as small crops, rotations, and translations, are another alternative to the regular $L_p$-constrained adversarial examples. Like $L_p$ adversarial examples, they are often hard to spot unless the original image is available for direct comparison, which makes them even less visible than adversarial patches or frames. In this article, I include a PyTorch implementation and some results against adversarial training.

More ...
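As a teaser, here is a simplified sketch of such an attack, not necessarily the article's exact implementation: rotation and translation parameters are optimized by projected gradient ascent through PyTorch's differentiable `affine_grid`/`grid_sample`:

```python
import torch
import torch.nn.functional as F

def transform(images, params):
    # build per-example affine matrices from (angle, tx, ty) parameters
    angle, tx, ty = params[:, 0], params[:, 1], params[:, 2]
    cos, sin = torch.cos(angle), torch.sin(angle)
    theta = torch.stack([
        torch.stack([cos, -sin, tx], dim=1),
        torch.stack([sin, cos, ty], dim=1),
    ], dim=1)  # (N, 2, 3)
    grid = F.affine_grid(theta, list(images.size()), align_corners=False)
    return F.grid_sample(images, grid, align_corners=False)

def adversarial_transformation(model, images, labels, steps=40, lr=0.05,
                               max_angle=0.2, max_shift=0.1):
    params = torch.zeros(images.size(0), 3, device=images.device, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(transform(images, params)), labels)
        grad, = torch.autograd.grad(loss, params)
        with torch.no_grad():  # projected gradient ascent on the parameters
            params += lr * grad.sign()
            params[:, 0].clamp_(-max_angle, max_angle)   # rotation in radians
            params[:, 1:].clamp_(-max_shift, max_shift)  # translation in [-1, 1] coords
    return transform(images, params).detach()
```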

ARTICLE

Adversarial Patches and Frames in PyTorch

Adversarial patches and frames are an alternative to the regular $L_p$-constrained adversarial examples. Often, adversarial patches are thought to be more realistic, mirroring graffiti or stickers in the real world. In this article, I want to discuss a simple PyTorch implementation and present some results of adversarial patches against adversarial training as well as confidence-calibrated adversarial training.

More ...
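A minimal sketch of the attack, with the simplifying assumption of a fixed top-left patch location (implementations often randomize or optimize the location as well):

```python
import torch
import torch.nn.functional as F

def adversarial_patch(model, images, labels, patch_size=8, steps=100, lr=0.05):
    # binary mask selecting a square patch in the top-left corner
    mask = torch.zeros_like(images)
    mask[:, :, :patch_size, :patch_size] = 1.0
    patch = torch.rand_like(images, requires_grad=True)
    for _ in range(steps):
        adv = images * (1 - mask) + patch * mask
        loss = F.cross_entropy(model(adv), labels)
        grad, = torch.autograd.grad(loss, patch)
        with torch.no_grad():  # gradient ascent on the patch pixels only
            patch += lr * grad.sign()
            patch.clamp_(0, 1)  # keep the patch a valid image region
    return (images * (1 - mask) + patch * mask).detach()
```

For frames, the mask simply selects a border of a few pixels around the image instead of a square.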

ARTICLE

Distal Adversarial Examples Against Neural Networks in PyTorch

Out-of-distribution examples are images that are clearly irrelevant to the task at hand. Unfortunately, deep neural networks frequently assign random labels with high confidence to such examples. In this article, I want to discuss an adversarial way of computing high-confidence out-of-distribution examples, so-called distal adversarial examples, and how confidence-calibrated adversarial training handles them.

More ...
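A minimal sketch of how such examples can be computed, assuming we simply maximize the model's top softmax probability starting from uniform noise:

```python
import torch
import torch.nn.functional as F

def distal_adversarial_examples(model, shape, steps=200, lr=0.01, device="cpu"):
    # start from uniform noise, i.e., far away from the data distribution
    x = torch.rand(shape, device=device, requires_grad=True)
    for _ in range(steps):
        log_probs = F.log_softmax(model(x), dim=1)
        # push up the confidence of whatever class is currently most likely
        loss = -log_probs.max(dim=1).values.mean()
        grad, = torch.autograd.grad(loss, x)
        with torch.no_grad():
            x -= lr * grad.sign()
            x.clamp_(0, 1)
    return x.detach()
```

A standard network tends to end up near-certain on this noise, while a confidence-calibrated model should keep its predictions close to uniform.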