
TAG »DEEP LEARNING«

NOVEMBER 2022

PROJECT

Torch/CUDA implementation of batch normalization for OctNets.
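To make the teaser concrete, here is a minimal sketch of a dense batch normalization forward pass written in PyTorch for reference; the project itself targets (Lua) Torch with CUDA kernels and octree-structured OctNet data, so the function below only illustrates the underlying operation.

```python
import torch

def batch_norm_2d(x, gamma, beta, eps=1e-5):
    # Dense batch normalization in training mode: normalize each channel
    # over the batch and spatial dimensions, then scale and shift.
    # x: (N, C, H, W); gamma, beta: (C,) learnable parameters.
    mean = x.mean(dim=(0, 2, 3), keepdim=True)
    var = x.var(dim=(0, 2, 3), unbiased=False, keepdim=True)
    x_hat = (x - mean) / torch.sqrt(var + eps)
    return gamma.view(1, -1, 1, 1) * x_hat + beta.view(1, -1, 1, 1)

# Example usage with made-up shapes.
x = torch.randn(8, 3, 16, 16)
y = batch_norm_2d(x, torch.ones(3), torch.zeros(3))
```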


NOVEMBER 2022

PROJECT

PhD thesis on uncertainty estimation and (adversarial) robustness in deep learning.


ARTICLE

PhD Defense Slides and Lessons Learned

In July this year I finally defended my PhD, which mainly focused on (adversarial) robustness and uncertainty estimation in deep learning. In my case, the defense consisted of a (public) 30-minute talk about my work, followed by questions from the thesis committee and audience. In this article, I want to share the slides and some lessons learned in preparing for my defense.


AUGUST 2022

PROJECT

Examples, tools and resources for using Caffe’s Python interface pyCaffe.
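As a rough illustration of what the pyCaffe examples cover, the following sketch loads a network and runs a forward pass; the file names ('deploy.prototxt', 'weights.caffemodel') and the 'data' blob name are placeholders for whatever your own network definition uses.

```python
import numpy as np
import caffe

caffe.set_mode_cpu()

# Placeholder file and blob names; substitute your own network definition.
net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

# Fill the input blob with (here random) data and run a forward pass.
net.blobs['data'].data[...] = np.random.randn(*net.blobs['data'].data.shape)
output = net.forward()
print({name: blob.shape for name, blob in output.items()})
```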


AUGUST 2022

PROJECT

A template for extending PyTorch using C/CUDA operations.
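The general mechanism such a template builds on can be sketched with PyTorch's inline C++ extension loader; the operation and names below are made up for illustration, and a real extension would typically live in separate .cpp/.cu source files.

```python
import torch
from torch.utils.cpp_extension import load_inline

# Made-up example operation; real extensions usually ship .cpp/.cu sources.
cpp_source = """
torch::Tensor scaled_add(torch::Tensor a, torch::Tensor b, double alpha) {
    return a + b * alpha;
}
"""

# JIT-compiles the C++ source and exposes the listed functions to Python
# (requires a working C++ toolchain).
ext = load_inline(name="scaled_add_ext", cpp_sources=cpp_source,
                  functions=["scaled_add"])

a, b = torch.randn(4), torch.randn(4)
print(ext.scaled_add(a, b, 0.5))
```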


AUGUST 2022

PROJECT

Basic and advanced Torch examples, a template for implementing custom C/CUDA modules, and implementations of variational auto-encoders.
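For the variational auto-encoder part, a minimal PyTorch sketch with the reparameterization trick looks roughly as follows; the layer sizes are arbitrary and the repository's (Lua) Torch implementations will differ in detail.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, in_dim=784, hidden=256, latent=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent)
        self.logvar = nn.Linear(hidden, latent)
        self.decoder = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                     nn.Linear(hidden, in_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior.
    rec = F.binary_cross_entropy(recon, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl
```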


AUGUST 2022

PROJECT

3D mesh fusion, voxelization and evaluation for computer vision research.
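To give a flavor of the voxelization step, here is a deliberately crude NumPy sketch that marks only the voxels containing mesh vertices; proper surface voxelization and fusion, as implemented in the project, are considerably more involved.

```python
import numpy as np

def voxelize_vertices(vertices, resolution=32):
    # vertices: (N, 3) array of mesh vertex positions.
    # Returns a boolean occupancy grid of shape (resolution,)*3 in which a
    # voxel is marked occupied if at least one vertex falls inside it.
    v_min, v_max = vertices.min(axis=0), vertices.max(axis=0)
    scale = (resolution - 1) / np.maximum(v_max - v_min, 1e-8)
    indices = np.floor((vertices - v_min) * scale).astype(int)
    occupancy = np.zeros((resolution,) * 3, dtype=bool)
    occupancy[indices[:, 0], indices[:, 1], indices[:, 2]] = True
    return occupancy
```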


ARTICLE

ICML 2022 Art of Robustness Paper “On Fragile Features and Batch Normalization in Adversarial Training”

While batch normalization has long been argued to increase adversarial vulnerability, it is still used in state-of-the-art adversarial training models, likely because of easier training and increased expressiveness. At the same time, recent papers argue that adversarial examples are partly caused by fragile features that stem from learning spurious correlations. In this paper, we study the impact of batch normalization on utilizing these fragile features for robustness by fine-tuning only the batch normalization layers.
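Fine-tuning only the batch normalization layers can be sketched in PyTorch as below; the ResNet-18 backbone and hyperparameters are placeholders, not the models or settings used in the paper.

```python
import torch
import torchvision

# Placeholder backbone; the paper's architectures and training setup differ.
model = torchvision.models.resnet18(num_classes=10)

# Freeze all parameters, then re-enable gradients only for BN layers.
for param in model.parameters():
    param.requires_grad = False
for module in model.modules():
    if isinstance(module, torch.nn.modules.batchnorm._BatchNorm):
        for param in module.parameters():
            param.requires_grad = True

# Only the BN scale/shift parameters end up in the optimizer.
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=0.01, momentum=0.9)
```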


AUGUST 2022

PROJECT

Fragile Features, Batch Normalization and Adversarial Training: research page for work led by Nils Walter. Modern deep learning architectures utilize batch normalization (BN) to stabilize training and improve accuracy. It has been shown that the BN layers alone are surprisingly expressive. In […]


AUGUST 2022

PROJECT

Improving corruption and adversarial robustness by enhancing weak sub-networks.
