Check out our latest research on adversarial robustness and generalization of deep networks.

I am a PhD student at the Max Planck Institute for Informatics, supervised by Prof. Bernt Schiele and Prof. Matthias Hein. My research interests lie in computer vision and deep learning. Specifically, I am interested in adversarial examples — imperceptibly perturbed images that fool deep neural networks; by understanding the phenomenon of their existence, I want to make deep learning more reliable.

Previously, I obtained both a bachelor's and a master's degree from RWTH Aachen University. For my master's thesis, I worked on weakly-supervised 3D shape completion under the supervision of Prof. Andreas Geiger from the Max Planck Institute for Intelligent Systems and received the STEM-Award IT 2018. As part of my master's degree, I also had the opportunity to spend a semester at Georgia Tech working with Prof. Irfan Essa on video segmentation. For my bachelor's thesis, I worked on superpixel segmentation under the supervision of Prof. Bastian Leibe.

Over the last few years, I have worked for RS Computer, Fraunhofer FKIE, the Computer Vision Group and MATHCCES at RWTH Aachen University, Fyusion, MOBIS, and Microsoft. Occasionally, I still do consulting and web development.

On this blog you will find articles, reading notes, and some projects — which can also be found on GitHub or ShortScience. Here's my CV, some mission statements, as well as my LinkedIn, Xing, and Google Scholar profiles.


Feel free to get in touch.

What I've been up to ...

In September, I was honored to receive the MINT-Award IT 2018, sponsored by ZF and audimax, for my master's thesis on weakly-supervised shape completion. For CVPR 2019, however, I am working on a different topic: adversarial robustness and generalization of deep neural networks.
18th October 2018
In the last few months, I finished my work on weakly-supervised 3D shape completion with a CVPR 2018 paper as well as a follow-up journal submission. This means that I will be attending CVPR this year. Afterwards, I plan to focus on the robustness of deep neural networks — for example: why do adversarial examples exist, and how can we defend deep neural networks against them? To answer these questions, I will also be attending MLSS 2018 later this summer.
22nd May 2018