21st February 2019

Obtaining deep networks robust against adversarial examples remains a largely open problem. While many papers are devoted to training more robust deep networks, a clear definition of adversarial examples has not been agreed upon. In this article, I discuss two simple toy examples illustrating why a proper definition of adversarial examples is necessary.

15th February 2019

4th December 2018

To date, it is unclear whether we can obtain both accurate and robust deep networks — that is, deep networks that generalize well and resist adversarial examples. In this pre-print, we aim to disentangle the relationship between adversarial robustness and generalization. The paper is available on arXiv.

29th August 2018

After introducing the mathematics of variational auto-encoders in a previous article, this article presents an implementation in Lua using Torch. The main challenges when implementing variational auto-encoders are the Kullback-Leibler divergence and the reparameterization sampler. Here, both are implemented as separate `nn` modules.
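The two components mentioned above have compact closed forms. As a rough illustration of the math behind those `nn` modules — the post itself uses Lua/Torch, so this NumPy version is only a sketch, with function names of my own choosing — the KL divergence between the approximate posterior N(mu, sigma^2) and a standard normal prior, and the reparameterized sample z = mu + sigma * eps, can be written as:

```python
import numpy as np

def kl_divergence(mu, log_var):
    # KL(N(mu, sigma^2) || N(0, I)) for a diagonal Gaussian,
    # summed over latent dimensions and averaged over the batch:
    # -0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2)
    return -0.5 * np.mean(np.sum(1 + log_var - mu**2 - np.exp(log_var), axis=1))

def reparameterize(mu, log_var, rng=np.random):
    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    # so the stochasticity sits outside the gradient path.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps
```

Parameterizing the encoder output as log-variance (rather than sigma directly) keeps the variance positive without an explicit constraint; for mu = 0 and log_var = 0 the KL term is exactly zero, which is a handy sanity check.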

18th July 2018

Adversarial examples are test images that have been perturbed slightly to cause misclassification. As these perturbations are usually imperceptible to humans yet easily fool deep neural networks, their discovery has sparked considerable interest in the deep learning and privacy/security communities. In this article, I provide a rough overview of the topic, including a brief survey of relevant literature and some ideas for future research directions.
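To make the idea of a small loss-increasing perturbation concrete, here is a minimal sketch of the fast gradient sign method (FGSM, Goodfellow et al.) applied to a logistic-regression "network" — the simple model and the function name are my own illustration, not taken from the surveyed literature:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, epsilon=0.1):
    # Fast gradient sign method: move each input dimension by epsilon
    # in the direction that increases the cross-entropy loss.
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # predicted probability of class 1
    grad_x = (p - y) * w                    # gradient of the loss w.r.t. the input
    return x + epsilon * np.sign(grad_x)
```

For a true label y = 1, the perturbation pushes the logit down (and thus the loss up) while changing no input coordinate by more than epsilon — the "slight perturbation" in the definition above.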

25th June 2018

Last week, I attended my very first CVPR in Salt Lake City, where I also presented my work on weakly-supervised 3D shape completion. In the course of the week, I attended several tutorials as well as all oral and poster sessions. In this article, I want to share my notes and some general comments.