Our paper on adversarial robustness and generalization was accepted at CVPR’19. In the revised paper, we show that adversarial examples usually leave the manifold, including a brief theoretical argument. However, adversarial examples can also be found on the manifold; in this case, robustness is nothing other than generalization. For (off-manifold) adversarial examples, in contrast, we show that generalization and robustness are not necessarily contradicting objectives. As an example, on synthetic data, we adversarially train a model that is both robust and accurate. This article gives a short abstract and provides the paper including the appendix.
Obtaining deep networks robust against adversarial examples is a largely open problem. While many papers are devoted to training more robust deep networks, a clear definition of adversarial examples has not been agreed upon. In this article, I want to discuss two very simple toy examples illustrating the necessity of a proper definition of adversarial examples.
To date, it is unclear whether we can obtain deep networks that are both accurate and robust, meaning deep networks that generalize well and resist adversarial examples. In this pre-print, we aim to disentangle the relationship between adversarial robustness and generalization. The paper is available on arXiv.
The follow-up to our CVPR’18 paper has been accepted at IJCV. In this longer paper, we extend our weakly-supervised 3D shape completion approach to obtain high-quality shape predictions, and we also present updated synthetic benchmarks on ShapeNet and ModelNet. The paper is available through SpringerLink and arXiv.
In September, I received the STEM-Award IT 2018 for the best master’s thesis on autonomous driving. The award, under the theme “On The Road to Vision Zero”, was sponsored by ZF, audimax and MINT Zukunft Schaffen. The jury specifically highlighted the high scientific standard of my master’s thesis “Learning 3D Shape Completion under Weak Supervision”.
Building on the Torch implementation of a vanilla variational auto-encoder from a previous article, this article discusses an implementation of a denoising variational auto-encoder. While the theory of denoising variational auto-encoders is more involved, the implementation merely requires a suitable noise model, as sketched below.
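To make this concrete, here is a minimal sketch of such a noise model, assuming additive Gaussian noise; the batch tensor and the noise level `sigma` are hypothetical placeholders rather than the article's exact code:

```lua
require('torch')

-- A minimal sketch assuming additive Gaussian noise as the noise model;
-- batch and sigma are hypothetical placeholders.
local sigma = 0.1
local batch = torch.rand(16, 1, 28, 28) -- e.g. MNIST-like images in [0, 1]

-- Corrupt the inputs with Gaussian noise; the reconstruction loss is
-- still computed against the clean examples.
local noise = batch:clone():normal(0, sigma)
local inputs = batch + noise
local targets = batch
```

The essential point is that only the encoder sees the corrupted inputs, while the decoder is trained to reconstruct the clean examples.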
After introducing the mathematics of variational auto-encoders in a previous article, this article presents an implementation in Lua using Torch. The main challenges when implementing variational auto-encoders are the Kullback-Leibler divergence and the reparameterization sampler. Here, both are implemented as separate modules.
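To illustrate the idea (a sketch following Torch's nn conventions, not necessarily the article's exact code), the reparameterization sampler can be written as a module operating on a table {mean, logvar}; the module name and input convention are assumptions:

```lua
require('torch')
require('nn')

-- Sketch of a reparameterization sampler as a separate nn module;
-- it expects a table {mean, logvar} as input.
local Sampler, parent = torch.class('nn.ReparameterizationSampler', 'nn.Module')

function Sampler:__init()
  parent.__init(self)
end

function Sampler:updateOutput(input)
  local mean, logvar = input[1], input[2]
  -- z = mean + exp(0.5*logvar)*eps with eps ~ N(0, I)
  self.eps = mean:clone():normal(0, 1)
  self.output = torch.exp(logvar*0.5):cmul(self.eps):add(mean)
  return self.output
end

function Sampler:updateGradInput(input, gradOutput)
  local mean, logvar = input[1], input[2]
  -- dz/dmean = 1 and dz/dlogvar = 0.5*exp(0.5*logvar)*eps
  self.gradInput = {
    gradOutput:clone(),
    torch.exp(logvar*0.5):cmul(self.eps):cmul(gradOutput):mul(0.5),
  }
  return self.gradInput
end
```

The Kullback-Leibler divergence can be treated analogously, as a module that passes its input through unchanged and accumulates the analytic KL term for a Gaussian posterior.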
A variational auto-encoder trained on corrupted (that is, noisy) examples is called a denoising variational auto-encoder. While such a model is easily implemented, the underlying mathematical framework changes significantly. As the second article in my series on variational auto-encoders, this article discusses the mathematical background of denoising variational auto-encoders.
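For intuition, one common form of the resulting objective, given here as a sketch rather than the article's exact derivation, averages the evidence lower bound over a corruption distribution $p(\tilde{x} \mid x)$:

$$\mathcal{L}(x) = \mathbb{E}_{p(\tilde{x} \mid x)}\Big[\mathbb{E}_{q(z \mid \tilde{x})}\big[\log p(x \mid z)\big] - \mathrm{KL}\big(q(z \mid \tilde{x})\,\|\,p(z)\big)\Big].$$

Because the evidence lower bound holds for any variational distribution, it holds in particular for $q(z \mid \tilde{x})$, so the corrupted-input objective remains a valid lower bound on $\log p(x)$.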
In the third article of my series on variational auto-encoders, I want to discuss categorical variational auto-encoders. This variant makes it possible to learn a latent space of discrete (e.g. categorical or Bernoulli) latent variables. Compared to regular variational auto-encoders, the main challenge lies in deriving a working reparameterization trick for discrete latent variables: the so-called Gumbel trick.
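As a rough illustration of the trick, not the article's exact code, the following sketch draws a differentiable, approximately one-hot sample; the logits and the temperature `tau` are hypothetical values:

```lua
require('torch')
require('nn')

-- Sketch of the Gumbel trick: perturb the logits with Gumbel noise and
-- apply a softmax with temperature tau; logits and tau are assumptions.
local tau = 1
local logits = torch.Tensor({1, 2, 3}) -- unnormalized log-probabilities

-- Gumbel noise: g = -log(-log(u)) with u ~ Uniform(0, 1).
local u = logits:clone():uniform(0, 1)
local g = -torch.log(-torch.log(u))

-- Softmax of the perturbed logits; as tau -> 0 the sample approaches
-- a one-hot vector, while gradients flow through the softmax.
local y = nn.SoftMax():forward((logits + g):div(tau))
print(y)
```

Taking the argmax instead of the softmax would yield an exact categorical sample, but would no longer be differentiable; the temperature trades off between the two.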