In September, I was honored to receive the STEM-Award IT 2018 for the best master's thesis on autonomous driving. The award, under the theme “On The Road to Vision Zero”, was sponsored by ZF, audimax and MINT Zukunft Schaffen. The jury specifically highlighted the high scientific standard of my master's thesis “Learning 3D Shape Completion under Weak Supervision”.
Building on the Torch implementation of a vanilla variational auto-encoder from a previous article, this article discusses an implementation of a denoising variational auto-encoder. While the underlying theory of denoising variational auto-encoders is more involved, the implementation merely requires a suitable noise model.
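As a rough sketch of what such a noise model can look like, consider Bernoulli noise that randomly zeroes entries of a binary input; the function and corruption rate below are illustrative choices, not necessarily the article's:

```lua
require 'torch'

-- Illustrative noise model: Bernoulli noise that randomly zeroes
-- entries of a (binary) input; the corruption rate is a free choice.
local function corrupt(input, rate)
  -- Keep each entry with probability (1 - rate), zero it otherwise.
  local mask = torch.rand(input:size()):ge(rate):typeAs(input)
  return torch.cmul(input, mask)
end

-- During training, the encoder sees corrupt(input, rate) while the
-- reconstruction loss is still computed against the clean input.
```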
After introducing the mathematics of variational auto-encoders in a previous article, this article presents an implementation in Lua using Torch. The main challenges when implementing variational auto-encoders are the Kullback-Leibler divergence and the reparameterization sampler. Here, both are implemented as separate modules.
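To give an idea of the second ingredient, the following is a minimal sketch of a reparameterization sampler as a custom nn module; the module name and details are illustrative rather than the article's exact code:

```lua
require 'torch'
require 'nn'

-- Reparameterization sampler: given {mean, logVariance}, it outputs
-- z = mean + exp(0.5 * logVariance) * eps with eps ~ N(0, I).
local Sampler, parent = torch.class('nn.GaussianSampler', 'nn.Module')

function Sampler:__init()
  parent.__init(self)
  self.eps = torch.Tensor()
end

function Sampler:updateOutput(input)
  local mean, logVar = input[1], input[2]
  self.eps:resizeAs(mean):normal(0, 1)
  -- z = mean + sigma * eps with sigma = exp(0.5 * logVar).
  self.output = torch.cmul(torch.exp(logVar * 0.5), self.eps):add(mean)
  return self.output
end

function Sampler:updateGradInput(input, gradOutput)
  local logVar = input[2]
  -- dz/dmean = 1 and dz/dlogVar = 0.5 * exp(0.5 * logVar) * eps.
  self.gradInput = {
    gradOutput:clone(),
    torch.cmul(gradOutput, torch.cmul(torch.exp(logVar * 0.5), self.eps)):mul(0.5)
  }
  return self.gradInput
end
```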
A variational auto-encoder trained on corrupted (that is, noisy) examples is called a denoising variational auto-encoder. While it is easily implemented, the underlying mathematical framework changes significantly. As the second article in my series on variational auto-encoders, this article discusses the mathematical background of denoising variational auto-encoders.
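Roughly, and as a paraphrase rather than the article's full derivation: given a corruption distribution $p(\tilde{x}|x)$, the recognition model is composed with the corruption, and the resulting lower bound takes the form

$$\tilde{q}(z|x) = \int q(z|\tilde{x})\,p(\tilde{x}|x)\,\mathrm{d}\tilde{x},\qquad \log p(x) \geq \mathbb{E}_{\tilde{q}(z|x)}\left[\log\frac{p(x|z)\,p(z)}{\tilde{q}(z|x)}\right].$$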
In the third article of my series on variational auto-encoders, I want to discuss categorical variational auto-encoders. This variant allows learning a latent space of discrete (e.g. categorical or Bernoulli) latent variables. Compared to regular variational auto-encoders, the main challenge lies in deriving a working reparameterization trick for discrete latent variables: the so-called Gumbel trick.
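As a taste, the following sketch draws a Gumbel-softmax sample from unnormalized log-probabilities; the function name and interface are illustrative:

```lua
require 'torch'

-- Gumbel-softmax sketch: perturb logits with Gumbel noise and take a
-- temperature-controlled softmax; as the temperature approaches zero,
-- samples approach one-hot vectors.
local function gumbelSoftmax(logits, temperature)
  -- Gumbel noise: g = -log(-log(u)) with u ~ U(0, 1).
  local u = torch.rand(logits:size())
  local g = -torch.log(-torch.log(u))
  -- Numerically stable softmax of the perturbed, sharpened logits.
  local y = (logits + g):div(temperature)
  y = torch.exp(y - y:max())
  return y:div(y:sum())
end

-- Example: a soft sample over 10 categories.
local sample = gumbelSoftmax(torch.randn(10), 0.5)
```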
As part of my master's thesis, I made heavy use of variational auto-encoders to learn latent spaces of shapes in order to later perform shape completion. Overall, I invested a large portion of my time in understanding and implementing different variants of variational auto-encoders. This article, the first in a small series, deals with the mathematics behind variational auto-encoders: it covers variational inference in general, the concrete case of variational auto-encoders, and practical considerations.
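For orientation, the central object of the series is the evidence lower bound: for a latent variable $z$ with prior $p(z)$ and approximate posterior $q(z|x)$,

$$\log p(x) \geq \mathbb{E}_{q(z|x)}\left[\log p(x|z)\right] - \operatorname{KL}\left(q(z|x)\,\|\,p(z)\right).$$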
Torch is a framework for scientific computing in Lua. However, it has mostly been used for deep learning research, as it provides efficient and convenient C/CUDA implementations of a wide range of (convolutional and/or recurrent) neural network components. In this article, I want to provide a code template for easily extending torch.nn with custom modules implemented in C and/or CUDA, without requiring knowledge of Torch's core.
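For context, a custom torch.nn module in pure Lua has the following shape; a C/CUDA-backed module keeps this skeleton and delegates the two update functions to compiled routines (the module name is a placeholder):

```lua
require 'torch'
require 'nn'

-- Skeleton of a custom nn module; in the C/CUDA case, the bodies of
-- updateOutput and updateGradInput would call into compiled routines.
local MyModule, parent = torch.class('nn.MyModule', 'nn.Module')

function MyModule:__init()
  parent.__init(self)
end

function MyModule:updateOutput(input)
  -- Forward pass; here simply the identity as a placeholder.
  self.output:resizeAs(input):copy(input)
  return self.output
end

function MyModule:updateGradInput(input, gradOutput)
  -- Backward pass: gradient of the loss with respect to the input.
  self.gradInput:resizeAs(gradOutput):copy(gradOutput)
  return self.gradInput
end
```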
Recently proposed neural network architectures, including PointNets and PointSetGeneration networks, allow deep learning on unordered point clouds. In this article, I present a Torch implementation of a PointNet auto-encoder, a network that reconstructs point clouds through a lower-dimensional bottleneck. As training loss, I implemented a symmetric Chamfer distance in C/CUDA and provide the code on GitHub.
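For reference, a pure-Torch sketch of the symmetric Chamfer distance; it materializes all pairwise distances, which is exactly why the article's version resorts to C/CUDA:

```lua
require 'torch'

-- Symmetric Chamfer distance between point clouds a (n x 3) and b (m x 3):
-- each point is matched to its nearest neighbor in the other cloud.
local function chamferDistance(a, b)
  local n, m = a:size(1), b:size(1)
  -- Pairwise squared Euclidean distances as an n x m matrix.
  local ax = a:view(n, 1, 3):expand(n, m, 3)
  local bx = b:view(1, m, 3):expand(n, m, 3)
  local d = torch.pow(ax - bx, 2):sum(3):view(n, m)
  -- Nearest neighbor in b for every point in a, and vice versa.
  return d:min(2):sum() + d:min(1):sum()
end
```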
Adversarial examples are test images that have been perturbed slightly to cause misclassification. As these perturbations are usually unproblematic for humans yet easily fool deep neural networks, their discovery has sparked considerable interest in the deep learning and privacy/security communities. In this article, I want to provide a rough overview of the topic, including a brief survey of relevant literature and some ideas on future research directions.
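To make the notion concrete, here is a sketch of the fast gradient sign method of Goodfellow et al. in Torch; the model, criterion and the assumed [0, 1] pixel range are placeholders:

```lua
require 'torch'
require 'nn'

-- Fast gradient sign sketch: perturb the input by epsilon in the
-- direction that increases the classification loss.
local function fgsm(model, criterion, input, target, epsilon)
  local output = model:forward(input)
  criterion:forward(output, target)
  local gradOutput = criterion:backward(output, target)
  -- Gradient of the loss with respect to the input pixels.
  local gradInput = model:backward(input, gradOutput)
  -- Step along the gradient sign and clamp to a valid pixel range.
  return (input + torch.sign(gradInput):mul(epsilon)):clamp(0, 1)
end
```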