IAM


ARTICLE

Convolutional Batch Normalization for OctNets

During my master's thesis I partly worked on OctNets, octree-based convolutional neural networks for efficient learning in 3D. Among other things, I implemented convolutional batch normalization for OctNets. This article briefly discusses the implementation, which will be available on GitHub.
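To illustrate the idea (a plain NumPy sketch, not the actual OctNet code), convolutional batch normalization computes one mean and variance per feature channel, shared across the batch and all spatial locations; OctNets do the same, accumulating statistics over all octree cells of a channel:

```python
import numpy as np

def conv_batch_norm(x, gamma, beta, eps=1e-5):
    """Convolutional batch normalization for a dense 3D feature volume.

    x: array of shape (N, C, D, H, W). Each channel is normalized
    using statistics pooled over the batch and all spatial locations,
    then scaled and shifted by the learned per-channel gamma and beta.
    """
    mean = x.mean(axis=(0, 2, 3, 4), keepdims=True)
    var = x.var(axis=(0, 2, 3, 4), keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    # Broadcast the per-channel scale and shift over batch and space.
    return gamma.reshape(1, -1, 1, 1, 1) * x_hat + beta.reshape(1, -1, 1, 1, 1)
```

The octree structure only changes *where* the statistics come from (octree cells instead of a dense grid); the per-channel normalization itself is unchanged.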

More ...

ARTICLE

Visualizing Occupancy Grids, Meshes and Point Clouds using Blender and Python

Obtaining high-quality visualizations of 3D data such as triangular meshes or occupancy grids, as needed for publications in computer graphics and computer vision, is difficult. In this article, I want to present a GitHub repository containing some utility scripts for paper-ready visualizations of meshes and occupancy grids using Blender and Python.

More ...

4th December 2018

PROJECT

Disentangling the relationship between adversarial robustness and generalization.

More ...

ARTICLE

IJCV Paper “Learning 3D Shape Completion under Weak Supervision”

Our CVPR’18 follow-up paper has been accepted at IJCV. In this extended paper we improve our weakly-supervised 3D shape completion approach to obtain high-quality shape predictions, and also present updated synthetic benchmarks on ShapeNet and ModelNet. The paper is available through SpringerLink and arXiv.

More ...

ARTICLE

STEM-Award IT 2018 First Prize

In September, I received the STEM-Award IT 2018 for the best master's thesis on autonomous driving. The award, themed “On The Road to Vision Zero”, was sponsored by ZF, audimax and MINT Zukunft Schaffen. The jury specifically highlighted the high scientific standard of my master's thesis “Learning 3D Shape Completion under Weak Supervision”.

More ...

ARTICLE

Denoising Variational Auto-Encoder in Torch

Based on the Torch implementation of a vanilla variational auto-encoder in a previous article, this article discusses an implementation of a denoising variational auto-encoder. While the theory of denoising variational auto-encoders is more involved, the implementation merely requires adding a suitable noise model.
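As a sketch of that claim (plain NumPy with hypothetical helper names, not the article's Torch code): the only addition over a vanilla variational auto-encoder is a corruption step applied to the input before encoding, while the reconstruction loss still targets the clean input.

```python
import numpy as np

def corrupt_gaussian(x, noise_std=0.25, rng=None):
    # Additive Gaussian noise model.
    rng = rng or np.random.default_rng()
    return x + noise_std * rng.standard_normal(x.shape)

def corrupt_bernoulli(x, drop_prob=0.1, rng=None):
    # Drop-style noise model, common for binary data:
    # each entry is zeroed independently with probability drop_prob.
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= drop_prob
    return x * mask

# In a training step, the corrupted input feeds the encoder while the
# reconstruction loss compares the decoder output against the clean x:
#   z = encoder(corrupt_gaussian(x)); loss = recon(decoder(z), x) + kl
```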

More ...

ARTICLE

Bernoulli Variational Auto-Encoder in Torch

After formally introducing the concept of categorical variational auto-encoders in a previous article, this article presents a practical Torch implementation of variational auto-encoders with Bernoulli latent variables.

More ...

ARTICLE

Variational Auto-Encoder in Torch

After introducing the mathematics of variational auto-encoders in a previous article, this article presents an implementation in Lua using Torch. The main challenges when implementing variational auto-encoders are the Kullback-Leibler divergence and the reparameterization sampler. Here, both are implemented as separate nn modules.
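In spirit, the two modules compute the following (a plain-Python sketch of the standard formulas for a diagonal Gaussian posterior, not the article's Lua code):

```python
import numpy as np

class KullbackLeiblerDivergence:
    # KL(q(z|x) || N(0, I)) for a diagonal Gaussian q parameterized by
    # (mu, logvar); written as a standalone "module" with a forward
    # method, in the spirit of a Torch nn module.
    def forward(self, mu, logvar):
        return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

class ReparameterizationSampler:
    # z = mu + sigma * eps with eps ~ N(0, I): the randomness is moved
    # into eps, so gradients can flow through mu and logvar.
    def forward(self, mu, logvar, rng=None):
        rng = rng or np.random.default_rng()
        eps = rng.standard_normal(np.shape(mu))
        return mu + np.exp(0.5 * logvar) * eps
```

Keeping both as separate modules means the rest of the network can be assembled from stock containers, with the sampler sitting between encoder and decoder and the KL term added to the loss.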

More ...

ARTICLE

Denoising Variational Auto-Encoders

A variational auto-encoder trained on corrupted (that is, noisy) examples is called a denoising variational auto-encoder. While easy to implement, the underlying mathematical framework changes significantly. As the second article in my series on variational auto-encoders, this article discusses the mathematical background of denoising variational auto-encoders.
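To make the change concrete, the standard evidence lower bound and its denoising counterpart can be written as follows (my notation, with $p(\tilde{x} \mid x)$ denoting the noise model; the article derives this more carefully):

```latex
% Standard variational lower bound (ELBO):
\mathcal{L}(x) = \mathbb{E}_{q(z \mid x)}\left[\log p(x \mid z)\right]
  - \mathrm{KL}\left(q(z \mid x) \,\|\, p(z)\right)

% Denoising variant: the encoder sees a corrupted sample
% \tilde{x} \sim p(\tilde{x} \mid x), while the decoder still targets
% the clean x:
\tilde{\mathcal{L}}(x) = \mathbb{E}_{p(\tilde{x} \mid x)}\Big[
    \mathbb{E}_{q(z \mid \tilde{x})}\left[\log p(x \mid z)\right]
  - \mathrm{KL}\left(q(z \mid \tilde{x}) \,\|\, p(z)\right)\Big]
```

The outer expectation over the noise model is what changes the framework: the effective approximate posterior becomes a mixture over corrupted inputs, even though the implementation only adds a corruption step.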

More ...

ARTICLE

Categorical Variational Auto-Encoders and the Gumbel Trick

In the third article of my series on variational auto-encoders, I want to discuss categorical variational auto-encoders. This variant allows learning a latent space of discrete (e.g. categorical or Bernoulli) latent variables. Compared to regular variational auto-encoders, the main challenge lies in deriving a working reparameterization trick for discrete latent variables, the so-called Gumbel trick.
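The core of the trick can be sketched in a few lines (plain NumPy, a hypothetical helper rather than the article's Torch code): adding Gumbel(0, 1) noise to the logits and taking the argmax draws an exact categorical sample; replacing the argmax by a temperature-controlled softmax yields a continuous relaxation with usable gradients.

```python
import numpy as np

def gumbel_softmax_sample(logits, temperature=1.0, rng=None):
    """Differentiable surrogate for sampling from a categorical.

    argmax(logits + Gumbel noise) would be an exact categorical sample;
    the softmax with temperature relaxes it so gradients can flow.
    """
    rng = rng or np.random.default_rng()
    u = rng.uniform(1e-9, 1.0 - 1e-9, size=np.shape(logits))
    g = -np.log(-np.log(u))  # Gumbel(0, 1) samples via inverse CDF
    y = (np.asarray(logits, dtype=float) + g) / temperature
    y = y - y.max()  # stabilize the exponentials
    e = np.exp(y)
    return e / e.sum()
```

As the temperature approaches zero, the output approaches a one-hot vector; in practice the temperature is annealed during training.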

More ...