A simple implementation of batch normalization in TensorFlow. Note that the implementation is meant for training; at test time, when predicting on individual samples, the mean and variance should be fixed in advance (for example, to running averages collected during training). Also note that
tf.nn.moments is replaced by a custom method for computing mean and variance in order to avoid the problem described here.
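The training-time computation can be sketched in NumPy notation (the TensorFlow version follows the same math, with tensor ops in place of array ops); the names `gamma`, `beta`, and `epsilon` are illustrative:

```python
import numpy as np

def batch_norm_train(x, gamma, beta, epsilon=1e-5):
    # During training, mean and variance are computed over the batch
    # dimension (axis 0); at test time they would be fixed in advance.
    mean = x.mean(axis=0)
    var = ((x - mean) ** 2).mean(axis=0)
    # Normalize, then scale and shift with the learnable parameters.
    x_hat = (x - mean) / np.sqrt(var + epsilon)
    return gamma * x_hat + beta
```

After this step, each feature of the output has approximately zero mean and unit variance over the batch, before the affine transform with `gamma` and `beta`.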
In this series, I collect problems I come across when using Ubuntu for research and development. In this article: installing Bazel on Ubuntu and hiding graphics cards from TensorFlow.
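One common way to hide GPUs from TensorFlow (assuming the CUDA build is used) is the `CUDA_VISIBLE_DEVICES` environment variable, which must be set before TensorFlow is first imported; a minimal sketch:

```python
import os

# CUDA_VISIBLE_DEVICES restricts which GPUs the CUDA runtime, and thus
# TensorFlow, can see. An empty string hides all GPUs; a comma-separated
# list of indices exposes only those devices.
os.environ["CUDA_VISIBLE_DEVICES"] = ""   # hide all GPUs
# os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # expose only the first GPU
```

The same effect can be achieved by exporting the variable in the shell before launching the Python process.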
In some cases,
tf.nn.moments cannot be run on the GPU (see here). This is problematic when training (convolutional) neural networks in which the moments are part of the computation graph (e.g. for normalization). This snippet is a simple workaround that computes mean and variance along the provided dimensions manually.
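The manual computation can be sketched in NumPy notation (the TensorFlow version would use `tf.reduce_mean` in place of `np.mean`); the function name and signature here are illustrative:

```python
import numpy as np

def moments(x, axes, keepdims=False):
    # Mean over the requested axes, keeping the reduced dimensions so
    # that broadcasting works when subtracting the mean from x.
    mean = np.mean(x, axis=tuple(axes), keepdims=True)
    # Variance as the mean squared deviation over the same axes.
    var = np.mean((x - mean) ** 2, axis=tuple(axes), keepdims=True)
    if not keepdims:
        mean = np.squeeze(mean, axis=tuple(axes))
        var = np.squeeze(var, axis=tuple(axes))
    return mean, var
```

For batch normalization of a convolutional feature map of shape `(batch, height, width, channels)`, the moments would be taken over axes `[0, 1, 2]`, yielding one mean and variance per channel.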
An auto-encoder implemented in Torch, using Torch's
optim package and GPU acceleration.