This series collects everyday problems encountered when doing research and development on Ubuntu; this time: comfortably monitoring GPU usage and clarifying issues with CUDA_VISIBLE_DEVICES and HDF5 for Torch.
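As a minimal sketch (illustrative, not taken from the article itself): restricting Torch to specific GPUs is done by setting CUDA_VISIBLE_DEVICES in the environment before launching, e.g. `CUDA_VISIBLE_DEVICES=0 th train.lua`; inside Lua, the setting can be inspected via `os.getenv`:

```lua
-- Read the GPU restriction from the environment; nil means no
-- restriction was set, i.e. all GPUs are visible to CUDA.
local devices = os.getenv('CUDA_VISIBLE_DEVICES') or 'all'
print('visible GPUs: ' .. devices)
```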
An example of a convolutional variational auto-encoder for fixed-size rectangles in $24 \times 24$ images with different anchors, a convolutional variant of the snippet presented here: Variational Auto-Encoder in Torch. The variational auto-encoder is able to learn a $2$-dimensional code, shown in the interpolations below the listing. The example can easily be adapted to more complex data.
An example of a variational auto-encoder for fixed-size rectangles in $24 \times 24$ images with different anchors. The variational auto-encoder is able to learn a $2$-dimensional code shown in the interpolations below the listing. The example can easily be adapted to more complex data.
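For reference (a standard formulation, not quoted from the article), the variational auto-encoder is trained by maximizing the evidence lower bound on the data log-likelihood:

$$\mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z|x)}\left[\log p_\theta(x|z)\right] - \text{KL}\left(q_\phi(z|x) \,\|\, p(z)\right)$$

where $q_\phi(z|x)$ is the encoder, $p_\theta(x|z)$ the decoder, and $p(z)$ a standard Gaussian prior over the $2$-dimensional code $z$.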
This article is a collection of Torch examples meant as an introduction to getting started with Lua and Torch for deep learning research. The examples can also be considered individually and cover common use cases such as training on CPU and GPU, weight initialization and visualization, custom modules and criteria, as well as saving and fine-tuning models.
An example of fine-tuning an auto-encoder for classification. The example demonstrates how arbitrary modules can easily be extended to fix the weights and/or biases after loading a model. Additionally, it shows how weights and biases can be copied manually between models with different structures.
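The mechanism behind fixing weights is Lua's dynamic method replacement: the gradient-accumulation method of a loaded module is overridden with a no-op. The sketch below illustrates the pattern on a plain-Lua stub standing in for an nn module (in Torch the same override would be applied to e.g. `model.modules[i].accGradParameters` after loading):

```lua
-- Plain-Lua stub of a module; gradWeight stands in for the
-- accumulated gradient of an actual nn layer.
local layer = {
  gradWeight = 0,
  accGradParameters = function(self, input, gradOutput)
    -- stand-in for real gradient accumulation
    self.gradWeight = self.gradWeight + 1
  end,
}

layer:accGradParameters(nil, nil)
assert(layer.gradWeight == 1)

-- Freeze the layer: replace the method with a no-op so subsequent
-- backward passes no longer touch its weights/biases.
layer.accGradParameters = function() end

layer:accGradParameters(nil, nil)
assert(layer.gradWeight == 1) -- unchanged: the weights are fixed
```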
Minimal example of defining a custom Torch module on a custom data structure. The example wraps two Torch tensors in a simple data structure and defines a linear nn.Module that operates on it. While the backward pass is not implemented, the example illustrates how Torch can be extended for deep learning on custom data structures.
Simple Lua package to manually initialize the weights and biases of a network in Torch according to different strategies; these include uniform and normal initialization as well as heuristic and Xavier initialization. The package is easily extended with additional initialization schemes and allows weights and biases to be initialized using different strategies.
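The two named heuristics boil down to choosing a uniform sampling range from the layer's fan-in and fan-out. A self-contained sketch (function names here are illustrative, not the package's actual API; tensors are replaced by plain Lua tables):

```lua
-- heuristic: w ~ U[-1/sqrt(fanIn), 1/sqrt(fanIn)]
local function heuristicRange(fanIn)
  return 1 / math.sqrt(fanIn)
end

-- Xavier (Glorot): w ~ U[-sqrt(6/(fanIn + fanOut)), sqrt(6/(fanIn + fanOut))]
local function xavierRange(fanIn, fanOut)
  return math.sqrt(6 / (fanIn + fanOut))
end

-- Fill a rows x cols weight matrix uniformly in [-range, range].
local function initWeights(rows, cols, range)
  local w = {}
  for i = 1, rows do
    w[i] = {}
    for j = 1, cols do
      w[i][j] = (2 * math.random() - 1) * range
    end
  end
  return w
end

local range = xavierRange(3, 4)
local w = initWeights(4, 3, range)
```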