Following the documentation, this snippet illustrates the implementation of a simple auto-encoder in Torch.
Following the Theano documentation, this snippet illustrates the creation of a new Theano type, namely the Double type. Based on this type, the add operation is implemented. Originally, I intended this as a quick tutorial on how to define more complex types with differentiable operations. However, as also discussed here, this turned out to be more involved than expected.
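To make the idea concrete without pulling in Theano itself, the following is a schematic, pure-Python sketch of the pattern behind the snippet: a custom Double value type plus an add operation whose perform() method does the actual computation, mirroring the structure of a Theano Op. All names here (Double, DoubleAddOp) are illustrative, not Theano API.

```python
class Double:
    """Minimal wrapper type holding a single float value."""

    def __init__(self, value):
        self.value = float(value)


class DoubleAddOp:
    """Add operation on Double values.

    perform() carries out the computation on concrete values,
    analogous to the perform() method of a Theano Op.
    """

    def perform(self, inputs):
        a, b = inputs
        return Double(a.value + b.value)


add = DoubleAddOp()
result = add.perform([Double(1.5), Double(2.5)])  # result.value is 4.0
```

In Theano, the type additionally has to describe how values are validated and compared, and the Op has to declare its inputs and outputs via make_node, which is where much of the extra complexity mentioned above comes from.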
Following the PyTorch documentation, this snippet illustrates how to extend PyTorch by manually adding a linear neural network module. The example includes the linear module as discussed in the documentation and an example application on linearly separable data.
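As a rough illustration of what such a manually written linear module computes, here is a NumPy sketch of the forward pass y = x W^T + b and the corresponding gradients, as one would implement them in the backward step of a custom module. Shapes and names are illustrative and not taken from the snippet itself.

```python
import numpy as np


def linear_forward(x, W, b):
    # x: (N, in_features), W: (out_features, in_features), b: (out_features,)
    return x @ W.T + b


def linear_backward(x, W, grad_out):
    # grad_out: (N, out_features); gradients w.r.t. input, weight and bias
    grad_x = grad_out @ W          # (N, in_features)
    grad_W = grad_out.T @ x        # (out_features, in_features)
    grad_b = grad_out.sum(axis=0)  # (out_features,)
    return grad_x, grad_W, grad_b


x = np.ones((4, 3))
W = np.ones((2, 3))
b = np.zeros(2)
y = linear_forward(x, W, b)  # shape (4, 2)
```

In PyTorch itself, the backward computation is what an autograd Function's backward method has to return; the module then merely wraps the parameters.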
Slightly adapted example for adding new operations in TensorFlow, taken from the official documentation. The files should be copied to tensorflow/core/user_ops. The new operation is compiled using bazel build -c opt //tensorflow/core/user_ops:zero_out.so from the TensorFlow root. The generated .so file can usually be found by searching bazel-bin. This code does not yet include the corresponding gradient function.
A pre_get_posts filter to exclude specific categories, identified by their IDs, from the home page.
In this article, I discuss a simple TensorFlow operation implemented in C++. While the example mostly builds upon the official documentation, it includes trainable parameters, and the gradient computation is implemented in C++ as well. As such, the example is slightly more complex than the simple ZeroOut operation discussed in the documentation.
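When a gradient is implemented by hand, as it is here in C++, it is worth verifying it numerically. Below is a sketch of a central-difference gradient check on a toy scalar function; the function and the step size are illustrative, not part of the article's operation.

```python
def numerical_gradient(f, x, eps=1e-6):
    """Central-difference approximation of df/dx at x."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)


def f(x):
    return 3.0 * x * x  # analytic gradient: 6 * x


x = 2.0
num_grad = numerical_gradient(f, x)
ana_grad = 6.0 * x
# num_grad and ana_grad should agree up to numerical error
```

The same idea carries over to tensors by perturbing one element at a time, which is essentially what TensorFlow's built-in gradient checking utilities do.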