V. Dumoulin, F. Visin. A guide to convolution arithmetic for deep learning. CoRR, 2016.

Dumoulin and Visin discuss details of convolutional layers that are usually not found in textbooks or publications. While this may sound trivial, it can prove useful when reasoning about filter sizes, padding, strides, pooling as well as upsampling. In particular, the discussion of transposed convolutional layers is interesting. The illustrations provided in the accompanying GitHub repository can only be recommended. Also see a related question on StackExchange Data Science.
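The core arithmetic the guide covers can be sketched in a few lines. The following is a minimal sketch (function names are my own) of the standard output-size formulas for a convolution and its transposed counterpart along a single spatial dimension:

```python
def conv_output_size(i, k, s=1, p=0):
    """Output size of a convolution: o = floor((i + 2p - k) / s) + 1,
    where i is input size, k kernel size, s stride, p zero-padding."""
    return (i + 2 * p - k) // s + 1

def transposed_conv_output_size(i, k, s=1, p=0):
    """Output size of a transposed convolution (no output padding):
    o = s * (i - 1) + k - 2p, the inverse relation of the formula above."""
    return s * (i - 1) + k - 2 * p

# A 7-pixel input, 3x3 kernel, stride 2, padding 1 gives a 4-pixel output;
# the transposed convolution with the same settings maps 4 back to 7.
print(conv_output_size(7, 3, s=2, p=1))             # 4
print(transposed_conv_output_size(4, 3, s=2, p=1))  # 7
```

Note that for strides greater than one, several input sizes map to the same output size, which is why frameworks expose an extra output-padding parameter for transposed convolutions.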

What is your opinion on this article? Let me know your thoughts on Twitter @davidstutz92 or LinkedIn in/davidstutz92.