Dumoulin and Visin discuss details of convolutional layers that are usually not found in textbooks or publications. While this may sound trivial, it can prove useful when reasoning about filter sizes, padding, strides, pooling, and upsampling. In particular, the discussion of transposed convolutional layers is interesting. The illustrations provided in the accompanying GitHub repository can only be recommended. Also see a related question on StackExchange Data Science.
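As a minimal sketch of the output-size arithmetic the guide derives, the following snippet implements the standard formulas for convolution and transposed convolution output sizes (the function names are mine; the symbols i, k, p, s follow the guide's notation for input size, kernel size, padding, and stride):

```python
def conv_output_size(i, k, p, s):
    """Spatial output size of a convolution with input size i,
    kernel size k, padding p, and stride s."""
    return (i + 2 * p - k) // s + 1

def transposed_conv_output_size(i, k, p, s):
    """Output size of the transposed convolution that inverts the
    shape change of the convolution above (no output padding)."""
    return s * (i - 1) + k - 2 * p

# A 5x5 input with a 3x3 kernel, padding 1, and stride 2 yields a 3x3 output,
assert conv_output_size(5, 3, 1, 2) == 3
# and the matching transposed convolution maps size 3 back to size 5.
assert transposed_conv_output_size(3, 3, 1, 2) == 5
```

This shape-inverting relationship is exactly why transposed convolutions are popular for upsampling in encoder-decoder architectures.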
What is your opinion on the summarized work? Or do you know related work that is of interest? Let me know your thoughts in the comments below or using the following platforms: