Hoshen and Peleg show that certain arithmetic tasks, such as addition and subtraction, can be learned end-to-end from images by neural networks, while others, such as multiplication, cannot. Their work can mainly be seen as a proof of concept. In particular, Hoshen and Peleg draw on several theoretical works to discuss their results. One of these shows that operations computed in $O(n)$ time by a Turing machine can be realized by a neural network with $O(n)$ layers and $O(n^2)$ units, with sample complexity $O(n^2)$. This result is used in part to justify why the network is unable to learn multiplication.
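The gap between addition and multiplication can be illustrated with a toy sketch (my own construction, not the authors' setup): if each digit is rendered as a fixed, linearly independent "image", then the sum of two digits is an exactly linear function of the concatenated pixels, while the product is not expressible as any additive function of the two halves. A plain least-squares fit makes this visible — the random 4x4 bitmaps standing in for digit images are an assumption for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for digit images: each digit 0-9 gets a fixed random
# 4x4 binary bitmap (an assumption; the original work used real images).
IMG = rng.integers(0, 2, size=(10, 16)).astype(float)

# With linearly independent digit images, a linear readout of the
# digit's value from its pixels exists.
assert np.linalg.matrix_rank(IMG) == 10

# All 100 ordered digit pairs; the input is the two images concatenated,
# plus a bias column.
pairs = [(a, b) for a in range(10) for b in range(10)]
X = np.array([np.concatenate([IMG[a], IMG[b]]) for a, b in pairs])
X = np.hstack([X, np.ones((len(X), 1))])
y_add = np.array([a + b for a, b in pairs], dtype=float)
y_mul = np.array([a * b for a, b in pairs], dtype=float)

def max_linear_fit_error(X, y):
    """Least-squares linear fit; return the worst-case absolute error."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.max(np.abs(X @ w - y))

err_add = max_linear_fit_error(X, y_add)  # essentially zero
err_mul = max_linear_fit_error(X, y_mul)  # large: product has a cross term
```

A linear map on the concatenated pixels can represent any function of the form $f(a) + g(b)$, which covers addition exactly; the product $a \cdot b$ contains the cross term $(a - \bar{a})(b - \bar{b})$, which no additive model can capture. Real networks are of course nonlinear, but the example shows why multiplication demands a qualitatively different kind of feature interaction.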
What is your opinion on the summarized work? Do you know of related work that might be of interest? Let me know your thoughts in the comments below or on the following platforms: