30th July 2018

Carl-Johann Simon-Gabriel, Yann Ollivier, Bernhard Schölkopf, Léon Bottou, David Lopez-Paz. *Adversarial Vulnerability of Neural Networks Increases With Input Dimension*. CoRR abs/1802.01421, 2018.

Also find this summary on ShortScience.org.


Simon-Gabriel et al. study the robustness of neural networks with respect to input dimensionality. Their main hypothesis is that the vulnerability of neural networks to adversarial perturbations increases with the input dimension. They support this hypothesis with both a theoretical analysis and experiments.

The general idea of robustness is that small perturbations $\delta$ of the input $x$ result only in small variations $\delta \mathcal{L}$ of the loss:

$\delta \mathcal{L} = \max_{\|\delta\| \leq \epsilon} |\mathcal{L}(x + \delta) - \mathcal{L}(x)| \approx \max_{\|\delta\| \leq \epsilon} |\partial_x \mathcal{L} \cdot \delta| = \epsilon \|\partial_x \mathcal{L}\|_*$

where the approximation follows from a first-order Taylor expansion and $\|\cdot\|_*$ denotes the dual norm of $\|\cdot\|$. As a result, the vulnerability of a network can be quantified by $\epsilon\,\mathbb{E}_x \|\partial_x \mathcal{L}\|_*$. A natural regularizer to increase robustness, i.e. decrease vulnerability, is then $\epsilon \|\partial_x \mathcal{L}\|_*$, similar to the gradient regularizer proposed in [1].
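To make the last equality concrete: for an $\ell_\infty$-bounded perturbation, the dual norm is the $\ell_1$ norm, and the maximizing perturbation is $\delta = \epsilon \cdot \text{sign}(\partial_x \mathcal{L})$, i.e. the FGSM direction. A minimal numpy sketch, with a random vector standing in for the actual loss gradient:

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.1
grad = rng.normal(size=1000)          # stand-in for the loss gradient of L w.r.t. x

# The maximizer of |grad . delta| over ||delta||_inf <= eps is eps * sign(grad) ...
delta_star = eps * np.sign(grad)
best = np.abs(grad @ delta_star)

# ... and the attained value is eps * ||grad||_1 (l1 is the dual norm of l_inf).
print(np.isclose(best, eps * np.abs(grad).sum()))   # True

# Any other feasible perturbation does no better.
delta = rng.uniform(-eps, eps, size=1000)
print(np.abs(grad @ delta) <= best)                 # True
```

The same reasoning with an $\ell_2$-bounded perturbation would give $\delta = \epsilon \, \partial_x \mathcal{L} / \|\partial_x \mathcal{L}\|_2$, since the $\ell_2$ norm is its own dual.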

The remainder of the paper studies how the gradient norm $\|\partial_x \mathcal{L}\|$ behaves as a function of the input dimension $d$. Specifically, the authors show that for most feedforward networks, the gradient norm increases monotonically with the input dimension, largely independently of the network architecture; I refer to the paper for the exact theorems and proofs. These results assume untrained networks that have merely been initialized. In experiments, however, the authors show that the conclusion also appears to hold in realistic settings, e.g. on ImageNet.
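The scaling intuition can be illustrated on a toy case that is much simpler than the paper's setting: for a single linear "network" $\mathcal{L}(x) = w \cdot x$ with the usual $1/\sqrt{d}$-scaled random initialization, the $\ell_1$ gradient norm, which governs vulnerability to $\ell_\infty$ attacks, grows like $\sqrt{d}$. This is my own sketch, not the paper's experiment; the helper `l1_grad_norm` is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def l1_grad_norm(d, trials=200):
    """Average l1 norm of the input gradient of L(x) = w . x, which is just w,
    for w ~ N(0, 1/d), i.e. standard 1/sqrt(d)-scaled initialization."""
    w = rng.normal(0.0, 1.0 / np.sqrt(d), size=(trials, d))
    return np.abs(w).sum(axis=1).mean()

for d in [100, 400, 1600, 6400]:
    norm = l1_grad_norm(d)
    # Analytically E||w||_1 = d * sigma * sqrt(2/pi) = sqrt(2d/pi),
    # so norm / sqrt(d) stays near sqrt(2/pi) = 0.80 while norm itself grows.
    print(d, round(norm, 2), round(norm / np.sqrt(d), 2))
```

Quadrupling $d$ roughly doubles the gradient norm here, even though the output variance of $w \cdot x$ stays constant under this initialization; that is the flavor of the paper's claim, which extends it to deep feedforward architectures.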