# DAVIDSTUTZ

13th AUGUST 2018

Fawzi et al. provide upper bounds on the robustness of linear and quadratic classifiers. As many modern neural networks can be seen as piece-wise linear functions (using convolutions, $\text{ReLU}$ activations and max pooling), I found the part on linear classifiers most interesting. Specifically, robustness is defined as the expected smallest perturbation needed to change the label of a sample (in practice computed as the average of the smallest perturbations over all samples in a set). Motivated by a toy example (which I found quite artificial and not very practical), the authors show that the robustness $\rho(f)$ of the classifier $f$ is bounded as follows:
$\rho(f) \leq \frac{1}{2}\|\mathbb{E}_{\mu_1}[x] - \mathbb{E}_{\mu_{-1}}[x]\|_2 + 2MR(f)$
where $\mu_1$ and $\mu_{-1}$ are the data distributions of classes $1$ and $-1$, all samples are bounded by $\|x\|_2 \leq M$, and $R(f)$ is the risk of the classifier. Here, the authors assume that the binary classification task is balanced. Personally, I find the exact form less important; what is interesting are the two parts of the bound – the risk on the one hand and the separability of the two distributions on the other. This presents a dilemma, as minimizing the risk also pushes down the upper bound on the achievable robustness …
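To make this concrete, below is a minimal NumPy sketch (my own illustration, not code from the paper) that estimates the robustness of a linear classifier $f(x) = \text{sign}(w^T x + b)$: the smallest $\ell_2$ perturbation flipping the label is simply the distance $|w^T x + b| / \|w\|_2$ to the decision hyperplane. It also evaluates the right-hand side of the quoted bound on synthetic, balanced data; the data and the classifier are hypothetical and only serve to show how the two terms are computed.

```python
import numpy as np

def empirical_robustness(w, b, X):
    """Average smallest L2 perturbation flipping sign(w.x + b),
    i.e. the mean distance of the samples to the hyperplane."""
    return np.mean(np.abs(X @ w + b) / np.linalg.norm(w))

def bound_rhs(w, b, X, y):
    """Right-hand side of the quoted bound:
    0.5 * ||E_{mu_1}[x] - E_{mu_-1}[x]||_2 + 2 * M * R(f)."""
    separability = 0.5 * np.linalg.norm(
        X[y == 1].mean(axis=0) - X[y == -1].mean(axis=0))
    M = np.max(np.linalg.norm(X, axis=1))       # bound on ||x||_2
    risk = np.mean(np.sign(X @ w + b) != y)     # empirical 0/1 risk R(f)
    return separability + 2.0 * M * risk

# Toy usage: two Gaussian blobs, balanced classes (hypothetical data).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(+1.0, 1.0, (500, 2)),
               rng.normal(-1.0, 1.0, (500, 2))])
y = np.hstack([np.ones(500), -np.ones(500)])
w, b = np.array([1.0, 1.0]), 0.0                # a simple linear classifier
print("empirical robustness:", empirical_robustness(w, b, X))
print("upper bound (RHS):   ", bound_rhs(w, b, X, y))
```

On such data the empirical robustness stays below the right-hand side, as expected; the interesting part is that once the risk term is small, the bound is essentially determined by the distance between the class means.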