Dong Su, Huan Zhang, Hongge Chen, Jinfeng Yi, Pin-Yu Chen, Yupeng Gao. Is Robustness the Cost of Accuracy? - A Comprehensive Study on the Robustness of 18 Deep Image Classification Models. ECCV 2018.

Su et al. present an extensive robustness study of 18 different ImageNet networks, including popular architectures such as AlexNet, VGG, Inception, and ResNet. Their main result shows a trade-off between robustness and accuracy; a possible explanation is that recent gains in accuracy are only possible when sacrificing network robustness. In particular, as shown in Figure 1, robustness scales linearly in the logarithm of the classification error (note that Figure 1 shows accuracy instead). Here, robustness is measured as the distortion required by Carlini&Wagner attacks to cause a misclassification. However, it can also be seen that the regressed line (red) relies mainly on the better robustness of AlexNet and VGG 16/19 compared to all other networks. Therefore, I find it questionable whether this trade-off generalizes to other tasks or to deep learning in general.
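For concreteness, the linear-in-log-error relationship can be written as a simple regression; the coefficients $a$ and $b$ below are placeholders for the fitted intercept and slope, not values taken from the paper:

$$\rho_{\text{CW}}(f) \approx a + b \cdot \log\big(\text{err}_{\text{top-1}}(f)\big),$$

where $\rho_{\text{CW}}(f)$ denotes the average $L_2$ distortion needed by Carlini&Wagner attacks to fool network $f$, and $\text{err}_{\text{top-1}}(f)$ its top-1 classification error on ImageNet.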

Figure 1: $L_2$ pixel distortion of Carlini&Wagner attacks – as an indicator of robustness – plotted against the top-1 accuracy on ImageNet for the 18 different architectures listed in the legend.
