
AUGUST 2019

READING

Yujie Ji, Xinyang Zhang, Shouling Ji, Xiapu Luo, Ting Wang. Model-Reuse Attacks on Deep Learning Systems. CoRR abs/1812.00483 (2018).

Ji et al. propose a model-reuse, or trojaning, attack against neural networks that deliberately manipulates specific weights. Given a specific target input, the attacker aims to make the model misclassify this input. This is achieved by first generating semantic neighbors of the target input, e.g., through transformations or additive noise, and then identifying salient features for these inputs. These features are correlated with the classifier's output: some of them have a positive impact on the classification, others a negative one. The model is then fine-tuned by actively adapting the identified features until the target input is misclassified.
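To make the three steps concrete, here is a minimal PyTorch sketch of how such an attack could look. Everything in it is an illustrative assumption rather than the paper's actual procedure: the model.features/model.classifier split, the Gaussian-noise neighbors, and all hyper-parameters are hypothetical.

```python
import torch

# Minimal sketch, assuming the victim model splits into a feature extractor
# and a linear classifier, i.e.
#   model(x) == model.classifier(model.features(x).flatten(1)).
# All names and hyper-parameters are illustrative assumptions.
def model_reuse_attack(model, x_target, target_class, num_neighbors=32,
                       noise_std=0.05, top_k=64, lr=1e-4, steps=200):
    # Step 1: generate semantic neighbors of the target input,
    # here simply via additive Gaussian noise.
    neighbors = x_target.repeat(num_neighbors, 1, 1, 1)
    neighbors = neighbors + noise_std * torch.randn_like(neighbors)

    # Step 2: identify salient features, i.e., dimensions of the learned
    # representation whose gradient w.r.t. the target-class logit is
    # consistently large across the neighbors.
    feats = model.features(neighbors).flatten(1)
    feats.retain_grad()
    model.classifier(feats)[:, target_class].sum().backward()
    mean_grad = feats.grad.mean(dim=0)
    salient = mean_grad.abs().topk(top_k).indices
    # The gradient's sign indicates which features have a positive and
    # which a negative impact on the target-class score.
    direction = mean_grad[salient].sign()

    # Step 3: fine-tune the model by actively adapting the salient features
    # of the target input until it is classified as the target class.
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        f = model.features(x_target).flatten(1)
        # Push positive-impact features up and negative-impact ones down.
        loss = -(direction * f[:, salient]).sum()
        loss.backward()
        optimizer.step()
        logits = model.classifier(model.features(x_target).flatten(1))
        if logits.argmax(dim=1).item() == target_class:
            break  # target input is now misclassified as intended
    return model
```

Note that the sketch fine-tunes without any constraint on the model's behavior elsewhere; the actual attack additionally has to keep the manipulation inconspicuous, i.e., avoid degrading accuracy on other inputs.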

Also find this summary on ShortScience.org.
What is your opinion on this article? Let me know your thoughts on Twitter @davidstutz92 or LinkedIn in/davidstutz92.