Athalye and Carlini present experiments showing that pixel deflection and the high-level guided denoiser are ineffective as defenses against adversarial examples. In particular, they show that these defenses do not withstand the (currently) strongest first-order attack, projected gradient descent. They also comment on the right threat model to use, explicitly stating that the attacker should be assumed to know the employed defense – which intuitively makes much sense when evaluating defenses.
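As a reminder, projected gradient descent iteratively takes signed gradient ascent steps on the loss and projects the perturbed input back onto an $\epsilon$-ball around the original. Below is a minimal NumPy sketch on a toy differentiable loss; the linear "model", weights, and step sizes are purely illustrative and not taken from the paper:

```python
import numpy as np

def pgd_attack(x0, grad_fn, eps=0.1, alpha=0.02, steps=40):
    """L-infinity PGD sketch: ascend the loss, then project back
    into the eps-ball around x0 after every step."""
    x = x0.copy()
    for _ in range(steps):
        x = x + alpha * np.sign(grad_fn(x))   # signed gradient ascent step
        x = np.clip(x, x0 - eps, x0 + eps)    # project onto the eps-ball
        x = np.clip(x, 0.0, 1.0)              # stay in a valid input range
    return x

# Toy differentiable "model": squared error of a linear score.
w = np.array([0.5, -0.3, 0.8])
loss = lambda x: (w @ x) ** 2
grad = lambda x: 2.0 * (w @ x) * w

x0 = np.array([0.2, 0.5, 0.7])
x_adv = pgd_attack(x0, grad, eps=0.1)
print(loss(x0), loss(x_adv))  # the attack increases the loss
```

Against a real classifier, `grad_fn` would return the gradient of the training loss with respect to the input, obtained by backpropagation.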
What is your opinion on the summarized work? Or do you know related work of interest? Let me know your thoughts in the comments below!