Buckman et al. introduce thermometer encodings, a discretization scheme for improving the robustness of neural networks against adversarial examples. The computation of a thermometer discretization is illustrated in Table 1, where it is compared to the well-known one-hot encoding (which, in contrast, does not preserve distances between inputs). The basic idea of Buckman et al. is to apply the thermometer encoding (i.e., the discretization) to the input before feeding it to the network. Additionally, they introduce two novel, discrete attacks to challenge their defense; I refer to the paper for details. They show experimentally that this discretization, especially in combination with adversarial training, improves robustness.
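To make the encoding concrete, here is a minimal NumPy sketch of how a thermometer encoding can be computed; the exact threshold convention (strict comparison against evenly spaced levels) is my assumption for illustration, not the paper's reference implementation:

```python
import numpy as np

def thermometer_encode(x, levels=4):
    """Thermometer-encode values in [0, 1] using `levels` bits.

    Bit j is set if the value exceeds threshold j / levels, so the
    encoding is a leading run of ones (illustrative convention, an
    assumption rather than the authors' exact code).
    """
    x = np.asarray(x, dtype=np.float64)
    thresholds = np.arange(levels) / levels  # 0, 1/k, ..., (k-1)/k
    return (x[..., None] > thresholds).astype(np.float64)

# A pixel value of 0.6 with 4 levels becomes [1, 1, 1, 0]; the
# corresponding one-hot encoding would instead set only the single
# bucket bit, so any two distinct values are equally far apart.
print(thermometer_encode(0.6, levels=4))
```

Note how nearby values share a long prefix of ones (0.3 maps to [1, 1, 0, 0], differing from 0.6's encoding in a single bit), which is the distance-preserving property the one-hot encoding lacks.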
What is your opinion of the summarized work? Do you know related work of interest? Let me know your thoughts in the comments below or get in touch with me: