
TAG: PYTHON

ARTICLE

Benchmarking Bit Errors in Quantized Neural Networks with PyTorch

Similar to my article series on adversarial robustness, I was planning to have a series on bit error robustness accompanied by PyTorch code. Instead, due to time constraints, I decided to condense the information into a single article. The code for the originally planned six articles is available on GitHub.
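To give a flavor of what injecting bit errors means in practice, the following sketch flips random bits in 8-bit quantized weights; the function name `inject_bit_errors`, the per-bit flip probability, and the unsigned 8-bit assumption are illustrative and not necessarily the setup used in the article or the repository.

```python
import torch

def inject_bit_errors(weights, p=0.01, bits=8):
    """Flip each bit of unsigned 8-bit quantized weights independently
    with probability p, simulating random bit errors in memory."""
    flat = weights.to(torch.int64).flatten()
    for bit in range(bits):
        # Bernoulli mask deciding which elements get this particular bit flipped.
        flips = (torch.rand(flat.shape) < p).to(torch.int64)
        flat = flat ^ (flips << bit)
    return flat.reshape(weights.shape).to(weights.dtype)
```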

More ...

ARTICLE

Simple Adversarial Transformations in PyTorch

Adversarial transformations, such as small crops, rotations, and translations, are another alternative to the regular $L_p$-constrained adversarial examples and are even less conspicuous than adversarial patches or frames. Similar to $L_p$ adversarial examples, adversarial transformations are often hardly visible unless the original image is available for direct comparison. In this article, I will include a PyTorch implementation and some results against adversarial training.
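As a rough sketch of what such an attack can look like, the snippet below performs a plain random search over small rotations and shifts until the prediction flips; the function name, trial budget, and parameter ranges are illustrative assumptions and not necessarily the procedure used in the article.

```python
import torch
import torchvision.transforms.functional as TF

def random_transformation_attack(model, image, label, trials=100,
                                 max_angle=10.0, max_shift=4):
    """Randomly search over small rotations and translations of a single
    CHW image until the model's prediction changes."""
    model.eval()
    for _ in range(trials):
        angle = (2 * torch.rand(1).item() - 1) * max_angle
        dx = int(torch.randint(-max_shift, max_shift + 1, (1,)))
        dy = int(torch.randint(-max_shift, max_shift + 1, (1,)))
        transformed = TF.affine(image, angle=angle, translate=[dx, dy],
                                scale=1.0, shear=[0.0])
        with torch.no_grad():
            prediction = model(transformed.unsqueeze(0)).argmax(dim=1).item()
        if prediction != label:
            return transformed  # adversarial transformation found
    return None  # no misclassifying transformation within the budget
```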

More ...

ARTICLE

Adversarial Patches and Frames in PyTorch

Adversarial patches and frames are an alternative to the regular $L_p$-constrained adversarial examples. Often, adversarial patches are thought to be more realistic, mirroring graffiti or stickers in the real world. In this article, I want to discuss a simple PyTorch implementation and present some results of adversarial patches against adversarial training as well as confidence-calibrated adversarial training.
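To sketch the general idea (not the exact implementation from the article), the snippet below optimizes a small square patch, pasted into a fixed corner, by gradient ascent on the cross-entropy loss; the function name, placement, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def optimize_patch(model, images, labels, patch_size=8, steps=100, lr=0.05):
    """Optimize a square patch, pasted into the top-left corner for simplicity,
    such that it maximizes the model's cross-entropy loss on the given batch."""
    patch = torch.rand(images.size(1), patch_size, patch_size, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        patched = images.clone()
        patched[:, :, :patch_size, :patch_size] = patch  # broadcast over the batch
        loss = -F.cross_entropy(model(patched), labels)  # ascend on the loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        patch.data.clamp_(0, 1)  # keep the patch a valid image region
    return patch.detach()
```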

More ...

ARTICLE

Distal Adversarial Examples Against Neural Networks in PyTorch

Out-of-distribution examples are images that are clearly irrelevant to the task at hand. Unfortunately, deep neural networks frequently assign random labels with high confidence to such examples. In this article, I want to discuss an adversarial way of computing high-confidence out-of-distribution examples, so-called distal adversarial examples, and how confidence-calibrated adversarial training handles them.
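As a minimal sketch of the underlying idea, the snippet below starts from uniform noise and maximizes the model's top-class confidence with a few gradient steps; the function name and hyperparameters are illustrative assumptions, and the attack discussed in the article may differ in its constraints and optimization.

```python
import torch
import torch.nn.functional as F

def distal_adversarial_example(model, shape=(1, 3, 32, 32), steps=200, lr=0.01):
    """Start from uniform noise and maximize the maximum class probability,
    yielding a high-confidence prediction on a clearly irrelevant image."""
    model.eval()
    x = torch.rand(shape, requires_grad=True)
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        log_probs = F.log_softmax(model(x), dim=1)
        loss = -log_probs.max(dim=1)[0].mean()  # maximize the top log-probability
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        x.data.clamp_(0, 1)  # stay in the valid image range
    return x.detach()
```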

More ...

ARTICLE

Proper Robustness Evaluation of Confidence-Calibrated Adversarial Training in PyTorch

Properly evaluating defenses against adversarial examples has been difficult as adversarial attacks need to be adapted to each individual defense. This also holds for confidence-calibrated adversarial training, where robustness is obtained by rejecting adversarial examples based on their confidence. Thus, regular robustness metrics and attacks are not easily applicable. In this article, I want to discuss how to evaluate confidence-calibrated adversarial training in terms of metrics and attacks.
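To give a flavor of such a metric, the sketch below computes a simplified confidence-thresholded robust test error in which examples whose confidence falls below the rejection threshold do not count as errors; this is an illustrative simplification under my own naming, not the exact definition or attack setup from the article.

```python
import torch

def thresholded_robust_error(clean_logits, adv_logits, labels, threshold=0.9):
    """Simplified confidence-thresholded robust test error: an example counts
    as an error only if its clean or adversarial version is accepted
    (confidence >= threshold) and misclassified."""
    clean_conf, clean_pred = torch.softmax(clean_logits, dim=1).max(dim=1)
    adv_conf, adv_pred = torch.softmax(adv_logits, dim=1).max(dim=1)
    clean_errors = (clean_conf >= threshold) & (clean_pred != labels)
    adv_errors = (adv_conf >= threshold) & (adv_pred != labels)
    return (clean_errors | adv_errors).float().mean().item()
```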

More ...

ARTICLE

Generalizing Adversarial Robustness with Confidence-Calibrated Adversarial Training in PyTorch

Taking adversarial training from this previous article as a baseline, this article introduces a new, confidence-calibrated variant of adversarial training that addresses two significant flaws: First, trained with $L_\infty$ adversarial examples, adversarial training is not robust against $L_2$ ones. Second, it incurs a significant increase in (clean) test error. Confidence-calibrated adversarial training addresses these problems by encouraging lower confidence on adversarial examples and subsequently rejecting them.
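The training objective can be sketched as follows: clean examples keep their one-hot targets, while adversarial examples receive a target distribution that is pushed towards uniform the larger the perturbation is. The snippet below is a simplified sketch under my own naming, with an $L_\infty$ measure of perturbation size; the attack and the exact transition in the article may differ.

```python
import torch
import torch.nn.functional as F

def ccat_loss(model, clean, adv, labels, num_classes, epsilon, rho=1.0):
    """Sketch of a confidence-calibrated adversarial training objective:
    cross-entropy on clean examples plus cross-entropy between the predictions
    on adversarial examples and a target that interpolates between the one-hot
    label (tiny perturbations) and the uniform distribution (large ones)."""
    clean_loss = F.cross_entropy(model(clean), labels)

    # Interpolation factor in [0, 1] based on the L_inf size of the perturbation.
    norms = (adv - clean).flatten(1).abs().max(dim=1)[0]
    lam = (1.0 - norms / epsilon).clamp(min=0.0) ** rho

    one_hot = F.one_hot(labels, num_classes).float()
    uniform = torch.full_like(one_hot, 1.0 / num_classes)
    target = lam.unsqueeze(1) * one_hot + (1.0 - lam.unsqueeze(1)) * uniform

    adv_log_probs = F.log_softmax(model(adv), dim=1)
    adv_loss = -(target * adv_log_probs).sum(dim=1).mean()
    return clean_loss + adv_loss
```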

More ...

JUNE 2023

PROJECT

OPEN SOURCE

Bit Error Robustness in PyTorch Article Series

I was planning to have an article series on bit error robustness in deep learning, similar to my article series on adversarial robustness, with accompanying PyTorch code. However, the recent progress in machine learning made me focus on other projects. Nevertheless, the articles should […]

More ...

JUNE 2023

PROJECT

Series of articles discussing adversarial robustness and adversarial training in PyTorch.

More ...