The code for my MLSys’21 paper on bit error robustness of deep neural networks has been released on GitHub. The repository includes various fixed-point quantization schemes, routines for quantization-aware and random bit error training, and utilities for bit manipulation operations on PyTorch tensors.
PyTorch, alongside TensorFlow, has become standard among deep learning researchers and practitioners. While PyTorch provides a wide variety of tensor operations and deep learning layers, some specialized operations still need to be implemented manually. When runtime is critical, this is best done in C or CUDA to support both CPU and GPU computation. In this article, I want to provide a simple example and framework for extending PyTorch with custom C and CUDA operations using CFFI for Python and CuPy.
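To give a flavor of the kind of operation this pattern targets, here is a minimal sketch of a CPU-side C kernel in the spirit of the repository's bit manipulation utilities. The function name `flip_bit` and its signature are illustrative, not taken from the repository; via CFFI, such a function would be compiled into a shared library and called with a tensor's raw data pointer.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical CPU kernel: flip bit `bit` (0 = least significant) in the
 * IEEE-754 representation of every float in `data`. A function like this
 * would be exposed to Python via CFFI and invoked on the contiguous
 * memory behind a CPU tensor. */
void flip_bit(float *data, size_t n, int bit) {
    for (size_t i = 0; i < n; ++i) {
        uint32_t u;
        memcpy(&u, &data[i], sizeof(u));  /* safe type punning */
        u ^= (uint32_t)1 << bit;
        memcpy(&data[i], &u, sizeof(u));
    }
}
```

For example, flipping bit 31 toggles the sign bit, so `1.0f` becomes `-1.0f`. The CUDA counterpart would apply the same per-element logic inside a kernel launched through CuPy.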