Tracking generated images using watermarking.
Conformal calibration with uncertain ground truth.
Evaluating AI models with uncertain ground truth.
PhD thesis on uncertainty estimation and (adversarial) robustness in deep learning.
End-to-end training of deep neural networks and conformal predictors to reduce confidence set size and optimize application-specific objectives.
Random and adversarial bit error robustness of DNNs for energy-efficient and secure DNN accelerators.
Linking robust generalization and overfitting to the flatness of the robust loss surface in weight space.
Random and adversarial bit errors in quantized DNN weights.
Confidence calibration of adversarial training for “generalizable” robustness.
Disentangling the relationship between adversarial robustness and generalization.
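The conformal prediction projects above build on calibrated confidence sets. A minimal sketch of split conformal prediction, the standard baseline these methods extend (the `conformal_sets` helper and the softmax-based nonconformity score are illustrative, not the papers' exact method):

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction: calibrate a score threshold on held-out
    data, then return prediction sets with ~(1 - alpha) marginal coverage."""
    n = len(cal_labels)
    # Nonconformity score: one minus the softmax mass on the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    level = np.ceil((n + 1) * (1 - alpha)) / n
    q = np.quantile(scores, level, method="higher")
    # Include every class whose score falls below the threshold.
    return [np.where(1.0 - p <= q)[0] for p in test_probs]
```

End-to-end conformal training, as in the project above, would additionally backpropagate through a smoothed version of this set construction to penalize large sets during training.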
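The bit-error robustness projects study how flips in quantized DNN weights, e.g. from low-voltage accelerator memory, degrade accuracy. A toy sketch of the random fault model, assuming symmetric 8-bit quantization and independent per-bit flips (function names are illustrative):

```python
import numpy as np

def quantize(w, bits=8):
    """Symmetric uniform quantization of float weights to signed integers."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale).astype(np.int8), scale

def inject_bit_errors(q, p, bits=8, seed=0):
    """Flip each bit of each quantized weight independently with
    probability p, modeling random bit errors in accelerator SRAM."""
    rng = np.random.default_rng(seed)
    u = q.view(np.uint8).copy()          # reinterpret two's-complement bits
    flips = rng.random((u.size, bits)) < p
    masks = (flips * (1 << np.arange(bits))).sum(1).astype(np.uint8)
    return (u ^ masks.reshape(u.shape)).view(np.int8)
```

Evaluating a network after `inject_bit_errors` at increasing `p` (or with adversarially chosen flips instead of random ones) gives the robustness curves these projects analyze.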
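The flatness project relates robust generalization to the shape of the robust loss surface in weight space. One common probe of flatness is the average loss increase under random weight perturbations of fixed relative norm; a sketch under that assumption (the `flatness` helper is illustrative):

```python
import numpy as np

def flatness(loss_fn, w, radius=0.05, n_samples=20, seed=0):
    """Average symmetrized loss increase under random weight perturbations
    of fixed relative norm; larger values indicate a sharper minimum."""
    rng = np.random.default_rng(seed)
    base = loss_fn(w)
    gains = []
    for _ in range(n_samples):
        d = rng.standard_normal(w.shape)
        d *= radius * np.linalg.norm(w) / np.linalg.norm(d)
        # Averaging +d and -d cancels the first-order (gradient) term.
        gains.append(0.5 * (loss_fn(w + d) + loss_fn(w - d)) - base)
    return float(np.mean(gains))
```

For robust flatness one would use the adversarial (robust) loss as `loss_fn`; worst-case rather than random perturbations give a stricter sharpness measure.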