DeltaUQ is a simple strategy for training a single deep neural network (DNN) to produce meaningful uncertainty estimates. This code accompanies the paper “Single Model Uncertainty Estimation via Stochastic Data Centering” from the 2022 Conference on Neural Information Processing Systems (NeurIPS 2022). The abstract follows:
We are interested in estimating the uncertainties of deep neural networks, which play an important role in many scientific and engineering problems. In this paper, we present a striking new finding: an ensemble of neural networks with the same weight initialization, trained on datasets that are shifted by a constant bias, gives rise to slightly inconsistent trained models, where the differences in predictions are a strong indicator of epistemic uncertainties. Using the neural tangent kernel (NTK), we demonstrate that this phenomenon occurs in part because the NTK is not shift-invariant. Since this is achieved via a trivial input transformation, we show that this behavior can be approximated by training a single neural network, using a technique that we call ∆-UQ, that estimates uncertainty around a prediction by marginalizing out the effect of the biases during inference. We show that ∆-UQ's uncertainty estimates are superior to many of the current methods on a variety of benchmarks: outlier rejection, calibration under distribution shift, and sequential design optimization of black-box functions.
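To make the inference-time marginalization concrete, here is a minimal NumPy sketch of the idea, not the repository's actual implementation: a model is given the anchored pair [x - c, c] for a constant shift (anchor) c, and at inference its predictions are averaged over several anchors, with the spread across anchors serving as the uncertainty estimate. The network here is a hypothetical stand-in (a fixed random MLP) rather than a trained ∆-UQ model; all names (`anchored_model`, `delta_uq_predict`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained anchored model f([x - c, c]):
# a fixed, randomly initialized MLP on a 1-D toy input. In Delta-UQ,
# this network would be trained with anchors drawn from the training data.
W1 = rng.normal(size=(2, 16))
b1 = rng.normal(size=16)
W2 = rng.normal(size=(16, 1))

def anchored_model(x, c):
    # The input is the anchored pair [x - c, c], stacked as two features.
    h = np.tanh(np.stack([x - c, np.full_like(x, c)], axis=-1) @ W1 + b1)
    return (h @ W2).squeeze(-1)

def delta_uq_predict(x, anchors):
    # Marginalize out the anchor: the mean over anchors is the prediction,
    # and the standard deviation across anchors estimates epistemic uncertainty.
    preds = np.stack([anchored_model(x, c) for c in anchors])
    return preds.mean(axis=0), preds.std(axis=0)

anchors = rng.normal(size=10)       # anchors would come from training inputs
x = np.linspace(-2.0, 2.0, 5)
mean, std = delta_uq_predict(x, anchors)
```

At test time only the number of anchors changes the cost: one forward pass per anchor, all through the same single trained network.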