[R] Evaluating Scalable Bayesian Deep Learning Methods for Robust Computer Vision
We propose an evaluation framework for predictive uncertainty estimation that is specifically designed to test the robustness required in real-world computer vision applications. Using the proposed framework, we perform an extensive comparison of the popular ensembling and MC-dropout methods on the tasks of depth completion and street-scene semantic segmentation. Our comparison suggests that ensembling consistently provides more reliable uncertainty estimates than MC-dropout.
Project page: http://www.fregu856.com/publication/evaluating_bdl/
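Both methods compared here produce uncertainty estimates in the same basic way: collect multiple softmax outputs per input (from M independently trained networks for ensembling, or M stochastic forward passes for MC-dropout), average them, and measure the spread. A minimal NumPy sketch of that shared recipe, with entirely hypothetical probability values and predictive entropy as the uncertainty measure (not the paper's exact evaluation code):

```python
import numpy as np

def predictive_uncertainty(member_probs):
    """Mean prediction and predictive entropy from a set of softmax outputs.

    member_probs: array of shape (M, C) -- M sets of class probabilities,
    coming from M ensemble members or M MC-dropout forward passes.
    """
    mean_probs = member_probs.mean(axis=0)                       # averaged prediction
    entropy = -np.sum(mean_probs * np.log(mean_probs + 1e-12))   # uncertainty score
    return mean_probs, entropy

# Hypothetical 2-class example: members that agree vs. members that disagree.
agree = np.array([[0.90, 0.10], [0.85, 0.15], [0.95, 0.05]])
disagree = np.array([[0.90, 0.10], [0.20, 0.80], [0.50, 0.50]])

_, h_agree = predictive_uncertainty(agree)
_, h_disagree = predictive_uncertainty(disagree)
# Disagreement among the M predictions yields a higher entropy, i.e. higher
# estimated uncertainty -- the behavior the evaluation framework probes.
```

The two methods differ only in where the M predictions come from; the reliability question the paper studies is whether that spread actually grows on inputs where the model is wrong or out-of-distribution.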