[R] A General and Adaptive Robust Loss Function
Hi /ml, I presented a paper at CVPR last week that seemed to go over well, so I thought I’d promote it beyond the vision community.
Video (much more approachable than the paper, and identical to the talk): https://www.youtube.com/watch?v=BmNKbnF69eY
Abstract: We present a generalization of the Cauchy/Lorentzian, Geman-McClure, Welsch/Leclerc, generalized Charbonnier, Charbonnier/pseudo-Huber/L1-L2, and L2 loss functions. By introducing robustness as a continuous parameter, our loss function allows algorithms built around robust loss minimization to be generalized, which improves performance on basic vision tasks such as registration and clustering. Interpreting our loss as the negative log of a univariate density yields a general probability distribution that includes normal and Cauchy distributions as special cases. This probabilistic interpretation enables the training of neural networks in which the robustness of the loss automatically adapts itself during training, which improves performance on learning-based tasks such as generative image synthesis and unsupervised monocular depth estimation, without requiring any manual parameter tuning.
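To make the "robustness as a continuous parameter" idea concrete, here is a minimal Python sketch of the single-parameter loss the abstract describes, written from the paper's general form; the function name and structure are my own, and the repositories linked below are the actual reference implementations:

```python
import math

def general_loss(x, alpha, c=1.0):
    """Sketch of the general robust loss rho(x, alpha, c).

    alpha is the continuous robustness parameter:
      alpha = 2    -> L2 (quadratic) loss
      alpha = 1    -> Charbonnier / pseudo-Huber loss
      alpha = 0    -> Cauchy / Lorentzian loss
      alpha = -2   -> Geman-McClure loss
      alpha = -inf -> Welsch / Leclerc loss
    c > 0 is the scale parameter.
    """
    z = (x / c) ** 2
    if alpha == 2:                      # L2 limit
        return 0.5 * z
    if alpha == 0:                      # Cauchy/Lorentzian limit
        return math.log(0.5 * z + 1.0)
    if alpha == float("-inf"):          # Welsch/Leclerc limit
        return 1.0 - math.exp(-0.5 * z)
    b = abs(alpha - 2.0)                # general case
    return (b / alpha) * ((z / b + 1.0) ** (alpha / 2.0) - 1.0)
```

Sweeping alpha from 2 down toward negative infinity interpolates smoothly between these classical losses, which is what lets a network treat alpha as just another parameter to optimize during training.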
TensorFlow Code: https://github.com/google-research/google-research/tree/master/robust_loss
PyTorch Code: https://github.com/jonbarron/robust_loss_pytorch
submitted by /u/jnbrrn