[P] Benchmarking Metric Learning Algorithms the Right Way
I’ve been researching metric learning algorithms for a while now, and in the process I found some problems with how the field benchmarks them.
You can read about it here: https://medium.com/@tkm45/benchmarking-metric-learning-algorithms-the-right-way-90c073a83968
TL;DR:
- Many papers don’t do apples-to-apples comparisons. They change the network architecture, embedding size, or data augmentation, or use performance-boosting tricks that aren’t mentioned in the paper.
- Most papers don’t use a validation set, so hyperparameters end up being tuned directly on the test set (see the class-disjoint split sketch after this list).
- Two baseline algorithms (the triplet and contrastive losses) are actually competitive with the state of the art, but most papers don’t present them this way (a minimal sketch of both losses follows below).
- I’ve made a flexible benchmarking tool that can standardize the way we evaluate metric learning algorithms. You can see it here: https://github.com/KevinMusgrave/powerful_benchmarker
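On the validation-set point: in metric learning benchmarks, the test classes are disjoint from the training classes, so a validation set has to be carved out the same way, by holding out whole classes rather than individual samples. Here’s a minimal sketch of what that looks like; the split ratios and function name are my own illustrative choices, not the exact protocol from the article or the benchmarker:

```python
# Minimal sketch: class-disjoint train/val/test splits for metric learning.
# Test classes are unseen at training time, so the validation set should
# also consist of held-out *classes*, not held-out samples.
# The 50/25/25 ratio is an illustrative assumption.
import random

def class_disjoint_split(labels, val_frac=0.25, test_frac=0.25, seed=0):
    """Return index lists whose underlying classes do not overlap."""
    classes = sorted(set(labels))
    rng = random.Random(seed)
    rng.shuffle(classes)

    n_val = int(len(classes) * val_frac)
    n_test = int(len(classes) * test_frac)
    val_classes = set(classes[:n_val])
    test_classes = set(classes[n_val:n_val + n_test])

    train_idx, val_idx, test_idx = [], [], []
    for i, y in enumerate(labels):
        if y in val_classes:
            val_idx.append(i)
        elif y in test_classes:
            test_idx.append(i)
        else:
            train_idx.append(i)
    return train_idx, val_idx, test_idx
```

Hyperparameters get tuned against the validation indices; the test indices are touched exactly once, at the end.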
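And for reference, here’s roughly what those two baseline losses boil down to in PyTorch. The margin values and the squared-vs-plain distance choices vary from paper to paper, so treat these as assumptions rather than the canonical formulations:

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull the anchor toward the positive and push it away from the
    negative until the gap exceeds the margin. Margin is an assumption."""
    d_ap = F.pairwise_distance(anchor, positive)
    d_an = F.pairwise_distance(anchor, negative)
    return F.relu(d_ap - d_an + margin).mean()

def contrastive_loss(x1, x2, same_class, margin=0.5):
    """same_class is a float tensor: 1 where the pair shares a label, else 0.
    Same-class pairs are pulled together; different-class pairs are pushed
    apart up to the margin. Margin is an assumption."""
    d = F.pairwise_distance(x1, x2)
    pos_term = same_class * d.pow(2)
    neg_term = (1 - same_class) * F.relu(margin - d).pow(2)
    return (pos_term + neg_term).mean()
```

The pytorch-metric-learning library (by the same GitHub author) ships ready-made versions of both (losses.TripletMarginLoss, losses.ContrastiveLoss), so you rarely need to hand-roll them.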