[P] Benchmarking Metric Learning Algorithms the Right Way

I’ve been researching metric learning algorithms for a while now, and in the process I discovered some issues with how the field evaluates them.

You can read about it here: https://medium.com/@tkm45/benchmarking-metric-learning-algorithms-the-right-way-90c073a83968

TL;DR:

  1. Many papers don’t do apples-to-apples comparisons. They change the network architecture, embedding size, or data augmentation, or they use performance-boosting tricks that aren’t mentioned in the paper.
  2. Most papers don’t use a validation set.
  3. Two baseline algorithms (triplet and contrastive loss) are actually competitive with the state of the art, but most papers don’t present them that way.
  4. I’ve made a flexible benchmarking tool that can standardize the way we evaluate metric learning algorithms. You can see it here: https://github.com/KevinMusgrave/powerful_benchmarker
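For anyone unfamiliar with the two baselines mentioned in point 3, here is a minimal NumPy sketch of their standard formulations. The margin values are illustrative defaults, not the ones used in any particular paper or in the benchmarker linked above:

```python
import numpy as np

def contrastive_loss(dist, is_same, margin=1.0):
    """Standard contrastive loss on a single pair.

    dist: embedding-space distance between the two samples
    is_same: 1 if the pair shares a class label, 0 otherwise
    Positive pairs are pulled together (squared distance);
    negative pairs are pushed apart until they exceed the margin.
    """
    pos_term = is_same * dist ** 2
    neg_term = (1 - is_same) * np.maximum(margin - dist, 0.0) ** 2
    return pos_term + neg_term

def triplet_loss(dist_anchor_pos, dist_anchor_neg, margin=0.2):
    """Standard triplet margin loss on a single (anchor, pos, neg) triplet.

    The loss is zero once the negative is farther from the anchor
    than the positive by at least the margin.
    """
    return np.maximum(dist_anchor_pos - dist_anchor_neg + margin, 0.0)
```

Both losses hinge at zero, so well-separated pairs and triplets contribute no gradient; most of the tuning effort in practice goes into how pairs/triplets are mined from a batch, which is one of the unreported tricks the post is complaining about.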

submitted by /u/VanillaCashew