
[D] No Free Lunch theorems do not compare functions that can utilize cost-information from partial solutions. So why care about NFL?

I see No Free Lunch theorems discussed enough that I decided to check my understanding, and sit down with the original paper.

They prove bold (but contextualized) claims, and I feel like the bold claims have really taken on a life of their own (absent context):

one might expect that hill climbing usually outperforms hill descending if one’s goal is to find a maximum […] such expectations are incorrect

the average performance of any pair of algorithms across all possible problems is identical
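That second claim can actually be checked by brute force on a toy domain. Below is a minimal sketch (my own illustrative setup, not from the paper): a 4-point search space with binary outputs, one non-adaptive scanner and one adaptive prober, with performance measured as the best value seen after 2 distinct oracle evaluations, averaged over all 16 possible functions.

```python
from itertools import product

X = list(range(4))   # tiny search space {0, 1, 2, 3}
Y = (0, 1)           # binary codomain -> 2**4 = 16 possible functions

def run(alg, f, k=2):
    """Let alg pick k distinct points of f; return the best value seen."""
    seen = {}
    for _ in range(k):
        x = alg(seen)
        seen[x] = f[x]
    return max(seen.values())

def forward(seen):
    """Non-adaptive scan: probe 0, then 1, then 2, ..."""
    return next(x for x in X if x not in seen)

def adaptive(seen):
    """Adaptive probe: start at 0; if it scored 1 stay local, else jump far."""
    if not seen:
        return 0
    return 1 if seen[0] == 1 else 3

def mean_performance(alg):
    """Average best-after-k score over ALL 16 functions f: X -> Y."""
    scores = [run(alg, dict(zip(X, vals)))
              for vals in product(Y, repeat=len(X))]
    return sum(scores) / len(scores)

# Both come out identical, as NFL predicts for distinct-evaluation counting.
print(mean_performance(forward), mean_performance(adaptive))
```

Both averages come out equal, exactly as the theorem says they must when only distinct function evaluations are counted.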

Very interesting, to be sure. But this all hinges on a specific assumption:

[…] our decision to only measure distinct [oracle] function evaluations

meaning:

techniques like branch and bound are not included since they rely explicitly on the cost structure of partial solutions.
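To make that exclusion concrete, here is a tiny branch-and-bound sketch (my own toy 0/1-knapsack example, not from the paper). The bound computed from a *partial* solution lets it discard an entire subtree with no further oracle calls, which is exactly the kind of cost-structure reasoning the distinct-evaluation framework cannot express:

```python
def knapsack(values, weights, cap):
    """Branch and bound for 0/1 knapsack. The upper bound on each partial
    solution (value so far + all remaining item values) prunes subtrees."""
    n = len(values)
    best = 0
    pruned = 0

    def rec(i, val, wt):
        nonlocal best, pruned
        if wt > cap:          # infeasible partial solution
            return
        best = max(best, val)
        if i == n:
            return
        # cost information from the partial solution: an optimistic bound
        if val + sum(values[i:]) <= best:
            pruned += 1       # skip the whole subtree without evaluating it
            return
        rec(i + 1, val + values[i], wt + weights[i])  # take item i
        rec(i + 1, val, wt)                           # skip item i

    rec(0, 0, 0)
    return best, pruned

print(knapsack([6, 5, 4], [3, 3, 3], 5))  # finds the optimum, pruning en route
```

A pure black-box search would have to pay an evaluation for every leaf it wanted to rule out; branch and bound rules them out from the structure of the partial solution alone.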

I think their framework is interesting and useful for describing algorithms like Simulated Annealing or Genetic Algorithms.

But since it doesn’t apply to an entire class of algorithms (those that can reason from partial solutions), it seems to me that we should really rein in our claims about NFL.

I must be missing something.

submitted by /u/BayesMind