[D] No Free Lunch theorems do not compare functions that can utilize cost-information from partial solutions. So why care about NFL?
I see No Free Lunch theorems discussed often enough that I decided to check my understanding and sit down with the original paper.
They prove bold (but carefully contextualized) claims, and I feel like those claims have taken on a life of their own, absent the context:
one might expect that hill climbing usually outperforms hill descending if one’s goal is to find a maximum […] such expectations are incorrect
the average performance of any pair of algorithms across all possible problems is identical
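That second claim is easy to sanity-check by brute force on a toy search space. Here is a quick sketch (mine, not from the paper) that enumerates every function from a 3-point domain to {0, 1}, runs one fixed-order search and one adaptive search that each only query distinct points, and confirms that the histogram of observed value traces, taken over all functions, is identical for both:

```python
from itertools import product
from collections import Counter

X = [0, 1, 2]   # tiny search space
Y = [0, 1]      # possible "cost" values

# Every possible objective function f: X -> Y, represented as a tuple of outputs.
all_functions = list(product(Y, repeat=len(X)))

def run(algorithm, f, m=3):
    """Query f at m *distinct* points chosen by `algorithm`; return the value trace."""
    seen = {}                   # point -> value (the only info the algorithm gets)
    for _ in range(m):
        x = algorithm(seen)     # next point, chosen from the values seen so far
        seen[x] = f[x]
    return tuple(seen.values()) # values in the order they were observed

# Algorithm A: fixed left-to-right sweep (ignores what it has seen).
def sweep(seen):
    return next(x for x in X if x not in seen)

# Algorithm B: adaptive -- if the first value looked "bad" (1), jump to the far
# end of the space; otherwise keep sweeping in order.
def adaptive(seen):
    unseen = [x for x in X if x not in seen]
    if seen and list(seen.values())[0] == 1:
        return unseen[-1]
    return unseen[0]

# NFL (counting only distinct evaluations): the histogram of value traces over
# *all* functions is the same for both algorithms.
hist_a = Counter(run(sweep, f) for f in all_functions)
hist_b = Counter(run(adaptive, f) for f in all_functions)
print(hist_a == hist_b)   # True
```

Since any performance measure in their framework depends only on those sampled values, it averages out the same for both algorithms.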
Very interesting, to be sure. But this all hinges on a specific assumption:
[…] our decision to only measure distinct [oracle] function evaluations
meaning:
techniques like branch and bound are not included since they rely explicitly on the cost structure of partial solutions.
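To make that exclusion concrete, here is a toy sketch (my example, with made-up numbers) of the kind of reasoning branch and bound does: it prices partial assignments and prunes whole subtrees with a bound, so most complete solutions are never evaluated at all, which is exactly the cost information the NFL framework doesn't model:

```python
# Hypothetical toy 0/1 knapsack instance: maximize value subject to a weight capacity.
values  = [6, 5, 4, 1]
weights = [4, 3, 2, 3]
CAPACITY = 6

best = 0

def branch(i, weight, value):
    """Explore items i.. given a partial solution with totals (weight, value)."""
    global best
    if weight > CAPACITY:
        return                  # infeasible partial solution: prune
    best = max(best, value)
    if i == len(values):
        return
    # Bound from the partial solution: even taking *all* remaining items cannot
    # beat the best so far, so the entire subtree is discarded without ever
    # evaluating its complete solutions.
    if value + sum(values[i:]) <= best:
        return
    branch(i + 1, weight + weights[i], value + values[i])  # take item i
    branch(i + 1, weight, value)                            # skip item i

branch(0, 0, 0)
print(best)   # 10 (items 0 and 2)
```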
I think their framework is interesting and useful for describing algorithms like Simulated Annealing or Genetic Algorithms.
But since it doesn't apply to an entire class of algorithms (those that can reason from partial solutions), it seems to me that we should really rein in our claims about NFL.
I must be missing something.
submitted by /u/BayesMind