[D] “Reproducibility is the wrong objective for Machine Learning. Reproducibility is key for science, but ML is not a science.”
The quote in the title of this post can be found here:
The tweet is referring to this article:
“This AI researcher is trying to ward off a reproducibility crisis”
To give some context: Joelle Pineau's group authored a paper called "Deep Reinforcement Learning that Matters" (https://arxiv.org/abs/1709.06560), which performed a meta-analysis of how reproducible papers in the deep RL community are. They found that reproducing results was surprisingly difficult and that the methods these papers used to report results were questionable.
I personally think Joelle handled the topic very diplomatically, and in her talk she made an effort not to single out any individual or group. Despite her efforts, the work of one or two groups did stand out.
I thought I would start a discussion on the merits of reproducibility and reflect on how important it really is. If it is important, what makes it so? Is it overemphasized at the expense of the bigger picture? What should we strive for in the future?