
[Discussion] How are papers validated if they lack the code & weights?

I’m not part of academia, so maybe this question is quite silly, but I’d love to know the answer.

I’ve noticed a trend of ML papers shipping with no source code for building the models or training on the dataset, and no weights for the actual trained model on which the experiments were run.

How would one go about validating the research in these papers without the source code?

For one, reconstructing the model exactly as the original could be hard in some cases, since the paper might only describe the generic architecture of some block, but not the exact variant the authors went for.
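
As a purely hypothetical illustration (not drawn from any specific paper), even a phrase as simple as "a convolutional block with batch norm and ReLU" admits at least two common orderings in PyTorch, and the choice can change results:

```python
import torch.nn as nn

# Hypothetical example: two plausible readings of the same
# one-line architecture description. Neither is "the" paper's
# block; they show why prose alone can under-specify a model.

class ConvBlockA(nn.Module):
    """Conv -> BatchNorm -> ReLU (post-activation ordering)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class ConvBlockB(nn.Module):
    """BatchNorm -> ReLU -> Conv (pre-activation ordering)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.BatchNorm2d(in_ch),
            nn.ReLU(),
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)
```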

Secondly, it seems like a rather tedious task to reverse-engineer the model and the training code just to validate simple things about what the researchers did (e.g. that they didn’t make some error in sampling the train/test/validation data and, as a result, report the wrong numbers for a specific dataset).
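
For concreteness, here is a minimal sketch (all names illustrative, not from any released codebase) of the kind of sanity check you would want to run against a paper's training code if it were available, verifying that the splits are disjoint and cover the whole dataset:

```python
import numpy as np

# Hypothetical sanity check: assert that train/val/test index
# sets are disjoint (no leakage) and together cover the dataset.
def check_split(train_idx, val_idx, test_idx, n_samples):
    train, val, test = map(set, (train_idx, val_idx, test_idx))
    assert not (train & val), "train/val overlap -> leakage"
    assert not (train & test), "train/test overlap -> leakage"
    assert not (val & test), "val/test overlap -> leakage"
    assert train | val | test == set(range(n_samples)), \
        "some samples are missing from the split"

# Example usage on a toy 70/15/15 split of 100 samples.
rng = np.random.default_rng(seed=0)
perm = rng.permutation(100)
check_split(perm[:70], perm[70:85], perm[85:], n_samples=100)
```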

Do people just email the authors and ask for the source code, thus keeping it constrained within whatever academic circles the authors desire? If so, why make the paper open to begin with, since for a lot of them a pretty good replica of the model can be constructed from the paper alone?

submitted by /u/elcric_krej