[D] Explaining your ICLR reviews

Since the review threads are always flooded with people talking about their reviews, I thought I should post some stats on various reviews. These are my guesses about how you should interpret your reviews (based on previous years' trends). Note that the quantization of reviews from a 1-10 scale to a [1,3,6,8] scale may make historical data less predictive, especially in the presence of rebuttals.

So what do your reviews mean? First, know that the acceptance rate at ICLR has historically been around 30%.

[3,3,3] or below: Unfortunately, you are in the bottom third of papers. Typically, only a handful of papers will be rescued from this category by the ACs. If you can convince two of your reviewers to bump your ratings up, you have a good shot.

[3,3,6]: Your ratings aren't ideal, but you still have a shot at acceptance. This year, 20% of papers got this rating (the 40th-60th percentile). You're one good rebuttal away from a solid chance of being accepted. If you don't succeed in raising your scores, historically only about 10% of papers in this range have been accepted.

[3,6,6]: Congrats on the good ratings! You certainly aren't guaranteed acceptance, but you have a solid shot. This year, [3,6,6] made up the 20th-30th percentile of reviews. Papers in this range are the mirror image of the 40th-60th percentile: historically only about 15% of them are rejected.

[6,6,6] or above: Congratulations on the likely acceptance! This year, you are in the top 10% of papers. Usually, you could sit back and relax, since papers in the top 10% of ratings are almost never rejected. However, the flatness of the ratings this year makes that a much riskier endeavour.
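To put the bands above in one place, here is a rough lookup of the odds quoted in this post. The percentages are the informal historical figures cited above, not official ICLR statistics, and the banding by sorted score triple is just my own simplification.

```python
# Rough summary of the historical odds quoted in this post. These are
# informal estimates from the post above, not official ICLR statistics.
HISTORICAL_OUTLOOK = {
    (3, 3, 3): "bottom third; only a handful rescued by the ACs",
    (3, 3, 6): "~10% accepted historically if the scores don't move",
    (3, 6, 6): "~85% accepted historically (only ~15% rejected)",
    (6, 6, 6): "top ~10% of papers; almost never rejected",
}

def rough_outlook(scores):
    """Map a triple of quantized reviews ({1, 3, 6, 8}) to the band above."""
    # Treat 1s like 3s ("[3,3,3] or below") and 8s like 6s ("[6,6,6] or above").
    banded = tuple(sorted(max(min(s, 6), 3) for s in scores))
    return HISTORICAL_OUTLOOK[banded]

print(rough_outlook([8, 3, 6]))  # treated like [3, 6, 6]
```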

Now, let's talk about rebuttals. Unfortunately, rebuttals often don't move the reviewers' ratings as much as you'd like. Historically, across all papers, only ~30% have had any reviewer update their score following the rebuttal. Among borderline papers (i.e., papers in the 30th-60th percentiles), however, about 50% see a reviewer update their score.

Overall, this means that roughly a quarter of the papers in the "borderline" category will move into "accepted". Given that the most common rating delta after rebuttals was +1, I don't know how reviewers will behave with the new quantized scores.
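As a sanity check on that "roughly a quarter" figure, here is the back-of-the-envelope arithmetic. The ~50% update rate is the number quoted above; the assumption that about half of those updates are enough to cross the acceptance bar is mine, not something from the historical data.

```python
# Back-of-the-envelope check on the "roughly a quarter" figure above.
p_scores_move = 0.50     # ~50% of borderline papers see a reviewer update a score (quoted above)
p_move_is_enough = 0.50  # assumption: about half of those updates are enough to cross the bar
p_flip_to_accept = p_scores_move * p_move_is_enough
print(f"Borderline papers expected to flip to accept: {p_flip_to_accept:.0%}")  # -> 25%
```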

Good luck on your rebuttals, and remember that the reviewing system has an immense amount of random noise. This year, 47% of reviews were written by reviewers who said they had not published in the area they were reviewing. NeurIPS has previously run an experiment showing that when the same papers were given to two independent program committees, 57% of the papers that one committee accepted were rejected by the other. See http://blog.mrtz.org/2014/12/15/the-nips-experiment.html for more reading.

submitted by /u/programmerChilli