[D] Do you have to cite a dishonest paper?

If you come across a paper that is directly relevant to the problem you’re trying to solve but also makes claims that are unsupported by its own data (and contradict your own results), do you have to cite it? I feel that no credit should be given where none is due, and that debunking the claims is a waste of space.

For reference, this is the paper I’m talking about. The paper investigates adding attention layers on top of the standard CNN used by DQN on Atari. There are 6 variants considered, which differ in the location where the attention layers are added to the CNN.
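For readers unfamiliar with the setup: "adding a self-attention layer on top of the CNN" usually means treating each spatial position of the final convolutional feature map as a token and letting positions attend to one another before the policy/value heads. The paper's six variants differ in where this layer is placed, and those exact details aren't reproduced here; the following is only a generic single-head sketch in numpy, with made-up shapes (a 7×7×64 feature map) and randomly initialized projection matrices for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(feat, Wq, Wk, Wv):
    """Single-head self-attention over the spatial positions of a CNN feature map.

    feat: (H*W, C) flattened feature map, one row per spatial position.
    Wq, Wk, Wv: (C, d) query/key/value projections (here randomly initialized;
    in the paper these would be learned jointly with the PPO objective).
    """
    q, k, v = feat @ Wq, feat @ Wk, feat @ Wv
    # Scaled dot-product attention: each position mixes values from all positions.
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return attn @ v

rng = np.random.default_rng(0)
feat = rng.standard_normal((49, 64))      # e.g. a 7x7 feature map with 64 channels
Wq = rng.standard_normal((64, 64))
Wk = rng.standard_normal((64, 64))
Wv = rng.standard_normal((64, 64))
out = self_attention(feat, Wq, Wk, Wv)
print(out.shape)  # (49, 64): same spatial layout, attention-mixed features
```

The output keeps the feature-map shape, so the layer can be dropped between any two convolutional stages, which is presumably why the paper can generate six placement variants from the same building block.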

Claim 1 (figure 2): “Here we can see the clear advantage of self-attention is able to provide with respect to sample efficiency.”

The baseline PPO learns faster than, or on par with, their 6 variants on 6 of the 10 games benchmarked.

Claim 2 (table 1): “These results clearly demonstrate the improved performance of multiple self-attention variants.”

Their best variant, SAN, outperforms the baseline PPO on only 5 of the 10 games. Most of the other variants perform worse than the baseline.

Claim 3 (conclusion): “We can also see clear improvements over baseline results with respect to 60% of environments tested”.

That is to say, the baseline PPO performs best on 4 games, and the proposed method performs best on the other 6 games only if you cherry-pick which variant to use on a per-game basis.

submitted by /u/Seerdecker