
[Discussion] A Questionable SIGIR 2019 Paper

I recently read the paper “Adversarial Training for Review-Based Recommendations”, published at the SIGIR 2019 conference. I noticed that this paper is almost exactly the same as the paper “Why I like it: Multi-task Learning for Recommendation and Explanation”, published at the RecSys 2018 conference.

At first, I thought it was just a coincidence. Researchers often have similar ideas, so it is possible for two groups working independently on the same problem to arrive at the same solution. However, after thoroughly reading and comparing the two papers, I now believe that the SIGIR 2019 paper plagiarizes the RecSys 2018 paper.

The model proposed in the SIGIR 2019 paper is almost a replica of the model in the RecSys 2018 paper. (1) Both papers use an adversarial sequence-to-sequence learning model on top of a matrix factorization framework. (2) Both papers use a GRU for the generator and a CNN for the discriminator. (3) The optimization methodology is the same, i.e. alternating optimization between the two parts. (4) The evaluations are the same, i.e. measuring MSE for recommendation performance and measuring discriminator accuracy to show that the generator has learned to generate relevant reviews. (5) The notation and the formulas used by the two papers look extremely similar.
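To make the structural overlap concrete, here is a minimal PyTorch-style sketch (my own code and naming, not taken from either paper) of the alternating optimization both papers describe: an MSE-trained matrix factorization update, a discriminator update, and a generator update performed in turn. The generator and discriminator below are placeholder modules; only the alternation pattern is the point.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MF(nn.Module):                                   # matrix factorization part
    def __init__(self, n_users, n_items, k=8):
        super().__init__()
        self.U = nn.Embedding(n_users, k)
        self.V = nn.Embedding(n_items, k)
    def forward(self, u, i):
        return (self.U(u) * self.V(i)).sum(-1)         # predicted rating

mf = MF(100, 50)
gen = nn.Linear(8, 8)                                  # stand-in for the GRU review generator
disc = nn.Linear(8, 1)                                 # stand-in for the CNN discriminator
opt_mf = torch.optim.Adam(mf.parameters())
opt_g = torch.optim.Adam(gen.parameters())
opt_d = torch.optim.Adam(disc.parameters())

for step in range(3):                                  # toy loop with random data
    u = torch.randint(0, 100, (16,))
    i = torch.randint(0, 50, (16,))
    r = torch.rand(16) * 5                             # observed ratings
    real = torch.randn(16, 8)                          # stand-in for encoded real reviews

    # (1) rating part: minimize MSE of the factorization model
    opt_mf.zero_grad()
    F.mse_loss(mf(u, i), r).backward()
    opt_mf.step()

    # (2) discriminator step: real reviews vs. generated reviews
    opt_d.zero_grad()
    fake = gen(mf.U(u).detach()).detach()
    d_loss = F.binary_cross_entropy_with_logits(disc(real).squeeze(-1), torch.ones(16)) \
           + F.binary_cross_entropy_with_logits(disc(fake).squeeze(-1), torch.zeros(16))
    d_loss.backward()
    opt_d.step()

    # (3) generator step: try to fool the discriminator
    opt_g.zero_grad()
    g_loss = F.binary_cross_entropy_with_logits(
        disc(gen(mf.U(u).detach())).squeeze(-1), torch.ones(16))
    g_loss.backward()
    opt_g.step()
```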

While ideas can be similar, given that adversarial training has been prevalent in the literature for a while, it is suspicious that the SIGIR 2019 paper has a large amount of text overlap with the RecSys 2018 paper.

Consider the following two sentences:

(1) “The Deep Cooperative Neural Network (DeepCoNN) model user-item interactions based on review texts by utilizing a factorization machine model on top of two convolutional neural networks.” in Section 1 of the SIGIR 2019 paper.

(2) “Deep Cooperative Neural Network (DeepCoNN) model user-item interactions based on review texts by utilizing a factorization machine model on top of two convolutional neural networks.” in Section 2 of the RecSys 2018 paper.

I think this is the most obvious sign of plagiarism. If you search Google for this sentence using “exact match”, you will find that it appears only in these two papers. It is hard to believe that the authors of the SIGIR 2019 paper could have come up with the exact same sentence without reading the RecSys 2018 paper.

As another example:

(1) “The decoder employs a single GRU that iteratively produces reviews word by word. In particular, at time step $t$ the GRU first maps the output representation $z_{ut-1}$ of the previous time step into a $k$-dimensional vector $y_{ut-1}$ and concatenates it with $\bar{U_{u}}$ to generate a new vector $y_{ut}$. Finally, $y_{ut}$ is fed to the GRU to obtain the hidden representation $h_{t}$, and then $h_{t}$ is multiplied by an output projection matrix and passed through a softmax over all the words in the vocabulary of the document to represent the probability of each word. The output word $z_{ut}$ at time step $t$ is sampled from the multinomial distribution given by the softmax.” in Section 2.1 of the SIGIR 2019 paper.

(2) “The user review decoder utilizes a single decoder GRU that iteratively generates reviews word by word. At time step $t$, the decoder GRU first embeds the output word $y_{i, t-1}$ at the previous time step into the corresponding word vector $x_{i, t-1} \in \mathcal{R}^{k}$, and then concatenate it with the user textual feature vector $\widetilde{U_{i}}$. The concatenated vector is provided as input into the decoder GRU to obtain the hidden activation $h_{t}$. Then the hidden activation is multiplied by an output projection matrix and passed through a softmax over all the words in the vocabulary to represent the probability of each word given the current context. The output word $y_{i, t}$ at time step $t$ is sampled from the multinomial distribution given by the softmax.” in Section 3.1.1 of the RecSys 2018 paper.

In this example, the authors of the SIGIR 2019 paper have replaced some of the phrases so that the two texts are not exactly the same. However, I believe the similarity of the two passages still shows that the authors of the SIGIR 2019 paper must have read the RecSys 2018 paper before writing their own.
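For what it is worth, here is a rough PyTorch sketch of the decoding step that both passages describe (the code and variable names are mine, written only to illustrate how specific the shared description is): embed the previous output word, concatenate it with the user's textual feature vector, run one GRU step, project the hidden state onto the vocabulary, apply a softmax, and sample the next word from the resulting multinomial.

```python
import torch
import torch.nn as nn

k, hidden, vocab_size, B = 64, 128, 10000, 4

word_emb = nn.Embedding(vocab_size, k)       # maps the previous output word to a k-dim vector
gru_cell = nn.GRUCell(2 * k, hidden)         # input is [word vector ; user feature vector]
out_proj = nn.Linear(hidden, vocab_size)     # hidden state -> logits over the vocabulary

def decode_step(prev_word, user_feat, h):
    """One decoding step: prev_word [B], user_feat [B, k], hidden state h [B, hidden]."""
    y_prev = word_emb(prev_word)                          # embed the previous output word
    x = torch.cat([y_prev, user_feat], dim=-1)            # concatenate with the user vector
    h = gru_cell(x, h)                                    # new hidden representation h_t
    probs = torch.softmax(out_proj(h), dim=-1)            # probability of each word
    next_word = torch.multinomial(probs, 1).squeeze(-1)   # sample from the multinomial
    return next_word, h

prev = torch.zeros(B, dtype=torch.long)      # e.g. a start-of-review token id
user = torch.randn(B, k)                     # the user's textual feature vector
h = torch.zeros(B, hidden)
prev, h = decode_step(prev, user, h)         # repeat to generate a review word by word
```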

I do not intend to go through all the text overlaps between the two papers, but let us look at one final example:

(1) “Each word of the review $r$ is mapped to the corresponding word vector, which is then concatenated with a user-specific vector. Notice that the user-specific vectors are learned together with the parameters of the discriminator $D_{\theta}$ in the adversarial training of Section 2.3. The concatenated vector representations are then processed by a convolutional layer, followed by a max-pooling layer and a fully-connected projection layer. The final output of the CNN is a sigmoid function which normalizes the probability into the interval of $[0, 1]$, expressing the probability that the candidate review $r$ is written by user $u$.” in Section 2.2 of the SIGIR 2019 paper.

(2) “To begin with, each word in the review is mapped to the corresponding word vector, which is then concatenated with a user-specific vector that identifies user information. The user-specific vectors are learned together with other parameters during training. The concatenated vector representations are then processed by a convolutional layer, followed by a max-pooling layer and a fully-connected layer. The final output unit is a sigmoid non-linearity, which squashes the probability into the $[0, 1]$ interval.” in Section 3.1.2 of the RecSys 2018 paper.

There is one sentence (“The concatenated vector representations are …… a fully-connected projection layer.”) that is exactly the same in the two papers. Also, I think concatenating the user-specific vector to every word vector in the review is a very unintuitive idea. I do not think ideas from different research groups can coincide at that level of detail. If I were the authors, I would just concatenate the user-specific vector to the layer before the final projection layer, as it saves computational cost and should lead to better generalization.
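Again, just to illustrate how specific the shared description is, here is a small PyTorch sketch (my own assumptions about shapes and names, not either paper's code) of the discriminator both papers describe: a user-specific vector is concatenated to every word vector of the review, followed by a convolutional layer, max pooling, a fully-connected layer, and a sigmoid output giving a probability in $[0, 1]$.

```python
import torch
import torch.nn as nn

class ReviewDiscriminator(nn.Module):
    def __init__(self, n_users, vocab_size, k=64, channels=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, k)
        self.user_emb = nn.Embedding(n_users, k)    # learned together with the discriminator
        self.conv = nn.Conv1d(2 * k, channels, kernel_size=3, padding=1)
        self.fc = nn.Linear(channels, 1)            # fully-connected projection layer

    def forward(self, user, review):
        # user: [B] user ids, review: [B, T] word ids
        words = self.word_emb(review)                           # [B, T, k]
        u = self.user_emb(user).unsqueeze(1).expand_as(words)   # user vector repeated per word
        x = torch.cat([words, u], dim=-1).transpose(1, 2)       # [B, 2k, T]
        h = torch.relu(self.conv(x)).max(dim=-1).values         # convolution + max pooling
        return torch.sigmoid(self.fc(h)).squeeze(-1)            # probability in [0, 1]

disc = ReviewDiscriminator(n_users=100, vocab_size=10000)
p = disc(torch.randint(0, 100, (4,)), torch.randint(0, 10000, (4, 20)))  # [4] probabilities
```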

As a newbie in information retrieval, I am not sure whether such a case should be considered plagiarism. However, my professor told me that SIGIR is the premier conference in the IR community, and I believe this paper definitely should not have been published at a top conference such as SIGIR.

What makes me feel worse is that the two authors of this paper, Dimitrios Rafailidis from Maastricht University, Maastricht, Netherlands, and Fabio Crestani from Università della Svizzera italiana (USI), Lugano, Switzerland, are both professors. They should be aware that plagiarism is a serious matter in academia.

The links to the papers are https://dl.acm.org/citation.cfm?id=3331313 and https://dl.acm.org/citation.cfm?id=3240365

submitted by /u/joyyeki