Unsupervised Universal Self-Attention Network for Graph Classification
Existing graph neural network-based models often have weaknesses in exploiting potential dependencies among nodes and graph structure properties. To this end, we present U2GNN, a novel embedding model leveraging the strength of the recently introduced universal self-attention network (Dehghani et al., 2019) to learn low-dimensional embeddings of graphs, which can be used for graph classification. In particular, given an input graph, U2GNN first applies a self-attention computation, followed by a recurrent transition, to iteratively aggregate attention-based vector representations of each node and its neighbors across iterations. U2GNN thus addresses the weaknesses of existing models and produces plausible node embeddings, whose sum is the final embedding of the whole graph. Experimental results in both supervised and unsupervised training settings show that U2GNN achieves new state-of-the-art performance on a range of well-known benchmark datasets for the graph classification task. To the best of our knowledge, this is the first work showing that an unsupervised model can outperform supervised models by a large margin.
- We consider in this paper a novel strategy of using the unsupervised training setting to train a GNN-based model for the graph classification task, in which node features and global graph information are incorporated.
- U2GNN can be seen as a general framework, and we demonstrate the power of our model in both the supervised and unsupervised training settings. The experimental results on 9 benchmark datasets show that both our supervised and unsupervised U2GNN models produce new state-of-the-art (SOTA) accuracies in most benchmark cases.
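To make the described computation concrete, the following is a minimal NumPy sketch of one U2GNN-style layer: each node attends over itself and its neighbors, the attended vector is passed through a transition function, and this is repeated for a fixed number of steps before summing node embeddings into a graph embedding. This is not the authors' implementation; the weight matrices `Wq`, `Wk`, `Wv`, `Wt` and the simple `tanh` transition are illustrative stand-ins for the universal-transformer components.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def u2gnn_layer(H, neighbors, Wq, Wk, Wv, Wt, T=3):
    """One U2GNN-style layer (sketch).

    H         : (n, d) node feature matrix
    neighbors : dict mapping node index -> list of neighbor indices
    Wq/Wk/Wv  : (d, d) query/key/value projections (illustrative)
    Wt        : (d, d) transition weights (stand-in for the recurrent transition)
    T         : number of self-attention + transition steps
    """
    for _ in range(T):
        H_new = np.zeros_like(H)
        for v, nbrs in neighbors.items():
            ctx = H[[v] + list(nbrs)]            # (k+1, d): node plus its neighbors
            q = H[v] @ Wq                        # (d,) query for node v
            K = ctx @ Wk                         # (k+1, d) keys
            V = ctx @ Wv                         # (k+1, d) values
            att = softmax(q @ K.T / np.sqrt(len(q)))  # (k+1,) attention weights
            attended = att @ V                   # (d,) attention-weighted sum
            H_new[v] = np.tanh(attended @ Wt)    # simple transition (stand-in)
        H = H_new                                # feed updated states into next step
    return H

# Toy usage: a 3-node graph, graph embedding = sum of final node embeddings.
rng = np.random.default_rng(0)
d = 4
H = rng.standard_normal((3, d))
neighbors = {0: [1, 2], 1: [0], 2: [0]}
Wq, Wk, Wv, Wt = (rng.standard_normal((d, d)) * 0.1 for _ in range(4))
H_out = u2gnn_layer(H, neighbors, Wq, Wk, Wv, Wt, T=3)
graph_emb = H_out.sum(axis=0)                    # (d,) whole-graph embedding
```

In the paper's setting the final graph embedding is then fed to a classifier (supervised) or trained with a sampling-based objective (unsupervised); the sketch only shows the node-aggregation and sum-pooling steps.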