Graph classification has become popular recently, driving rich development of graph kernels and graph neural networks. Most papers verify their results on the same 10-15 benchmark data sets. We found that these data sets (and 40 others) contain many isomorphic graphs, which leads to (1) train-to-test leakage and (2) incorrect validation comparisons. Absurdly, some isomorphic graphs even have different classification labels, making it impossible to classify such instances correctly. We explain why these isomorphic instances appear in the data sets in the first place (e.g. meta-data, graph sizes, or a data set's origin) and open-source new clean data sets, both on GitHub and in PyTorch Geometric.
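As a rough illustration of the kind of check involved (this is a minimal sketch, not the authors' code), one can bucket graphs in a benchmark by a Weisfeiler-Lehman hash and then confirm candidates with an exact isomorphism test; the "MUTAG" data set name below is just an example, and node/edge attributes are ignored for simplicity.

```python
# Sketch: flag potentially isomorphic duplicates in a TUDataset benchmark.
# Assumes torch_geometric and networkx are installed; "MUTAG" is an example name.
import networkx as nx
from torch_geometric.datasets import TUDataset
from torch_geometric.utils import to_networkx

dataset = TUDataset(root="data", name="MUTAG")

# Group graphs by their WL hash; graphs sharing a hash are isomorphism candidates.
buckets = {}
for idx, data in enumerate(dataset):
    g = to_networkx(data, to_undirected=True)
    buckets.setdefault(nx.weisfeiler_lehman_graph_hash(g), []).append(idx)

# Confirm candidates with an exact isomorphism test and report label clashes.
for indices in buckets.values():
    if len(indices) < 2:
        continue
    first = to_networkx(dataset[indices[0]], to_undirected=True)
    for j in indices[1:]:
        other = to_networkx(dataset[j], to_undirected=True)
        if nx.is_isomorphic(first, other):
            same_label = int(dataset[indices[0]].y) == int(dataset[j].y)
            print(f"graphs {indices[0]} and {j} are isomorphic; "
                  f"same label: {same_label}")
```

Pairs that are isomorphic but carry different labels are exactly the "impossible to classify" instances described above.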
Here is a link to the paper: https://arxiv.org/abs/1910.12091
Here is a more informal blog post about the findings.
submitted by /u/nd7141