Pre-trained knowledge graph embedding models are available in GraphVite!
In the latest update of GraphVite, we release a new large-scale knowledge graph dataset, along with new benchmarks for knowledge graph embedding methods. The dataset, Wikidata5m, contains 5 million entities and 21 million facts constructed from Wikidata and Wikipedia. Most entities come from the general or scientific domains, such as celebrities, events, concepts and things.
To facilitate the use of knowledge graph representations in semantic tasks, we provide pre-trained embeddings from several popular models, including TransE, DistMult, ComplEx, SimplE and RotatE. You can directly access these embeddings through a natural-language index, such as “machine learning”, “united states” or even abbreviations like “m.i.t.”. Check out these models here.
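Under the hood, a natural-language index of this kind boils down to an alias-to-entity mapping over a pre-trained embedding matrix. Below is a minimal sketch of that lookup pattern; the entity ids, dictionary names and the `embed` helper are illustrative assumptions, not the exact GraphVite API.

```python
import numpy as np

# Toy pre-trained entity embeddings: 3 entities, dimension 4.
# In practice these would be loaded from a released checkpoint.
entity_embeddings = np.random.rand(3, 4)

# Canonical entity ids -> rows of the embedding matrix.
# "E0", "E1", "E2" are placeholder ids for this sketch.
entity2id = {"E0": 0, "E1": 1, "E2": 2}

# Natural-language aliases (including abbreviations) -> entity ids.
alias2entity = {
    "machine learning": "E0",
    "united states": "E1",
    "m.i.t.": "E2",
}

def embed(alias):
    """Return the embedding vector for a natural-language alias."""
    return entity_embeddings[entity2id[alias2entity[alias]]]

vec = embed("machine learning")
print(vec.shape)  # (4,)
```

The same two-level mapping lets many surface forms ("m.i.t.", "mit", "massachusetts institute of technology") resolve to one entity vector without duplicating embeddings.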
Here are the benchmarks of these models on Wikidata5m.