ilpc2022

KG Inductive Link Prediction Challenge (ILPC) 2022


This repository introduces the ILPC’22 Small and ILPC’22 Large datasets for benchmarking inductive link prediction models and outlines the 2022 incarnation of the Inductive Link Prediction Challenge (ILPC).

🗄️ Datasets

A schematic diagram of inductive link prediction

While in transductive link prediction the training and inference graphs are the same (and therefore contain the same entities), in inductive link prediction there is a disjoint inference graph that potentially contains new, unseen entities.

For this challenge, we sampled two datasets from Wikidata, the largest publicly available and open KG. Inductive link prediction implies training a model on one graph (denoted as training) and performing inference, e.g., validation and test, over a new graph (denoted as inference).

Dataset creation principles:

Both the small and large variants of the dataset can be found in the data folder of this repository. Each contains four splits corresponding to the diagram; the tables below additionally list the hold-out test set:

ILPC’22 Small

| Split                | Entities | Relations   | Triples |
|----------------------|----------|-------------|---------|
| Train                | 10,230   | 96          | 78,616  |
| Inference            | 6,653    | 96 (subset) | 20,960  |
| Inference validation | 6,653    | 96 (subset) | 2,908   |
| Inference test       | 6,653    | 96 (subset) | 2,902   |
| Hold-out test set    | 6,653    | 96 (subset) | 2,894   |

ILPC’22 Large

| Split                | Entities | Relations    | Triples |
|----------------------|----------|--------------|---------|
| Train                | 46,626   | 130          | 202,446 |
| Inference            | 29,246   | 130 (subset) | 77,044  |
| Inference validation | 29,246   | 130 (subset) | 10,179  |
| Inference test       | 29,246   | 130 (subset) | 10,184  |
| Hold-out test set    | 29,246   | 130 (subset) | 10,172  |
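If you want to inspect the splits outside of the provided workflow, they can be loaded with PyKEEN's `TriplesFactory`. The snippet below is a minimal sketch, assuming the split file names (`train.txt`, `inference.txt`, etc.) match the split names above; check the data folder for the actual file names.

```python
from pykeen.triples import TriplesFactory

# NOTE: the file names below are assumptions based on the split names above;
# adjust them to match the actual contents of the data folder.
DATA = "data/small"

# Training graph: its entities are disjoint from the inference graph.
training = TriplesFactory.from_path(f"{DATA}/train.txt")

# Inference graph; the evaluation splits must reuse its entity/relation mappings
# so that all inference-time triples share one index.
inference = TriplesFactory.from_path(f"{DATA}/inference.txt")
validation = TriplesFactory.from_path(
    f"{DATA}/inference_validation.txt",
    entity_to_id=inference.entity_to_id,
    relation_to_id=inference.relation_to_id,
)
test = TriplesFactory.from_path(
    f"{DATA}/inference_test.txt",
    entity_to_id=inference.entity_to_id,
    relation_to_id=inference.relation_to_id,
)

print(training.num_entities, inference.num_entities)  # disjoint entity sets
```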

🏅 Challenge

The Challenge aims to streamline community efforts in the emerging area of representation learning techniques beyond shallow entity embeddings. We invite submissions proposing new inductive models as well as extensions of the baseline models that achieve higher performance.

We use the following rank-based evaluation metrics: mean reciprocal rank (MRR), Hits@k for k in {1, 3, 5, 10, 100}, and the adjusted mean rank index (AMRI), the same metrics reported in the baseline tables below.
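For reference, all of these metrics can be derived from the per-query rank of the true entity. The sketch below is purely illustrative (PyKEEN computes them automatically during evaluation); the helper name and its arguments are ours, not part of this repository.

```python
import numpy as np

def rank_metrics(ranks, num_candidates, ks=(1, 3, 5, 10, 100)):
    """Illustrative computation of MRR, Hits@k, and AMRI.

    ranks: rank of the correct entity per query (1 = best).
    num_candidates: number of scored candidate entities per query.
    """
    ranks = np.asarray(ranks, dtype=float)
    num_candidates = np.asarray(num_candidates, dtype=float)

    mrr = float(np.mean(1.0 / ranks))
    hits = {k: float(np.mean(ranks <= k)) for k in ks}

    # AMRI = 1 - (MR - 1) / E[MR - 1]; for a uniformly random ranking over n
    # candidates the expected rank is (n + 1) / 2, so E[rank - 1] = (n - 1) / 2.
    amri = 1.0 - np.mean(ranks - 1.0) / np.mean((num_candidates - 1.0) / 2.0)
    return mrr, hits, float(amri)
```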

Making a submission:

  1. Fork the repo
  2. Train your inductive link prediction model
  3. Save the model weights using the --save flag
  4. Upload the model weights to GitHub or another platform (Dropbox, Google Drive, etc.)
  5. Open an issue in this repository with a link to your repository, performance metrics, and model weights

🎸 Baselines

We provide an example workflow in main.py for training and evaluating two variants of the NodePiece model using PyKEEN: InductiveNodePiece and InductiveNodePieceGNN.

The example can be run with `python main.py` and the options can be listed with `python main.py --help`.
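For orientation, the core of such a workflow looks roughly like the sketch below, which follows PyKEEN's inductive link prediction tutorial. Treat it as an assumption-laden outline rather than a copy of main.py: the file paths and hyperparameters are placeholders, and exact class and argument names may differ between PyKEEN versions.

```python
import torch
from pykeen.losses import NSSALoss
from pykeen.models.inductive import InductiveNodePiece
from pykeen.training import SLCWATrainingLoop
from pykeen.triples import TriplesFactory

# Load the disjoint training and inference graphs (paths are placeholders).
# NodePiece tokenization typically uses inverse relations.
training = TriplesFactory.from_path("data/small/train.txt", create_inverse_triples=True)
inference = TriplesFactory.from_path("data/small/inference.txt", create_inverse_triples=True)

# NodePiece tokenizes entities by their incident relations, so the same model
# can score triples over unseen entities in the inference graph.
model = InductiveNodePiece(
    triples_factory=training,      # used to tokenize training entities
    inference_factory=inference,   # used to tokenize inference entities
    num_tokens=12,                 # placeholder hyperparameter
    aggregation="mlp",
    loss=NSSALoss(margin=15),
)

training_loop = SLCWATrainingLoop(
    triples_factory=training,
    model=model,
    optimizer=torch.optim.Adam(model.parameters(), lr=5e-4),
    mode="training",               # required for inductive models
)
training_loop.train(triples_factory=training, num_epochs=100)
```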

Installation Instructions

Main requirements:

* python >= 3.9
* torch >= 1.10

You will need PyKEEN 1.8.0 or newer:

```shell
$ pip install pykeen
```

At the time this repository was created, version 1.8.0 had not yet been released, but the latest version from sources contains everything we need:

```shell
$ pip install git+https://github.com/pykeen/pykeen.git
```

If you plan to use GNNs (including the `InductiveNodePieceGNN` baseline), make sure you install [torch-scatter](https://github.com/rusty1s/pytorch_scatter) and [torch-geometric](https://github.com/pyg-team/pytorch_geometric) compatible with your Python, torch, and CUDA versions. Running the code on a GPU is strongly recommended.

Baseline Performance on Small Dataset

We report the performance of both variants of the NodePiece model on the small variant of the dataset, obtained with the example workflow in main.py:

| Model                 | MRR    | H@100  | H@10   | H@5    | H@3    | H@1    | AMRI  |
|-----------------------|--------|--------|--------|--------|--------|--------|-------|
| InductiveNodePieceGNN | 0.1326 | 0.4705 | 0.2509 | 0.1899 | 0.1396 | 0.0763 | 0.730 |
| InductiveNodePiece    | 0.0381 | 0.4678 | 0.0917 | 0.0500 | 0.0219 | 0.007  | 0.666 |

Baseline Performance on Large Dataset

We report the performance of both variants of the NodePiece model on the large variant of the dataset, obtained with the example workflow in main.py:

| Model                 | MRR    | H@100 | H@10   | H@5    | H@3    | H@1    | AMRI  |
|-----------------------|--------|-------|--------|--------|--------|--------|-------|
| InductiveNodePieceGNN | 0.0705 | 0.374 | 0.1458 | 0.0990 | 0.0730 | 0.0319 | 0.682 |
| InductiveNodePiece    | 0.0651 | 0.287 | 0.1246 | 0.0809 | 0.0542 | 0.0373 | 0.646 |

* Note: All models were trained on a single RTX 8000. Average memory consumption during training is about 2 GB of VRAM on the small dataset and about 3 GB on the large dataset.

👋 Attribution

⚖️ License

The code in this package is licensed under the MIT License. The datasets in this repository are licensed under the Creative Commons Zero license. The trained models and their weights are licensed under the Creative Commons Zero license.

📖 Citation

If you use the ILPC’22 datasets in your work, please cite the following:

@article{Galkin2022,
  archivePrefix = {arXiv},
  arxivId = {2203.01520},
  author = {Galkin, Mikhail and Berrendorf, Max and Hoyt, Charles Tapley},
  eprint = {2203.01520},
  month = {mar},
  title = {{An Open Challenge for Inductive Link Prediction on Knowledge Graphs}},
  url = {http://arxiv.org/abs/2203.01520},
  year = {2022}
}

🎁 Support

This project has been supported by several organizations (in alphabetical order):

🏦 Funding

This project has been funded by the following grants:

| Funding Body                                             | Program                                   | Grant         |
|----------------------------------------------------------|-------------------------------------------|---------------|
| DARPA                                                    | Young Faculty Award (PI: Benjamin Gyori)  | W911NF2010255 |
| German Federal Ministry of Education and Research (BMBF) | Munich Center for Machine Learning (MCML) | 01IS18036A    |
| Samsung                                                  | Samsung AI Grant                          | -             |