A package for training and evaluating multimodal knowledge graph embeddings
PyKEEN
PyKEEN (Python KnowlEdge EmbeddiNgs) is a Python package designed to train and evaluate knowledge graph embedding models (incorporating multi-modal information).
Installation • Quickstart • Datasets • Models • Support • Citation
Installation
The latest stable version of PyKEEN can be downloaded and installed from PyPI with:
$ pip install pykeen
The latest version of PyKEEN can be installed directly from the source on GitHub with:
$ pip install git+https://github.com/pykeen/pykeen.git
More information about installation (e.g., development mode, Windows installation, extras) can be found in the installation documentation.
Quickstart
The fastest way to get up and running is to use the pipeline function, which provides a high-level entry point into the extensible functionality of this package. The following example shows how to train and evaluate the TransE model on the Nations dataset. By default, the training loop uses the stochastic local closed world assumption (sLCWA) training approach and evaluates with rank-based evaluation.
from pykeen.pipeline import pipeline
result = pipeline(
    model='TransE',
    dataset='nations',
)
The results are returned in an instance of the PipelineResult dataclass that has attributes for the trained model, the training loop, the evaluation, and more. See the tutorials on understanding the evaluation and making novel link predictions.
PyKEEN is extensible such that:
- Each model has the same API, so anything from pykeen.models can be dropped in
- Each training loop has the same API, so pykeen.training.LCWATrainingLoop can be dropped in
- Triples factories can be generated by the user with pykeen.triples.TriplesFactory
The full documentation can be found at https://pykeen.readthedocs.io.
Implementation
Below are the models, datasets, training modes, evaluators, and metrics implemented in pykeen.
Datasets (25)
The citation for each dataset corresponds to either the paper describing the dataset, the first paper published using the dataset with knowledge graph embedding models, or the URL for the dataset if neither of the first two are available.
Models (25)
Losses (7)
Name | Reference | Description |
---|---|---|
bceaftersigmoid | pykeen.losses.BCEAfterSigmoidLoss | A module for the numerically unstable version of explicit Sigmoid + BCE loss. |
bcewithlogits | pykeen.losses.BCEWithLogitsLoss | A module for the binary cross entropy loss. |
crossentropy | pykeen.losses.CrossEntropyLoss | A module for the cross entropy loss that evaluates the cross entropy after softmax output. |
marginranking | pykeen.losses.MarginRankingLoss | A module for the margin ranking loss. |
mse | pykeen.losses.MSELoss | A module for the mean square error loss. |
nssa | pykeen.losses.NSSALoss | An implementation of the self-adversarial negative sampling loss function proposed by Sun et al. (2019). |
softplus | pykeen.losses.SoftplusLoss | A module for the softplus loss. |
Regularizers (5)
Name | Reference | Description |
---|---|---|
combined | pykeen.regularizers.CombinedRegularizer | A convex combination of regularizers. |
lp | pykeen.regularizers.LpRegularizer | A simple L_p norm based regularizer. |
no | pykeen.regularizers.NoRegularizer | A regularizer which does not perform any regularization. |
powersum | pykeen.regularizers.PowerSumRegularizer | A simple x^p based regularizer. |
transh | pykeen.regularizers.TransHRegularizer | A regularizer for the soft constraints in TransH. |
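The difference between the L_p and power-sum penalties can be sketched in plain Python (an illustration of the formulas only, not pykeen's API; the weight parameter is a hypothetical default):

```python
def lp_regularization(embedding, p=2, weight=0.01):
    """L_p norm penalty on an embedding vector: weight * (sum |x_i|^p)^(1/p)."""
    return weight * sum(abs(x) ** p for x in embedding) ** (1.0 / p)

def power_sum_regularization(embedding, p=2, weight=0.01):
    """Power-sum penalty: weight * sum |x_i|^p, i.e. the norm without the
    root, which is cheaper to compute and smoother to optimize."""
    return weight * sum(abs(x) ** p for x in embedding)
```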
Optimizers (6)
Name | Reference | Description |
---|---|---|
adadelta | torch.optim.Adadelta | Implements Adadelta algorithm. |
adagrad | torch.optim.Adagrad | Implements Adagrad algorithm. |
adam | torch.optim.Adam | Implements Adam algorithm. |
adamax | torch.optim.Adamax | Implements Adamax algorithm (a variant of Adam based on infinity norm). |
adamw | torch.optim.AdamW | Implements AdamW algorithm. |
sgd | torch.optim.SGD | Implements stochastic gradient descent (optionally with momentum). |
Training Loops (2)
Name | Reference | Description |
---|---|---|
lcwa | pykeen.training.LCWATrainingLoop | A training loop that uses the local closed world assumption training approach. |
slcwa | pykeen.training.SLCWATrainingLoop | A training loop that uses the stochastic local closed world assumption training approach. |
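The two approaches differ in how training instances are built, which can be sketched in plain Python (an illustration of the idea, not pykeen's implementation; for brevity, the sLCWA sketch corrupts only tails):

```python
import random

def lcwa_instances(triples, entities):
    """LCWA: for each observed (head, relation) pair, every entity not seen
    as a tail is treated as a negative, yielding one multi-label target row."""
    tails = {}
    for h, r, t in triples:
        tails.setdefault((h, r), set()).add(t)
    return {key: [1 if e in ts else 0 for e in entities]
            for key, ts in tails.items()}

def slcwa_instances(triples, entities, num_negs_per_pos=1, seed=0):
    """sLCWA: pair each positive triple with randomly corrupted negatives,
    so only a sample of the possible negatives is seen per epoch."""
    rng = random.Random(seed)
    return [((h, r, t), (h, r, rng.choice(entities)))
            for h, r, t in triples
            for _ in range(num_negs_per_pos)]
```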
Negative Samplers (2)
Name | Reference | Description |
---|---|---|
basic | pykeen.sampling.BasicNegativeSampler | A basic negative sampler. |
bernoulli | pykeen.sampling.BernoulliNegativeSampler | An implementation of the Bernoulli negative sampling approach proposed by Wang et al. (2014). |
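The distinction can be illustrated with a small sketch (not pykeen's API): the basic sampler corrupts the head or the tail uniformly, while the Bernoulli sampler chooses the corruption side per relation from its tails-per-head and heads-per-tail statistics.

```python
import random

def basic_corrupt(triple, entities, rng):
    """Basic sampling: replace the head or the tail, chosen uniformly."""
    h, r, t = triple
    if rng.random() < 0.5:
        return (rng.choice(entities), r, t)
    return (h, r, rng.choice(entities))

def bernoulli_head_probability(triples, relation):
    """Per-relation probability of corrupting the head, following Wang et al.
    (2014): tph / (tph + hpt), where tph is the average number of tails per
    head and hpt the average number of heads per tail for this relation."""
    tails_per_head, heads_per_tail = {}, {}
    for h, r, t in triples:
        if r != relation:
            continue
        tails_per_head.setdefault(h, set()).add(t)
        heads_per_tail.setdefault(t, set()).add(h)
    tph = sum(map(len, tails_per_head.values())) / len(tails_per_head)
    hpt = sum(map(len, heads_per_tail.values())) / len(heads_per_tail)
    return tph / (tph + hpt)
```

Biasing the corruption side this way reduces the chance of generating false negatives for one-to-many and many-to-one relations.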
Stoppers (2)
Name | Reference | Description |
---|---|---|
early | pykeen.stoppers.EarlyStopper | A harness for early stopping. |
nop | pykeen.stoppers.NopStopper | A stopper that does nothing. |
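The early-stopping idea can be sketched as follows (a simplified illustration with an absolute improvement threshold, assuming higher metric values are better; pykeen's EarlyStopper offers more options):

```python
class EarlyStopper:
    """Stop training once the validation metric has failed to improve by at
    least ``min_delta`` for ``patience`` consecutive evaluations."""

    def __init__(self, patience=2, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float('-inf')
        self.stale = 0

    def should_stop(self, metric):
        """Report the latest validation metric; returns True when training
        should halt."""
        if metric > self.best + self.min_delta:
            self.best = metric
            self.stale = 0
        else:
            self.stale += 1
        return self.stale >= self.patience
```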
Evaluators (2)
Name | Reference | Description |
---|---|---|
rankbased | pykeen.evaluation.RankBasedEvaluator | A rank-based evaluator for KGE models. |
sklearn | pykeen.evaluation.SklearnEvaluator | An evaluator that uses a Scikit-learn metric. |
Metrics (6)
Metric | Description | Evaluator | Reference |
---|---|---|---|
Adjusted Mean Rank | The mean over all chance-adjusted ranks: mean_i (2r_i / (num_entities+1)). Lower is better. | rankbased | pykeen.evaluation.RankBasedMetricResults |
Average Precision Score | The area under the precision-recall curve, between [0.0, 1.0]. Higher is better. | sklearn | pykeen.evaluation.SklearnMetricResults |
Hits At K | The hits at k for different values of k, i.e. the relative frequency of ranks not larger than k. Higher is better. | rankbased | pykeen.evaluation.RankBasedMetricResults |
Mean Rank | The mean over all ranks: mean_i r_i. Lower is better. | rankbased | pykeen.evaluation.RankBasedMetricResults |
Mean Reciprocal Rank | The mean over all reciprocal ranks: mean_i (1/r_i). Higher is better. | rankbased | pykeen.evaluation.RankBasedMetricResults |
Roc Auc Score | The area under the ROC curve between [0.0, 1.0]. Higher is better. | sklearn | pykeen.evaluation.SklearnMetricResults |
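Given a list of ranks (1 = best), the rank-based metrics above can be computed as follows; this is a plain-Python sketch of the formulas in the table, not pykeen's evaluator:

```python
def rank_metrics(ranks, num_entities, k=10):
    """Compute rank-based link-prediction metrics from a list of ranks."""
    n = len(ranks)
    mean_rank = sum(ranks) / n
    mrr = sum(1.0 / r for r in ranks) / n
    hits_at_k = sum(1 for r in ranks if r <= k) / n
    # Adjusted mean rank: mean of 2r / (num_entities + 1), which normalizes
    # each rank by its expected value under uniformly random scoring.
    amr = sum(2 * r / (num_entities + 1) for r in ranks) / n
    return {'mean_rank': mean_rank, 'mrr': mrr,
            f'hits@{k}': hits_at_k, 'amr': amr}
```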
Trackers (5)
Name | Reference | Description |
---|---|---|
csv | pykeen.trackers.CSVResultTracker | Tracking results to a CSV file. |
json | pykeen.trackers.JSONResultTracker | Tracking results to a JSON lines file. |
mlflow | pykeen.trackers.MLFlowResultTracker | A tracker for MLflow. |
neptune | pykeen.trackers.NeptuneResultTracker | A tracker for Neptune.ai. |
wandb | pykeen.trackers.WANDBResultTracker | A tracker for Weights and Biases. |
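The idea behind result tracking can be sketched with a minimal CSV tracker. This is illustrative only; the method name log_metrics and the column layout are assumptions, not the CSVResultTracker API:

```python
import csv

class MinimalCSVTracker:
    """Append (step, metric, value) rows to a CSV file during training."""

    def __init__(self, path):
        self.path = path
        with open(path, 'w', newline='') as f:
            csv.writer(f).writerow(['step', 'metric', 'value'])

    def log_metrics(self, metrics, step):
        """Record a dictionary of metric values for a given training step."""
        with open(self.path, 'a', newline='') as f:
            writer = csv.writer(f)
            for name, value in metrics.items():
                writer.writerow([step, name, value])
```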
Hyper-parameter Optimization
Samplers (3)
Name | Reference | Description |
---|---|---|
grid | optuna.samplers.GridSampler | Sampler using grid search. |
random | optuna.samplers.RandomSampler | Sampler using random sampling. |
tpe | optuna.samplers.TPESampler | Sampler using TPE (Tree-structured Parzen Estimator) algorithm. |
Any sampler class extending optuna.samplers.BaseSampler, such as Optuna's implementation of the CMA-ES algorithm, can also be used.
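Random search, the simplest of these strategies, can be sketched in a few lines (an illustration of the concept, not Optuna's implementation):

```python
import random

def random_search(search_space, objective, n_trials=20, seed=0):
    """Sample each hyper-parameter independently from its candidate list and
    keep the best-scoring configuration (higher objective is better)."""
    rng = random.Random(seed)
    best_config, best_score = None, float('-inf')
    for _ in range(n_trials):
        config = {name: rng.choice(choices)
                  for name, choices in search_space.items()}
        score = objective(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score
```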
Experimentation
Reproduction
PyKEEN includes a set of curated experimental settings for reproducing past landmark experiments. They can be accessed and run like:
pykeen experiments reproduce tucker balazevic2019 fb15k
where the three arguments are the model name, the reference, and the dataset. The output directory can optionally be set with the -d flag.
Ablation
PyKEEN includes the ability to specify ablation studies using the hyper-parameter optimization module. They can be run like:
pykeen experiments ablation ~/path/to/config.json
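An ablation configuration is a JSON file enumerating the components to vary. The sketch below only illustrates the idea of listing candidate datasets, models, and losses; the keys shown are assumptions, so consult the ablation documentation for the exact schema:

```json
{
  "metadata": {"title": "Loss ablation on Nations"},
  "ablation": {
    "datasets": ["nations"],
    "models": ["TransE"],
    "losses": ["marginranking", "bcewithlogits"],
    "training_loops": ["slcwa"],
    "optimizers": ["adam"]
  },
  "optuna": {"n_trials": 10}
}
```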
Large-scale Reproducibility and Benchmarking Study
We used PyKEEN to perform a large-scale reproducibility and benchmarking study, which is described in our article:
@article{ali2020benchmarking,
title={Bringing Light Into the Dark: A Large-scale Evaluation of Knowledge Graph Embedding Models Under a Unified Framework},
author={Ali, Mehdi and Berrendorf, Max and Hoyt, Charles Tapley and Vermue, Laurent and Galkin, Mikhail and Sharifzadeh, Sahand and Fischer, Asja and Tresp, Volker and Lehmann, Jens},
journal={arXiv preprint arXiv:2006.13365},
year={2020}
}
We have made all code, experimental configurations, results, and analyses that led to our interpretations available at https://github.com/pykeen/benchmarking.
Contributing
Contributions, whether filing an issue, making a pull request, or forking, are appreciated. See CONTRIBUTING.md for more information on getting involved.
Acknowledgements
Supporters
This project has been supported by several organizations (in alphabetical order):
- Bayer
- CoronaWhy
- Enveda Biosciences
- Fraunhofer Institute for Algorithms and Scientific Computing
- Fraunhofer Institute for Intelligent Analysis and Information Systems
- Fraunhofer Center for Machine Learning
- Ludwig-Maximilians-Universität München
- Munich Center for Machine Learning (MCML)
- Siemens
- Smart Data Analytics Research Group (University of Bonn & Fraunhofer IAIS)
- Technical University of Denmark - DTU Compute - Section for Cognitive Systems
- Technical University of Denmark - DTU Compute - Section for Statistics and Data Analysis
- University of Bonn
Logo
The PyKEEN logo was designed by Carina Steinborn.
Citation
If you have found PyKEEN useful in your work, please consider citing our article:
@article{ali2020pykeen,
title={PyKEEN 1.0: A Python Library for Training and Evaluating Knowledge Graph Embeddings},
author={Ali, Mehdi and Berrendorf, Max and Hoyt, Charles Tapley and Vermue, Laurent and Sharifzadeh, Sahand and Tresp, Volker and Lehmann, Jens},
journal={arXiv preprint arXiv:2007.14175},
year={2020}
}