
A package for training and evaluating knowledge graph embeddings

Project description

PyKEEN (Python KnowlEdge EmbeddiNgs) is a package for training and evaluating knowledge graph embeddings. Currently, it provides implementations of 10 knowledge graph embedding models, and it can be run in training mode, in which users provide their own set of hyper-parameter values, or in hyper-parameter optimization mode, which finds suitable hyper-parameter values from a set of user-defined values. PyKEEN can also be used without programming experience through its interactive command line interface, which can be started with the command pykeen from a terminal.
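In training mode, for example, the hyper-parameter values are supplied through a JSON configuration file. The snippet below sketches what such a file might contain; the key names are hypothetical placeholders, not PyKEEN's exact schema, so consult the documentation for the fields your version expects.

```python
import json

# Illustrative training configuration; these key names are hypothetical
# placeholders, not PyKEEN's exact configuration schema.
config = {
    "training_set_path": "data/training_triples.tsv",
    "kg_embedding_model_name": "TransE",
    "embedding_dim": 50,
    "learning_rate": 0.01,
    "num_epochs": 100,
    "batch_size": 64,
}

# Write the configuration to disk so it can be passed to the CLI.
with open("config.json", "w") as f:
    json.dump(config, f, indent=2)
```

A file like this can then be passed to the CLI (see the CLI usage section below).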

News

We are currently working on PyKEEN 0.1.0, which will provide additional features such as several negative sampling approaches and further evaluation metrics. Furthermore, we are integrating additional KGE models.

We are developing a new package for training and evaluating multimodal KGE models, which will later be integrated into PyKEEN.

Citation

If you find PyKEEN useful in your work, please consider citing:

Share Your Experimental Artifacts

You can share your trained KGE models along with the other experimental artifacts through the KEEN Model Zoo.

Installation

To install pykeen, Python 3.6+ is required, and we recommend installing it on Linux or Mac OS systems. Please run the following command:

pip install pykeen

Alternatively, it can be installed from the source for development with:

$ git clone https://github.com/SmartDataAnalytics/PyKEEN.git pykeen
$ cd pykeen
$ pip install -e .

However, GPU acceleration is limited to Linux systems with the appropriate graphics cards as described in the PyTorch documentation.

Installing Extras with Pip

PyKEEN uses pip’s extras functionality to allow some non-essential features to be skipped. They can be installed with the following:

  1. pip install pykeen[ndex] enables support for loading networks from NDEx. Networks can be added to the training file paths by prefixing files with ndex:

Contributing

Contributions, whether filing an issue, making a pull request, or forking, are appreciated. See CONTRIBUTING.rst for more information on getting involved.

Tutorials

Code examples can be found in the notebooks directory.

Further tutorials are available in our documentation.

CLI Usage - Set Up Your Experiment within 60 seconds

To start the PyKEEN CLI, run the following command:

pykeen

then the command line interface will assist you in configuring your experiments.

To start PyKEEN with an existing configuration file, run:

pykeen -c /path/to/config.json

then the command line interface won't be started; instead, the pipeline will run immediately.

Starting the Prediction Pipeline

To make predictions based on a trained model, run:

pykeen-predict -m /path/to/model/directory -d /path/to/data/directory

where the value for the argument -m is the directory containing the model. In particular, the following files must be contained in the directory:

  • configuration.json

  • entities_to_embeddings.json

  • relations_to_embeddings.json

  • trained_model.pkl

These files are automatically created after a model is trained (and evaluated) and are exported to your specified output directory.
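As a quick sanity check before invoking pykeen-predict, you can verify that the model directory actually contains those four files. A small illustrative helper (pykeen-predict performs its own checks; this is just a sketch):

```python
import os

# The four files listed above, expected in the model directory.
REQUIRED_FILES = [
    "configuration.json",
    "entities_to_embeddings.json",
    "relations_to_embeddings.json",
    "trained_model.pkl",
]

def missing_model_files(model_dir):
    """Return the required files that are absent from model_dir."""
    return [name for name in REQUIRED_FILES
            if not os.path.isfile(os.path.join(model_dir, name))]

# Example: a freshly created, empty directory is missing all four files.
os.makedirs("model_dir_example", exist_ok=True)
print(missing_model_files("model_dir_example"))
```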

The value for the argument -d is the directory containing the data for which inference should be applied, and it needs to contain the following files:

  • entities.tsv

  • relations.tsv

where entities.tsv contains all entities of interest, and relations.tsv all relations. Both files should consist of a single column listing the entities/relations. Based on these files, PyKEEN creates all triple permutations, computes predictions for them, and saves the results in the data directory as predictions.tsv. Note: the model directory and the data directory can be the same directory as long as all required files are provided.
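The enumeration step can be pictured with a few lines of Python; this is a rough sketch of the idea, not PyKEEN's actual implementation:

```python
# Rough sketch of the triple enumeration, not PyKEEN's implementation:
# every (head, relation, tail) combination of the listed entities and
# relations is generated as a candidate triple.
entities = ["berlin", "germany", "paris"]   # contents of entities.tsv
relations = ["capital_of", "located_in"]    # contents of relations.tsv

triples = [(h, r, t) for h in entities for r in relations for t in entities]

print(len(triples))  # 3 entities x 2 relations x 3 entities = 18 candidates
```

A trained model then assigns a plausibility score to each candidate.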

Optionally, a set of triples can be provided that should be excluded from the prediction, e.g., all the triples contained in the training set:

pykeen-predict -m /path/to/model/directory -d /path/to/data/directory -t /path/to/triples.tsv

Hence, it is easy to compute plausibility scores for all triples that are not contained in the training set.

Summarize the Results of All Experiments

To summarize the results of all experiments, please provide the path to the parent directory containing all the experiments as sub-directories, and the path to the output file:

pykeen-summarize -d /path/to/experiments/directory -o /path/to/output/file.csv
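Conceptually, the summarizer walks the experiment sub-directories and merges each run's results into one CSV. The sketch below illustrates that idea under the assumption that each sub-directory holds a results.json; both the file name and its fields are hypothetical here, and the actual format is defined by pykeen-summarize itself.

```python
import csv
import json
import os

def summarize(experiments_dir, output_csv):
    """Collect a hypothetical results.json from each experiment
    sub-directory into a single CSV (illustrative sketch only)."""
    rows = []
    for name in sorted(os.listdir(experiments_dir)):
        path = os.path.join(experiments_dir, name, "results.json")
        if os.path.isfile(path):
            with open(path) as f:
                row = json.load(f)
            row["experiment"] = name  # remember which run it came from
            rows.append(row)
    if rows:
        # Union of all keys, so runs with differing metrics still align.
        fieldnames = sorted({key for row in rows for key in row})
        with open(output_csv, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=fieldnames)
            writer.writeheader()
            writer.writerows(rows)
    return len(rows)
```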



Download files

Download the file for your platform.

Source Distribution

pykeen-0.0.26.tar.gz (562.0 kB)

Uploaded Source

Built Distribution

pykeen-0.0.26-py36-none-any.whl (76.1 kB)

Uploaded Python 3.6

File details

Details for the file pykeen-0.0.26.tar.gz.

File metadata

  • Download URL: pykeen-0.0.26.tar.gz
  • Upload date:
  • Size: 562.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.13.0 pkginfo/1.5.0.1 requests/2.22.0 setuptools/41.0.1 requests-toolbelt/0.9.1 tqdm/4.33.0 CPython/3.7.3

File hashes

Hashes for pykeen-0.0.26.tar.gz

  • SHA256: 6ca79a72fad85ec328bd8aee0a45bcd047494ce7be0fc8d88518541f68c71ba9

  • MD5: 102a758f0082465408c82d685d5a2453

  • BLAKE2b-256: 6607b4e1c9aacb60d30037fe58b08865eee0391b216cfbba1aaba4e21006762b
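These digests let you verify a download's integrity locally, e.g. with Python's standard hashlib:

```python
import hashlib

def sha256_of(path, chunk_size=8192):
    """Compute the SHA256 hex digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# After downloading the sdist, compare against the published digest:
expected = "6ca79a72fad85ec328bd8aee0a45bcd047494ce7be0fc8d88518541f68c71ba9"
# assert sha256_of("pykeen-0.0.26.tar.gz") == expected
```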


File details

Details for the file pykeen-0.0.26-py36-none-any.whl.

File metadata

  • Download URL: pykeen-0.0.26-py36-none-any.whl
  • Upload date:
  • Size: 76.1 kB
  • Tags: Python 3.6
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.13.0 pkginfo/1.5.0.1 requests/2.22.0 setuptools/41.0.1 requests-toolbelt/0.9.1 tqdm/4.33.0 CPython/3.7.3

File hashes

Hashes for pykeen-0.0.26-py36-none-any.whl

  • SHA256: f000638710cf9d05773f6a99ae6f2818956c5be96928a28ad1ff66ce18bdee76

  • MD5: fff57c5d4576a717f211505455f33cfa

  • BLAKE2b-256: f0b915a21f9cc4d355e5de0c42547486ab31aa3f97c46fa655199d12dfa4d126

