
A Python Package for Deep Graph Networks

Project description

PyDGN

Wiki

Description

This is a Python library to easily experiment with Deep Graph Networks (DGNs). It provides automatic management of data splitting, loading, and the most common experimental settings. It also handles both model selection and risk assessment procedures, by trying many different configurations in parallel (on CPU or GPU). This repository is built upon the PyTorch Geometric library and PyTorch Geometric Temporal, which provide support for data management.

If you happen to use or modify this code, please remember to cite our tutorial paper:

Bacciu Davide, Errica Federico, Micheli Alessio, Podda Marco: A Gentle Introduction to Deep Learning for Graphs, Neural Networks, 2020. DOI: 10.1016/j.neunet.2020.06.006.

If you are interested in a rigorous evaluation of Deep Graph Networks, check this out:

Errica Federico, Podda Marco, Bacciu Davide, Micheli Alessio: A Fair Comparison of Graph Neural Networks for Graph Classification. Proceedings of the 8th International Conference on Learning Representations (ICLR 2020). Code

What's New

With PyDGN 0.7.0, we have released initial support for temporal experiments! In particular, we can now tackle node prediction on single-graph sequence tasks. We rely on PyTorch Geometric Temporal for the definition of data and models.

Installation:

(We assume git and Miniconda/Anaconda are installed)

Be sure that the variable $LD_LIBRARY_PATH contains :/home/[your user name]/miniconda3/lib. Then run the following commands from your terminal:

source setup/install.sh [<your_cuda_version>]
pip install pydgn

Here <your_cuda_version> is an optional argument that can be cpu, cu102, or cu113 (for PyTorch >= 1.10.0). If you do not provide a CUDA version, the script defaults to cpu. The script creates a virtual environment named pydgn with all the packages required to run our code. Important: do NOT run this command using bash instead of source!
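If $LD_LIBRARY_PATH does not already contain the Miniconda lib folder, you can extend it before running the install script. A minimal example, assuming a default Miniconda installation in your home directory (adapt the path to your own setup):

export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/home/[your user name]/miniconda3/lib"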

Remember that the PyTorch macOS binaries do not support CUDA; install PyTorch from source if CUDA is needed.

Usage:

Preprocess your dataset (see also Wiki)

python build_dataset.py --config-file [your data config file]

Example

python build_dataset.py --config-file DATA_CONFIGS/config_PROTEINS.yml 
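A data config is a YAML file describing which dataset to build, where to store it, and which splits to use. Purely as an illustration (the key names below are hypothetical and may not match the real schema; see DATA_CONFIGS/config_PROTEINS.yml and the Wiki for a working example), it gathers the same information you later pass to launch_experiment.py:

# hypothetical sketch of a data config, NOT the actual PyDGN schema
dataset_name: PROTEINS
dataset_class: pydgn.data.dataset.TUDatasetInterface
data_root: DATA
data_splits_file: DATA_SPLITS/CHEMICAL/PROTEINS/PROTEINS_outer10_inner1.splits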

Launch an experiment in debug mode (see also Wiki)

python launch_experiment.py --config-file [your exp. config file] --splits-folder [the splits MAIN folder] --data-splits [the splits file] --data-root [root folder of your data] --dataset-name [name of the dataset] --dataset-class [class that handles the dataset] --max-cpus [max cpu parallelism] --max-gpus [max gpu parallelism] --gpus-per-task [how many gpus to allocate for each job] --final-training-runs [how many final runs when evaluating on test. Results are averaged] --result-folder [folder where to store results]

Example (GPU required)

python launch_experiment.py --config-file MODEL_CONFIGS/config_SupToyDGN_RandomSearch.yml --splits-folder DATA_SPLITS/CHEMICAL/ --data-splits DATA_SPLITS/CHEMICAL/PROTEINS/PROTEINS_outer10_inner1.splits --data-root DATA --dataset-name PROTEINS --dataset-class pydgn.data.dataset.TUDatasetInterface --max-cpus 1 --max-gpus 1 --final-training-runs 1 --result-folder RESULTS/DEBUG

To debug your code it is useful to add --debug to the command above. Notice, however, that the command-line interface will not work as expected here, because the code is executed sequentially. After debugging, if you still need sequential execution, you can use --max-cpus 1 --max-gpus 1 --gpus-per-task [0/1] without the --debug option.

Grid Search 101

Have a look at one of the provided config files (e.g., in the MODEL_CONFIGS folder).
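For intuition, a grid search maps each hyper-parameter to the list of values to try, and every combination is evaluated. The snippet below is only a hedged sketch (the hyper-parameter names are made up, and the exact nesting follows the provided config files):

# hedged sketch only; the provided config files are the reference
grid:
  lr:
    - 0.01
    - 0.001
  hidden_units:
    - 32
    - 64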

Random Search 101

Specify num_samples in the config file (the number of random trials), replace grid with random, and specify a sampling method for each hyper-parameter. We provide the following sampling methods:

  • choice --> pick at random from a list of arguments
  • uniform --> pick uniformly between the min and max arguments
  • normal --> sample from a normal distribution with the given mean and std
  • randint --> pick an integer at random between min and max
  • loguniform --> pick following the reciprocal distribution between log_min and log_max, with a specified base

See the config file config_SupToyDGN_RandomSearch.yml for a complete example.
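For intuition, here is a hedged sketch of a random search section (the hyper-parameter names are made up and the exact key names may differ; the provided config file is the reference): num_samples sets the number of trials, random replaces grid, and each hyper-parameter gets a sampling method with its arguments.

# hedged sketch only; check config_SupToyDGN_RandomSearch.yml for the real syntax
num_samples: 20
random:
  lr:
    sample_method: loguniform
    args: [0.0001, 0.01]   # log_min, log_max (a base can also be specified)
  dropout:
    sample_method: uniform
    args: [0.0, 0.5]       # min, max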

Data Splits

We provide the data splits taken from

Errica Federico, Podda Marco, Bacciu Davide, Micheli Alessio: A Fair Comparison of Graph Neural Networks for Graph Classification. Proceedings of the 8th International Conference on Learning Representations (ICLR 2020). Code

in the DATA_SPLITS folder.

Credits:

This is a joint project with Marco Podda (GitHub / Homepage), whom I thank for his relentless dedication.

Many thanks to Antonio Carta (GitHub / Homepage) for incorporating the Ray library (see v0.4.0) into PyDGN! This will be of tremendous help.

Many thanks to Danilo Numeroso (GitHub / Homepage) for implementing a very flexible random search! This is a very convenient alternative to grid search.

Many thanks to Alessio Gravina (GitHub / Homepage) for his invaluable help and expertise regarding the implementation of PyDGN temporal. We still have a lot of work to do!

Contributing

This research software is provided as-is. We are working on this library in our spare time.

If you find a bug, please open an issue to report it, and we will do our best to solve it. For generic/technical questions, please email us rather than opening an issue.

License:

PyDGN is GPL 3.0 licensed, as written in the LICENSE file.


Download files

Download the file for your platform.

Source Distribution

PyDGN-0.7.3.tar.gz (86.5 kB)

Uploaded Source

Built Distribution

PyDGN-0.7.3-py3-none-any.whl (114.6 kB)

Uploaded Python 3

File details

Details for the file PyDGN-0.7.3.tar.gz.

File metadata

  • Download URL: PyDGN-0.7.3.tar.gz
  • Upload date:
  • Size: 86.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.8.0 pkginfo/1.8.2 readme-renderer/32.0 requests/2.27.1 requests-toolbelt/0.9.1 urllib3/1.26.8 tqdm/4.62.3 importlib-metadata/4.10.1 keyring/23.5.0 rfc3986/2.0.0 colorama/0.4.4 CPython/3.8.10

File hashes

Hashes for PyDGN-0.7.3.tar.gz
Algorithm Hash digest
SHA256 4dd2e6ae281418f0ec876e073b2bf43d7aee376275f1a6bfa4b559152aa80a3a
MD5 3f01cf361b40b4e86aad6c5ff95998b2
BLAKE2b-256 52e4a8c7fedaf4da1d440100c41b23de9ccd45accb3683d4e3037c20f3e9af1c



File details

Details for the file PyDGN-0.7.3-py3-none-any.whl.

File metadata

  • Download URL: PyDGN-0.7.3-py3-none-any.whl
  • Upload date:
  • Size: 114.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.8.0 pkginfo/1.8.2 readme-renderer/32.0 requests/2.27.1 requests-toolbelt/0.9.1 urllib3/1.26.8 tqdm/4.62.3 importlib-metadata/4.10.1 keyring/23.5.0 rfc3986/2.0.0 colorama/0.4.4 CPython/3.8.10

File hashes

Hashes for PyDGN-0.7.3-py3-none-any.whl
Algorithm Hash digest
SHA256 5b36676d1fc69486895d1cb7a497f954b3b500d3545a0e79aa76afb4998852c5
MD5 302bc6dd5a03269d4774310d8783dd27
BLAKE2b-256 954bfc20f338048c881353623cbd4d25bde3f88698af8daab8b37adebdfb4a70


