Graphium: Scaling molecular GNNs to infinity.

[Badges: Run on Gradient · PyPI · Conda · PyPI Downloads · license · GitHub stars · test · test-ipu · release · code-check · doc · codecov · hydra]

A deep learning library focused on graph representation learning for real-world chemical tasks.

  • ✅ State-of-the-art GNN architectures.
  • 🐍 Extensible API: build your own GNN model and train it with ease.
  • ⚗️ Rich featurization: powerful and flexible built-in molecular featurization.
  • 🧠 Pretrained models: for fast and easy inference or transfer learning.
  • ⮔ Ready-to-use training loop based on PyTorch Lightning.
  • 🔌 Have a new dataset? Graphium provides a simple plug-and-play interface. Change the path, the names of the columns to predict, and the atomic featurization, and you’re ready to play! (A config sketch follows this list.)
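For illustration, plugging in a new dataset is mostly a matter of editing a small hydra config. The sketch below is an assumption-labeled example: key names such as df_path, smiles_col, and label_cols follow the datamodule override style shown later on this page, but the authoritative schema lives in the documentation.

# Hypothetical dataset config sketch -- key names are illustrative,
# not Graphium's exact schema; see the documentation for the real fields.
datamodule:
  args:
    df_path: data/my_molecules.csv       # path to your dataset
    smiles_col: smiles                   # column containing SMILES strings
    label_cols: [solubility, toxicity]   # columns to predict
    featurization:
      atom_property_list_onehot: ["atomic-number", "degree"]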

Documentation

Visit https://graphium-docs.datamol.io/.

Run on Gradient

You can try running Graphium on Graphcore IPUs for free on Gradient by clicking on the button above.

Installation for developers

For CPU and GPU developers

Use mamba, a faster drop-in replacement for conda.

If you are using a GPU, we recommend enforcing the CUDA version that you need with CONDA_OVERRIDE_CUDA=XX.X.

# Install Graphium's dependencies in a new environment named `graphium`
mamba env create -f env.yml -n graphium

# To force the CUDA version to 11.2, or any other version you prefer, use the following command:
# CONDA_OVERRIDE_CUDA=11.2 mamba env create -f env.yml -n graphium

# Install Graphium in dev mode
mamba activate graphium
pip install --no-deps -e .

For IPU developers

# Install Graphcore's SDK and Graphium dependencies in a new environment called `.graphium_ipu`
./install_ipu.sh .graphium_ipu

The above step only needs to be done once. After that, enable the SDK and the environment as follows:

source enable_ipu.sh .graphium_ipu

Training a model

To learn how to train a model, we invite you to look at the documentation or the Jupyter notebooks available here.

If you are not familiar with PyTorch or PyTorch Lightning, we highly recommend going through their tutorials first.

Running an experiment

We have set up Graphium with hydra for managing config files. To run an experiment, go to the expts/ folder. For example, to benchmark a GCN on the ToyMix dataset, run:

graphium-train dataset=toymix model=gcn

To change parameters specific to this experiment, such as switching from fp16 to fp32 precision, you can either override them directly on the CLI via

graphium-train dataset=toymix model=gcn trainer.trainer.precision=32

or change them permanently in the dedicated experiment config under expts/hydra-configs/toymix_gcn.yaml (see the sketch after the fine-tuning example below). Integrating hydra also allows you to quickly switch between accelerators. For example, running

graphium-train dataset=toymix model=gcn accelerator=gpu

automatically selects the correct configs to run the experiment on a GPU. Finally, you can also run a fine-tuning loop:

graphium-train +finetuning=admet
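
As referenced above, the same precision change can be made permanent in the experiment config. A minimal sketch, assuming expts/hydra-configs/toymix_gcn.yaml nests keys the same way as the CLI's dotted path (check the actual file for its layout):

# Sketch only: mirrors the `trainer.trainer.precision=32` CLI override.
trainer:
  trainer:
    precision: 32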

To use a config file you built from scratch, you can run

graphium-train --config-path [PATH] --config-name [CONFIG]

Thanks to the modular nature of hydra, you can reuse many of our config settings for your own experiments with Graphium.

Preparing the data in advance

Data preparation, including featurization (e.g., converting molecules from SMILES to a PyG-compatible format), is embedded in the pipeline and is performed when executing graphium-train [...].

However, when working with larger datasets, it is recommended to perform the data preparation in advance on a machine with sufficient memory (e.g., ~400 GB for LargeMix). Preparing the data in advance is also beneficial when running many concurrent jobs with identical molecular featurization, so that resources aren't wasted and processes don't conflict while reading/writing in the same directory.

The following commands prepare the data and cache it, then use the cache to train a model.

# First prepare the data and cache it in `path_to_cached_data`
graphium data prepare ++datamodule.args.processed_graph_data_path=[path_to_cached_data]

# Then train the model on the prepared data
graphium-train [...] datamodule.args.processed_graph_data_path=[path_to_cached_data]

Note that datamodule.args.processed_graph_data_path can also be specified in the configs under expts/hydra-configs/.

Note that every time the datamodule.args.featurization config changes, the data preparation must be run again; its output is automatically saved to a separate directory named with a hash unique to that config.
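
For instance, assuming the featurization block is exposed under datamodule.args.featurization as in the overrides above (the key below is illustrative), a change like this:

# Illustrative only: any edit under `featurization` alters the config hash.
datamodule:
  args:
    featurization:
      atom_property_list_onehot: ["atomic-number", "formal-charge"]  # added "formal-charge"

means previously cached graphs are not reused; the next graphium data prepare run writes to a fresh hash-named subdirectory alongside the old one.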

License

Under the Apache-2.0 license. See LICENSE.

Documentation

  • Diagram for data processing in Graphium: [Data Processing Chart]
  • Diagram for the multi-task network in Graphium: [Full Graph Multi-task Network]
