GOOD: A Graph Out-of-Distribution Benchmark
We are actively building the document.
Overview
GOOD (Graph OOD) is a graph out-of-distribution (OOD) algorithm benchmarking library built on PyTorch and PyG that makes it easy to develop and benchmark OOD algorithms.
Currently, GOOD contains 8 datasets with 14 domain selections. Combined with covariate, concept, and no-shift splits, we obtain 42 different splits. We provide performance results for 7 commonly used baseline methods (ERM, IRM, VREx, GroupDRO, Coral, DANN, Mixup), each with 10 random runs.
[Figure: GOOD dataset summaries]
Why GOOD?
Whether you are an experienced graph out-of-distribution researcher or a first-time learner of graph deep learning, here are several reasons to use GOOD as your graph OOD research, study, and development toolkit.
- Easy-to-use APIs: GOOD provides simple APIs for loading OOD algorithms, graph neural networks, and datasets, so you can get started with only a few lines of code.
- Flexibility: Full OOD split generation code is provided for extensions and any new graph OOD dataset contributions. The OOD algorithm base class can easily be overwritten to create new OOD methods.
- Easy-to-extend architecture: In addition to working as a package, GOOD is an integrated and well-organized project ready to be developed further. All algorithms, models, and datasets can easily be registered with `register` and are automatically embedded into the designed pipeline like a breeze! All you need to do is write your own OOD algorithm class, model class, or dataset class (see the sketch after this list); then you can compare your results with the leaderboard.
- Easy comparisons with the leaderboard: We provide insightful comparisons from multiple perspectives. Any research and studies can use our leaderboard results for comparison. Note that this is a growing project, so we will gradually include new OOD algorithms. If you would like your algorithm included in the leaderboard, please contact us or contribute to this project. A big welcome!
- Reproducibility:
  - OOD datasets: GOOD provides full OOD split generation code to reproduce or generate new datasets.
  - Leaderboard results: Results from one random-seed round are provided, and the loaded checkpoints pass the test-result reproduction check.
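As promised above, here is a minimal sketch of registering a custom OOD algorithm. The decorator name `ood_alg_register`, the base-class path, and the hook signature are assumptions modeled on the project layout shown in this document; verify them against your installed version.

```python
# A minimal sketch of registering a custom OOD algorithm.
# ASSUMPTIONS: `register.ood_alg_register` and the `BaseOODAlg` import path
# are illustrative names; check the installed package for the exact ones.
from GOOD import register
from GOOD.ood_algorithms.algorithms.BaseOOD import BaseOODAlg


@register.ood_alg_register
class MyOODAlg(BaseOODAlg):
    def __init__(self, config):
        super().__init__(config)

    def loss_postprocess(self, loss, data, mask, config, **kwargs):
        # Add your OOD penalty on top of the task loss here (signature illustrative).
        return loss
```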
Installation
Conda dependencies
GOOD depends on PyTorch (>=1.6.0), PyG (>=2.0), and RDKit (>=2020.09.5). For more details, see the conda environment.
Note that we currently test with PyTorch (==1.10.1), PyG (==2.0.3), and RDKit (==2020.09.5); thus we strongly encourage installing these versions.
Attention! Due to a known issue, please install PyG through Pip to avoid incompatibility.
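For reference, here is a hedged sketch of an environment matching the tested versions; the exact channel and CUDA options depend on your platform, so treat these commands as a starting point rather than an official recipe.

```bash
# A hedged environment sketch; adjust CUDA variants/channels for your machine.
conda create -n good python=3.8
conda activate good
conda install pytorch==1.10.1 -c pytorch
conda install -c conda-forge rdkit=2020.09.5
# Install PyG via pip, per the note above; PyG of this era may additionally
# require torch-scatter/torch-sparse wheels matching your torch/CUDA build.
pip install torch-geometric==2.0.3
```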
Pip (Beta)
Use modules independently (pending):
```bash
pip install graph-ood
```
Take advantage of the whole project (recommended):
```bash
git clone https://github.com/divelab/GOOD.git && cd GOOD
pip install -e .
```
Quick Tutorial
Module usage
GOOD datasets
There are two ways to import the 8 GOOD datasets with 14 domain selections and a total of 42 splits; for simplicity, we only show one of them. Please refer to the Tutorial for more details.
```python
# Directly import
from GOOD.data.good_datasets.good_hiv import GOODHIV

dataset_root = 'datasets'  # root directory where datasets are stored/downloaded
hiv_datasets, hiv_meta_info = GOODHIV.load(dataset_root, domain='scaffold', shift='covariate', generate=False)
```
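From here, the splits can be wrapped in standard PyG data loaders. The sketch below assumes `hiv_datasets` behaves like a mapping from split names (e.g., 'train', 'val', 'test') to PyG datasets; the exact keys may differ in your version.

```python
# A hedged sketch: wrap the loaded splits in PyG DataLoaders.
# ASSUMPTION: `hiv_datasets` maps split names to PyG datasets.
from torch_geometric.loader import DataLoader

train_loader = DataLoader(hiv_datasets['train'], batch_size=32, shuffle=True)
val_loader = DataLoader(hiv_datasets['val'], batch_size=32)

for batch in train_loader:
    print(batch)  # a batched torch_geometric.data.Batch object
    break
```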
GOOD GNNs
The fairest way to compare algorithms with the leaderboard is to use the same or a similar graph encoder structure; therefore, we provide GOOD GNN APIs to support this. Here, we use an objectified dictionary `config` to pass parameters. For more details about the config, see the config documentation.
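If you want to experiment without the full pipeline, any attribute-accessible dictionary can stand in for the config. The field names below are purely illustrative assumptions; the real config is built from a YAML file by `config_summoner` (see Project usage below).

```python
# A minimal stand-in for the objectified dictionary config.
# ASSUMPTION: the field names (model.dim_hidden, dataset.num_classes, ...)
# are illustrative; consult the config documentation for the real schema.
from munch import Munch  # pip install munch

config = Munch(
    model=Munch(dim_hidden=300, model_layer=5, dropout_rate=0.5),
    dataset=Munch(num_classes=2),
)
```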
To use an exact GNN:
```python
from GOOD.networks.models.GCNs import GCN
model = GCN(config)
```
To use only part of a GNN:
```python
from GOOD.networks.models.GINvirtualnode import GINEncoder
encoder = GINEncoder(config)
```
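As a usage example, an encoder obtained this way can be wrapped in your own model. The encoder's forward signature and the config fields below are assumptions; check the encoder class in your installed version.

```python
# A hedged sketch: wrap a GOOD encoder in a custom model.
# ASSUMPTIONS: the encoder returns graph-level embeddings of size
# config.model.dim_hidden; config field names are illustrative.
import torch.nn as nn


class MyModel(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.encoder = GINEncoder(config)
        self.classifier = nn.Linear(config.model.dim_hidden, config.dataset.num_classes)

    def forward(self, *args, **kwargs):
        graph_repr = self.encoder(*args, **kwargs)  # graph-level embeddings
        return self.classifier(graph_repr)
```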
GOOD algorithms
Want to apply OOD algorithms to your own models?
```python
from GOOD.ood_algorithms.algorithms.VREx import VREx
ood_algorithm = VREx(config)
# Then you can provide it to your model for necessary OOD parameters,
# and use its hook-like functions to process your input, output, and loss.
```
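To make the hook-like usage concrete, here is a sketch of one training step. The hook names and call order are assumptions modeled on the comment above; consult the OOD algorithm base class for the exact signatures.

```python
# A hedged sketch of one training step using the algorithm's hooks.
# ASSUMPTIONS: hook names (input_preprocess, output_postprocess,
# loss_postprocess) and their signatures are illustrative.
def train_step(model, data, ood_algorithm, optimizer, criterion):
    optimizer.zero_grad()
    data = ood_algorithm.input_preprocess(data)           # e.g., environment-aware batching
    raw_output = model(data)
    pred = ood_algorithm.output_postprocess(raw_output)   # e.g., strip auxiliary heads
    loss = criterion(pred, data.y)                        # standard task loss
    loss = ood_algorithm.loss_postprocess(loss)           # e.g., add the VREx variance penalty
    loss.backward()
    optimizer.step()
    return loss.item()
```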
Project usage
A good way to begin is to run the project directly. Here, we provide the command-line script goodtg (GOOD to go) to access the main function located at GOOD.kernel.pipeline:main. Choosing a config file in configs/GOOD_configs, we can start a task:
```bash
goodtg --config_path GOOD_configs/GOODCMNIST/color/concept/DANN.yaml
```
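Because the baselines share one directory layout, it is easy to sweep several algorithms over a single dataset/domain/shift combination. The sketch below assumes each baseline has a config file at the analogous path.

```bash
# A hedged sketch: sweep several baselines over one dataset/domain/shift.
# ASSUMPTION: each algorithm has a config at the analogous path.
for alg in ERM IRM VREx GroupDRO Coral DANN Mixup; do
    goodtg --config_path GOOD_configs/GOODCMNIST/color/concept/${alg}.yaml
done
```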
Specifically, the task is clearly divided into three steps:
- Config:
```python
from GOOD import config_summoner
from GOOD.utils.args import args_parser
from GOOD.utils.logger import load_logger

args = args_parser()
config = config_summoner(args)
load_logger(config)
```
- Loader:
```python
from GOOD.kernel.pipeline import initialize_model_dataset
from GOOD.ood_algorithms.ood_manager import load_ood_alg

model, loader = initialize_model_dataset(config)
ood_algorithm = load_ood_alg(config.ood.ood_alg, config)
```
- Train/test pipeline:
```python
from GOOD.kernel.pipeline import load_task

load_task(config.task, model, loader, ood_algorithm, config)
```
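Putting the three steps together, a complete run is just a short script (all calls exactly as shown above):

```python
# End-to-end: config -> loader -> train/test, combining the three steps above.
from GOOD import config_summoner
from GOOD.utils.args import args_parser
from GOOD.utils.logger import load_logger
from GOOD.kernel.pipeline import initialize_model_dataset, load_task
from GOOD.ood_algorithms.ood_manager import load_ood_alg

args = args_parser()
config = config_summoner(args)
load_logger(config)

model, loader = initialize_model_dataset(config)
ood_algorithm = load_ood_alg(config.ood.ood_alg, config)

load_task(config.task, model, loader, ood_algorithm, config)
```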
Please refer to the Tutorial for more details.
Reproducibility
For reproducibility, we provide full configurations used to obtain leaderboard results in configs/GOOD_configs.
We further provide two tests: a dataset regeneration test and a test-result check.
Dataset regeneration test
This test regenerates all datasets and compares them with the datasets used in the original training process. Test details can be found in test_regenerate_datasets.py. For a quick review, we provide a full regeneration test report.
Leaderboard results test
This test loads all checkpoints from round 1 and compares their results with the saved ones. Test details can be found in test_reproduce_round1.py. For a quick review, we also post our full round-1 reproduction report.
These reports are in HTML format; please download them and open them in your browser. :)
Training plots: The training plots for all algorithms in round 1 can be found HERE.
Sampled tests
To keep our code valid at all times, we link our project with the CircleCI service and provide several sampled tests to run (due to the limited computational resources on CI platforms).
Discussion
Please submit new issues or start a new discussion for any technical or other questions.
Contact
Please feel free to contact Shurui Gui, Xiner Li, or Shuiwang Ji!