
Learning Queueing Policies for Organ Transplantation Allocation using Interpretable Counterfactual Survival Analysis


J. Berrevoets, A. M. Alaa, Z. Qian, J. Jordon, A. E. S. Gimson, M. van der Schaar [ICML 2021]



In this repository we provide code for our ICML 2021 paper introducing OrganSync, a novel organ-to-patient allocation system. Note that this code is intended for research purposes only and not for use in practice.

In our paper we benchmark against OrganITE, an allocation system we introduced in a previous paper. We have reimplemented OrganITE (as well as the other benchmarks) in this repository using the same frameworks, so that all code is directly comparable. For the original implementations, we refer to OrganITE's dedicated repository.

Code author: J. Berrevoets (jb2384@cam.ac.uk)

Repository structure

This repository is organised as follows:

organsync/
    |- src/
        |- organsync/                       # Python library core
            |- data/                        # code to preprocess data
            |- eval_policies/               # code to run allocation simulations
            |- models/                      # code for inference models
    |- experiments/
        |- data                             # data modules
        |- models                           # training logic for models
        |- notebooks/wandb
            |- simulation_tests.ipynb       # experiments in Tab.1
            |- a_composition                # experiments in Fig.3
            |- sc_influence.ipynb           # experiments in Fig.4, top row
            |- rep_influence.ipynb          # experiments in Fig.4, bottom row
    |- test                                 # unit tests
    |- data                                 # datasets

Installing

We have provided a requirements.txt file:

pip install -r requirements.txt
pip install .

Please run the above in a newly created virtual environment to avoid clashing dependencies. All code was written for Python 3.8.6.

Available Models

| Model | Paper | Code |
| --- | --- | --- |
| OrganSync | Learning Queueing Policies for Organ Transplantation Allocation using Interpretable Counterfactual Survival Analysis | Code |
| OrganITE | OrganITE: Optimal transplant donor organ offering using an individual treatment effect | Code |
| TransplantBenefit | Policies and guidance | Code |
| MELD | A model to predict poor survival in patients undergoing transjugular intrahepatic portosystemic shunts | Code |
| MELDna | Hyponatremia and Mortality among Patients on the Liver Transplant Waiting List | Code |
| MELD3 | MELD 3.0: The Model for End-Stage Liver Disease Updated for the Modern Era | Code |
| UKELD | Selection of patients for liver transplantation and allocation of donated livers in the UK | Code |
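For orientation, the classical MELD and MELD-Na scores referenced above can be sketched with the standard library alone. This is an illustrative implementation of the widely published formulas (with the usual clamping conventions), not the code used in this repository:

```python
import math

def meld(bilirubin: float, inr: float, creatinine: float) -> int:
    """Classical MELD score; bilirubin and creatinine in mg/dL.

    Inputs below 1.0 are floored at 1.0, and creatinine is capped at 4.0,
    following the usual clinical convention.
    """
    b = max(bilirubin, 1.0)
    i = max(inr, 1.0)
    c = min(max(creatinine, 1.0), 4.0)
    score = 3.78 * math.log(b) + 11.2 * math.log(i) + 9.57 * math.log(c) + 6.43
    return round(score)

def meld_na(bilirubin: float, inr: float, creatinine: float, sodium: float) -> int:
    """MELD-Na: adjusts MELD for serum sodium (mmol/L), clamped to [125, 137]."""
    m = meld(bilirubin, inr, creatinine)
    na = min(max(sodium, 125.0), 137.0)
    return round(m + 1.32 * (137.0 - na) - 0.033 * m * (137.0 - na))
```

With all labs at 1.0 and sodium at 137, both scores reduce to the constant term and round to 6.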

Used frameworks

We make extensive use of Weights & Biases (W&B) to log model performance as well as trained model weights. To run our code, we recommend creating a W&B account (here) if you don't have one already. All code is written in PyTorch and PyTorch Lightning.

Running experiments

As indicated above, each notebook represents one experiment. The comments in the project hierarchy indicate the figure or table in the paper where each experiment is presented. As a side note, in order to run the simulation experiments (experiments/notebooks/wandb/simulation_tests.ipynb), you will first need to have trained the relevant inference models if the allocation policy requires them.
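To give a flavour of what an allocation simulation involves, here is a minimal, self-contained sketch of a score-based queueing policy. All names here are hypothetical; the actual simulation code in src/organsync/eval_policies/ is considerably more involved:

```python
import heapq
from typing import Dict, List, Tuple

def allocate(patients: List[Tuple[str, float]], organs: List[str]) -> Dict[str, str]:
    """Greedily offer each arriving organ to the highest-scoring waiting patient.

    `patients` are (id, priority score) pairs; higher scores are served first.
    """
    # heapq is a min-heap, so negate scores to pop the highest score first
    heap = [(-score, pid) for pid, score in patients]
    heapq.heapify(heap)
    allocation: Dict[str, str] = {}
    for organ in organs:
        if not heap:
            break  # empty waiting list: the organ goes unallocated in this sketch
        _, pid = heapq.heappop(heap)
        allocation[organ] = pid
    return allocation
```

For example, with patients a, b, c scoring 10, 30, 20 and two organs arriving, the two highest scorers (b, then c) receive them.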

Training a new model (e.g. src/organsync/models/organsync_network.py) is done simply as

python -m experiments.models.organsync

(Please run python -m experiments.models.organsync --help to see the available options.) When training is done, the model is automatically uploaded to W&B for later use in the experiments.*

Citing

Please cite our paper and/or code as follows:

@InProceedings{organsync,
  title = 	 {{Learning Queueing Policies for Organ Transplantation Allocation using Interpretable Counterfactual Survival Analysis}},
  author =       {Berrevoets, Jeroen and Alaa, Ahmed M. and Qian, Zhaozhi and Jordon, James and Gimson, Alexander E.S. and van der Schaar, Mihaela},
  booktitle = 	 {Proceedings of the 38th International Conference on Machine Learning},
  pages = 	 {792--802},
  year = 	 {2021},
  editor = 	 {Meila, Marina and Zhang, Tong},
  volume = 	 {139},
  series = 	 {Proceedings of Machine Learning Research},
  month = 	 {18--24 Jul},
  publisher =    {PMLR},
  pdf = 	 {http://proceedings.mlr.press/v139/berrevoets21a/berrevoets21a.pdf},
  url = 	 {http://proceedings.mlr.press/v139/berrevoets21a.html},
}

* Note that we retrain the models used in TransplantBenefit, both to give a fair comparison to the other benchmarks and to compare on the UNOS data.
