Conditional Average Treatment Effect Estimation Using Neural Networks

Project description

CATENets - Conditional Average Treatment Effect Estimation Using Neural Networks

Code Author: Alicia Curth (amc253@cam.ac.uk)

This repo contains Jax-based, sklearn-style implementations of Neural Network-based Conditional Average Treatment Effect (CATE) estimators, which were used in the AISTATS21 paper "Nonparametric Estimation of Heterogeneous Treatment Effects: From Theory to Learning Algorithms" (Curth & vd Schaar, 2021a), the follow-up NeurIPS21 paper "On Inductive Biases for Heterogeneous Treatment Effect Estimation" (Curth & vd Schaar, 2021b), and the NeurIPS21 Datasets & Benchmarks track paper "Really Doing Great at Estimating CATE? A Critical Look at ML Benchmarking Practices in Treatment Effect Estimation" (Curth et al, 2021).

We implement the SNet class we introduce in Curth & vd Schaar (2021a), as well as FlexTENet and OffsetNet as discussed in Curth & vd Schaar (2021b), and re-implement a number of NN-based algorithms from the existing literature (Shalit et al (2017), Shi et al (2019), Hassanpour & Greiner (2020)). We also provide Neural Network (NN)-based instantiations of a number of so-called meta-learners for CATE estimation, including two-step pseudo-outcome regression estimators (the DR-learner (Kennedy, 2020) and single-robust propensity-weighted (PW) and regression-adjusted (RA) learners), the R-learner of Nie & Wager (2017), and the X-learner of Kuenzel et al (2019). The jax implementations in catenets.models.jax were used in all papers listed; additionally, pytorch versions of some models (catenets.models.torch) were contributed by Bogdan Cebere.

Interface

The repo contains a package catenets, which contains all general code used for modeling and evaluation, and a folder experiments, which contains the code for replicating the experimental results. All implemented learning algorithms in catenets (SNet, FlexTENet, OffsetNet, TNet, SNet1 (TARNet), SNet2 (DragonNet), SNet3, DRNet, RANet, PWNet, RNet, XNet) come with a sklearn-style wrapper, implementing a .fit(X, y, w) and a .predict(X) method, where predict returns CATE by default. All hyperparameters are documented in detail in the respective files in the catenets.models folder.

Example usage:

from catenets.models.jax import TNet, SNet
from catenets.experiment_utils.simulation_utils import simulate_treatment_setup

# simulate some data (here: unconfounded, 10 prognostic variables and 5 predictive variables)
X, y, w, p, cate = simulate_treatment_setup(n=2000, n_o=10, n_t=5, n_c=0)

# estimate CATE using TNet
t = TNet()
t.fit(X, y, w)
cate_pred_t = t.predict(X)  # without potential outcomes
cate_pred_t, po0_pred_t, po1_pred_t = t.predict(X, return_po=True)  # predict potential outcomes too

# estimate CATE using SNet
s = SNet(penalty_orthogonal=0.01)
s.fit(X, y, w)
cate_pred_s = s.predict(X)
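
# The two-step pseudo-outcome learners listed above expose the same sklearn-style
# interface; a minimal sketch using the DR-learner (DRNet), assuming its default
# hyperparameters are reasonable for this simulated data:
from catenets.models.jax import DRNet

dr = DRNet()
dr.fit(X, y, w)
cate_pred_dr = dr.predict(X)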

All experiments in Curth & vd Schaar (2021a) can be replicated using this repository; the necessary code is in experiments.experiments_AISTATS21. To do so from shell, clone the repo, create a new virtual environment and run

pip install -r requirements.txt  # install requirements
python run_experiments_AISTATS.py
Options:
--experiment  # defaults to 'simulation'; 'ihdp' will run the IHDP experiments
--setting     # simulation setting to use in the synthetic experiments (can be 1-5)
--models      # defaults to None, which trains all models considered in the paper;
              # can be a string with a model name (e.g. 'TNet'), 'plug' for all plug-in models,
              # or 'pseudo' for all pseudo-outcome regression models
--file_name   # base file name to write to, defaults to 'results'
--n_repeats   # number of experiments to run for each configuration, defaults to 10 (should be set to 100 for IHDP)
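
For example, the IHDP results can be reproduced with an invocation along these lines (the output file name is chosen here purely for illustration):

python run_experiments_AISTATS.py --experiment ihdp --n_repeats 100 --file_name results_ihdp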

Similarly, the experiments in Curth & vd Schaar (2021b) can be replicated using the code in experiments.experiments_inductivebias_NeurIPS21 (or from shell using python run_experiments_inductive_bias_NeurIPS.py) and the experiments in Curth et al (2021) can be replicated using the code in experiments.experiments_benchmarks_NeurIPS21 (the catenets experiments can also be run from shell using python run_experiments_benchmarks_NeurIPS).

The code can also be installed as a Python package (catenets). From a local copy of the repo, run python setup.py install.
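
Since the package is published on PyPI, it should also be installable directly via pip (subject to the platform note below):

pip install catenets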

Note: jax is currently only supported on macOS and Linux, but can be run from Windows using WSL (the Windows Subsystem for Linux).

Citing

If you use this software, please cite the corresponding paper(s):

@inproceedings{curth2021nonparametric,
  title={Nonparametric Estimation of Heterogeneous Treatment Effects: From Theory to Learning Algorithms},
  author={Curth, Alicia and van der Schaar, Mihaela},
  booktitle={Proceedings of the 24th International Conference on Artificial Intelligence and Statistics (AISTATS)},
  year={2021},
  organization={PMLR}
}

@inproceedings{curth2021inductive,
  title={On Inductive Biases for Heterogeneous Treatment Effect Estimation},
  author={Curth, Alicia and van der Schaar, Mihaela},
  booktitle={Proceedings of the Thirty-Fifth Conference on Neural Information Processing Systems (NeurIPS)},
  year={2021}
}


@inproceedings{curth2021really,
  title={Really Doing Great at Estimating CATE? A Critical Look at ML Benchmarking Practices in Treatment Effect Estimation},
  author={Curth, Alicia and Svensson, David and Weatherall, James and van der Schaar, Mihaela},
  booktitle={Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks},
  year={2021}
}


Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distributions

catenets-0.2.3-py3-none-macosx_10_14_x86_64.whl (130.2 kB)

Uploaded: Python 3, macOS 10.14+ x86-64

catenets-0.2.3-py3-none-any.whl (131.3 kB)

Uploaded: Python 3

File details

Details for the file catenets-0.2.3-py3-none-macosx_10_14_x86_64.whl.

File hashes

Hashes for catenets-0.2.3-py3-none-macosx_10_14_x86_64.whl:
SHA256: beee8538fa5e4c18f5b2e78b850fabea0d5091e7950f4fed8441481d08738569
MD5: c6d794dce6ec574b0fb10c02ee36b11e
BLAKE2b-256: 51169b83a987c878dc3a41bc44011e314a1602b8e9fde4fa681405380cfb076d

See more details on using hashes here.

File details

Details for the file catenets-0.2.3-py3-none-any.whl.

File metadata

  • Download URL: catenets-0.2.3-py3-none-any.whl
  • Upload date:
  • Size: 131.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.10

File hashes

Hashes for catenets-0.2.3-py3-none-any.whl:
SHA256: ee7725d5fd0132ed8b31faa2e2be1adc193b37113b0765ee1bf9efea15a47788
MD5: aa70b8d9673fe7af4309f6caf3ec9c36
BLAKE2b-256: b1b8527a3ffdc5b3e79df34fed92530b3b86f70e9c362a4d3bc7f9f98efb8f50

See more details on using hashes here.
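
To check a downloaded wheel against the digests listed above, one option is Python's standard hashlib module (a minimal sketch; substitute whichever wheel you actually downloaded):

import hashlib

# compute the SHA256 digest of the downloaded wheel and compare it with the value listed above
with open("catenets-0.2.3-py3-none-any.whl", "rb") as f:
    print(hashlib.sha256(f.read()).hexdigest())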
