MetaCausal
Cross-framework ensembling of causal machine-learning estimators for ATE and pointwise CATE, with honest bootstrap inference.
What it is
MetaCausal orchestrates multiple causal-ML estimators from different libraries — EconML, DoubleML, CausalML, stochtree, or arbitrary user-supplied callables — behind a single protocol, and aggregates their treatment-effect estimates into a single ensemble estimate. Seven aggregation strategies are provided, grouped into three tiers:
- Pointwise robust — Median (default), Mean, Trimmed Mean.
- Agreement-based — Consensus Based Averaging, which selects a high-agreement subset of components from pairwise Kendall's τ.
- Outcome-supervised — Causal Stacking, R-Stacking, Q-Aggregation, which learn weights by optimising a causal loss on cross-fitted out-of-fold predictions.
A full-pipeline honest bootstrap supplies comparable confidence intervals for both ATE and pointwise CATE across heterogeneous components whose native inference machinery is otherwise incomparable.
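The shape of that bootstrap — resample units, rerun the entire fit, take percentile quantiles of the replicate estimates — can be sketched with a toy difference-in-means "pipeline". This is an illustration of the percentile bootstrap only, not MetaCausal's implementation; all names below are hypothetical:

```python
import random

def diff_in_means(T, Y):
    """Toy 'pipeline': a difference-in-means ATE estimate."""
    y1 = [y for t, y in zip(T, Y) if t == 1]
    y0 = [y for t, y in zip(T, Y) if t == 0]
    return sum(y1) / len(y1) - sum(y0) / len(y0)

def percentile_bootstrap_ci(T, Y, n_boot=1000, alpha=0.05, seed=0):
    """Refit the full pipeline on each resample, then take percentiles."""
    rng = random.Random(seed)
    n = len(Y)
    ates = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        Tb, Yb = [T[i] for i in idx], [Y[i] for i in idx]
        if 0 < sum(Tb) < n:  # need both arms present in the resample
            ates.append(diff_in_means(Tb, Yb))
    ates.sort()
    lo = ates[int((alpha / 2) * len(ates))]
    hi = ates[int((1 - alpha / 2) * len(ates)) - 1]
    return lo, hi

T = [1, 0] * 50
Y = [2.0 if t else 1.0 for t in T]  # noiseless toy, true effect = 1
lo, hi = percentile_bootstrap_ci(T, Y)
```

Because the whole pipeline is refit inside the loop, the interval reflects every source of estimation variability, which is what makes the CIs comparable across heterogeneous components.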
Why
No single causal-ML estimator dominates across data-generating processes, model selection for heterogeneous treatment effects is empirically unreliable, and individual methods can fail catastrophically under specific violations of their own assumptions (overlap breakdown, nuisance misspecification, tree extrapolation). MetaCausal's default pointwise median aggregation gives a 50% breakdown point with no tuning — up to half the component estimators can produce arbitrarily bad estimates without corrupting the ensemble. When outcome data allow learning weights, MetaCausal also ships the three outcome-supervised stackers from the recent CATE-ensemble literature.
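The 50% breakdown point is easy to see in miniature: with five component ATEs of which two have failed catastrophically, the mean is dragged far off while the median is untouched (numbers invented for illustration):

```python
from statistics import median

# Five component ATE estimates: two have failed catastrophically.
component_ates = [1.9, 2.1, 2.0, 250.0, -180.0]

print(sum(component_ates) / len(component_ates))  # mean is pulled far off
print(median(component_ates))                     # median stays at 2.0
```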
Installation
pip install metacausal
This installs the core package and its required dependencies (numpy, pandas, scipy). Estimator libraries are optional extras:
# Individual libraries
pip install "metacausal[econml]"
pip install "metacausal[doubleml]"
pip install "metacausal[causalml]"
pip install "metacausal[stochtree]"
# Visualisation helpers (matplotlib)
pip install "metacausal[plots]"
# Everything (frameworks + plots)
pip install "metacausal[all]"
Python 3.11 or later is required.
Quick start
from metacausal import CausalEnsemble
from metacausal.datasets import load_lalonde
X, T, Y = load_lalonde()
# Default ensemble: ten estimators spanning EconML, DoubleML,
# CausalML, and stochtree, aggregated by pointwise median.
ens = CausalEnsemble()
ens.fit(X, T, Y, random_state=42)
# Point estimate
ate = ens.ate()
print(f"Ensemble ATE: {ate.ate:.1f}")
for name, est in ate.component_estimates.items():
print(f" {name:<25} {est.ate:>9.1f}")
# Honest bootstrap confidence interval
boot = ens.bootstrap(n_boot=200, random_state=42, n_jobs=-1)
print(f"95% CI: [{boot.ate_ci_lower:.1f}, {boot.ate_ci_upper:.1f}]")
The three-step fit → ate / cate → bootstrap pattern is the recommended one, because it lets you inspect intermediate state and swap aggregation strategies on an already-fitted ensemble. The convenience wrapper ens.estimate(X, T, Y, n_boot=200, ...) does fit + bootstrap (or fit + ate) in a single call.
Aggregation strategies at a glance
| Tier | Strategy | String alias / class | Data used |
|---|---|---|---|
| Pointwise | Median (default) | "median" / Median | Component predictions only |
| Pointwise | Mean | "mean" / Mean | Component predictions only |
| Pointwise | Trimmed Mean | "trimmed_mean" / TrimmedMean | Component predictions only |
| Agreement | Consensus Based Averaging | "cba" / CBA | Component CATE predictions on training data |
| Supervised | Causal Stacking | CausalStacking | Cross-fitted OOF predictions + nuisance |
| Supervised | R-Stacking | RStacking | Cross-fitted OOF predictions + nuisance |
| Supervised | Q-Aggregation | QAggregation | Cross-fitted OOF predictions + nuisance |
# By string alias (default configuration)
ens = CausalEnsemble(aggregation="trimmed_mean")
# By object (lets you configure hyperparameters)
from metacausal.aggregation import QAggregation
ens = CausalEnsemble(aggregation=QAggregation(nu=0.5, greedy=True))
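The consensus idea behind CBA can be sketched in a few lines: compute pairwise Kendall's τ between component CATE predictions, keep components whose average agreement clears a threshold, and average the survivors. The threshold and selection rule below are illustrative, not MetaCausal's actual algorithm:

```python
def kendall_tau(a, b):
    """Kendall's tau-a over all index pairs (no tie correction)."""
    n, s = len(a), 0
    for i in range(n):
        for j in range(i + 1, n):
            p = (a[i] - a[j]) * (b[i] - b[j])
            s += (p > 0) - (p < 0)
    return s / (n * (n - 1) / 2)

# Component CATE predictions on four training points; m5 dissents.
preds = {
    "m1": [0.1, 0.4, 0.9, 1.2],
    "m2": [0.2, 0.5, 0.8, 1.3],
    "m3": [0.0, 0.3, 1.0, 1.1],
    "m4": [0.3, 0.6, 0.7, 1.4],
    "m5": [1.4, 1.1, 0.6, 0.2],  # reversed ranking
}
names = list(preds)
mean_tau = {
    a: sum(kendall_tau(preds[a], preds[b]) for b in names if b != a)
       / (len(names) - 1)
    for a in names
}
kept = [a for a in names if mean_tau[a] >= 0.5]  # illustrative threshold
ensemble = [sum(preds[a][i] for a in kept) / len(kept)
            for i in range(len(preds["m1"]))]
```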
See the accompanying paper (forthcoming) for the mathematical details of each strategy.
Outcome types
MetaCausal supports two outcome types:
- Continuous (default) — any numeric Y not detected as binary. Quietly absorbs counts, bounded continuous, and ordinal-as-numeric; the base-learner choice is the user's responsibility (HistGradientBoostingRegressor by default; user-supplied components can pass a Poisson booster if appropriate).
- Binary — numeric Y with values ⊆ {0, 1} or boolean dtype. The estimand is the risk-difference ATE/CATE (the mean difference of probabilities).
Detection happens at fit time: CausalEnsemble().fit(X, T, Y) inspects Y, picks the right pool from default_methods, and routes nuisance estimation through predict_proba for binary outcomes. To force an interpretation, pass outcome_type="continuous" or outcome_type="binary" at construction. Multi-class / nominal and survival outcomes are out of scope; encoding-as-multiple-binary or a dedicated survival library is the recommended path.
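The detection rule as stated — boolean dtype or numeric values within {0, 1} means binary, anything else continuous — can be mimicked in plain Python. This is a sketch of the rule, not the package's infer_outcome_type:

```python
def infer_outcome_type_sketch(y):
    """Boolean values or numeric values within {0, 1} -> binary;
    anything else -> continuous."""
    if all(isinstance(v, bool) for v in y) or set(y) <= {0, 1}:
        return "binary"
    return "continuous"

print(infer_outcome_type_sketch([0, 1, 1, 0]))     # binary
print(infer_outcome_type_sketch([3.2, 0.0, 1.5]))  # continuous
```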
The default binary pool (8 components) drops DoubleMLPLR, EconML S/T/X-Learners, the BaseRRegressor, and stochtree BCF — all of which either lack a binary-capable code path in their upstream library or would silently fit a linear-probability model — and substitutes the CausalML classifier siblings (BaseSClassifier, BaseTClassifier, BaseXClassifier, BaseRClassifier). DoubleMLIRM, CausalForestDML, DRLearner, and TMLELearner remain.
Usage recipes
Mixed-framework method list
Estimators from EconML and CausalML are auto-detected by module prefix; DoubleML, stochtree, and arbitrary callables go through explicit adapters.
from metacausal import CausalEnsemble, GenericATEAdapter
from metacausal.adapters import DoubleMLAdapter, CausalMLAdapter
from econml.dml import CausalForestDML
from econml.metalearners import TLearner, XLearner
from doubleml import DoubleMLIRM
from causalml.inference.meta import BaseDRRegressor
from sklearn.ensemble import (
HistGradientBoostingRegressor as HGBR,
HistGradientBoostingClassifier as HGBC,
)
def naive_diff(X, T, Y):
return float(Y[T == 1].mean() - Y[T == 0].mean())
ens = CausalEnsemble(
methods=[
CausalForestDML(discrete_treatment=True), # auto-wrapped (EconML)
TLearner(models=HGBR()), # auto-wrapped (EconML)
XLearner(models=HGBR(), propensity_model=HGBC()), # auto-wrapped (EconML)
DoubleMLAdapter(DoubleMLIRM, ml_g=HGBR(), ml_m=HGBC()),
CausalMLAdapter(BaseDRRegressor(learner=HGBR())),
GenericATEAdapter(naive_diff, name="naive_diff"),
],
aggregation="median",
)
ens.fit(X, T, Y, random_state=42)
print(ens.ate().ate)
Binary outcome on real data
load_lalonde(binarize_y=...) returns the 1978-earnings outcome as a binary indicator — "median" for an above-median split (~50/50), "positive" for an any-positive-1978-earnings indicator (~69/31). Useful as a real-data fixture without leaving the package.
from metacausal import CausalEnsemble
from metacausal.datasets import load_lalonde
X, T, Y = load_lalonde(binarize_y="median")
# outcome_type="auto" detects binary Y, materialises the binary
# default pool (8 components targeting the risk difference), and
# fits. ATE is on the risk-difference scale, in [-1, 1].
ens = CausalEnsemble()
ens.fit(X, T, Y, random_state=42)
print(ens.ate().ate)
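The two binarize_y rules can be mimicked directly. The tie handling below (strictly-greater-than comparisons) is an assumption, so treat this as a sketch of the rules rather than load_lalonde's exact code:

```python
from statistics import median

def binarize_sketch(y, rule):
    """'median' -> above-median split; 'positive' -> any positive value."""
    if rule == "median":
        m = median(y)
        return [int(v > m) for v in y]
    if rule == "positive":
        return [int(v > 0) for v in y]
    raise ValueError(rule)

earnings = [0.0, 900.0, 1500.0, 8000.0]
print(binarize_sketch(earnings, "median"))    # [0, 0, 1, 1]
print(binarize_sketch(earnings, "positive"))  # [0, 1, 1, 1]
```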
CATE estimation with a supervised strategy
from metacausal import CausalEnsemble
from metacausal.aggregation import CausalStacking
ens = CausalEnsemble(aggregation=CausalStacking())
ens.fit(X, T, Y, random_state=42)
# Pointwise CATE CIs on a held-out grid
boot = ens.bootstrap(X_eval, n_boot=200, random_state=42, n_jobs=-1)
print(boot.cate) # ensemble CATE at X_eval, shape (n_eval,)
print(boot.cate_ci_lower) # pointwise 95% lower bound
print(boot.cate_ci_upper) # pointwise 95% upper bound
# Inspect the learned ensemble weights
for name, w in zip(boot.ensemble_weights.model_names,
boot.ensemble_weights.weights):
print(f" {name:<25} {w:>6.3f}")
Compare aggregation strategies without refitting
An aggregation=... argument to ate() or cate() re-aggregates from cached predictions without refitting components — useful for quick comparisons.
ens = CausalEnsemble(aggregation="median")
ens.fit(X, T, Y, random_state=42)
for agg in ["median", "mean", "trimmed_mean", "cba"]:
ate = ens.ate(aggregation=agg)
print(f"{agg:<15} ATE = {ate.ate:.1f}")
Visualisation helpers
The optional metacausal.plots submodule (installed via the [plots] extra) provides four matplotlib helpers that consume the result types above:
- forest(boot) — component and ensemble ATEs with bootstrap CIs.
- weights(ens) — aggregation weight bars (agreement-based and supervised strategies).
- cate_profile(source, x, xlabel=...) — ensemble CATE along one covariate, with optional bootstrap band and per-component overlay.
- disagreement(ens, X) — pairwise component-CATE rank-correlation heatmap.
from metacausal.plots import forest, cate_profile
forest(boot)
cate_profile(boot, x=grid, xlabel="re74 (1974 earnings, USD)")
Extending MetaCausal
MetaCausal exposes five injection points that let researchers extend the package without forking it: custom component adapters, custom aggregation strategies, replacement nuisance pipelines (fit_nuisance_fn), replacement pseudo-outcome functions (pseudo_outcome_fn), and custom cross-fitting splitters. The accompanying paper (forthcoming) covers each injection point in detail.
The lowest-effort path for adding a new estimator is GenericCATEAdapter, which wraps a fit function, a CATE prediction function, and (optionally) an ATE prediction function into a component without implementing the full protocol:
from metacausal import CausalEnsemble, GenericCATEAdapter
def fit_fn(X, T, Y, **kwargs):
# Train your model and return any state you need.
...
return state
def cate_fn(state, X):
# Return per-observation CATE estimates, shape (n,).
return state.predict_cate(X)
def ate_fn(state, X): # optional; defaults to mean of cate_fn(state, X)
return float(cate_fn(state, X).mean())
my_method = GenericCATEAdapter(
fit_fn, cate_fn, fn_ate=ate_fn, name="my_method",
)
ens = CausalEnsemble(methods=[my_method, ...])
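Because the three callables are plain functions, they can be sanity-checked standalone before wrapping. A constant-effect toy, with all names invented for illustration:

```python
class MeanDiffState:
    """Minimal state object: stores one constant effect."""
    def __init__(self, effect):
        self.effect = effect
    def predict_cate(self, X):
        return [self.effect] * len(X)

def fit_fn(X, T, Y, **kwargs):
    # 'Train' by computing a difference in arm means.
    y1 = [y for t, y in zip(T, Y) if t == 1]
    y0 = [y for t, y in zip(T, Y) if t == 0]
    return MeanDiffState(sum(y1) / len(y1) - sum(y0) / len(y0))

def cate_fn(state, X):
    return state.predict_cate(X)

def ate_fn(state, X):
    return float(sum(cate_fn(state, X)) / len(X))

# Exercise the callables directly before handing them to the adapter.
X = [[0], [1], [0], [1]]
T = [0, 0, 1, 1]
Y = [1.0, 1.0, 3.0, 3.0]
state = fit_fn(X, T, Y)
print(ate_fn(state, X))  # 2.0
```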
Reproducibility and parallelism
A single random_state seed deterministically propagates to every stochastic sub-step — component models, their sub-estimators, cross-fitting folds, nuisance fits, and bootstrap replicates — so reruns are bit-identical.
A single n_jobs knob on fit, bootstrap, and estimate routes parallelism to the outermost applicable level (bootstrap replicates when n_boot > 0; otherwise supervised cross-fitting or component fits) and pins BLAS/OpenMP threads inside each worker to prevent oversubscription. The accompanying paper (forthcoming) explains the rationale.
The outer process (your main script) keeps the platform-default BLAS thread count, which is fine on macOS and Windows. On Linux, where joblib's loky backend can occasionally deadlock at fork time when the parent's BLAS pool is already running threads, defensive users may want to set the standard thread env vars (OMP_NUM_THREADS=1, OPENBLAS_NUM_THREADS=1, MKL_NUM_THREADS=1, NUMEXPR_NUM_THREADS=1, VECLIB_MAXIMUM_THREADS=1) before invoking Python. The bundled replication runner and the test suite's tests/conftest.py set these automatically, so reviewers and contributors do not need the shell prefix.
# Parallelise supervised cross-fitting, deterministic:
ens = CausalEnsemble(aggregation=CausalStacking())
ens.fit(X, T, Y, random_state=42, n_jobs=-1)
# Or: full fit + bootstrap pipeline with bootstrap-level parallelism:
boot = ens.estimate(X, T, Y, n_boot=500, random_state=42, n_jobs=-1)
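One common way to achieve the deterministic fan-out described above is to draw one child seed per stochastic sub-step from a single root RNG. This is a generic sketch of the pattern, not MetaCausal's internals:

```python
import random

def spawn_seeds(master_seed, n):
    """Derive one child seed per stochastic sub-step from one master seed."""
    root = random.Random(master_seed)
    return [root.getrandbits(32) for _ in range(n)]

# Same master seed -> same child seeds on every rerun, so every
# downstream RNG (components, folds, bootstrap) is reproducible.
children = spawn_seeds(42, 5)
```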
Citation
A BibTeX entry will be added here when the arXiv preprint of the accompanying manuscript is posted. For interim references to the software itself, see the PyPI listing.
Further reading
- Paper: a preprint covering the methodology, architecture, and extensibility hooks is in preparation. An arXiv link will be added here once it is posted.
- Replication material: will be included as ancillary files with the forthcoming arXiv submission.
Release notes
0.2.2 — 2026-05-04
- Dependency hygiene: the four causal-ML extras (econml, doubleml, stochtree, causalml) are now pinned to a single patch each: floors at the exact version validated by CI on the most recent main-branch run, caps at the next patch. Motivated by stochtree #376, where a patch release (0.4.0 → 0.4.2) silently changed the semantics of BCFModel.predict(terms="tau") and broke 0.2.0. Patch-level caps mean every upstream release lands outside the cap, triggers a Dependabot PR, and runs pytest -m integration before we widen — closing the silent-install hole that produced the 0.2.1 hotfix.
0.2.1 — 2026-05-04
- Bug fix: StochtreeAdapter now calls BCFModel.predict(..., terms="cate") instead of terms="tau". With stochtree 0.4.2 (which added a parametric treatment-intercept term in the BCF sampler), terms="tau" returned the forest-only piece and excluded the parametric component, producing wildly seed-sensitive ATEs that disagreed sharply with the rest of the default ensemble. terms="cate" returns the full conditional treatment effect — including parametric and random-slope components, when present — for any BCF configuration. Fixes upstream issue stochtree #376 on the metacausal side.
0.2.0 — 2026-05-04
New
- Outcome-type handling: CausalEnsemble auto-detects continuous vs binary Y at fit(), materialises the right default pool, and routes nuisance estimation through predict_proba for binary outcomes. Override via outcome_type="continuous"|"binary". Public metacausal.infer_outcome_type(Y) utility; binarize_y={"median","positive"} on load_lalonde().
- Subsample bootstrap (bootstrap(method="subsample")): m-out-of-n without replacement, T-stratified, with Politis–Romano scaled-percentile CIs. Eliminates duplicate-unit leakage across cross-fit folds.
- Structured warning hierarchy: ComponentFailureWarning, ComponentExclusionWarning, BootstrapWarning under a common MetaCausalWarning umbrella.
- CausalMLAdapter accepts a propensity_model= kwarg, forwarding a fitted propensity to non-TMLE meta-learners.
Breaking changes for custom-strategy / custom-adapter authors
- AggregationStrategy and family are abc.ABC with a unified aggregate entry point. Subclasses now implement aggregate rather than per-mode methods.
- Every adapter must declare supported_outcome_types and implement validate_outcome_type(detected); the injectable fit_nuisance_fn gains an outcome_type parameter.
Other
- Bounded version constraints on econml, doubleml, causalml, stochtree (capped at next minor; floors anchored to tested versions).
- requires-python raised to >=3.11 (causalml 0.16 floor).
- Tier-2 integration tests via pytest -m integration.
- Bug fixes: load_lalonde no longer leaks a file handle; EconMLAdapter suppresses the upstream DataConversionWarning from DRLearner(discrete_outcome=True).
- PyPI metadata polish (classifiers, license badge).
Tested against: doubleml 0.11.2, econml 0.16.0, causalml 0.16.0, stochtree 0.4.0.
0.1.0 — 2026-04-25
Initial public release.
License
MetaCausal is distributed under the MIT License. See LICENSE.