
Interpretable GAM toolkit for insurance pricing — EBM, Neural Additive Models, and Pairwise Interaction Networks


insurance-gam



The problem

GLMs need manual feature engineering to capture non-linear effects. A U-shaped driver age curve requires polynomial terms someone has to specify; a convex NCD discount requires a transformation someone has to choose. Get it wrong and the premium is wrong. Get it right and you have a model that looks well-specified but cannot discover interactions you did not anticipate.
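As a concrete (hypothetical) illustration of that manual step, the GLM design matrix has to encode the U-shape and the NCD transformation by hand before the model ever sees the data:

```python
import numpy as np

# Manual GLM feature engineering: someone must decide that driver age
# needs a quadratic term and that the NCD discount needs a log transform.
driver_age = np.array([18.0, 25.0, 40.0, 60.0, 72.0])
ncd_years = np.array([0.0, 2.0, 5.0, 8.0, 9.0])

X = np.column_stack([
    driver_age,
    driver_age ** 2,          # hand-specified U-shape
    np.log1p(ncd_years),      # hand-chosen concavity for the discount
])
```

If the quadratic or the log is the wrong choice, the fitted premium inherits that mistake.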

GBMs discover those interactions automatically, but the output — thousands of trees — is not auditable by a pricing committee. A pricing actuary cannot look at a gradient booster and tell you whether the NCD discount curve is actuarially reasonable.

GAMs bridge the gap: each feature gets a smooth non-linear shape function, the output is additive and inspectable factor by factor, and interactions can be represented as pairwise 2D shape functions rather than opaque tree splits.
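Schematically, with made-up shape functions (not fitted by this library), a Poisson GAM prediction is just a sum of per-feature curves on the log scale, scaled by exposure:

```python
import numpy as np

# Hypothetical fitted shape functions on the log scale
def shape_driver_age(age):
    return 0.002 * (age - 45.0) ** 2 - 0.3   # U-shaped age curve

def shape_ncd(ncd_years):
    return -0.12 * ncd_years                  # NCD discount

def predict_claim_rate(age, ncd_years, exposure, intercept=-2.5):
    # log(E[claims]) = intercept + sum of shape functions + log(exposure)
    log_rate = intercept + shape_driver_age(age) + shape_ncd(ncd_years)
    return np.exp(log_rate) * exposure

rate = predict_claim_rate(age=30.0, ncd_years=5.0, exposure=1.0)
```

Each `shape_*` term can be plotted and challenged on its own, which is what makes the model auditable.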

Blog post: Your Model Is Either Interpretable or Accurate. insurance-gam Refuses That Trade-Off.


Why this library?

The PRA expects Pillar 2 capital models to be interpretable. The FCA expects pricing models to be explainable. A black-box GBM satisfies neither requirement for a UK insurer. This library gives you three production-grade GAM variants — EBM, Neural Additive Model, and Pairwise Interaction Networks — that produce per-feature shape functions a pricing actuary can read, challenge, and sign off.

All three use the same GLM-family loss structure (Poisson, Tweedie, Gamma) with exposure offsets, so their outputs are directly comparable to your existing GLM. The subpackages are independent by design: importing insurance_gam.ebm does not load PyTorch, and vice versa.
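Because the loss family is shared, a single deviance metric can score the GAM and the incumbent GLM on the same footing. A minimal Poisson deviance sketch (the library's own metric utilities may differ in detail):

```python
import numpy as np

def poisson_deviance(y, mu):
    """Mean Poisson deviance; mu must already include the exposure offset."""
    y = np.asarray(y, dtype=float)
    mu = np.asarray(mu, dtype=float)
    term = np.zeros_like(y)
    pos = y > 0                       # y * log(y / mu) is taken as 0 at y == 0
    term[pos] = y[pos] * np.log(y[pos] / mu[pos])
    return 2.0 * np.mean(term - (y - mu))

y = np.array([0, 1, 0, 2])
exposure = np.array([0.5, 1.0, 0.8, 1.0])
mu = 0.3 * exposure                   # rate from a GLM or GAM, times exposure
dev = poisson_deviance(y, mu)
```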


Compared to alternatives

| | Standard GLM | GBM (XGBoost/LightGBM) | R mgcv | interpretML EBM (standalone) | insurance-gam |
|---|---|---|---|---|---|
| Non-linear shape functions | Manual polynomials | Yes (opaque) | Yes | Yes | Yes |
| Per-feature relativity table | Yes (linear) | No | Yes | Partial | Yes (RelativitiesTable) |
| Pairwise interactions | Manual dummies | Yes (opaque) | Yes | No | Yes (PIN) |
| Poisson/Gamma/Tweedie loss | Yes | Yes | Yes | No | Yes |
| Exposure offset | Yes | Partial | Yes | No | Yes |
| Python-native | Yes | Yes | No | Yes | Yes |
| PRA/FCA-auditable output | Yes | No | Yes | Partial | Yes |

Installation

```shell
pip install "insurance-gam[ebm]"     # EBM only (most common)
pip install "insurance-gam[neural]"  # ANAM and PIN (requires PyTorch)
pip install "insurance-gam[all]"     # everything
# or with uv:
uv add "insurance-gam[ebm]"
```

The three subpackages are independent: insurance_gam.ebm loads interpretML; insurance_gam.anam and insurance_gam.pin load PyTorch. Importing one does not load the others' dependencies.


Quickstart

```shell
uv add "insurance-gam[ebm]"
```

```python
import numpy as np
import polars as pl
from insurance_gam.ebm import InsuranceEBM, RelativitiesTable

rng = np.random.default_rng(42)
n = 2000

# Synthetic motor book: five rating factors plus fractional-year exposure
df = pl.DataFrame({
    "driver_age":   rng.integers(17, 75, n).astype(float),
    "vehicle_age":  rng.integers(0, 15, n).astype(float),
    "ncd_years":    rng.integers(0, 9, n).astype(float),
    "annual_miles": rng.integers(3000, 20000, n).astype(float),
    "area":         rng.integers(0, 5, n).astype(float),
})
exposure = rng.uniform(0.3, 1.0, n)

# Known data-generating process: young-driver loading, linear NCD
# discount, old-vehicle loading
log_rate = (
    -2.5
    + 0.5 * (df["driver_age"].to_numpy() < 25).astype(float)
    - 0.12 * df["ncd_years"].to_numpy()
    + 0.3 * (df["vehicle_age"].to_numpy() > 10).astype(float)
)
y = rng.poisson(np.exp(log_rate) * exposure)

model = InsuranceEBM(loss="poisson", interactions="3x")
model.fit(df[:1600], y[:1600], exposure=exposure[:1600])

rt = RelativitiesTable(model)
print(rt.table("ncd_years"))   # shape_value, relativity — a pricing actuary can read this
print(rt.summary())
```

What's inside

Three subpackages. Import only the one you need.

insurance_gam.ebm — Explainable Boosting Machine

Wraps interpretML's ExplainableBoostingRegressor with insurance tooling: exposure-aware fit/predict via Poisson/Gamma/Tweedie losses, relativity table extraction, post-fit monotonicity enforcement, and GLM comparison tools.

The RelativitiesTable output is directly readable as a rating factor table: NCD years, driver age, vehicle age, each with an auditable curve you can inspect and challenge factor by factor. No post-hoc SHAP required — the shape functions are the model.

```shell
uv add "insurance-gam[ebm]"
```

```python
from insurance_gam.ebm import InsuranceEBM, RelativitiesTable

model = InsuranceEBM(loss="poisson", interactions="3x")
model.fit(X_train, y_train, exposure=exp_train)

rt = RelativitiesTable(model)
print(rt.table("driver_age"))
print(rt.summary())
```

insurance_gam.anam — Actuarial Neural Additive Model

Neural Additive Model (Laub, Pho, Wong 2025) adapted for insurance. One MLP subnetwork per feature, additive aggregation, Poisson/Tweedie/Gamma losses, Dykstra-projected monotonicity constraints. Beats GLMs on deviance metrics while producing per-feature shape functions a pricing team can inspect.

```shell
uv add "insurance-gam[neural]"
```

```python
from insurance_gam.anam import ANAM

model = ANAM(
    loss="poisson",
    monotone_increasing=["vehicle_age"],
    n_epochs=100,
)
model.fit(df, y, sample_weight=exposure)
shapes = model.shape_functions()
shapes["vehicle_age"].plot()
```
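The one-subnetwork-per-feature architecture can be illustrated with a toy, untrained numpy version (randomly initialised, purely to show the additive aggregation; the real model is a trained PyTorch network):

```python
import numpy as np

rng = np.random.default_rng(0)

def tiny_subnet(hidden=8):
    """One subnetwork: maps a single feature to its shape-function value."""
    return {"w1": rng.normal(size=(1, hidden)), "b1": np.zeros(hidden),
            "w2": rng.normal(size=(hidden, 1)), "b2": np.zeros(1)}

def forward(net, x):
    h = np.tanh(x[:, None] @ net["w1"] + net["b1"])
    return (h @ net["w2"] + net["b2"]).ravel()

features = {"driver_age": rng.uniform(17, 75, 100),
            "ncd_years": rng.uniform(0, 9, 100)}
subnets = {name: tiny_subnet() for name in features}

# Additive aggregation on the log scale, one shape function per feature;
# multiply by exposure to get expected claim counts.
log_rate = sum(forward(subnets[k], v) for k, v in features.items())
rate = np.exp(log_rate)
```

Because each feature flows through its own subnetwork, the learned shape function per feature can be plotted exactly, unlike a fully connected network.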

insurance_gam.pin — Pairwise Interaction Networks

Neural GA2M (Richman, Scognamiglio, Wüthrich 2025). The prediction decomposes as a sum of pairwise interaction terms — one shared network differentiating all feature pairs by learned interaction tokens. Diagonal terms recover main effects. Captures interactions a GLM would miss while keeping the output interpretable as a sum of 2D shape functions.

```shell
uv add "insurance-gam[neural]"
```

```python
from insurance_gam.pin import PINModel

model = PINModel(
    features={"driver_age": "continuous", "vehicle_age": "continuous",
              "area": 5, "ncd_years": "continuous"},
    loss="poisson",
    max_epochs=200,
)
model.fit(df, y, exposure=exposure)
weights = model.interaction_weights()
effects = model.main_effects(df)
```
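The pairwise decomposition itself is simple to state. A toy numpy sketch with a stand-in 2D function (the real model learns each f_ij):

```python
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))  # 50 policies, 3 rating factors

def pairwise_term(xi, xj):
    """Stand-in for a learned 2D shape function f_ij(x_i, x_j)."""
    return 0.1 * xi * xj

# The prediction is a sum over all (i, j) pairs with i <= j; the
# diagonal terms f_ii(x_i, x_i) recover the main effects.
log_rate = np.zeros(len(X))
pairs = list(combinations_with_replacement(range(X.shape[1]), 2))
for i, j in pairs:
    log_rate += pairwise_term(X[:, i], X[:, j])
```

With 3 features there are 3 main-effect (diagonal) terms and 3 interaction terms, each of which can be visualised as a 2D surface.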

Validated performance

On a 50,000-policy synthetic UK motor book with a known non-linear DGP (U-shaped driver age, convex NCD, hard vehicle age threshold, log-miles loading):

| Method | Gini vs linear GLM | Poisson deviance |
|---|---|---|
| GLM — linear terms only | baseline | baseline |
| GLM — polynomial + manual interaction | +3–5pp | −2–5% |
| InsuranceEBM (interactions="3x") | +5–15pp | −5–12% |

EBM finds the U-shaped driver age curve and the convex NCD discount without any feature engineering. On a 10,000-policy benchmark, EBM ranks risks ~28% better than a competent GLM by Gini coefficient.

Known caveat: EBM exposure handling via init_score can produce inflated absolute deviance figures on some DGPs without affecting risk ordering. Use Gini as the primary comparison metric and validate calibration separately. See the benchmark notebook for details.
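For the Gini comparison, an unnormalised sketch of the metric (the benchmark scripts may use a weighted or normalised variant):

```python
import numpy as np

def gini(y_true, y_pred):
    """How well y_pred ranks the losses in y_true (higher is better)."""
    y_true = np.asarray(y_true, dtype=float)
    order = np.argsort(y_pred)                       # least risky first
    lorenz = np.cumsum(y_true[order]) / y_true.sum() # Lorenz curve of losses
    # 1 minus twice the (discrete) area under the Lorenz curve
    return 1.0 - 2.0 * np.mean(lorenz)

g_good = gini([0, 0, 1, 2], [0.1, 0.2, 0.3, 0.4])  # predictions match risk order
g_bad  = gini([0, 0, 1, 2], [0.4, 0.3, 0.2, 0.1])  # inverted risk order
```

The metric depends only on the ordering of predictions, which is why it is robust to the absolute-deviance inflation described above.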

Full benchmark: benchmarks/run_benchmark_databricks.py. Full validation: notebooks/databricks_validation.py.


PRA/FCA context

The PRA's Supervisory Statement SS3/18 on model risk management expects firms to demonstrate that models are interpretable and that their outputs can be challenged by subject matter experts. The FCA's Consumer Duty requires pricing models to produce outcomes that can be explained to customers and the regulator.

A GBM satisfies neither criterion for a primary pricing model. The GAM shape functions produced by this library are the actuarial equivalent of the factor curves a pricing committee signs off in a traditional GLM tariff review — except they are fitted automatically rather than hand-crafted.


Design choices

Three subpackages, independent imports. Importing insurance_gam.ebm does not load PyTorch. Importing insurance_gam.anam does not load interpretML. This matters in production where you may have one platform with interpretML but not PyTorch.

Exposure-aware throughout. All subpackages accept an exposure parameter and use it correctly in the loss function. This is the same GLM family structure pricing teams already use — model outputs are directly comparable to your existing GLM.

No post-hoc explainability. The shape functions are the model. You do not need SHAP values to explain why the model charges what it charges.


Limitations

  • Below 5,000 policies the EBM boosting procedure can overfit individual bins. Use a GLM below this threshold.
  • EBM's RelativitiesTable is extracted from additive log-scale contributions, not multiplicative rating factors. The conversion is an approximation when EBM has learnt interaction terms. Cross-validate segment A/E ratios before implementing derived factors in a production tariff.
  • ANAM and PINModel require PyTorch. Fit time on CPU without GPU: 10–30 minutes on complex datasets. EBM fits in 60–120 seconds on a single CPU.
  • Monotonicity constraints in ANAM use Dykstra projection. Enforcing monotonicity on a factor that genuinely has non-monotone structure (e.g. declaring driver_age monotone when the U-shape is real) will misfit the model.
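The relativity-extraction caveat can be made concrete with hypothetical numbers:

```python
import numpy as np

# Additive log-scale contributions convert to multiplicative relativities
# via exp() -- exact only when the model is purely additive (no interactions).
shape_values = {0: 0.0, 5: -0.60, 9: -1.08}   # hypothetical ncd_years curve
relativities = {k: float(np.exp(v)) for k, v in shape_values.items()}

# With a learned interaction f12(ncd_years, driver_age) the premium is
# exp(f1(ncd) + f2(age) + f12(ncd, age)); no single per-factor relativity
# reproduces it exactly -- hence the advice to cross-check segment A/E ratios.
```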

Part of the Burning Cost stack

Takes smoothed exposure curves from insurance-whittaker, or raw rating factors directly. Feeds fitted tariff models into insurance-conformal, insurance-fairness, and insurance-monitoring. See the full stack:

| Library | Description |
|---|---|
| insurance-whittaker | Rating table smoothing — smoothed Whittaker curves feed into GAM as calibrated inputs |
| insurance-fairness | FCA proxy discrimination auditing — shape functions make it easier to isolate proxy effects |
| insurance-monitoring | Model drift detection — tracks whether GAM shape functions remain calibrated over time |
| insurance-causal | DML causal inference — establishes whether non-linear effects are genuinely causal |
| insurance-conformal | Distribution-free prediction intervals — uncertainty quantification around GAM predictions |
| insurance-governance | Model validation and MRM governance — sign-off pack for GAM models entering production |

References

  • Laub, Pho, Wong (2025). "An Interpretable Deep Learning Model for General Insurance Pricing." arXiv:2509.08467.
  • Richman, Scognamiglio, Wüthrich (2025). "Tree-like Pairwise Interaction Networks." arXiv:2508.15678.
  • Lou, Caruana, Gehrke, Hooker (2013). "Accurate intelligible models with pairwise interactions." KDD.


Licence

MIT
