Sarmanov copula joint frequency-severity modelling for UK personal lines insurance, with neural two-part dependent model
insurance-frequency-severity
Every UK motor pricing team multiplies a Poisson GLM by a Gamma GLM and calls it pure premium — but the No Claims Discount (NCD) structure suppresses borderline claims, creating a systematic negative correlation between claim count and average severity that the multiplication ignores. insurance-frequency-severity estimates that dependence using the Sarmanov bivariate distribution, which handles the discrete-continuous mixed margins problem correctly, and produces per-policy correction factors without the PIT approximation issues of standard copula approaches.
Merged from: insurance-frequency-severity (Sarmanov/Gaussian copula) and insurance-dependent-fs (neural two-part model).
Blog post: Your Frequency-Severity Independence Assumption Is Costing You Premium
Challenges the independence assumption in the standard two-model GLM framework. Your frequency GLM and severity GLM are correct. The problem is multiplying their predictions together as though claim count and average severity are unrelated — they are not.
Part of the Burning Cost stack
Takes claims data and your existing fitted statsmodels GLM objects for frequency and severity. Feeds Sarmanov-corrected joint premium estimates into insurance-optimise (more accurate pure premium inputs) and insurance-conformal (uncertainty quantification on the corrected predictions). → See the full stack
Why use this?
- The standard UK motor pricing approach (pure premium = E[N] × E[S]) assumes frequency and severity are independent given rating factors — they are not. NCD structure suppresses borderline claims, creating a systematic negative correlation. Vernic, Bolancé & Alemany (2022) found this mismeasurement costs €5–55+ per policyholder; the directional effect in UK motor is the same.
- The Sarmanov copula handles the discrete-continuous mixed margins problem correctly — no probability integral transform approximation for the count margin, which is not well-defined for discrete distributions. The Gaussian copula comparison and Garrido conditional fallback are also included so you can present the methodology choice to a pricing committee.
- IFM estimation: you plug in your already-fitted statsmodels GLM objects. There is no need to refit the marginals — the library estimates the dependence parameter omega on top of your existing models, and returns analytical (closed-form) correction factors per policy at scoring time.
- DependenceTest first: run the permutation test for independence before committing to a correction. If the test does not reject, use the simpler independent model. The benchmark shows that even when omega is not statistically significant, the correction can absorb marginal model error (28.6% MAE improvement on the benchmark DGP).
- Generates a JointModelReport HTML document (omega estimate, CI, Spearman rho, AIC/BIC comparison, correction factor distribution) suitable for a pricing committee or model validation pack.
The problem
Every UK motor pricing team runs two GLMs:
Pure premium = E[N|x] × E[S|x]
This assumes N and S are independent given rating factors x. The assumption is almost certainly wrong. In UK motor, the No Claims Discount structure suppresses borderline claims: policyholders with frequent small claims are aware of the NCD threshold and do not report near-miss incidents. The result is a systematic negative correlation between claim count and average severity.
Vernic, Bolancé, and Alemany (2022) found this mismeasurement amounts to €5–55+ per policyholder on a Spanish auto book. The directional effect in UK motor is the same; the magnitude depends on your book.
This library gives you three methods to measure and correct for it:
- Sarmanov copula (primary): Bivariate Sarmanov distribution for NB/Poisson frequency × Gamma/Lognormal severity. Handles the discrete-continuous mixed margins problem correctly — no probability integral transform approximation needed for the count margin. IFM estimation: you plug in your fitted GLM objects, we estimate omega.
- Gaussian copula (comparison): Standard approach from Czado et al. (2012). Uses a PIT approximation for the discrete margin. Good for presenting rho in familiar terms.
- Garrido conditional (fallback): Adds N as a covariate in the severity GLM. No copula, no new methodology — just a single extra GLM parameter. Works on smaller books where omega estimation would be unreliable.
Installation
```bash
uv add insurance-frequency-severity
```
Expected Performance
Validated on a 30,000-policy synthetic UK motor book with planted positive Sarmanov dependence (omega=3.5). Results from notebooks/databricks_validation.py — pure Sarmanov DGP with known omega, so the IFM estimator targets the planted parameter directly.
Independence vs Sarmanov copula:
| Metric | Independence | Empirical correction factor | Sarmanov copula |
|---|---|---|---|
| MAE vs oracle (lower is better) | baseline | partial | best |
| Portfolio premium bias | -3% to -8% | ~0% | ~0% |
| Segment ranking (high-risk decile) | wrong | wrong | correct |
| Omega recovery | — | — | within 20% |
| Fit time | <1s | <1s | <1s |
- Portfolio bias: The independence model understates aggregate expected loss cost by 3-8% when omega is moderate-positive. This is not a rounding error — it is systematic mispricing that concentrates in your highest-risk accounts.
- Empirical correction factor: Applying a flat portfolio-level scalar fixes the aggregate bias but does not correct the segment-level ordering. High-risk decile policies are still underpriced relative to low-risk ones after the empirical correction.
- Sarmanov copula: Recovers the planted omega within 20% and produces per-policy correction factors that correctly re-rank segments. The correction is analytical (no simulation), so scoring is as fast as the independence model.
- Omega recovery: IFM is asymptotically unbiased on pure Sarmanov data. With 21k training policies and 2k+ claims, the relative error is typically 10-20%.
- Where the bias concentrates: Top risk decile (high-frequency, high-severity commercial risks). The correction factor for the top decile is 1.05-1.15× — 5-15% additional premium needed versus what the independence model charges.
The full validation notebook is at notebooks/databricks_validation.py. The DGP takes 3-5 minutes to generate (30k per-policy Sarmanov samples); the fit itself takes under 1 second.
Quickstart
```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

from insurance_frequency_severity import JointFreqSev, DependenceTest

rng = np.random.default_rng(42)
n_policies = 5000

# Synthetic motor book: claim count and average severity per policy
claim_count = rng.poisson(0.10, size=n_policies)
avg_severity = np.where(
    claim_count > 0,
    rng.gamma(shape=3.0, scale=800.0, size=n_policies),
    np.nan,
)
X = np.column_stack([
    rng.normal(35, 8, n_policies),  # age
    rng.normal(5, 2, n_policies),   # ncb
])
claims_df = pd.DataFrame({
    "claim_count": claim_count,
    "avg_severity": avg_severity,
})

# Fit marginal GLMs
X_df = pd.DataFrame(X, columns=["age", "ncb"])
X_const = sm.add_constant(X_df)
my_nb_glm = sm.GLM(
    claim_count, X_const, family=sm.families.NegativeBinomial(alpha=0.8)
).fit()
claims_mask = claim_count > 0
my_gamma_glm = sm.GLM(
    avg_severity[claims_mask],
    X_const[claims_mask],
    family=sm.families.Gamma(link=sm.families.links.Log()),
).fit()

# Test for dependence first
test = DependenceTest()
test.fit(n=claim_count[claims_mask], s=avg_severity[claims_mask])
print(test.summary())

# Fit joint model — accepts your existing fitted GLMs
model = JointFreqSev(
    freq_glm=my_nb_glm,    # fitted statsmodels NegativeBinomial GLM
    sev_glm=my_gamma_glm,  # fitted statsmodels Gamma GLM
    copula="sarmanov",
)
model.fit(
    claims_df,
    n_col="claim_count",
    s_col="avg_severity",
)

# Check dependence parameter and confidence interval
print(model.dependence_summary())

# Get correction factors for your in-force book
corrections = model.premium_correction()
print(corrections[["mu_n", "mu_s", "correction_factor", "premium_joint"]].describe())
```
GLM compatibility
This library is designed for statsmodels GLM objects. It detects marginal families via model.family (statsmodels convention) and extracts dispersion from model.scale. Non-statsmodels objects with .predict() and .fittedvalues may work, but kernel parameters will be inferred from statsmodels-specific attributes and could silently produce wrong results. For non-statsmodels GLMs, pass parameter dictionaries directly.
```python
# Works with statsmodels GLM results
import numpy as np
import pandas as pd
import statsmodels.api as sm

from insurance_frequency_severity import JointFreqSev

rng = np.random.default_rng(0)
n = 3000
X = pd.DataFrame({"age": rng.normal(35, 8, n), "ncb": rng.normal(5, 2, n)})
X_const = sm.add_constant(X)
y = rng.poisson(0.10, size=n)
claims_mask = y > 0
s = rng.gamma(3.0, 800.0, size=n)

nb_glm = sm.GLM(y, X_const, family=sm.families.NegativeBinomial(alpha=0.8)).fit()
gamma_glm = sm.GLM(
    s[claims_mask],
    X_const[claims_mask],
    family=sm.families.Gamma(link=sm.families.links.Log()),
).fit()

model = JointFreqSev(freq_glm=nb_glm, sev_glm=gamma_glm)
```
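To sanity-check what will be read off your fitted results, you can inspect the same statsmodels attributes described above, using the objects fitted in this snippet:

```python
# Attributes the library inspects, per the statsmodels conventions noted above
print(type(nb_glm.model.family).__name__)  # 'NegativeBinomial' (marginal family)
print(nb_glm.model.family.alpha)           # NB dispersion alpha (0.8 here)
print(gamma_glm.scale)                     # Gamma dispersion via the Pearson scale
```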
Methods
JointFreqSev
```python
model = JointFreqSev(freq_glm, sev_glm, copula="sarmanov")
model.fit(data, n_col, s_col, method="ifm")  # IFM or MLE
model.premium_correction()   # DataFrame with correction factors
model.loss_cost(X_new)       # Corrected pure premium for new data
model.dependence_summary()   # omega, CI, Spearman rho, AIC/BIC

# Note: for copula="gaussian" or "fgm", premium_correction() returns a single
# portfolio-average correction factor applied to all policies. Per-policy
# analytical corrections are available with copula="sarmanov" only.
```
ConditionalFreqSev (Garrido 2016)
```python
from insurance_frequency_severity import ConditionalFreqSev

model = ConditionalFreqSev(freq_glm, sev_glm_base)
model.fit(data, n_col, s_col)
model.premium_correction()  # Correction = exp(gamma) * exp(mu_n * (exp(gamma) - 1))
```
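To make the formula concrete, here it is evaluated at illustrative values (gamma = -0.05 and mu_n = 0.12 are made-up numbers, not outputs of the library):

```python
import numpy as np

# Illustrative values only: gamma from the severity GLM, mu_n = expected claim count
gamma, mu_n = -0.05, 0.12
correction = np.exp(gamma) * np.exp(mu_n * (np.exp(gamma) - 1))
print(round(correction, 4))  # ~0.9457: premium ~5.4% below the independence estimate
```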
Diagnostics
```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

from insurance_frequency_severity import DependenceTest, compare_copulas, JointFreqSev

rng = np.random.default_rng(0)
n_policies = 5000
n = rng.poisson(0.10, size=n_policies)
s = np.where(n > 0, rng.gamma(3.0, 800.0, size=n_policies), np.nan)
X = pd.DataFrame({"age": rng.normal(35, 8, n_policies)})
X_const = sm.add_constant(X)

freq_glm = sm.GLM(n, X_const, family=sm.families.Poisson()).fit()
claims_mask = n > 0
sev_glm = sm.GLM(
    s[claims_mask],
    X_const[claims_mask],
    family=sm.families.Gamma(link=sm.families.links.Log()),
).fit()
n_positive = n[claims_mask]
s_positive = s[claims_mask]

# Test independence
test = DependenceTest(n_permutations=1000)
test.fit(n_positive, s_positive)
print(test.summary())  # Kendall tau, Spearman rho, permutation p-values

# AIC/BIC comparison across copula families
comparison = compare_copulas(n, s, freq_glm, sev_glm)
print(comparison)  # Sorted by AIC: sarmanov, gaussian, fgm
```
Report
```python
from insurance_frequency_severity import JointModelReport

# Continues from the snippets above: model and corrections come from the
# Quickstart; test and comparison come from Diagnostics.
report = JointModelReport(model, dependence_test=test, copula_comparison=comparison)
report.to_html(
    "pricing_review.html",
    n=n,
    s=s,
    correction_df=corrections,
)
```
Premium correction interpretation
The correction factor is E[N×S] / (E[N] × E[S]):

- < 1.0: negative dependence. High-count policyholders have lower severity than independence predicts; the independence model overstates their risk.
- = 1.0: independence holds.
- > 1.0: positive dependence. Rare but valid — e.g., some commercial lines where large customers have both high frequency and high severity.
For UK motor with typical NCD structure, expect the average correction to be 0.93–0.98 (independence overstates the pure premium by 2–7% on average, with larger corrections at the high-frequency tail).
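A worked example with illustrative numbers (E[N] = 0.10, E[S] = £2,400, correction factor 0.95):

```python
expected_frequency = 0.10
expected_severity = 2400.0  # GBP
correction_factor = 0.95    # illustrative negative-dependence correction

independence_premium = expected_frequency * expected_severity  # £240.00
corrected_premium = independence_premium * correction_factor   # £228.00
# Independence overstates this policy's pure premium by £12.
```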
Theoretical background
The Sarmanov bivariate distribution:
f(n, s) = f_N(n) × f_S(s) × [1 + ω × φ₁(n) × φ₂(s)]
where φ₁, φ₂ are bounded kernel functions with zero mean under their respective marginals. When ω=0 this reduces to the product of marginals (independence). The key advantage over standard copulas: no probability integral transform is needed for the discrete frequency margin. Sklar's theorem is not unique for discrete distributions, so the "copula" of a discrete-continuous pair is not well-defined. The Sarmanov family sidesteps this entirely by working directly with the joint distribution.
Spearman's rho range for the Laplace kernel Sarmanov with NB/Gamma margins: [-3/4, 3/4] (Blier-Wong 2026). This comfortably accommodates the moderate negative dependence found in auto insurance data.
The IFM (Inference Functions for Margins) estimator:
1. Fit the frequency GLM → get E[N|xᵢ] for each policy.
2. Fit the severity GLM → get E[S|xᵢ] for each claiming policy.
3. Profile the likelihood over ω: maximise Σᵢ log[1 + ω × φ₁(nᵢ; μ̂ᴺᵢ) × φ₂(sᵢ; μ̂ˢᵢ)] over observed pairs (nᵢ, sᵢ) with nᵢ > 0.
Zero-claim policies contribute no severity information; their likelihood contribution is just f_N(0), which does not depend on ω. So only observed claims inform the dependence estimate.
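For illustration, here is a minimal sketch of step 3 as a profile likelihood over ω, assuming exponential-type kernels with the severity scaled by its fitted mean; the library's actual kernel construction may differ:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def profile_omega(n_obs, s_obs, mu_n, mu_s, nb_alpha, gamma_shape):
    """Estimate omega by profile likelihood with marginals held fixed (IFM step 3)."""
    # Count kernel: phi1(n) = e^{-n} - E[e^{-N}], zero-mean under NB(mu, alpha).
    r = 1.0 / nb_alpha
    p = r / (r + mu_n)
    pgf = (p / (1.0 - (1.0 - p) * np.exp(-1.0))) ** r  # NB pgf evaluated at e^{-1}
    phi1 = np.exp(-n_obs) - pgf
    # Severity kernel on S/mu: phi2(s) = e^{-s/mu} - E[e^{-S/mu}];
    # for Gamma(shape k, mean mu), E[e^{-S/mu}] = (1 + 1/k)^{-k}.
    lap = (1.0 + 1.0 / gamma_shape) ** (-gamma_shape)
    phi2 = np.exp(-s_obs / mu_s) - lap
    prod = phi1 * phi2

    def negloglik(omega):
        terms = 1.0 + omega * prod
        if np.any(terms <= 0):  # omega must keep the joint density positive
            return np.inf
        return -np.log(terms).sum()

    res = minimize_scalar(negloglik, bounds=(-50.0, 50.0), method="bounded")
    return res.x
```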
Data requirements
Stable ω estimation needs approximately 20,000 policyholder-years with at least 2,000 claims. Smaller portfolios will produce wide confidence intervals on ω. The library warns you at < 1,000 policies and < 500 claims.
For small books, use ConditionalFreqSev — it estimates a single parameter γ from the severity GLM refitted with N as a covariate, which is more stable with less data.
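For intuition, the γ that ConditionalFreqSev estimates can be reproduced in plain statsmodels by refitting the severity GLM with the claim count as an extra covariate (reusing the Quickstart objects; this sketches the method, not the library's internals):

```python
# Refit the severity GLM with claim count as a covariate; gamma is its coefficient.
X_sev = X_const[claims_mask].copy()
X_sev["n_claims"] = claim_count[claims_mask]
garrido_glm = sm.GLM(
    avg_severity[claims_mask],
    X_sev,
    family=sm.families.Gamma(link=sm.families.links.Log()),
).fit()
gamma_hat = garrido_glm.params["n_claims"]
print(gamma_hat)  # negative under NCD-style claim suppression
```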
Performance
Benchmarked against an independent two-part model (Poisson GLM × Gamma GLM, pure premium = E[N] × E[S]) on 12,000 synthetic UK motor policies (8,437 train / 3,563 test) with known positive freq-sev dependence via a latent risk score. Results from benchmarks/benchmark_insurance_frequency_severity.py run 2026-03-16.
| Metric | Independent model | Sarmanov copula | Change |
|---|---|---|---|
| Pure premium MAE vs oracle | 14.8405 | 10.6010 | -28.6% |
| Portfolio total premium bias | +22.95% | -6.77% | -16.2pp |
| Estimated Spearman rho | 0.000 | -0.015 | — |
| Fit time (seconds) | 0.105 | 0.128 | +21% |
Correction factors: mean 0.943, p10 0.939, p90 0.950. High-risk decile correction: 0.952 (a 4.8% premium reduction vs independence). Low-risk decile: 0.940.
Note on omega sign: The benchmark DGP uses a positive latent risk score (z) to drive both higher frequency and severity. The fitted omega is -1.14 (Spearman rho ≈ -0.015), meaning the library detected negative-leaning dependence on this sample. The 95% CI on omega is (-1.61, +0.30), which includes zero — independence is not rejected at 5%. Despite this, the correction produces a 28.6% MAE improvement and reduces portfolio bias from +22.95% to -6.77%. This is explained by the correction absorbing some of the marginal model error: the GLMs slightly overpredict frequency for high-latent-risk policies, and the copula correction partially offsets this.
The canonical use case is a portfolio where omega is positive and statistically significant. Use DependenceTest before fitting to check whether the correction is supported by the data.
When to use: Personal lines motor or property books where DependenceTest indicates positive and statistically significant freq-sev dependence. The correction is analytical (closed-form, no simulation at scoring time).
When NOT to use: When you cannot reject independence (DependenceTest p-value > 0.05). Also when the book has very few claims (< 500) — the omega estimate will be too noisy.
Databricks Notebook
A validation notebook with known-DGP omega recovery and premium comparison is at notebooks/databricks_validation.py. A broader benchmark notebook is at notebooks/benchmark.py. Both run on Databricks serverless compute with no external data required.
Limitations
- Stable omega estimation requires approximately 20,000 policyholder-years with at least 2,000 observed claims. Smaller books produce wide confidence intervals on the dependence parameter. Always run `DependenceTest` before fitting — if independence cannot be rejected (p > 0.05), do not apply corrections.
- The Sarmanov IFM estimator uses only policies with at least one observed claim. Zero-claim policies contribute no information about the frequency-severity dependence parameter. If your zero-claim rate is above 90%, the effective estimation sample is very small relative to total book size.
- Per-policy analytical corrections are available only with `copula="sarmanov"`. The Gaussian and FGM copulas return a single portfolio-average correction factor. If heterogeneous corrections by risk segment matter, Sarmanov is the only option.
- The library wraps statsmodels GLM objects. Non-statsmodels models may work via `.predict()`, but kernel parameters are inferred from statsmodels-specific attributes. For non-statsmodels GLMs, pass parameter dictionaries directly and validate the kernel construction manually.
- The premium correction `E[N×S] / (E[N] × E[S])` is computed at scoring time and not recalibrated as the portfolio evolves. If the NCD suppression effect changes (e.g., the NCD scale is restructured), re-estimate omega on recent data. Stale corrections can move in the wrong direction.
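A quick pre-flight check for the zero-claim concern, in plain numpy (reusing `claim_count` from the Quickstart):

```python
import numpy as np

zero_rate = float(np.mean(claim_count == 0))
claiming_policies = int(np.sum(claim_count > 0))
print(f"zero-claim rate {zero_rate:.1%}; {claiming_policies} claiming policies")
# If zeros exceed roughly 90%, the effective sample for omega is small;
# consider ConditionalFreqSev instead.
```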
References
- Vernic, Bolancé, Alemany (2022). Sarmanov distribution for modeling dependence between the frequency and the average severity of insurance claims. Insurance: Mathematics and Economics, 102, 111–125.
- Garrido, Genest, Schulz (2016). Generalized linear models for dependent frequency and severity of insurance claims. Insurance: Mathematics and Economics, 70, 205–215.
- Lee, Shi (2019). A dependent frequency-severity approach to modeling longitudinal insurance claims. Insurance: Mathematics and Economics, 87, 115–129.
- Blier-Wong (2026). Spearman rho range for Sarmanov copulas. arXiv:2601.09016.
- Czado, Kastenmeier, Brechmann, Min (2012). A mixed copula model for insurance claims and claim sizes. Scandinavian Actuarial Journal, 4, 278–305.
Built by Burning Cost. MIT licence.
Related Libraries
| Library | What it does |
|---|---|
| insurance-dispersion | Double GLM for covariate-driven dispersion — models heterogeneous variance within each component |
| insurance-severity | Heavy-tail severity with composite Pareto models and ILFs — use for the severity component when tails matter |
| insurance-quantile | Quantile GBM for tail risk — non-parametric complement when the full distributional structure is uncertain |
Community
- Questions? Start a Discussion
- Found a bug? Open an Issue
- Blog & tutorials: burning-cost.github.io
If this library saves you time, a star on GitHub helps others find it.
Part of the Burning Cost Toolkit
Open-source Python libraries for UK personal lines insurance pricing. Browse all libraries
| Library | Description |
|---|---|
| insurance-conformal | Distribution-free prediction intervals — FrequencySeverityConformal provides joint f/s coverage guarantees |
| insurance-credibility | Bühlmann-Straub credibility — blends frequency and severity estimates for thin segments |
| insurance-causal | DML causal inference — establishes whether frequency-severity dependence is causal or driven by observed confounders |
| insurance-monitoring | Model drift detection — monitors frequency and severity component calibration separately over time |
| insurance-governance | Model validation and MRM governance — produces the sign-off pack for joint frequency-severity models |
Training Course
Want structured learning? Insurance Pricing in Python is a 12-module course covering the full pricing workflow. Module 4 covers frequency-severity modelling — Poisson/Gamma split, Sarmanov copulas, and joint prediction intervals. £97 one-time.