# insurance-synthetic

Generate synthetic insurance portfolio data using vine copulas.
## The problem
UK pricing teams frequently need realistic insurance data they cannot actually share:
- Vendor demos require a motor portfolio with the right marginals and correlations, but you can't hand over policyholder data
- Model benchmarking across teams needs a common dataset that doesn't exist
- Privacy regulations mean actuarial science students and researchers rarely see real claims data
Generic synthetic data tools (SDV, CTGAN, TVAE) generate plausible-looking rows, but they don't understand insurance structure. They produce synthetic portfolios where claim counts are independent of exposure, young drivers don't correlate with zero NCD, and severity distributions have the wrong tail shape. A model trained on that synthetic data won't generalise to real portfolios.
This library solves that.
## What it does
`insurance-synthetic` generates synthetic portfolios using R-vine copulas (via `pyvinecopulib`):

- **Marginal fitting**: each column gets the best-fitting marginal by AIC — Gamma, LogNormal, Poisson, NegBin, Normal, Beta, or categorical encoding
- **PIT transform**: every column is mapped to uniform [0, 1] via its fitted CDF
- **Vine copula**: pairwise dependencies (including tail dependence) are captured by a fitted R-vine
- **Generation**: sample from the vine, invert through the marginals, then regenerate frequency as `Poisson(λ × exposure)` to preserve the exposure relationship
The vine copula matters for insurance. A Gaussian copula misses tail dependence — the fact that young driver + high vehicle group + zero NCD is more dangerous than the marginal risks suggest. Clayton and Gumbel copulas capture this. Pyvinecopulib selects the best bivariate family for each pair automatically.
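The PIT step is easiest to see with a marginal whose CDF has a closed form. Below is a dependency-free sketch using an exponential marginal; this is illustrative only, since the library fits the family by AIC and pushes values through whichever fitted CDF wins:

```python
import math
import random

def pit_exponential(xs):
    """Probability integral transform: fit an exponential marginal by
    maximum likelihood (rate = 1/mean), then push each value through
    its CDF so the transformed sample is approximately Uniform(0, 1)."""
    mean = sum(xs) / len(xs)
    return [1.0 - math.exp(-x / mean) for x in xs]

random.seed(0)
claims = [random.expovariate(1 / 2500.0) for _ in range(10_000)]
u = pit_exponential(claims)
# u is roughly uniform on [0, 1]: its sample mean sits close to 0.5
```

The uniform columns produced this way are what the vine copula is fitted on; generation runs the same mapping in reverse via the marginal's quantile function.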
## Installation

```bash
pip install insurance-synthetic

# With TSTR fidelity scoring (requires CatBoost):
pip install "insurance-synthetic[fidelity]"
```

Requires Python 3.10+.
## Quick start

```python
import polars as pl
from insurance_synthetic import InsuranceSynthesizer, SyntheticFidelityReport

# Fit on your real portfolio
synth = InsuranceSynthesizer(random_state=42)
synth.fit(
    real_df,
    exposure_col='exposure',
    frequency_col='claim_count',
    severity_col='claim_amount',
)
synth.summary()

# Generate 50,000 synthetic policies
synthetic_df = synth.generate(50_000, constraints={
    'driver_age': (17, 90),
    'ncd_years': (0, 25),
    'exposure': (0.01, 1.0),
})

# Measure fidelity
report = SyntheticFidelityReport(
    real_df, synthetic_df,
    exposure_col='exposure',
    target_col='claim_count',
)
print(report.to_markdown())
```
## UK motor schema

The library ships a pre-built column specification for a UK private motor portfolio:

```python
from insurance_synthetic import uk_motor_schema

schema = uk_motor_schema()
# {
#     'columns': [ColumnSpec(name='driver_age', dtype='int', min_val=17, max_val=90), ...],
#     'constraints': {'driver_age': (17, 90), 'exposure': (0.01, 1.0), ...},
#     'description': 'UK private motor portfolio schema. ...'
# }
```

Columns: `driver_age`, `vehicle_age`, `vehicle_group`, `region`, `ncd_years`, `cover_type`, `payment_method`, `annual_mileage`, `exposure`, `claim_count`, `claim_amount`.
## Fidelity metrics

`SyntheticFidelityReport` measures synthesis quality at three levels (marginals, dependence structure, and downstream model performance):

| Metric | What it checks | Target |
|---|---|---|
| KS statistic | Marginal distribution per column | < 0.05 is excellent |
| Wasserstein distance | Marginal shape (normalised by std) | < 0.1 is good |
| Spearman Frobenius | Correlation matrix distance | Lower is better |
| TVaR ratio | Tail risk preservation at 99th pct | ≈ 1.0 |
| Exposure-weighted KS | Marginal fidelity weighted by policy year | < 0.05 is excellent |
| TSTR Gini gap | Train-on-Synthetic, Test-on-Real | ≈ 0.0 |
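As a concrete example of the tail metric: TVaR at the 99th percentile is the mean loss at or beyond that quantile, and the ratio compares synthetic to real. A dependency-free sketch (the `tvar` helper here is illustrative, not the report's implementation):

```python
def tvar(losses, pct=0.99):
    """Tail Value-at-Risk: the mean loss in the worst (1 - pct) tail."""
    xs = sorted(losses)
    cut = int(pct * len(xs))       # index of the pct-quantile
    tail = xs[cut:]                # losses at or beyond the quantile
    return sum(tail) / len(tail)

# A toy portfolio whose extreme loss the synthesizer undershoots slightly
real = [100.0] * 98 + [5_000.0, 50_000.0]
synthetic = [100.0] * 98 + [5_000.0, 45_000.0]

ratio = tvar(synthetic) / tvar(real)  # a ratio near 1.0 means the tail is preserved
```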
The TSTR Gini gap is the most demanding test: if a CatBoost model trained on synthetic data scores within a small margin of one trained on real data, the synthetic portfolio is genuinely useful for pricing model development.
```python
# Requires insurance-synthetic[fidelity]
gini_gap = report.tstr_score(test_fraction=0.2, catboost_iterations=200)
print(f"TSTR Gini gap: {gini_gap:.4f}")  # target: near 0
```
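Gini here is the usual pricing metric, Gini = 2·AUC − 1. A pure-Python sketch of that computation from model scores (illustrative; how the report derives it from the CatBoost predictions is internal):

```python
def gini(y_true, scores):
    """Gini = 2*AUC - 1, where AUC is the probability that a randomly
    chosen positive outranks a randomly chosen negative (ties count half)."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    auc = wins / (len(pos) * len(neg))
    return 2.0 * auc - 1.0
```

The TSTR gap is then the difference between this score for a model trained on real data and one trained on synthetic data, both evaluated on held-out real rows.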
## API reference

### InsuranceSynthesizer

```python
InsuranceSynthesizer(
    method='vine',      # 'vine' | 'gaussian'
    marginals='auto',   # 'auto' | dict of column -> scipy family name
    family_set='all',   # pyvinecopulib family set
    trunc_lvl=None,     # vine truncation level (None = full)
    n_threads=1,
    random_state=None,
)

.fit(df, exposure_col, frequency_col, severity_col, categorical_cols, discrete_cols)
.generate(n, constraints, max_resample_attempts)  # → pl.DataFrame
.summary()      # → str
.get_params()   # → dict
```
### fit_marginal

Standalone function for fitting a single column:

```python
from insurance_synthetic import fit_marginal

m = fit_marginal(series, family='auto')  # or 'gamma', 'lognorm', 'norm', etc.
m.cdf(values)    # → np.ndarray of probabilities
m.ppf(probs)     # → np.ndarray of values
m.rvs(100)       # → np.ndarray of random samples
m.family_name()  # → 'gamma', 'lognorm', etc.
m.aic            # → float
```
### SyntheticFidelityReport

```python
report = SyntheticFidelityReport(real_df, synthetic_df, exposure_col, target_col)

report.marginal_report()          # pl.DataFrame — KS, Wasserstein per column
report.correlation_report()       # pl.DataFrame — Spearman comparison
report.tvar_ratio(col, pct=0.99)  # float
report.exposure_weighted_ks(col)  # float
report.tstr_score(...)            # float — requires [fidelity]
report.to_markdown()              # str
```
## Design decisions
**Why vine copulas over CTGAN?** CTGAN requires a GPU for reasonable training times, is a black box, and tends to overfit small portfolios. Vine copulas are fast, interpretable (you can inspect which bivariate families were selected), and scale well to 10k–1M row portfolios. They also have decades of actuarial literature behind them.
**Why Polars?** All our tooling is Polars-first. Pandas DataFrames are not accepted as input — if you have pandas, convert first with `pl.from_pandas(df)`.
**Why AIC marginal selection?** AIC penalises model complexity, which matters with small portfolios (a few thousand rows) where BIC and likelihood ratio tests can be fooled. For large portfolios, the choice of information criterion rarely matters.
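The selection rule itself is just "lowest AIC wins". A minimal hand-rolled illustration comparing two candidate marginals (exponential vs. normal) on skewed severity data; the library's `fit_marginal` does this across its full family list:

```python
import math
import random

def aic_exponential(xs):
    # Exponential MLE: rate = 1/mean; AIC = 2k - 2*loglik with k = 1
    mean = sum(xs) / len(xs)
    loglik = sum(-math.log(mean) - x / mean for x in xs)
    return 2 * 1 - 2 * loglik

def aic_normal(xs):
    # Normal MLE: sample mean and variance; k = 2 parameters
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    loglik = sum(
        -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)
        for x in xs
    )
    return 2 * 2 - 2 * loglik

random.seed(1)
claims = [random.expovariate(1 / 2000.0) for _ in range(5_000)]

# Heavily right-skewed severity data should prefer the exponential fit
best = min(
    [("exponential", aic_exponential(claims)), ("normal", aic_normal(claims))],
    key=lambda pair: pair[1],
)[0]
```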
**Why exposure-aware frequency generation?** The standard approach of inverting through the frequency marginal ignores the exposure offset. A policy with 0.1 years of exposure and a policy with 1.0 years should have different expected claim counts even if they're otherwise identical. Our approach draws Poisson(λ × exposure) where λ is the fitted rate, preserving this relationship in the synthetic data.
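A dependency-free sketch of the exposure-aware draw, using Knuth's Poisson sampler so the example stays stdlib-only (the rate `lam` is a hypothetical fitted value, and the library's own sampling path differs):

```python
import math
import random

def poisson_draw(lam, rng):
    """Knuth's Poisson sampler: multiply uniforms until the running
    product drops below exp(-lam); the count of multiplications minus
    one follows Poisson(lam)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(42)
lam = 0.12  # hypothetical fitted annual claim rate

# Scale the rate by each policy's exposure before sampling
full_year = [poisson_draw(lam * 1.0, rng) for _ in range(50_000)]
tenth_year = [poisson_draw(lam * 0.1, rng) for _ in range(50_000)]
# Mean claim counts scale with exposure: ~0.12 vs ~0.012 per policy
```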
## Running tests

Tests run on Databricks — the package targets environments with pyvinecopulib installed. See the Databricks notebook in notebooks/ for a full end-to-end demo.

```bash
# On a machine with the dependencies installed:
pytest tests/ -v
```
## Licence

MIT. See LICENSE.