Tools for creating and working with aggregate probability distributions.
Reason this release was yanked:
Bad configuration
Project description
aggregate: a powerful aggregate distribution modeling library in Python
Purpose
aggregate solves insurance, risk management, and actuarial problems using realistic models that reflect underlying frequency and severity. It delivers the speed and accuracy of parametric distributions to situations that usually require simulation, making it as easy to work with an aggregate (compound) probability distribution as with the lognormal. aggregate includes an expressive language called DecL to describe aggregate distributions and is implemented in Python under an open-source BSD license.
Documentation
Where to get it
Installation
```sh
pip install aggregate
```
Version History
0.17.0 (July 2023)
more added as a proper method
Fixed debugfile in parser.py, which stopped installation if not None (need to ensure the directory exists)
Fixed build and MANIFEST to remove build warning
parser: semicolon is no longer mapped to newline; it is now used to provide hints in notes
recommend_bucket uses p=max(p, 1-1e-8) if limit=inf. Default increased from 0.999 to 0.99999 based on examples; works well for limited severity but not well for unlimited severity.
Implemented calculation hints in note strings. The format is semicolon-separated k=v pairs; the keys bs, log2, padding, recommend_p, and normalize are recognized. If present, they are used when no explicit arguments are passed to build.
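The hint format is easy to picture with a small standalone sketch. This is not the library's parser; the helper name and the string handling here are illustrative only:

```python
# Illustrative sketch only -- not the library's parser. Pulls "k=v" pairs
# out of a semicolon-separated note string, keeping recognized hint keys.
RECOGNIZED = {'bs', 'log2', 'padding', 'recommend_p', 'normalize'}

def parse_hints(note):
    hints = {}
    for part in note.split(';'):
        if '=' in part:
            k, v = part.split('=', 1)
            if k.strip() in RECOGNIZED:
                hints[k.strip()] = v.strip()
    return hints

print(parse_hints('bs=1/128; log2=16; written by me'))
# -> {'bs': '1/128', 'log2': '16'}
```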
Added interpreter_test_suite() to Underwriter to run the test suite
Added test_suite_file to Underwriter to return the Path to the test_suite.agg file
Layers, attachments, and the reinsurance tower can now be ranges, [s:f:j] syntax
0.16.1 (July 2023)
IDs can now include dashes: Line-A is a legitimate ID
Include templates and test-cases.agg file in the distribution
Fixed the mixed severity / limit profile interaction. Mixtures now work with exposure defined by losses and premium (as opposed to just claim count) and correctly account for excess layers, which requires re-weighting the mixture components. The fix determines the ground-up severity and uses it to adjust the weights first; then, by layer, it figures the severity and converts exposure to claim count if necessary. Cases where there is no loss in the layer (a high layer from a low-mean / low-volatility component) are replaced by zero. Use logging level 20 for more details.
Added more function to Portfolio, Aggregate, and Underwriter classes. Given a regex, it returns all matching methods and attributes; it tries to call each method with no arguments and reports the result. more is defined in utilities and can be applied to any object.
Moved the work of qt from utilities into Aggregate (where it belongs). Retained qt for backwards compatibility.
Parser: changed power <- atom ** factor to power <- factor ** factor to allow (1/2)**(3/4)
random module renamed random_agg to avoid a conflict with the Python random module
Implemented exact moments for exponential (special case of gamma) because MED is a common distribution and computing analytic moments is very time consuming for large mixtures.
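The exponential shortcut is easy to sanity-check: an exponential with mean m is a gamma with shape 1, and its raw moments are E[X^n] = n! m^n. A quick numerical verification of that identity (illustrative, not library code):

```python
import math

mean = 2.0  # exponential scale parameter

def exact_moment(n):
    # gamma(shape=1) special case: E[X^n] = n! * mean**n
    return math.factorial(n) * mean ** n

def numeric_moment(n, upper=60.0, steps=100000):
    # midpoint-rule integration of x**n * pdf(x); the tail beyond
    # `upper` is negligible for small n
    h = upper / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        total += x ** n * math.exp(-x / mean) / mean * h
    return total

# both values are close to 3! * 2**3 = 48
print(exact_moment(3), numeric_moment(3))
```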
Added ZM and ZT examples to test_cases.agg; adjusted Portfolio examples to be on one line so they run through interpreter_file tests.
0.16.0 (June 2023)
Implemented ZM and ZT distributions using decorators!
Added panjer_ab to Frequency; it reports the a and b values in p_k = (a + b / k) p_{k-1}. These values can be tested by computing implied a and b values from r_k = k p_k / p_{k-1} = ak + b; the differences of successive r_k equal a, and b then follows easily.
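The check described above can be carried out directly. For a Poisson(λ), p_k = (λ/k) p_{k-1}, so a = 0 and b = λ, and the implied values recover this. A sketch of the test, not the library's panjer_ab:

```python
import math

lam = 3.0
# Poisson pmf by the (a, b) recursion itself: p_k = (lam / k) * p_{k-1}
p = [math.exp(-lam)]
for k in range(1, 10):
    p.append(p[-1] * lam / k)

# implied values: r_k = k p_k / p_{k-1} = a k + b, so successive
# differences of r_k equal a and b follows from r_1 = a + b
r = [k * p[k] / p[k - 1] for k in range(1, 10)]
a = r[1] - r[0]   # 0 for Poisson
b = r[0] - a      # lam
print(a, b)       # 0.0 3.0
```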
Added freq_dist(log2) option to Freq to return the frequency distribution stand-alone
Added negbin frequency where freq_a equals the variance multiplier
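A variance multiplier v (so the variance is v times the mean n) translates to the textbook negative binomial (r, p) form as p = 1/v and r = n/(v − 1); this mapping is an assumption here, not taken from the library's source. A numerical check of the mapping:

```python
import math

n_mean, v = 5.0, 2.0          # target mean and variance multiplier
p = 1 / v                     # assumed mapping to (r, p) form
r = n_mean * p / (1 - p)      # = n_mean / (v - 1)

def pmf(k):
    # negative binomial pmf, computed in log space to avoid overflow
    return math.exp(math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1)
                    + r * math.log(p) + k * math.log(1 - p))

ks = range(400)
m = sum(k * pmf(k) for k in ks)
var = sum(k * k * pmf(k) for k in ks) - m * m
print(m, var)   # mean 5, variance v * mean = 10
```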
0.15.0 (June 2023)
Added pygments lexer for DecL (called agg, aggregate, dec, or decl)
Added to the documentation
using pygments style in pprint_ex html mode
removed old setup scripts and files and stack.md
0.14.1 (June 2023)
Added scripts.py for entry points
Updated .readthedocs.yaml to build from toml not requirements.txt
Fixes to documentation
Portfolio.tvar_threshold updated to use scipy.optimize.bisect
Added kaplan_meier to utilities to compute product limit estimator survival function from censored data. This applies to a loss listing with open (censored) and closed claims.
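The product-limit idea is compact: at each loss amount with d closed claims out of n still at risk, survival multiplies by (1 − d/n), and open claims only reduce the at-risk count. A minimal standalone sketch, not the library's kaplan_meier:

```python
from collections import Counter

def kaplan_meier(times, censored):
    """Product-limit survival estimate; censored[i]=True means an open claim."""
    events = Counter(t for t, c in zip(times, censored) if not c)  # closed claims
    exits = Counter(times)                                         # all exits
    at_risk, surv, out = len(times), 1.0, {}
    for t in sorted(set(times)):
        d = events.get(t, 0)
        if d:
            surv *= 1 - d / at_risk
            out[t] = surv
        at_risk -= exits[t]
    return out

# open claim at 2 only reduces the at-risk count; survival steps at 1 and 3
print(kaplan_meier([1, 2, 3], [False, True, False]))
```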
Renamed doc directory to docs
Enhanced make_var_tvar for cases where all probabilities are equal, using linspace rather than cumsum.
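The equal-probability shortcut works because the cumulative probabilities are exactly 1/n, 2/n, …, 1, so np.linspace reproduces np.cumsum without accumulating floating-point error:

```python
import numpy as np

n = 5
probs = np.full(n, 1 / n)
# linspace gives the same cumulative probabilities as cumsum,
# computed directly rather than by accumulation
print(np.allclose(np.cumsum(probs), np.linspace(1 / n, 1, n)))  # True
```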
0.13.0 (June 4, 2023)
Updated Portfolio.price to implement allocation='linear' and allow a dictionary of distortions
ordered='strict' is now the default for Portfolio.calibrate_distortions
Pentagon can return a namedtuple, and solve no longer returns a dataframe (it has no return value)
Added random.py module to hold random state. Incorporated into
Utilities: Iman Conover (ic_noise permutation) and rearrangement algorithms
Portfolio sample
Aggregate sample
Spectral bagged_distortion
Portfolio added n_units property
Portfolio simplified __repr__
Added block_iman_conover to utilities. Note tester code in the documentation. Very Nice! 😁😁😁
New VaR, quantile and TVaR functions: 1000x speedup and more accurate. Builder function in utilities.
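Vectorized lookup over a precomputed cdf is the standard way to get this kind of speedup; the sketch below illustrates the technique under that assumption and is not the library's builder function:

```python
import numpy as np

# precomputed outcomes and their cdf, as from a density dataframe
x = np.array([0.0, 1.0, 2.0, 3.0])
F = np.array([0.1, 0.5, 0.9, 1.0])

def q(p):
    """Lower quantile: smallest x with F(x) >= p, via binary search."""
    return x[np.searchsorted(F, p)]

print(q(0.5), q(0.95))   # 1.0 3.0
```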
pyproject.toml project specification; the updated build process now creates a whl file rather than an egg file.
0.12.0 (May 2023)
add_exa_sample becomes method of Portfolio
Added create_from_sample method to Portfolio
Added bodoff method to compute layer capital allocation to Portfolio
Improved validation error reporting
extensions.samples module deleted
Added spectral.approx_ccoc to create a ct approx to the CCoC distortion
qdp moved to utilities (describe plus some quantiles)
Added Pentagon class in extensions
Earlier versions
See github commit notes.
Version numbers follow semantic versioning, MAJOR.MINOR.PATCH:
MAJOR version changes with incompatible API changes.
MINOR version changes with added functionality in a backwards compatible manner.
PATCH version changes with backwards compatible bug fixes.
Getting started
To get started, import build. It provides easy access to all functionality.
Here is a model of the sum of three dice rolls. The DataFrame describe compares exact mean, CV and skewness with the aggregate computation for the frequency, severity, and aggregate components. Common statistical functions like the cdf and quantile function are built-in. The whole probability distribution is available in a.density_df.
```python
from aggregate import build, qd

a = build('agg Dice dfreq [3] dsev [1:6]')
qd(a)
```

```
>>>       E[X]  Est E[X]     Err E[X]    CV(X)  Est CV(X)    Err CV(X)  Skew(X)  Est Skew(X)
>>> X
>>> Freq     3                                                                0
>>> Sev    3.5       3.5            0  0.48795    0.48795  -3.3307e-16        0   2.8529e-15
>>> Agg   10.5      10.5  -3.3307e-16  0.28172    0.28172  -8.6597e-15        0  -1.5813e-13
```
```python
print(f'\nProbability sum < 12 = {a.cdf(12):.3f}\nMedian = {a.q(0.5):.0f}')
```

```
>>> Probability sum < 12 = 0.741
>>> Median = 10
```
aggregate can use any scipy.stats continuous random variable as a severity, and supports all common frequency distributions. Here is a compound-Poisson with lognormal severity, mean 50 and cv 2.
```python
a = build('agg Example 10 claims sev lognorm 50 cv 2 poisson')
qd(a)
```

```
>>>       E[X]  Est E[X]    Err E[X]    CV(X)  Est CV(X)  Err CV(X)  Skew(X)  Est Skew(X)
>>> X
>>> Freq    10                        0.31623                        0.31623
>>> Sev     50    49.888  -0.0022464        2    1.9314  -0.034314        14      9.1099
>>> Agg    500    498.27  -0.0034695  0.70711   0.68235  -0.035007   3.5355      2.2421
```

```python
# cdf and quantiles
print(f'Pr(X<=500)={a.cdf(500):.3f}\n0.99 quantile={a.q(0.99)}')
```

```
>>> Pr(X<=500)=0.611
>>> 0.99 quantile=1727.125
```
See the documentation for more examples.
Dependencies
See requirements.txt.
Install from source
```sh
git clone --no-single-branch --depth 50 https://github.com/mynl/aggregate.git .
git checkout --force origin/master
git clean -d -f -f
python -m virtualenv ./venv   # ./venv/Scripts on Windows
./venv/bin/python -m pip install --exists-action=w --no-cache-dir -r requirements.txt
# to create help files
./venv/bin/python -m pip install --upgrade --no-cache-dir pip "setuptools<58.3.0"
./venv/bin/python -m pip install --upgrade --no-cache-dir pillow mock==1.0.1 "alabaster>=0.7,<0.8,!=0.7.5" commonmark==0.9.1 recommonmark==0.5.0 "sphinx<2" "sphinx-rtd-theme<0.5" "readthedocs-sphinx-ext<2.3" "jinja2<3.1.0"
```
Note: options from readthedocs.org script.
License
BSD 3-clause license.
Help and contributions
Limited help available. Email me at help@aggregate.capital.
All contributions, bug reports, bug fixes, documentation improvements, enhancements and ideas are welcome. Create a pull request on github and/or email me.
Social media: https://www.reddit.com/r/AggregateDistribution/.
Project details
Hashes for aggregate-0.17.0-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | abe563b749ec28032513d13cd7dfe7954c0e528f1a7f073d758e3957b080cb30
MD5 | 2983af3ed197c5ce92bc4d57930f0615
BLAKE2b-256 | bcc2dcfba6b2b5813bb44837029209ba561b0ab57dfbbf42d1bb510e283b6a10