A Library for Uncertainty Quantification

Proper estimation of predictive uncertainty is fundamental in applications that involve critical decisions. Uncertainty can be used to assess reliability of model predictions, trigger human intervention, or decide whether a model can be safely deployed in the wild.

Fortuna is a library for uncertainty quantification that makes it easy for users to run benchmarks and bring uncertainty to production systems. Fortuna provides calibration and conformal methods starting from pre-trained models written in any framework, and it further supports several Bayesian inference methods starting from deep learning models written in Flax. Its interface is designed to be intuitive for practitioners unfamiliar with uncertainty quantification, and is highly configurable.

Check the documentation for a quickstart, examples and references.

Usage modes

Fortuna offers three different usage modes: From uncertainty estimates, From model outputs and From Flax models. These serve users according to the constraints dictated by their own applications. Their pipelines are depicted in the following figure, each starting from one of the green panels.

https://github.com/awslabs/fortuna/raw/main/docs/source/_static/pipeline.png

From uncertainty estimates

Starting from uncertainty estimates has minimal compatibility requirements, and it is the quickest way to interact with the library. This usage mode offers conformal prediction methods for both classification and regression. These take uncertainty estimates as input, and return rigorous sets of predictions that contain the true target with a user-specified probability. In one-dimensional regression tasks, conformal sets may be thought of as calibrated versions of confidence or credible intervals.

Mind that if the uncertainty estimates you provide as input are inaccurate, conformal sets may be large and unusable. For this reason, if your application allows it, please consider the From model outputs and From Flax models usage modes instead.

Example. Suppose you want to calibrate credible intervals with coverage error error, each corresponding to a different test input variable. We assume that credible intervals are passed as arrays of lower and upper bounds, respectively test_lower_bounds and test_upper_bounds. You also have lower and upper bounds of credible intervals computed for several validation inputs, respectively val_lower_bounds and val_upper_bounds. The corresponding array of validation targets is denoted by val_targets. The following code produces conformal prediction intervals, i.e. calibrated versions of your test credible intervals.

from fortuna.conformal.regression import QuantileConformalRegressor

# Calibrate the test credible intervals against the validation set.
conformal_intervals = QuantileConformalRegressor().conformal_interval(
    val_lower_bounds=val_lower_bounds,
    val_upper_bounds=val_upper_bounds,
    test_lower_bounds=test_lower_bounds,
    test_upper_bounds=test_upper_bounds,
    val_targets=val_targets,
    error=error,
)
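The guarantee behind conformal intervals is marginal coverage: on held-out data, roughly a 1 - error fraction of the targets should fall inside the returned intervals. As an illustration of what that property means (a standalone numpy sketch with synthetic data, not part of Fortuna's API), you might check empirical coverage like this:

```python
import numpy as np

def empirical_coverage(lower, upper, targets):
    """Fraction of targets falling inside their [lower, upper] interval."""
    lower, upper, targets = map(np.asarray, (lower, upper, targets))
    return float(np.mean((targets >= lower) & (targets <= upper)))

# Synthetic check: wide intervals centred on noisy targets should cover
# nearly all of them.
rng = np.random.default_rng(0)
targets = rng.normal(size=1000)
lower = targets - 1.0 + rng.normal(scale=0.3, size=1000)
upper = targets + 1.0 + rng.normal(scale=0.3, size=1000)
coverage = empirical_coverage(lower, upper, targets)
```

Comparing such an empirical coverage on a held-out set against 1 - error is a quick sanity check that the calibration behaved as expected.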

From model outputs

Starting from model outputs assumes you have already trained a model in some framework, and arrive at Fortuna with model outputs in numpy.ndarray format for each input data point. This usage mode allows you to calibrate your model outputs, estimate uncertainty, compute metrics and obtain conformal sets.

Compared to the From uncertainty estimates usage mode, this one offers better control, as it ensures that uncertainty estimates have been appropriately calibrated. However, if the model was trained with classical methods, the resulting quantification of model (a.k.a. epistemic) uncertainty may be poor. To mitigate this problem, please consider the From Flax models usage mode.

Example. Suppose you have validation and test model outputs, respectively val_outputs and test_outputs. Furthermore, you have arrays of validation and test target variables, respectively val_targets and test_targets. The following code provides a minimal classification example to get calibrated predictive entropy estimates.

from fortuna.calib_model import CalibClassifier
calib_model = CalibClassifier()
status = calib_model.calibrate(outputs=val_outputs, targets=val_targets)
test_entropies = calib_model.predictive.entropy(outputs=test_outputs)
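Predictive entropy summarises how uncertain the predictive distribution is for each input: it is zero for a one-hot distribution and maximal for a uniform one. For reference, here is a plain numpy sketch of the quantity computed from raw logits (independent of Fortuna's internals, which may differ in details such as normalisation):

```python
import numpy as np

def predictive_entropy(logits):
    """Entropy (in nats) of the softmax distribution for each row of logits."""
    logits = np.asarray(logits, dtype=float)
    logits = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
    return -np.sum(probs * np.log(np.clip(probs, 1e-12, None)), axis=-1)

uniform = predictive_entropy([[0.0, 0.0, 0.0]])    # maximal: log(3) ≈ 1.0986
confident = predictive_entropy([[10.0, 0.0, 0.0]])  # close to 0
```

High entropy on a test input flags a prediction that should be treated with caution, e.g. deferred to a human.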

From Flax models

Starting from Flax models has higher compatibility requirements than the From uncertainty estimates and From model outputs usage modes, as it requires deep learning models written in Flax. However, it enables you to replace standard model training with scalable Bayesian inference procedures, which may significantly improve the quantification of predictive uncertainty.

Example. Suppose you have a Flax classification deep learning model model from inputs to logits, with output dimension given by output_dim. Furthermore, you have training, validation and test TensorFlow data loaders, respectively train_data_loader, val_data_loader and test_data_loader. The following code provides a minimal classification example to get calibrated probability estimates.

from fortuna.data import DataLoader

# Convert the TensorFlow data loaders into Fortuna's data loader format.
train_data_loader = DataLoader.from_tensorflow_data_loader(train_data_loader)
calib_data_loader = DataLoader.from_tensorflow_data_loader(val_data_loader)
test_data_loader = DataLoader.from_tensorflow_data_loader(test_data_loader)

from fortuna.prob_model import ProbClassifier

# Train with Bayesian inference, calibrate, then compute predictive means.
prob_model = ProbClassifier(model=model)
status = prob_model.train(
    train_data_loader=train_data_loader,
    calib_data_loader=calib_data_loader,
)
test_means = prob_model.predictive.mean(
    inputs_loader=test_data_loader.to_inputs_loader()
)
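Conceptually, the predictive mean under Bayesian inference is a model average: class probabilities are computed for several posterior samples of the weights and then averaged. The following standalone numpy sketch (an illustration of the idea, not Fortuna's implementation) shows the averaging step:

```python
import numpy as np

def bayesian_model_average(prob_samples):
    """Average class probabilities over posterior samples.

    prob_samples has shape (n_samples, n_inputs, n_classes); each slice
    holds probability vectors produced by one sampled set of weights.
    """
    return np.asarray(prob_samples).mean(axis=0)

# Two posterior samples, one input, three classes.
samples = [[[0.7, 0.2, 0.1]],
           [[0.5, 0.3, 0.2]]]
mean = bayesian_model_average(samples)  # [[0.6, 0.25, 0.15]]
```

Averaging over posterior samples is what lets disagreement between plausible models surface as extra predictive uncertainty, which classical point-estimate training cannot express.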

Installation

NOTE: Before installing Fortuna, you are required to install JAX in your virtual environment.

You can install Fortuna by typing

pip install aws-fortuna

Alternatively, you can build the package using Poetry. If you choose this route, first install Poetry and add it to your PATH (see here). Then type

poetry install

All the dependencies will be installed at their required versions. If you also want to install the optional Sphinx dependencies needed to build the documentation, add the flag -E docs to the command above. Finally, you can either activate the virtualenv that Poetry created by typing poetry shell, or execute commands within it using poetry run, e.g. poetry run python.

Examples

Several usage examples are found in the /examples directory.

Contributing

If you wish to contribute to the project, please refer to our contribution guidelines.

License

This project is licensed under the Apache-2.0 License. See LICENSE for more information.

