
Project description

A Library for Uncertainty Quantification

Proper estimation of predictive uncertainty is fundamental in applications that involve critical decisions. It can be used to assess the reliability of model predictions, trigger human intervention, or decide whether a model can be safely deployed in the wild.

Fortuna is a library for uncertainty quantification that makes it easy for users to run benchmarks and bring uncertainty estimation to production systems. Fortuna provides calibration and conformal methods starting from pre-trained models written in any framework, and it further supports several Bayesian inference methods starting from deep learning models written in Flax. The interface is designed to be intuitive for practitioners unfamiliar with uncertainty quantification, and is highly configurable.

Check the documentation for a quickstart, examples and references.

Usage modes

Fortuna offers three different usage modes: From uncertainty estimates, From model outputs and From Flax models. These serve users according to the constraints dictated by their own applications. Their pipelines are depicted in the following figure, each starting from one of the green panels.

https://github.com/awslabs/fortuna/raw/main/docs/source/_static/pipeline.png

From uncertainty estimates

Starting from uncertainty estimates has minimal compatibility requirements and is the quickest way to interact with the library. This usage mode offers conformal prediction methods for both classification and regression. These take uncertainty estimates as inputs and return rigorous sets of predictions that retain a user-given level of probability. In one-dimensional regression tasks, conformal sets may be thought of as calibrated versions of confidence or credible intervals.

Keep in mind that if the uncertainty estimates you provide as inputs are inaccurate, the conformal sets may be large and unusable. For this reason, if your application allows it, please consider the From model outputs and From Flax models usage modes.

Example. Suppose you want to calibrate credible intervals with a coverage error of error, each corresponding to a different test input variable. We assume that the credible intervals are passed as arrays of lower and upper bounds, respectively test_lower_bounds and test_upper_bounds. You also have lower and upper bounds of credible intervals computed for several validation inputs, respectively val_lower_bounds and val_upper_bounds. The corresponding array of validation targets is denoted by val_targets. The following code produces conformal prediction intervals, i.e. calibrated versions of your test credible intervals.

from fortuna.conformal.regression import QuantileConformalRegressor

# Calibrate the test credible intervals against the validation bounds and targets.
conformal_intervals = QuantileConformalRegressor().conformal_interval(
    val_lower_bounds=val_lower_bounds, val_upper_bounds=val_upper_bounds,
    test_lower_bounds=test_lower_bounds, test_upper_bounds=test_upper_bounds,
    val_targets=val_targets, error=error)
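
If labeled test data is available, you can sanity-check the result. The sketch below is illustrative only: it assumes conformal_intervals is an array with one (lower, upper) pair per test input, and it uses a hypothetical test_targets array holding the true test targets.

import numpy as np

# Illustrative check, assuming `conformal_intervals` has shape (n_test, 2) with
# lower and upper bounds per test input, and `test_targets` holds the true targets.
lower, upper = conformal_intervals[:, 0], conformal_intervals[:, 1]
coverage = np.mean((test_targets >= lower) & (test_targets <= upper))
avg_width = np.mean(upper - lower)
print(f"Empirical coverage: {coverage:.3f} (target: {1 - error:.3f})")
print(f"Average interval width: {avg_width:.3f}")

If the coverage falls well below 1 - error, or the intervals are too wide to be useful, the input credible intervals are likely poorly calibrated, which is exactly the situation the other usage modes are meant to address.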

From model outputs

Starting from model outputs assumes you have already trained a model in some framework and arrive at Fortuna with model outputs in numpy.ndarray format for each input data point. This usage mode allows you to calibrate your model outputs, estimate uncertainty, compute metrics and obtain conformal sets.

Compared to the From uncertainty estimates usage mode, this one offers better control, as it can make sure uncertainty estimates have been appropriately calibrated. However, if the model was trained with classical methods, the resulting quantification of model (a.k.a. epistemic) uncertainty may be poor. To mitigate this problem, please consider the From Flax models usage mode.

Example. Suppose you have validation and test model outputs, respectively val_outputs and test_outputs. Furthermore, you have arrays of validation and test target variables, respectively val_targets and test_targets. The following code provides a minimal classification example to get calibrated predictive entropy estimates.

from fortuna.calib_model import CalibClassifier

# Calibrate on the validation outputs and targets, then compute predictive
# entropies for the test outputs.
calib_model = CalibClassifier()
status = calib_model.calibrate(outputs=val_outputs, targets=val_targets)
test_entropies = calib_model.predictive.entropy(outputs=test_outputs)
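
To try the snippet above without a trained model, you can fabricate stand-in outputs. The following sketch is purely illustrative, using random logits and labels in place of real model outputs and targets; the array names and shapes are assumptions, not part of Fortuna.

import numpy as np

# Purely illustrative stand-ins: random logits over 3 classes and random integer
# labels. In practice, replace these with your own model's outputs and targets.
rng = np.random.default_rng(0)
n_val, n_test, n_classes = 1000, 200, 3
val_outputs = rng.normal(size=(n_val, n_classes))
test_outputs = rng.normal(size=(n_test, n_classes))
val_targets = rng.integers(0, n_classes, size=n_val)
test_targets = rng.integers(0, n_classes, size=n_test)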

From Flax models

Starting from Flax models has higher compatibility requirements than the From uncertainty estimates and From model outputs usage modes, as it requires deep learning models written in Flax. However, it enables you to replace standard model training with scalable Bayesian inference procedures, which may significantly improve the quantification of predictive uncertainty.

Example. Suppose you have a Flax classification deep learning model model that maps inputs to logits, with output dimension given by output_dim. Furthermore, you have training, validation and test TensorFlow data loaders train_data_loader, val_data_loader and test_data_loader, respectively. The following code provides a minimal classification example to get calibrated probability estimates.
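
For concreteness, model can be any Flax module mapping inputs to logits. The following is only a sketch of what such a model might look like, a small multi-layer perceptron written with flax.linen; the layer width is arbitrary and the class itself is not part of Fortuna.

import flax.linen as nn

class MLP(nn.Module):
    # A minimal Flax module mapping flat input features to output_dim logits.
    output_dim: int

    @nn.compact
    def __call__(self, x):
        x = nn.Dense(128)(x)
        x = nn.relu(x)
        return nn.Dense(self.output_dim)(x)

model = MLP(output_dim=output_dim)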

from fortuna.data import DataLoader

# Convert the TensorFlow data loaders into Fortuna data loaders.
train_data_loader = DataLoader.from_tensorflow_data_loader(train_data_loader)
calib_data_loader = DataLoader.from_tensorflow_data_loader(val_data_loader)
test_data_loader = DataLoader.from_tensorflow_data_loader(test_data_loader)

from fortuna.prob_model import ProbClassifier

# Train the probabilistic classifier, calibrate it on the calibration loader, and
# compute mean predictive probabilities over the test inputs.
prob_model = ProbClassifier(model=model)
status = prob_model.train(train_data_loader=train_data_loader, calib_data_loader=calib_data_loader)
test_means = prob_model.predictive.mean(inputs_loader=test_data_loader.to_inputs_loader())
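
Once test_means is available, a quick accuracy check is straightforward. The sketch below assumes test_means holds one probability vector per test input, and it uses a hypothetical test_targets array of true labels (e.g. gathered from your original TensorFlow dataset).

import numpy as np

# Illustrative follow-up: predicted labels are the classes with highest mean
# predictive probability; `test_targets` is assumed to hold the true labels.
pred_labels = np.argmax(test_means, axis=-1)
accuracy = np.mean(pred_labels == np.asarray(test_targets))
print(f"Test accuracy: {accuracy:.3f}")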

Installation

NOTE: Before installing Fortuna, you must install JAX in your virtual environment.

You can install Fortuna by typing

pip install aws-fortuna

Alternatively, you can build the package using Poetry.

Examples

Several usage examples can be found in the /examples directory.

Contributing

If you wish to contribute to the project, please refer to our contribution guidelines.

License

This project is licensed under the Apache-2.0 License. See LICENSE for more information.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

aws_fortuna-0.1.2.tar.gz (228.2 kB)

Uploaded Source

Built Distribution

aws_fortuna-0.1.2-py3-none-any.whl (161.9 kB)

Uploaded Python 3

File details

Details for the file aws_fortuna-0.1.2.tar.gz.

File metadata

  • Download URL: aws_fortuna-0.1.2.tar.gz
  • Upload date:
  • Size: 228.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.2.2 CPython/3.8.8 Darwin/21.4.0

File hashes

Hashes for aws_fortuna-0.1.2.tar.gz
Algorithm Hash digest
SHA256 b99bf17d31fb1b5f320e5ee19af474f1d5e2bfbbb4ecad02f368e53b1df79212
MD5 f11806fb2b22018f05a991dc7fed11ca
BLAKE2b-256 9801c15cfeb24367e3a58c59102aa1e4ebd40d1cd5b1c4577b951cbc59248357

See more details on using hashes here.

File details

Details for the file aws_fortuna-0.1.2-py3-none-any.whl.

File metadata

  • Download URL: aws_fortuna-0.1.2-py3-none-any.whl
  • Upload date:
  • Size: 161.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.2.2 CPython/3.8.8 Darwin/21.4.0

File hashes

Hashes for aws_fortuna-0.1.2-py3-none-any.whl
Algorithm Hash digest
SHA256 d18592bdcbdbc4b33381b1b75cbfc966893bcccdf78c4f45569d8b7b75ca4685
MD5 8864c49142e1bb192cfcd0f120af42d7
BLAKE2b-256 cf2aa75db7cf03e9236029757c3396d60b70416816483f96b0afa7080d107b9e

See more details on using hashes here.
