

Project description

UNcertaInty QUantification bEnchmark: a Python library for benchmarking uncertainty estimation and quantification methods for Machine Learning models' predictions.


Introduction

UNIQUE provides methods for quantifying and evaluating the uncertainty of Machine Learning (ML) model predictions. The library allows users to combine and benchmark multiple uncertainty quantification (UQ) methods simultaneously, generates intuitive visualizations, evaluates the goodness of the UQ methods against established metrics, and, overall, gives users a comprehensive overview of their ML model's performance from an uncertainty quantification perspective.

UNIQUE is a model-agnostic tool: it neither depends on any specific ML model-building platform nor provides any ML model training functionality. It is also lightweight, since users only need to supply their model's inputs and predictions.

High-level schema of UNIQUE's components.

Installation


UNIQUE is currently compatible with Python 3.8 through 3.12.1. To install the latest release and use the package as-is, run the following in a compatible environment of your choice:

pip install unique-uncertainty

or:

conda install -c conda-forge unique-uncertainty
# mamba install -c conda-forge unique-uncertainty

Check out the docs for more installation instructions.
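
For example, to install into a fresh, dedicated environment first (a minimal sketch; the environment name "unique-env" and the pinned Python version are just examples, any compatible version works):

# Create and activate a dedicated environment, then install UNIQUE
conda create -n unique-env python=3.11
conda activate unique-env
pip install unique-uncertainty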

Getting Started

Check out the docs for a complete set of instructions on how to prepare your data and the possible configurations offered by UNIQUE.
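
As a rough illustration of the data-preparation step (a sketch only: the column names and path below are placeholders, not UNIQUE's actual schema, which is defined in the docs), preparing the input data amounts to collecting your model's labels and predictions in a single table:

import numpy as np
import pandas as pd

# Illustrative sketch only -- column names are placeholders; check the
# docs for the exact schema your UNIQUE version expects.
rng = np.random.default_rng(seed=0)
labels = rng.normal(size=100)                           # ground-truth values
predictions = labels + rng.normal(scale=0.1, size=100)  # your model's predictions

data = pd.DataFrame(
    {
        "ID": np.arange(100),
        "labels": labels,
        "predictions": predictions,
    }
)
data.to_csv("/path/to/dataset.csv", index=False)  # path to reference in config.yaml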

Usage

Finally, once the data and configuration files have been prepared, you can run UNIQUE as follows:

from unique import Pipeline

# Prepare UNIQUE pipeline
pipeline = Pipeline.from_config("/path/to/config.yaml")

# Run UNIQUE pipeline
uq_methods_outputs, uq_evaluation_outputs = pipeline.fit()
# Returns: (Dict[str, np.ndarray], Dict[str, pd.DataFrame])

Fitting the Pipeline will return two dictionaries:

  • uq_methods_outputs: maps each UQ method's name (formatted as "UQ_Method_Name[Input_Name(s)]") to its computed UQ values.
  • uq_evaluation_outputs: contains, for each evaluation type (ranking-based, proper scoring rules, and calibration-based), the evaluation metrics for all the corresponding UQ methods, organized as pd.DataFrames (see the sketch below).
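
For instance, once the pipeline has been fitted as above, the two returned dictionaries can be inspected directly (a minimal sketch; only the return types stated above are guaranteed, the exact key names depend on your configuration):

# Raw UQ values: one np.ndarray per UQ method
for method_name, uq_values in uq_methods_outputs.items():
    print(method_name, uq_values.shape)  # e.g. "UQ_Method_Name[Input_Name(s)]"

# Evaluation metrics: one pd.DataFrame per evaluation type
for eval_type, metrics_df in uq_evaluation_outputs.items():
    print(eval_type)
    print(metrics_df.head())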

Additionally, UNIQUE generates graphical outputs in the form of tables and evaluation plots (if display_outputs is enabled and the code is run in a Jupyter Notebook cell).

Examples

For hands-on examples and detailed usage, check out the examples in the docs.

Deep Dive

Check out the docs for an in-depth overview of UNIQUE's concepts, functionalities, outputs, and references.

Contributing

Any and all contributions and suggestions from the community are more than welcome and highly appreciated. If you wish to help us out in making UNIQUE even better, please check out our contributing guidelines.

Please note that we have a Code of Conduct in place to ensure a positive and inclusive community environment. By participating in this project, you agree to abide by its terms.

License


UNIQUE is licensed under the BSD 3-Clause License. See the LICENSE file.

Cite Us


If you find UNIQUE helpful for your work and/or research, please consider citing our work:

@misc{lanini2024unique,
  title={UNIQUE: A Framework for Uncertainty Quantification Benchmarking},
  author={Lanini, Jessica and Huynh, Minh Tam Davide and Scebba, Gaetano and Schneider, Nadine and Rodr{\'\i}guez-P{\'e}rez, Raquel},
  year={2024},
  doi={10.26434/chemrxiv-2024-fmbgk},
}

Contacts & Acknowledgements

For any questions or further details about the project, please get in touch with the project maintainers.


