
A scikit-learn-compatible module for estimating prediction intervals.



https://github.com/simai-ml/MAPIE/raw/master/doc/images/mapie_logo_nobg_cut.png

MAPIE - Model Agnostic Prediction Interval Estimator

MAPIE allows you to easily estimate prediction intervals (or prediction sets) using your favourite scikit-learn-compatible model for single-output regression or multi-class classification settings.

Prediction intervals output by MAPIE encompass both aleatoric and epistemic uncertainties and are backed by strong theoretical guarantees thanks to conformal prediction methods [1-7].

🔗 Requirements

Python 3.7+

MAPIE stands on the shoulders of giants.

Its only dependencies are scikit-learn and numpy>=1.21.

🛠 Installation

Install via pip:

$ pip install mapie

or via conda:

$ conda install -c conda-forge mapie

To install directly from the GitHub repository:

$ pip install git+https://github.com/scikit-learn-contrib/MAPIE

⚡️ Quickstart

Let us start with a basic regression problem. Here, we generate one-dimensional noisy data that we fit with a linear model.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.datasets import make_regression

regressor = LinearRegression()
X, y = make_regression(n_samples=500, n_features=1, noise=20, random_state=59)

Since MAPIE is compliant with the standard scikit-learn API, we follow the usual sequential fit and predict process, just like with any scikit-learn regressor. We set two values for alpha to estimate prediction intervals at approximately one and two standard deviations from the mean.

from mapie.regression import MapieRegressor
alpha = [0.05, 0.32]
mapie = MapieRegressor(regressor)
mapie.fit(X, y)
y_pred, y_pis = mapie.predict(X, alpha=alpha)

MAPIE returns two np.ndarray objects: the point predictions, of shape (n_samples,), and the prediction intervals, of shape (n_samples, 2, len(alpha)), giving the lower and upper bounds of the intervals for each desired alpha value.
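As a quick sanity check on the objects returned above (with the 500-sample dataset and the two alpha values of this example), the shapes can be inspected directly:

# y_pred holds the point predictions, y_pis the interval bounds
print(y_pred.shape)  # expected: (500,)
print(y_pis.shape)   # expected: (500, 2, 2), lower/upper bound for each alpha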

You can compute the coverage of your prediction intervals.

from mapie.metrics import regression_coverage_score
coverage_scores = [
    regression_coverage_score(y, y_pis[:, 0, i], y_pis[:, 1, i])
    for i, _ in enumerate(alpha)
]

The estimated prediction intervals can then be plotted as follows.

from matplotlib import pyplot as plt
plt.xlabel("x")
plt.ylabel("y")
plt.scatter(X, y, alpha=0.3)
plt.plot(X, y_pred, color="C1")
order = np.argsort(X[:, 0])
plt.plot(X[order], y_pis[order][:, 0, 1], color="C1", ls="--")
plt.plot(X[order], y_pis[order][:, 1, 1], color="C1", ls="--")
plt.fill_between(
    X[order].ravel(),
    y_pis[order][:, 0, 0].ravel(),
    y_pis[order][:, 1, 0].ravel(),
    alpha=0.2
)
plt.title(
    f"Target and effective coverages for "
    f"alpha={alpha[0]:.2f}: ({1-alpha[0]:.3f}, {coverage_scores[0]:.3f})\n"
    f"Target and effective coverages for "
    f"alpha={alpha[1]:.2f}: ({1-alpha[1]:.3f}, {coverage_scores[1]:.3f})"
)
plt.show()

The title of the plot compares the target coverages with the effective coverages. The target coverage, or confidence level, is the fraction of true labels that we aim to have lying within the prediction intervals. It is set through the alpha parameter passed to predict, here 0.05 and 0.32, giving target coverages of 0.95 and 0.68. The effective coverage is the fraction of true labels that actually lie within the prediction intervals.

https://github.com/simai-ml/MAPIE/raw/master/doc/images/quickstart_1.png
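For a quick numerical check without the plot, the same comparison can be printed directly from the alpha values and coverage_scores computed above:

for a, cov in zip(alpha, coverage_scores):
    # Target coverage is 1 - alpha; effective coverage is measured on the data
    print(f"alpha={a:.2f}: target coverage {1 - a:.2f}, effective coverage {cov:.3f}")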

📘 Documentation

The full documentation can be found on ReadTheDocs.

How does MAPIE work on regression? It is based on cross-validation and relies on:

  • Conformity scores on the whole training set obtained by cross-validation,

  • Perturbed models generated during the cross-validation.

MAPIE then combines all these elements in a way that provides prediction intervals on new data with strong theoretical guarantees [1-2].

https://github.com/simai-ml/MAPIE/raw/master/doc/images/mapie_internals_regression.png
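As a rough sketch of how such a cross-validation strategy is configured in practice, MapieRegressor exposes method and cv parameters; the values below (jackknife+ aggregation with 5-fold cross-validation, reusing the quickstart objects) are one possible choice among others, not a prescribed configuration:

from mapie.regression import MapieRegressor

# Perturbed models are fitted by 5-fold cross-validation and their
# conformity scores are aggregated with the jackknife+ ("plus") method
mapie_cv = MapieRegressor(regressor, method="plus", cv=5)
mapie_cv.fit(X, y)
y_pred_cv, y_pis_cv = mapie_cv.predict(X, alpha=0.05)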

How does MAPIE work on classification? It estimates prediction sets from calibrated conformity scores and relies on:

  • Construction of a conformity score,

  • Calibration of the conformity scores on a calibration set not seen by the model during training.

MAPIE then uses the calibrated conformity scores to estimate sets of labels associated with the desired coverage on new data, with strong theoretical guarantees [3-5].

https://github.com/simai-ml/MAPIE/raw/master/doc/images/mapie_internals_classification.png
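The snippet below is a minimal sketch of this split-conformal workflow: a classifier is first trained, then MapieClassifier calibrates the conformity scores on a held-out calibration set. The toy dataset, the logistic regression model, and the alpha value are illustrative choices, not taken from the original text:

from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from mapie.classification import MapieClassifier
from mapie.metrics import classification_coverage_score

# Toy multi-class data split into training and calibration sets
X_clf, y_clf = make_blobs(n_samples=500, centers=3, random_state=59)
X_train, X_calib, y_train, y_calib = train_test_split(X_clf, y_clf, random_state=59)

clf = LogisticRegression().fit(X_train, y_train)

# Calibrate conformity scores on data not seen during training
mapie_clf = MapieClassifier(clf, cv="prefit")
mapie_clf.fit(X_calib, y_calib)

# Prediction sets targeting 90% coverage
y_pred_clf, y_ps = mapie_clf.predict(X_calib, alpha=0.1)
print(classification_coverage_score(y_calib, y_ps[:, :, 0]))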

📝 Contributing

You are welcome to propose and contribute new ideas. We encourage you to open an issue so that we can align on the work to be done. It is generally a good idea to have a quick discussion before opening a pull request that is potentially out of scope. For more information on the contribution process, please refer to the contributing guidelines.

🤝 Affiliations

MAPIE has been developed through a collaboration between Quantmetry, Michelin, and ENS Paris-Saclay, with financial support from Région Île-de-France and Confiance.ai.


🔍 References

MAPIE methods belong to the field of conformal inference.

[1] Rina Foygel Barber, Emmanuel J. Candès, Aaditya Ramdas, and Ryan J. Tibshirani. “Predictive inference with the jackknife+.” Ann. Statist., 49(1):486–507, February 2021.

[2] Byol Kim, Chen Xu, and Rina Foygel Barber. “Predictive Inference Is Free with the Jackknife+-after-Bootstrap.” 34th Conference on Neural Information Processing Systems (NeurIPS 2020).

[3] Mauricio Sadinle, Jing Lei, and Larry Wasserman. “Least Ambiguous Set-Valued Classifiers With Bounded Error Levels.” Journal of the American Statistical Association, 114:525, 223-234, 2019.

[4] Yaniv Romano, Matteo Sesia and Emmanuel J. Candès. “Classification with Valid and Adaptive Coverage.” NeurIPS 2020 (spotlight).

[5] Anastasios Nikolas Angelopoulos, Stephen Bates, Michael Jordan and Jitendra Malik. “Uncertainty Sets for Image Classifiers using Conformal Prediction.” International Conference on Learning Representations 2021.

[6] Yaniv Romano, Evan Patterson, Emmanuel J. Candès. “Conformalized Quantile Regression.” Advances in neural information processing systems 32 (2019).

[7] Chen Xu and Yao Xie. “Conformal Prediction Interval for Dynamic Time-Series.” International Conference on Machine Learning (ICML, 2021).

[8] Stephen Bates, Anastasios Angelopoulos, Lihua Lei, Jitendra Malik, and Michael I. Jordan. “Distribution-Free, Risk-Controlling Prediction Sets.” CoRR, abs/2101.02703, 2021. URL: https://arxiv.org/abs/2101.02703.

[9] Anastasios N. Angelopoulos, Stephen Bates, Adam Fisch, Lihua Lei, and Tal Schuster. “Conformal Risk Control.” (2022).

📝 License

MAPIE is free and open-source software licensed under the 3-clause BSD license.

