Extract calibrated explanations from machine learning models.
Calibrated Explanations (Documentation)
Calibrated Explanations is an explanation method for machine learning designed to enhance both the interpretability of model predictions and the quantification of uncertainty. In many real-world applications, understanding how confident a model is about its predictions is just as important as the predictions themselves. This framework provides calibrated explanations for both predictions and feature importance by quantifying aleatoric and epistemic uncertainty — two types of uncertainty that offer critical insights into both data and model reliability.
- Aleatoric uncertainty represents the noise inherent in the data. It affects the spread of probability distributions (for probabilistic outcomes) and predictions (for regression). This uncertainty is irreducible because it reflects limitations in the data generation process itself. Incorporating calibration ensures accurate estimates of aleatoric uncertainty.
- Epistemic uncertainty arises from the model's lack of knowledge due to limited training data or insufficient complexity. It affects the model's confidence in its output when it encounters unfamiliar or out-of-distribution data. Unlike aleatoric uncertainty, epistemic uncertainty is reducible — it can be minimized by gathering more data, improving the model architecture, or refining features.
By providing estimates for both aleatoric and epistemic uncertainty, Calibrated Explanations offers a more comprehensive understanding of predictions, both in terms of accuracy and confidence. This is particularly valuable in high-stakes environments where model reliability and interpretability are essential, such as in healthcare, finance, and autonomous systems.
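The calibration machinery underlying such uncertainty intervals can be illustrated with a minimal split-conformal sketch. This is a standard-library simplification of the general idea behind Conformal Predictive Systems, not this package's implementation, and the residual values are toy numbers:

```python
import math

def conformal_interval(cal_residuals, y_hat, confidence=0.9):
    """Symmetric prediction interval around y_hat whose half-width is the
    conformal quantile of the calibration residuals |y - y_hat|."""
    sorted_res = sorted(cal_residuals)
    n = len(sorted_res)
    # conformal rank: ceil((n + 1) * confidence), clipped to n
    k = min(math.ceil((n + 1) * confidence), n)
    q = sorted_res[k - 1]
    return y_hat - q, y_hat + q

# toy residuals from a held-out calibration set
residuals = [0.2, 0.5, 0.1, 0.8, 0.3, 0.6, 0.4, 0.7, 0.9, 0.25]
low, high = conformal_interval(residuals, y_hat=10.0)
print(low, high)  # half-width equals the 90% conformal quantile of the residuals
```

Calibrated Explanations builds on this principle but produces calibrated intervals for both predictions and feature weights.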
For an in-depth guide on how to start using Calibrated Explanations, refer to the Getting Started section below.
Core Features:
- Calibrated Prediction Confidence: Obtain well-calibrated uncertainty estimates for predictions, helping users make informed decisions based on the model’s confidence.
- Uncertainty-Aware Feature Importance: Understand not only which features are important but also how uncertain the model is about the contribution of those features.
- Support for Various Tasks: The framework supports classification, regression, and probabilistic regression, making it adaptable to a wide range of machine learning problems.
The ability to quantify both aleatoric and epistemic uncertainty provides practitioners with actionable insights into the reliability of predictions and explanations, fostering appropriate trust (read paper) and transparency in machine learning models.
Distinctive Characteristics of Calibrated Explanations
Calibrated Explanations offers a range of features designed to enhance both the interpretability and reliability of machine learning models. These characteristics can be summarized as follows:
- Fast, reliable, stable, and robust feature importance explanations for:
  - Binary classification models (Read paper).
  - Multi-class classification models (Read paper, Slides).
  - Regression models (Read paper), including:
    - Probabilistic explanations: Provides the probability that the target exceeds a user-defined threshold.
    - Difficulty-adaptable explanations: Adjust explanations based on conformal normalization for varying levels of data difficulty.
- Aleatoric and epistemic uncertainty estimates: These estimates are provided by Venn-Abers for probabilistic explanations and by Conformal Predictive Systems for regression tasks. Both these techniques are grounded in solid theoretical foundations, leveraging conformal prediction and Venn prediction to ensure reliability and robustness in uncertainty quantification.
- Calibration of the underlying model: Ensures that predictions accurately reflect reality, improving trust in model outputs.
- Comprehensive uncertainty quantification:
  - Prediction uncertainty: Quantifies both aleatoric and epistemic uncertainties for the model's predictions.
  - Feature importance uncertainty: Measures uncertainty in feature importance scores, helping to assess the reliability of each feature's contribution.
- Proximity-based rules for straightforward interpretation: Generates rules that are easily interpretable by relating instance values to feature importance weights.
- Alternative explanations with uncertainty quantification: Provides explanations for how predicted outcomes would change if specific input features were modified, including uncertainty estimates for these alternative outcomes.
- Conjunctional rules: Provides feature importance explanations for interactions between multiple features, highlighting joint contributions (discussed in detail in the regression paper).
- Conditional rules for contextual explanations: Allows users to create explanations conditioned on specific criteria, enabling better handling of, e.g., fairness and bias constraints (Read paper). Using conformal terminology, this means that Mondrian categories are supported.
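In conformal terms, Mondrian categories simply partition the instances into groups that are calibrated separately. The sketch below constructs such category labels by binning a single feature in plain Python — the feature, the edges, and the helper name are all illustrative, not part of the package's API; the resulting labels could then be supplied via the bins parameter mentioned in the acknowledgements:

```python
def mondrian_bins(values, edges):
    """Assign each value a category index according to bin edges;
    instances in the same category are calibrated together."""
    def category(v):
        for i, edge in enumerate(edges):
            if v < edge:
                return i
        return len(edges)
    return [category(v) for v in values]

# e.g. condition explanations on age group (toy feature, illustrative edges)
ages = [17, 25, 42, 68]
print(mondrian_bins(ages, edges=[18, 40, 65]))  # [0, 1, 2, 3]
```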
Example Explanation
Below is an example of a probabilistic alternative explanation for an instance from the California Housing regression dataset, with a threshold set at 180,000. The light red area in the background represents the calibrated probability interval for the prediction being below the threshold, as determined by the underlying model using a Conformal Predictive System to generate a probability estimate and Venn-Abers to generate epistemic uncertainty.
The darker red bars for each rule (seen to the left) show the probability intervals provided by Venn-Abers, indicating how the likelihood of the outcome changes when specific feature values (seen to the right) are modified according to the rule conditions.
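A Venn-Abers interval [p0, p1] like the ones shown in these bars is conventionally collapsed into a single point estimate using the log-loss-regularized merge p = p1 / (1 - p0 + p1) from Vovk and Petej's work on Venn-Abers predictors. A standard-library sketch of that merge, independent of this package:

```python
def merge_venn_abers(p0, p1):
    """Merge the Venn-Abers probability interval [p0, p1] into a single
    log-loss-regularized estimate; the width p1 - p0 signals epistemic
    uncertainty (wider for unfamiliar instances)."""
    return p1 / (1.0 - p0 + p1)

# A narrow interval barely moves the estimate; a wide one pulls it
# toward the middle.
print(merge_venn_abers(0.70, 0.80))  # ~0.727
print(merge_venn_abers(0.30, 0.90))  # 0.5625
```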
Getting started
The notebooks folder contains a number of notebooks illustrating different use cases for calibrated-explanations. The quickstart_wrap notebook, using the WrapCalibratedExplainer class, is similar to this Getting Started, including plots and output. The notebooks listed below use the CalibratedExplainer class. They showcase a number of different use cases, as indicated by their names:
- quickstart - similar to this Getting Started, but without a wrapper class.
- demo_binary_classification - with examples for binary classification
- demo_multiclass - with examples for multi-class classification
- demo_regression - with examples for regression
- demo_probabilistic_regression - with examples for regression with thresholds
- demo_under_the_hood - illustrating how to access the information composing the explanations
Classification
Let us illustrate how we may use calibrated_explanations to generate explanations from a classifier trained on a dataset from www.openml.org, which we first split into a training and a test set using train_test_split from sklearn, and then further split the training set into a proper training set and a calibration set:
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
dataset = fetch_openml(name="wine", version=7, as_frame=True, parser='auto')
X = dataset.data.values.astype(float)
y = (dataset.target.values == 'True').astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=2, stratify=y)
X_prop_train, X_cal, y_prop_train, y_cal = train_test_split(X_train, y_train,
test_size=0.25)
We now create our wrapper object, using a RandomForestClassifier as the learner.
from sklearn.ensemble import RandomForestClassifier
from calibrated_explanations import WrapCalibratedExplainer, __version__
print(f"calibrated_explanations {__version__}")
classifier = WrapCalibratedExplainer(RandomForestClassifier())
display(classifier)
We now fit our model using the proper training set.
classifier.fit(X_prop_train, y_prop_train)
display(classifier)
The WrapCalibratedExplainer class has predict and predict_proba methods that return the predictions and probability estimates of the underlying classifier. If the model is not yet calibrated, the underlying model's predict and predict_proba methods are used. If the model is calibrated, the predict and predict_proba methods of the calibration model are used.
print('Uncalibrated prediction (probability estimates):')
print(f'{classifier.predict(X_test)} ({classifier.predict_proba(X_test)})')
Before we can generate explanations, we need to calibrate our model using the calibration set.
classifier.calibrate(X_cal, y_cal)
display(classifier)
Once the model is calibrated, the predict and predict_proba methods produce calibrated predictions and probability estimates.
proba, (low, high) = classifier.predict_proba(X_test, uq_interval=True)
print('Calibrated prediction (probability estimates):')
print(f'{classifier.predict(X_test)} ({proba})')
print('Calibrated uncertainty interval for the positive class:')
print([(low[i], high[i]) for i in range(len(low))])
Factual Explanations
Let us explain a test instance using our WrapCalibratedExplainer object. The method used to get factual explanations is explain_factual.
factual_explanations = classifier.explain_factual(X_test)
display(classifier)
Once we have the explanations, we can plot all of them using the plot function. By default, a regular plot without uncertainty intervals is created. To include uncertainty intervals, set the parameter uncertainty=True. To plot only a single instance, call the plot function with the index of the test instance to plot.
factual_explanations.plot()
factual_explanations.plot(uncertainty=True)
factual_explanations.plot(0, uncertainty=True)
You can also add and remove conjunctive rules.
factual_explanations.add_conjunctions().plot(0)
factual_explanations.plot(0, uncertainty=True)
factual_explanations.remove_conjunctions().plot(0, uncertainty=True)
Explore Alternative Explanations
An alternative to factual rules is to extract alternative rules, which is done using the explore_alternatives function. Alternative explanations provide insights into how predicted outcomes would change if specific input features were modified, including uncertainty estimates for these alternative outcomes.
alternative_explanations = classifier.explore_alternatives(X_test)
display(classifier)
Alternatives are also visualized using the plot function. Plotting an individual alternative explanation is done using plot, submitting the index to plot. Adding or removing conjunctions is done as before.
alternative_explanations.plot()
alternative_explanations.add_conjunctions().plot()
alternative_explanations.plot(0)
calibrated_explanations supports multi-class problems, which is demonstrated in demo_multiclass. That notebook also demonstrates how feature names as well as target and categorical labels can be added to improve interpretability.
Regression
Extracting explanations for regression is very similar to how it is done for classification. First we load and divide the dataset. The target is divided by 1000, so that it is expressed in thousands of dollars.
dataset = fetch_openml(name="house_sales", version=3)
X = dataset.data.values.astype(float)
y = dataset.target.values/1000
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=2, random_state=42)
X_prop_train, X_cal, y_prop_train, y_cal = train_test_split(X_train, y_train,
test_size=200)
We now create our wrapper object, using a RandomForestRegressor as the learner.
from sklearn.ensemble import RandomForestRegressor
regressor = WrapCalibratedExplainer(RandomForestRegressor())
display(regressor)
We now fit our model using the proper training set.
regressor.fit(X_prop_train, y_prop_train)
display(regressor)
The WrapCalibratedExplainer class has a predict method that returns the predictions of the underlying regressor. If the model is not yet calibrated, the underlying model's predict method is used. If the model is calibrated, the predict method of the calibration model is used.
print('Uncalibrated model prediction:')
print(regressor.predict(X_test))
Before we can generate explanations, we need to calibrate our model using the calibration set.
regressor.calibrate(X_cal, y_cal)
display(regressor)
We can easily add a difficulty estimator by assigning a DifficultyEstimator to the difficulty_estimator parameter when calibrating the model.
from crepes.extras import DifficultyEstimator
de = DifficultyEstimator().fit(X=X_prop_train, learner=regressor.learner, scaler=True)
regressor.calibrate(X_cal, y_cal, difficulty_estimator=de)
display(regressor)
A DifficultyEstimator can also be assigned to an already calibrated model using the set_difficulty_estimator method. Using set_difficulty_estimator(None) removes any previously assigned DifficultyEstimator.
Once the model is calibrated, the predict method produces calibrated predictions with uncertainties. The default confidence level is 90 per cent, which can be altered using the low_high_percentiles parameter.
prediction, (low, high) = regressor.predict(X_test, uq_interval=True) # default low_high_percentiles=(5, 95)
print('Calibrated prediction:')
print(prediction)
print('Calibrated uncertainty interval:')
print([(low[i], high[i]) for i in range(len(low))])
You can also get the probability of the prediction being below a certain threshold using predict_proba by assigning the threshold parameter.
prediction = regressor.predict(X_test, threshold=200)
print('Calibrated probabilistic prediction:')
print(prediction)
proba, (low, high) = regressor.predict_proba(X_test, uq_interval=True, threshold=200)
print('Calibrated probabilistic probability estimate [y_hat > threshold, y_hat <= threshold]:')
print(proba)
print('Calibrated probabilistic uncertainty interval for y_hat <= threshold:')
print([(low[i], high[i]) for i in range(len(low))])
Factual Explanations
Let us explain a test instance using our WrapCalibratedExplainer object. The method used to get factual explanations is explain_factual.
factual_explanations = regressor.explain_factual(X_test) # default low_high_percentiles=(5, 95)
display(regressor)
Regression also offers both regular and uncertainty plots for factual explanations, with or without conjunctive rules, in almost exactly the same way as for classification.
factual_explanations.plot()
factual_explanations.plot(uncertainty=True)
factual_explanations.add_conjunctions().plot(uncertainty=True)
By default, the confidence interval is a symmetric 90% interval (defined as low_high_percentiles=(5, 95)). The intervals can cover any user-specified range, including one-sided intervals. To define a one-sided upper-bounded 90% interval, set low_high_percentiles=(-np.inf, 90), and to define a one-sided lower-bounded 95% interval, set low_high_percentiles=(5, np.inf). Percentiles can also be set to any other values in the range (0, 100) (exclusive), and intervals do not have to be symmetric.
import numpy as np  # needed for np.inf in one-sided intervals
lower_bounded_explanations = regressor.explain_factual(X_test, low_high_percentiles=(5, np.inf))
asymmetric_explanations = regressor.explain_factual(X_test, low_high_percentiles=(5, 75))
Explore Alternative Explanations
The explore_alternatives method works exactly the same as for classification.
alternative_explanations = regressor.explore_alternatives(X_test) # default low_high_percentiles=(5, 95)
display(regressor)
Alternative plots work as for classification.
alternative_explanations.plot()
alternative_explanations.add_conjunctions().plot()
Probabilistic Regression
The difference between probabilistic regression and regular regression is that the former returns a probability of the prediction being below a certain threshold. This could for example be useful when the prediction is a time to an event, such as time to death or time to failure.
probabilistic_factual_explanations = regressor.explain_factual(X_test, threshold=200)
probabilistic_factual_explanations.plot()
probabilistic_factual_explanations.plot(uncertainty=True)
probabilistic_alternative_explanations = regressor.explore_alternatives(X_test, threshold=200)
probabilistic_alternative_explanations.plot()
Regression offers many more options but to learn more about them, see the demo_regression or the demo_probabilistic_regression notebooks.
Alternative ways to initialize WrapCalibratedExplainer
A WrapCalibratedExplainer can also be initialized with a trained model or with a CalibratedExplainer object, as exemplified below.
fitted_classifier = WrapCalibratedExplainer(classifier.learner)
display(fitted_classifier)
calibrated_classifier = WrapCalibratedExplainer(classifier.explainer)
display(calibrated_classifier)
fitted_regressor = WrapCalibratedExplainer(regressor.learner)
display(fitted_regressor)
calibrated_regressor = WrapCalibratedExplainer(regressor.explainer)
display(calibrated_regressor)
When a calibrated explainer is re-fitted, the explainer is reinitialized.
Known Limitations
The implementation currently only supports numerical input. Use the utils.helper.transform_to_numeric function (released in version v0.3.1) to transform a DataFrame with text data into numerical form while at the same time extracting categorical_features, categorical_labels, target_labels (if text labels), and mappings (used to apply the same mappings to new data) to be used as input to the CalibratedExplainer. The algorithm does not currently support image data.
See e.g. the Conditional Fairness Experiment for examples on how it can be used.
Install
From PyPI:
Install calibrated-explanations
from PyPI:
pip install calibrated-explanations
From conda-forge:
Alternatively, you can install it from conda-forge:
conda install -c conda-forge calibrated-explanations
Dependencies:
The following dependencies are required and will be installed automatically if not already present:
Contributing
Contributions are welcome. Please send bug reports, feature requests or pull requests through the project page on GitHub. You can find a detailed guide for contributions in CONTRIBUTING.md.
Documentation
For documentation, see calibrated-explanations.readthedocs.io.
Further reading and citing
If you use calibrated-explanations for a scientific publication, you are kindly requested to cite one of the following papers:
Published papers
- Löfström, H. (2023). Trustworthy explanations: Improved decision support through well-calibrated uncertainty quantification (Doctoral dissertation, Jönköping University, Jönköping International Business School).
- Löfström, H., Löfström, T., Johansson, U., and Sönströd, C. (2024). Calibrated Explanations: with Uncertainty Information and Counterfactuals. Expert Systems with Applications, 1-27.
- Löfström, H., Löfström, T. (2024). Conditional Calibrated Explanations: Finding a Path Between Bias and Uncertainty. In: Longo, L., Lapuschkin, S., Seifert, C. (eds) Explainable Artificial Intelligence. xAI 2024. Communications in Computer and Information Science, vol 2153. Springer, Cham.
- Löfström, T., Löfström, H., Johansson, U. (2024). Calibrated Explanations for Multi-class. Proceedings of the Thirteenth Workshop on Conformal and Probabilistic Prediction and Applications, in Proceedings of Machine Learning Research, PMLR 230:175-194. Presentation
The paper that originated the idea of calibrated-explanations is:
- Löfström, H., Löfström, T., Johansson, U., & Sönströd, C. (2023). Investigating the impact of calibration on the quality of explanations. Annals of Mathematics and Artificial Intelligence, 1-18. Code and results.
Preprints:
- Löfström, T., Löfström, H., Johansson, U., Sönströd, C., and Matela, R. (2024). Calibrated Explanations for Regression. arXiv preprint arXiv:2308.16245.
- Löfström, H., Löfström, T., and Hallberg Szabadvary, J. (2024). Ensured: Explanations for Decreasing the Epistemic Uncertainty in Predictions. arXiv preprint arXiv:2410.05479.
Citing and bibtex
If you use calibrated-explanations for a scientific publication, you are kindly requested to cite one of the papers above. Bibtex entries can be found in citing.
Acknowledgements
This research is funded by the Swedish Knowledge Foundation together with industrial partners supporting the research and education environment on Knowledge Intensive Product Realization SPARK at Jönköping University, Sweden, through projects: AFAIR grant no. 20200223, ETIAI grant no. 20230040, and PREMACOP grant no. 20220187. Helena Löfström was initially a PhD student in the Industrial Graduate School in Digital Retailing (INSiDR) at the University of Borås, funded by the Swedish Knowledge Foundation, grant no. 20160035.
Rudy Matela has been our git guru and has helped us with the release process.
We have used both the ConformalPredictiveSystem and DifficultyEstimator classes from Henrik Boström's crepes package to provide support for regression. The MondrianCategorizer class is also supported in the WrapCalibratedExplainer as an alternative to using the bins parameter to create conditional explanations.
We have used the VennAbers class from Ivan Petej's venn-abers package to provide support for probabilistic explanations (both classification and probabilistic regression).
The FastExplanation, created using the explain_fast method, incorporates ideas and code from ConformaSight, developed by Fatima Rabia Yapicioglu, Alessandra Stramiglio, and Fabio Vitali.
We are using Decision Trees from scikit-learn in the discretizers.
We have copied code from Marco Tulio Correia Ribeiro's lime package for the Discretizer class.
The check_is_fitted and safe_instance functions in calibrated_explanations.utils are copied from sklearn and shap.