Extract calibrated explanations from machine learning models.
Calibrated Explanations (Documentation)
calibrated-explanations is a Python package for the local feature importance explanation method Calibrated Explanations, supporting both classification and regression.
The proposed method is based on Venn-Abers (classification & regression) and Conformal Predictive Systems (regression) and has the following characteristics:
- Fast, reliable, stable and robust feature importance explanations for:
  - Binary classification models (read paper)
  - Multi-class classification models (read paper)
  - Regression models (read paper)
    - Including probabilistic explanations of the probability that the target exceeds a user-defined threshold
    - With difficulty-adaptable explanations (conformal normalization)
- Calibration of the underlying model to ensure that predictions reflect reality.
- Uncertainty quantification of the prediction from the underlying model and the feature importance weights.
- Rules with straightforward interpretation in relation to instance values and feature weights.
- Possibility to generate counterfactual rules with uncertainty quantification of the expected predictions.
- Conjunctional rules conveying feature importance for the interaction of included features.
- Conditional rules, allowing users to create contextual explanations to handle e.g. bias and fairness constraints (read paper).
Below is an example of a probabilistic counterfactual explanation for an instance of the regression dataset California Housing (with the threshold 180 000). The light red area in the background represents the calibrated probability interval (for the prediction being below the threshold) of the underlying model, as indicated by a Conformal Predictive System and calibrated through Venn-Abers. The darker red bars for each rule show the probability intervals that Venn-Abers indicates when the instance's feature value is changed in accordance with the rule condition.
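Roughly, such an explanation can be produced with the workflow described under Getting started below. The following is only an illustrative sketch: the dataset loading via scikit-learn, the scaling of the target to dollars, and the split sizes are assumptions, not the exact setup behind the plot.

# Illustrative sketch (assumptions noted above): fit, calibrate and explain a
# regression model on California Housing, asking for the probability that the
# price is below the 180 000 threshold.
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from calibrated_explanations import WrapCalibratedExplainer

data = fetch_california_housing()
X, y = data.data, data.target * 100_000  # the scikit-learn target is in units of 100 000 USD
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=5, random_state=42)
X_prop_train, X_cal, y_prop_train, y_cal = train_test_split(X_train, y_train, test_size=0.25)

housing_regressor = WrapCalibratedExplainer(RandomForestRegressor())
housing_regressor.fit(X_prop_train, y_prop_train)
housing_regressor.calibrate(X_cal, y_cal, feature_names=data.feature_names)

# Probabilistic counterfactual explanation with a 180 000 threshold, plotted for one instance
explanation = housing_regressor.explain_counterfactual(X_test, threshold=180_000)
explanation.plot(0)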
The table summarizes the characteristics of Calibrated Explanations.
| Characteristics | Standard Classification FR | FU | CF | Standard Regression FR | FU | CF | Probabilistic Regression FR | FU | CF |
|---|---|---|---|---|---|---|---|---|---|
| Feature Weight w/o CI | X |  |  | X |  |  | X |  |  |
| Feature Weight with CI |  | X |  |  | X |  |  | X |  |
| Rule Prediction with CI |  |  | X |  |  | X |  |  | X |
| Two-sided CI | I | I | I | I | I | I | I | I | I |
| Lower-bounded CI |  |  |  | I | I |  |  |  |  |
| Upper-bounded CI |  |  |  | I | I |  |  |  |  |
| Conjunctive Rules | O | O | O | O | O | O | O | O | O |
| Conditional Rules | O | O | O | O | O | O | O | O | O |
| Difficulty Estimation |  |  |  | O | O | O | O | O | O |
| # Alternative Setups | 1 | 1 | 1 | 5 | 5 | 5 | 5 | 5 | 5 |
All explanations include the calibrated prediction, with confidence intervals (CI), of the explained instance.
- FR refers to factual explanations visualized using regular plots
- FU refers to factual explanations visualized using uncertainty plots
- CF refers to counterfactual explanations and plots
- X marks a core alternative
- I marks possible interval type(s)
- O marks optional additions
The example plot above, showing a counterfactual probabilistic regression explanation, corresponds to the last column without any optional additions.
Getting started
The notebooks folder contains a number of notebooks illustrating different use cases for calibrated-explanations. The quickstart_wrap notebook, using the WrapCalibratedExplainer class, is similar to this Getting Started, including plots and output. The notebooks listed below use the CalibratedExplainer class. They showcase a number of different use cases, as indicated by their names:
- quickstart - similar to this Getting Started, but without a wrapper class.
- demo_binary_classification - with examples for binary classification
- demo_multiclass - with examples for multi-class classification
- demo_regression - with examples for regression
- demo_probabilistic_regression - with examples for regression with thresholds
- demo_under_the_hood - illustrating how to access the information composing the explanations
Classification
Let us illustrate how we may use calibrated_explanations to generate explanations from a classifier trained on a dataset from www.openml.org, which we first split into a training and a test set using train_test_split from sklearn, and then further split the training set into a proper training set and a calibration set:
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
dataset = fetch_openml(name="wine", version=7, as_frame=True, parser='auto')
X = dataset.data.values.astype(float)
y = (dataset.target.values == 'True').astype(int)
feature_names = dataset.feature_names
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=2, stratify=y)
X_prop_train, X_cal, y_prop_train, y_cal = train_test_split(X_train, y_train,
test_size=0.25)
We now create our wrapper object, using a RandomForestClassifier as learner.
from sklearn.ensemble import RandomForestClassifier
from calibrated_explanations import WrapCalibratedExplainer, __version__
print(f"calibrated_explanations {__version__}")
classifier = WrapCalibratedExplainer(RandomForestClassifier())
display(classifier)
We now fit our model using the proper training set.
classifier.fit(X_prop_train, y_prop_train)
display(classifier)
The WrapCalibratedExplainer class has a predict and a predict_proba method that return the predictions and probability estimates of the underlying classifier. If the model is not yet calibrated, the underlying model's predict and predict_proba methods are used. If the model is calibrated, the predict and predict_proba methods of the calibration model are used.
print(f'Uncalibrated probability estimates: \n{classifier.predict_proba(X_test)}')
Before we can generate explanations, we need to calibrate our model using the calibration set.
classifier.calibrate(X_cal, y_cal, feature_names=feature_names)
display(classifier)
Once the model is calibrated, the predict and predict_proba methods produce calibrated predictions and probability estimates.
proba, (low, high) = classifier.predict_proba(X_test, uq_interval=True)
print(f'Calibrated probability estimates: \n{proba}')
print(f'Calibrated uncertainty interval for the positive class: [{[(low[i], high[i]) for i in range(len(low))]}]')
Factual Explanations
Let us explain a test instance using our WrapCalibratedExplainer object. The method used to get factual explanations is explain_factual.
factual_explanations = classifier.explain_factual(X_test)
display(classifier)
Once we have the explanations, we can plot all of them using the plot function. By default, a regular plot without uncertainty intervals is created. To include uncertainty intervals, set the parameter uncertainty=True. To plot only a single instance, the plot function can be called with the index of the test instance to plot.
factual_explanations.plot()
factual_explanations.plot(uncertainty=True)
factual_explanations.plot(0, uncertainty=True)
You can also add and remove conjunctive rules.
factual_explanations.add_conjunctions().plot(0)
factual_explanations.plot(0, uncertainty=True)
factual_explanations.remove_conjunctions().plot(0, uncertainty=True)
Counterfactual Explanations
An alternative to factual rules is to extract counterfactual rules, which is done using the explain_counterfactual method.
counterfactual_explanations = classifier.explain_counterfactual(X_test)
display(classifier)
Counterfactuals are also visualized using the plot function. Plotting an individual counterfactual explanation is done by calling plot with the index to plot. Adding or removing conjunctions is done as before.
counterfactual_explanations.plot()
counterfactual_explanations.add_conjunctions().plot()
counterfactual_explanations.plot(0)
calibrated_explanations supports multi-class classification, which is demonstrated in demo_multiclass. That notebook also demonstrates how feature names as well as target and categorical labels can be added to improve the interpretability, as sketched below.
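A minimal sketch of how such labels could be passed when calibrating; the parameter names categorical_features, categorical_labels and class_labels follow the demo notebooks and are assumptions here, so check the notebook or the API documentation for the exact signature.

# Hypothetical sketch: passing names and labels to calibrate() to make explanations
# more readable (parameter names and the label values are assumptions/examples).
classifier.calibrate(X_cal, y_cal,
                     feature_names=feature_names,
                     categorical_features=[0, 2],                             # indices of categorical features (example values)
                     categorical_labels={0: {0: 'No', 1: 'Yes'}},             # value labels per categorical feature (example values)
                     class_labels={0: 'Not high quality', 1: 'High quality'}) # target labels (example values)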
Regression
Extracting explanations for regression is very similar to how it is done for classification. First we load and divide the dataset. The target is divided by 1000, meaning that the target is in thousands of dollars.
dataset = fetch_openml(name="house_sales", version=3)
X = dataset.data.values.astype(float)
y = dataset.target.values/1000
feature_names = dataset.feature_names
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=2, random_state=42)
X_prop_train, X_cal, y_prop_train, y_cal = train_test_split(X_train, y_train,
test_size=200)
We now create our wrapper object, using a RandomForestRegressor as learner.
from sklearn.ensemble import RandomForestRegressor
regressor = WrapCalibratedExplainer(RandomForestRegressor())
display(regressor)
We now fit our model using the proper training set.
regressor.fit(X_prop_train, y_prop_train)
display(regressor)
The WrapCalibratedExplainer class has a predict method that returns the predictions of the underlying regressor. If the model is not yet calibrated, the underlying model's predict method is used. If the model is calibrated, the predict method of the calibration model is used.
print(f'Uncalibrated model prediction: \n{regressor.predict(X_test)}')
Before we can generate explanations, we need to calibrate our model using the calibration set.
regressor.calibrate(X_cal, y_cal, feature_names=feature_names)
display(regressor)
We can easily add a difficulty estimator by assigning a DifficultyEstimator to the difficulty_estimator parameter when calibrating the model.
from crepes.extras import DifficultyEstimator
regressor.calibrate(X_cal, y_cal, feature_names=feature_names,
difficulty_estimator=DifficultyEstimator().fit(X=X_prop_train, learner=regressor.learner, scaler=True))
display(regressor)
Once the model is calibrated, the predict method produces calibrated predictions with uncertainties. The default confidence is 90%, which can be altered using the low_high_percentiles parameter.
prediction, (low, high) = regressor.predict(X_test, uq_interval=True, low_high_percentiles=(5, 95))
print(f'Calibrated prediction: \n{prediction}')
print(f'Calibrated uncertainty interval: [{[(low[i], high[i]) for i in range(len(low))]}]')
You can also get the probability of the prediction being below a certain threshold using predict_proba, by assigning the threshold parameter.
prediction = regressor.predict(X_test, threshold=200)
print(f'Calibrated probabilistic prediction: {prediction}')
proba, (low, high) = regressor.predict_proba(X_test, uq_interval=True, threshold=200)
print(f'Calibrated probabilistic probability estimate [y_hat > threshold, y_hat <= threshold]: \n{proba}')
print(f'Calibrated probabilistic uncertainty interval for y_hat <= threshold: [{[(low[i], high[i]) for i in range(len(low))]}]')
Factual Explanations
Let us explain a test instance using our WrapCalibratedExplainer object. The method used to get factual explanations is explain_factual.
factual_explanations = regressor.explain_factual(X_test)
display(regressor)
Regression also offers both regular and uncertainty plots for factual explanations, with or without conjunctive rules, in almost exactly the same way as for classification.
factual_explanations.plot()
factual_explanations.plot(uncertainty=True)
factual_explanations.add_conjunctions().plot(uncertainty=True)
By default, the confidence interval is set to a symmetric 90% interval (defined as low_high_percentiles=(5,95)). The intervals can cover any user-specified interval, including one-sided intervals. To define a one-sided upper-bounded 90% interval, set low_high_percentiles=(-np.inf,90), and to define a one-sided lower-bounded 95% interval, set low_high_percentiles=(5,np.inf). Percentiles can also be set to any other values in the range (0,100) (exclusive), and intervals do not have to be symmetric.
import numpy as np

lower_bounded_explanations = regressor.explain_factual(X_test, low_high_percentiles=(5,np.inf))
asymmetric_explanations = regressor.explain_factual(X_test, low_high_percentiles=(5,75))
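The same one-sided percentiles can presumably also be passed to predict, which accepts low_high_percentiles as shown earlier; a small sketch (that predict accepts one-sided values is an assumption here):

# Upper-bounded 90% interval for the calibrated prediction: no lower bound (-np.inf).
prediction, (low, high) = regressor.predict(X_test, uq_interval=True,
                                            low_high_percentiles=(-np.inf, 90))
print(f'Calibrated prediction with an upper-bounded 90% interval: \n{prediction}')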
Counterfactual Explanations
The explain_counterfactual method works exactly the same as for classification.
counterfactual_explanations = regressor.explain_counterfactual(X_test)
display(regressor)
Counterfactual plots work as for classification.
counterfactual_explanations.plot()
counterfactual_explanations.add_conjunctions().plot()
Probabilistic Regression
The difference between probabilistic regression and regular regression is that the former returns a probability of the prediction being below a certain threshold. This could for example be useful when the prediction is a time to an event, such as time to death or time to failure.
probabilistic_factual_explanations = regressor.explain_factual(X_test, threshold=200)
probabilistic_factual_explanations.plot()
probabilistic_factual_explanations.plot(uncertainty=True)
probabilistic_counterfactual_explanations = regressor.explain_counterfactual(X_test, threshold=200)
probabilistic_counterfactual_explanations.plot()
Regression offers many more options; to learn more about them, see the demo_regression or the demo_probabilistic_regression notebooks.
Alternatives
A WrapCalibratedExplainer can also be initialized with a trained model or with a CalibratedExplainer object, as exemplified below.
fitted_classifier = WrapCalibratedExplainer(classifier.learner)
display(fitted_classifier)
calibrated_classifier = WrapCalibratedExplainer(classifier.explainer)
display(calibrated_classifier)
fitted_regressor = WrapCalibratedExplainer(regressor.learner)
display(fitted_regressor)
calibrated_regressor = WrapCalibratedExplainer(regressor.explainer)
display(calibrated_regressor)
When a calibrated explainer is re-fitted, the explainer is reinitialized.
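As a minimal sketch, re-fitting amounts to calling fit again on the calibrated wrapper:

# Re-fit the already calibrated wrapper on the proper training set;
# per the note above, the underlying explainer is reinitialized.
calibrated_regressor.fit(X_prop_train, y_prop_train)
display(calibrated_regressor)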
Known Limitations
The implementation currently only supports numerical input. Use utils.helper.transform_to_numeric (released in version v0.3.1) to transform a DataFrame with text data into numerical form and at the same time extract categorical_features, categorical_labels, target_labels (if text labels), and mappings (used to apply the same mappings to new data), to be used as input to the CalibratedExplainer. The algorithm does not currently support image data.
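A hedged sketch of what such a call might look like; the exact signature, parameter names, and the order of the returned values are assumptions and should be checked against the API documentation.

# Hypothetical sketch (signature and return order assumed, not verified):
# transform a DataFrame with text columns to numeric form and collect the
# mappings needed as input to the CalibratedExplainer.
from calibrated_explanations.utils.helper import transform_to_numeric

df_numeric, categorical_features, categorical_labels, target_labels, mappings = \
    transform_to_numeric(df, target='target')  # df and the target column name are placeholders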
See e.g. the Conditional Fairness Experiment for examples on how it can be used.
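Conditional explanations are created via the bins parameter (see the acknowledgements below). The following sketch assumes that bins takes one category per instance and can be passed both when calibrating and when explaining, which should be verified against the documentation.

# Hypothetical sketch of conditional explanations via the bins parameter
# (how and where bins is passed is assumed, not verified): each instance is
# assigned a category, here from an arbitrarily chosen condition on feature 0.
bins_cal = X_cal[:, 0] > 0.5    # categories for the calibration set
bins_test = X_test[:, 0] > 0.5  # the same condition applied to the test set

classifier.calibrate(X_cal, y_cal, feature_names=feature_names, bins=bins_cal)
conditional_explanations = classifier.explain_factual(X_test, bins=bins_test)
conditional_explanations.plot(0)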
Install
calibrated-explanations is implemented in Python, so you need a Python environment. Install calibrated-explanations from PyPI:
pip install calibrated-explanations
or from conda-forge:
conda install -c conda-forge calibrated-explanations
or by following further instructions at conda-forge.
The dependencies are:
Contributing
Contributions are welcome. Please send bug reports, feature requests or pull requests through the project page on GitHub. You can find a detailed guide for contributions in CONTRIBUTING.md.
Documentation
For documentation, see calibrated-explanations.readthedocs.io.
Further reading and citing
If you use calibrated-explanations
for a scientific publication, you are kindly requested to cite one of the following papers:
- Löfström, H., Löfström, T., Johansson, U., and Sönströd, C. (2024). Calibrated Explanations: with Uncertainty Information and Counterfactuals. Expert Systems with Applications, 1-27.
- Löfström, T., Löfström, H., Johansson, U., Sönströd, C., and Matela, R. Calibrated Explanations for Regression. arXiv preprint arXiv:2308.16245. Accepted to Machine Learning. In press.
- Löfström, H., Löfström, T. (2024). Conditional Calibrated Explanations: Finding a Path Between Bias and Uncertainty. In: Longo, L., Lapuschkin, S., Seifert, C. (eds) Explainable Artificial Intelligence. xAI 2024. Communications in Computer and Information Science, vol 2153. Springer, Cham.
- Löfström, T., Löfström, H., Johansson, U. (2024). Calibrated Explanations for Multi-class. Proceedings of the Thirteenth Workshop on Conformal and Probabilistic Prediction and Applications, in Proceedings of Machine Learning Research. In press.
The paper that originated the idea of calibrated-explanations is:
- Löfström, H., Löfström, T., Johansson, U., & Sönströd, C. (2023). Investigating the impact of calibration on the quality of explanations. Annals of Mathematics and Artificial Intelligence, 1-18. Code and results.
Bibtex entries for the papers above can be found in citing.
Acknowledgements
This research is funded by the Swedish Knowledge Foundation together with industrial partners supporting the research and education environment on Knowledge Intensive Product Realization SPARK at Jönköping University, Sweden, through projects: AFAIR grant no. 20200223, ETIAI grant no. 20230040, and PREMACOP grant no. 20220187. Helena Löfström was a PhD student in the Industrial Graduate School in Digital Retailing (INSiDR) at the University of Borås, funded by the Swedish Knowledge Foundation, grant no. 20160035.
Rudy Matela has been our git guru and has helped us with the release process.
We have used both the ConformalPredictiveSystem and DifficultyEstimator classes from Henrik Boström's crepes package to provide support for regression. The MondrianCategorizer class is also supported in the WrapCalibratedExplainer as an alternative to using the bins parameter to create conditional explanations.
We have used the VennAbers class from Ivan Petej's venn-abers package to provide support for probabilistic explanations (both classification and probabilistic regression).
We are using Decision Trees from scikit-learn in the discretizers.
We have copied code from Marco Tulio Correia Ribeiro's lime package for the Discretizer class.
The check_is_fitted and safe_instance functions in calibrated_explanations.utils are copied from sklearn and shap.