Algorithms for monitoring and explaining machine learning models
Project description
Alibi is an open source Python library aimed at machine learning model inspection and interpretation. The initial focus of the library is on black-box, instance-based model explanations.
Goals
- Provide high quality reference implementations of black-box ML model explanation algorithms
- Define a consistent API for interpretable ML methods
- Support multiple use cases (e.g. classification and regression on tabular, text and image data)
- Implement the latest model explanation, concept drift, algorithmic bias detection and other ML model monitoring and interpretation methods
Installation
Alibi can be installed from PyPI:
pip install alibi
Examples
Anchor method applied to the InceptionV3 model trained on ImageNet:
(Images: original prediction "Persian Cat" alongside the corresponding anchor explanation)
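An anchor is a set of feature predicates that, when they hold, keeps the model's prediction fixed with high probability. This is not Alibi's implementation (which handles images via superpixels and uses a bandit-based search); the following is a minimal greedy sketch of the anchors idea on a toy tabular predictor, with all names (`predict`, `greedy_anchor`) hypothetical:

```python
import random

# Toy black-box predictor: class 1 whenever feature 0 is high.
# In practice this would be any model's predict function.
def predict(x):
    return 1 if x[0] > 0.5 else 0

def anchor_precision(anchor, instance, predict, n_samples=1000):
    """Estimate P(predict(z) == predict(instance) | z matches the anchor)
    by sampling random instances and clamping the anchored features."""
    rng = random.Random(0)
    target = predict(instance)
    hits = 0
    for _ in range(n_samples):
        z = [rng.random() for _ in instance]  # random perturbation
        for i in anchor:                      # clamp anchored features
            z[i] = instance[i]
        hits += predict(z) == target
    return hits / n_samples

def greedy_anchor(instance, predict, threshold=0.95):
    """Greedily add the feature predicate that most raises precision
    until the anchor's estimated precision exceeds the threshold."""
    anchor = set()
    while anchor_precision(anchor, instance, predict) < threshold:
        best = max(
            (i for i in range(len(instance)) if i not in anchor),
            key=lambda i: anchor_precision(anchor | {i}, instance, predict),
        )
        anchor.add(best)
    return anchor

x = [0.9, 0.1, 0.4]
print(greedy_anchor(x, predict))  # fixing feature 0 alone suffices: {0}
```

Here the search stops as soon as clamping feature 0 keeps the prediction stable for every sampled perturbation, mirroring how an anchor highlights only the parts of the input that matter.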
Contrastive Explanation method applied to a CNN trained on MNIST:
(Images: original prediction "4", pertinent negative "9", pertinent positive "4")
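A pertinent negative is the smallest change to an instance that flips the model's prediction. Alibi's Contrastive Explanation method finds it via an elastic-net optimisation over the input; as a hand-computable illustration only, here is the closed-form minimal L2 perturbation that flips a linear classifier (function and variable names are hypothetical):

```python
# For a linear score s(x) = w.x + b, the smallest L2 perturbation that
# flips the sign is delta = -(s(x) / ||w||^2) * w, which moves x onto the
# decision boundary; a tiny margin pushes it just across.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def pertinent_negative(x, w, b, margin=1e-6):
    s = dot(w, x) + b
    scale = -(s + (margin if s > 0 else -margin)) / dot(w, w)
    return [scale * wi for wi in w]

w, b = [2.0, -1.0], 0.5
x = [1.0, 0.5]                        # s(x) = 2.0 > 0 -> class 1
delta = pertinent_negative(x, w, b)
x_pn = [xi + di for xi, di in zip(x, delta)]
print(dot(w, x_pn) + b)               # just below 0 -> class flips to 0
```

The perturbation points along `w` because that is the direction in which the score changes fastest per unit of distance, which is why the flip is achieved with the least total change to the input.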
Trust scores applied to a softmax classifier trained on MNIST.
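A trust score compares a classifier's prediction against nearest-neighbour evidence: it is the ratio of the distance from the test instance to the nearest class other than the predicted one, over the distance to the predicted class, so scores above 1 indicate agreement. Alibi's implementation adds density-based filtering of the training data; the following is a bare-bones sketch of the ratio itself (all names hypothetical):

```python
import math

def trust_score(x, pred_class, train_by_class):
    """Ratio of the distance to the closest class other than the
    predicted one over the distance to the predicted class.
    Values > 1 mean the nearest-neighbour evidence agrees with
    the classifier's prediction."""
    def dist_to_class(c):
        return min(math.dist(x, z) for z in train_by_class[c])
    d_pred = dist_to_class(pred_class)
    d_other = min(dist_to_class(c) for c in train_by_class if c != pred_class)
    return d_other / d_pred

# Two well-separated classes in 2D.
train = {0: [[0.0, 0.0], [0.1, 0.0]], 1: [[1.0, 1.0], [0.9, 1.0]]}
x = [0.05, 0.0]                                  # clearly in class 0 territory
print(trust_score(x, pred_class=0, train_by_class=train))  # well above 1
print(trust_score(x, pred_class=1, train_by_class=train))  # below 1: suspect
```

The second call shows the diagnostic use: if a classifier predicted class 1 for this point, the low trust score would flag the prediction as unreliable.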
Project details
Download files
Source distribution: alibi-0.2.0.tar.gz (46.9 kB)
Built distribution: alibi-0.2.0-py3-none-any.whl (59.8 kB)