Generic explainability architecture for text machine learning models
Project description
A generic explainability architecture for explaining text machine learning models.
Marcel Robeer, 2021
Installation
| Method | Instructions |
|---|---|
| pip | Install from PyPI via `pip3 install text_explainability`. |
| Local | Clone this repository and install via `pip3 install -e .`, or run `python3 setup.py install` locally. |
Example usage
Run the lines in `example_usage.py` to see an example of how the package can be used.
Explanation methods included
`text_explainability` includes methods for model-agnostic local explanation and global explanation. Each of these methods can be fully customized to fit the explainees' needs.
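As a conceptual illustration of what a local attribution method such as LIME computes, the sketch below perturbs the presence of individual tokens and measures how a black-box prediction changes. This is a self-contained toy, not the package's actual API: the `predict` classifier, the `POSITIVE` word list, and `lime_like_attribution` are all hypothetical names, and the per-token mean-difference score is a simplified stand-in for LIME's weighted linear surrogate.

```python
import random

# Hypothetical toy "model": scores a text by the fraction of positive words.
POSITIVE = {"great", "good", "excellent"}

def predict(tokens):
    """Toy black-box classifier: fraction of tokens that are positive words."""
    return sum(t in POSITIVE for t in tokens) / max(len(tokens), 1)

def lime_like_attribution(tokens, n_samples=500, seed=0):
    """Estimate each token's attribution by perturbing token presence.

    For each random subset (mask) of tokens we query the black box, then
    score a token as the mean prediction difference between samples where
    it was kept and samples where it was dropped.
    """
    rng = random.Random(seed)
    kept = [[] for _ in tokens]     # predictions when token i was kept
    dropped = [[] for _ in tokens]  # predictions when token i was dropped
    for _ in range(n_samples):
        mask = [rng.random() < 0.5 for _ in tokens]
        perturbed = [t for t, keep in zip(tokens, mask) if keep]
        y = predict(perturbed)
        for i, keep in enumerate(mask):
            (kept if keep else dropped)[i].append(y)

    def mean(xs):
        return sum(xs) / len(xs) if xs else 0.0

    return {t: mean(kept[i]) - mean(dropped[i]) for i, t in enumerate(tokens)}

attr = lime_like_attribution("the movie was great".split())
```

Here `attr` assigns the positive word "great" the highest score, because dropping it lowers the toy model's prediction the most.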
| Type | Explanation method | Description | Paper/link |
|---|---|---|---|
| Local explanation | `LIME` | Calculate feature attribution with Local Interpretable Model-Agnostic Explanations (LIME). | [Ribeiro2016], interpretable-ml/lime |
| | `KernelSHAP` | Calculate feature attribution with SHapley Additive exPlanations (SHAP). | [Lundberg2017], interpretable-ml/shap |
| | `LocalTree` | Fit a local decision tree around a single decision. | [Guidotti2018] |
| | `FoilTree` | Fit a local contrastive/counterfactual decision tree around a single decision. | [Robeer2018] |
| Global explanation | `TokenFrequency` | Show the top-k tokens for each ground-truth or predicted label. | |
| | `TokenInformation` | Show the top-k token mutual information for a dataset or model. | wikipedia/mutual_information |
| | `KMedoids` | Embed instances and find the top-n prototypes (can also be performed per label using `LabelwiseKMedoids`). | interpretable-ml/prototypes |
| | `MMDCritic` | Embed instances and find the top-n prototypes and top-n criticisms (can also be performed per label using `LabelwiseMMDCritic`). | [Kim2016], interpretable-ml/prototypes |
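To make the global-explanation side concrete, the sketch below shows the idea behind a `TokenFrequency`-style explanation: the top-k most frequent tokens per label. This is a conceptual stand-in, not the package's API; the `token_frequency` function and the `(text, label)` corpus format are assumptions for illustration.

```python
from collections import Counter

def token_frequency(corpus, k=3):
    """Top-k most frequent tokens per label (conceptual sketch).

    `corpus` is a list of (text, label) pairs; returns a mapping from
    each label to its k most frequent tokens.
    """
    per_label = {}
    for text, label in corpus:
        per_label.setdefault(label, Counter()).update(text.lower().split())
    return {label: [tok for tok, _ in counts.most_common(k)]
            for label, counts in per_label.items()}

corpus = [
    ("great movie great acting", "pos"),
    ("good fun", "pos"),
    ("bad plot bad pacing", "neg"),
]
top = token_frequency(corpus, k=2)
```

For this toy corpus, "great" tops the `pos` label and "bad" tops the `neg` label, hinting at which tokens each class is associated with.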
Releases
`text_explainability` is officially released through PyPI.
See CHANGELOG.md for a full overview of the changes for each version.
Maintenance
Contributors
- Marcel Robeer (@m.j.robeer)
- Michiel Bron (@mpbron-phd)
Todo
Tasks yet to be done:
- Implement local post-hoc explanations:
  - Implement Anchors
- Implement global post-hoc explanations:
  - Representative subset
- Add support for regression models
- More complex data augmentation:
  - Top-k replacement (e.g. according to LM / WordNet)
  - Tokens to exclude from being changed
  - Bag-of-words style replacements
- Add rule-based return type
- Write more tests
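The planned top-k replacement augmentation could look roughly like the sketch below, which swaps tokens for alternatives from a lookup table while honoring an exclude set. Everything here is hypothetical: `SYNONYMS` stands in for an LM- or WordNet-based candidate list, and `topk_replace` is not an existing function in the package.

```python
import random

# Hypothetical synonym table standing in for an LM / WordNet top-k lookup.
SYNONYMS = {
    "good": ["great", "fine"],
    "movie": ["film", "picture"],
}

def topk_replace(text, exclude=frozenset(), seed=0):
    """Sketch of top-k replacement augmentation.

    Each token with known alternatives is replaced by a randomly chosen
    candidate, unless it appears in `exclude` (tokens to keep unchanged).
    """
    rng = random.Random(seed)
    out = []
    for tok in text.split():
        if tok in SYNONYMS and tok not in exclude:
            out.append(rng.choice(SYNONYMS[tok]))
        else:
            out.append(tok)
    return " ".join(out)

augmented = topk_replace("a good movie", exclude={"movie"})
```

With "movie" excluded, only "good" is eligible for replacement, so the augmented text keeps the sentence structure while varying one token.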