Generic explainability architecture for text machine learning models
Project description
A generic explainability architecture for explaining text machine learning models.
Marcel Robeer, 2021
Installation
See installation.md for an extended installation guide.
Method | Instructions
---|---
pip | Install from PyPI via `pip3 install text_explainability`.
Local | Clone this repository and install via `pip3 install -e .` or locally run `python3 setup.py install`.
Example usage
See example_usage.md for an example of how the package can be used, or run the lines in example_usage.py to explore it interactively.
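Because the explanation methods are model-agnostic, any callable that maps texts to class probabilities can be explained. As a minimal, illustrative setup (using scikit-learn only; this is not from the package's documentation), a classifier to explain could look like this:

```python
# A minimal text classifier whose predictions could be explained by a
# model-agnostic method. Uses scikit-learn only; purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["a great movie", "terrible acting", "wonderful plot", "awful and boring"]
labels = ["pos", "neg", "pos", "neg"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Model-agnostic explanation methods only need this prediction function:
predict_proba = model.predict_proba
print(predict_proba(["a wonderful movie"]))
```

Any black-box model exposing such a probability function, regardless of library, can serve as the explained model.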
Explanation methods included
`text_explainability` includes methods for model-agnostic local explanation and global explanation. Each of these methods can be fully customized to fit the explainees' needs.
Type | Explanation method | Description | Paper/link
---|---|---|---
Local explanation | `LIME` | Calculate feature attribution with Local Interpretable Model-Agnostic Explanations (LIME). | [Ribeiro2016], interpretable-ml/lime
 | `KernelSHAP` | Calculate feature attribution with Shapley Additive Explanations (SHAP). | [Lundberg2017], interpretable-ml/shap
 | `LocalTree` | Fit a local decision tree around a single decision. | [Guidotti2018]
 | `LocalRules` | Fit a local sparse set of label-specific rules using `SkopeRules`. | github/skope-rules
 | `FoilTree` | Fit a local contrastive/counterfactual decision tree around a single decision. | [Robeer2018]
Global explanation | `TokenFrequency` | Show the top-k tokens for each ground-truth or predicted label. |
 | `TokenInformation` | Show the top-k token mutual information for a dataset or model. | wikipedia/mutual_information
 | `KMedoids` | Embed instances and find top-n prototypes (can also be performed for each label using `LabelwiseKMedoids`). | interpretable-ml/prototypes
 | `MMDCritic` | Embed instances and find top-n prototypes and top-n criticisms (can also be performed for each label using `LabelwiseMMDCritic`). | [Kim2016], interpretable-ml/prototypes
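The intuition behind feature-attribution methods such as LIME and KernelSHAP can be illustrated with a much simpler leave-one-out scheme: remove each token in turn and measure how much the predicted probability drops. The sketch below uses scikit-learn only and is an illustration of the idea, not the package's implementation:

```python
# Leave-one-out token attribution: score each token by how much the
# target-class probability drops when that token is removed.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["good fun film", "bad dull film", "good plot", "bad plot"]
labels = [1, 0, 1, 0]
model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(texts, labels)

def leave_one_out_attribution(text, target=1):
    """Attribute each token the probability drop caused by removing it."""
    tokens = text.split()
    base = model.predict_proba([text])[0][target]
    scores = {}
    for i, token in enumerate(tokens):
        perturbed = " ".join(tokens[:i] + tokens[i + 1:])
        scores[token] = base - model.predict_proba([perturbed])[0][target]
    return scores

print(leave_one_out_attribution("good fun film"))
```

LIME generalizes this idea by sampling many perturbed versions of the input and fitting a local interpretable (e.g. linear) model to the black-box predictions, rather than removing one token at a time.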
Releases
text_explainability
is officially released through PyPI.
See CHANGELOG.md for a full overview of the changes for each version.
Maintenance
Contributors
- Marcel Robeer (@m.j.robeer)
- Michiel Bron (@mpbron-phd)
Todo
Tasks yet to be done:
- Implement local post-hoc explanations:
    - Implement Anchors
- Implement global post-hoc explanations:
    - Representative subset
- Add support for regression models
- More complex data augmentation:
    - Top-k replacement (e.g. according to LM / WordNet)
    - Tokens to exclude from being changed
    - Bag-of-words style replacements
- Add rule-based return type
- Write more tests
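The planned data-augmentation items above could, for instance, take the shape of a bag-of-words style replacement: sample substitute tokens from a vocabulary while leaving an exclusion set untouched. A stdlib-only sketch of that idea (the function name and parameters here are hypothetical, not part of the package's API):

```python
import random

def replace_tokens(text, vocabulary, exclude=frozenset(), rate=0.3, seed=0):
    """Randomly replace tokens with samples from a vocabulary,
    skipping any token in the exclusion set."""
    rng = random.Random(seed)  # seeded for reproducible augmentation
    tokens = text.split()
    return " ".join(
        rng.choice(vocabulary)
        if token not in exclude and rng.random() < rate
        else token
        for token in tokens
    )

vocab = ["film", "story", "scene", "actor"]
print(replace_tokens("the good movie was fun", vocab, exclude={"good"}))
```

A top-k variant would replace a token with its k nearest neighbours from a language model or WordNet instead of drawing uniformly from a flat vocabulary.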
Hashes for text_explainability-0.4.5.tar.gz

Algorithm | Hash digest
---|---
SHA256 | 74aa8a0cc090a91cf0d298b15e4edcc6465cb1629782281070679e2e22e6f82f
MD5 | 5134794ef1128ddb85101bf4ef5fae5d
BLAKE2b-256 | f9c77f49c510c91d69610ead8cd017a30125b3e8596e378146c879838c22d282
Hashes for text_explainability-0.4.5-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | e29569645394a88c4fffde3b694caf48118bcf5ba27c32bbea0e7a2fb0432a1a
MD5 | c59cb498313d3c262525295052ba4864
BLAKE2b-256 | f3d84ce820ce1fd89475e1893d96e73d0456514b52f2ed6472fa020912980486