# xai-benchmark

Public and extensible benchmark for XAI methods

## Description
XAIB is an open benchmark that provides a way to compare different XAI methods using a broad set of metrics aimed at measuring different aspects of interpretability.
## Installation

```bash
git clone https://github.com/Oxid15/xai-benchmark.git
cd xai-benchmark
pip install .
```
Then you can verify your installation by importing the package:

```python
import xaib
print(xaib.__version__)
```
## Usage

To test your own XAI method, wrap it in the Explainer interface (see the explainers folder for examples of how to do that) and pass it into the explainers dict of the corresponding evaluation notebook. For example, for feature importance methods go to evaluation/feature_importance/feature_importance.ipynb.
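As a minimal sketch of such a wrapper (the real Explainer base class lives in the repository's explainers folder and its exact signature may differ; the stub base class, the `predict` method name, and the toy importance rule below are all assumptions for illustration):

```python
import numpy as np

# Hypothetical stand-in for xaib's Explainer base class; in practice
# you would subclass the interface shipped in the explainers folder.
class Explainer:
    def predict(self, x, model):
        raise NotImplementedError

class MeanAbsWeightExplainer(Explainer):
    """Toy feature-importance method: importance of each feature is the
    absolute value of a linear model's weight, repeated for every sample."""
    def predict(self, x, model):
        # model is assumed to expose a sklearn-style coef_ attribute
        return np.abs(model.coef_) * np.ones((len(x), 1))

class LinearModel:
    # Illustrative linear model with fixed weights
    coef_ = np.array([0.5, -2.0, 1.5])

explainer = MeanAbsWeightExplainer()
importances = explainer.predict(np.zeros((4, 3)), LinearModel())
print(importances.shape)  # one importance row per sample: (4, 3)
```

In the evaluation notebook, an instance of such a class would then be registered in the explainers dict, e.g. `explainers["my_method"] = MeanAbsWeightExplainer()` (the key name here is illustrative).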
## Results

Metric values for all tested algorithms and baselines can be found in the evaluation folder, under the subfolder corresponding to the method type. For example, for feature importance methods the metric values are in evaluation/feature_importance/feature_importance.ipynb.