# Explainiverse

Unified, extensible explainability framework supporting LIME, SHAP, Anchors, Counterfactuals, PDP, ALE, SAGE, and more.
Explainiverse is a unified, extensible Python framework for Explainable AI (XAI).
It provides a standardized interface for model-agnostic explainability with 9 state-of-the-art XAI methods, evaluation metrics, and a plugin registry for easy extensibility.
## Features

### 🎯 Comprehensive XAI Coverage
Local Explainers (instance-level explanations):
- LIME - Local Interpretable Model-agnostic Explanations (Ribeiro et al., 2016)
- SHAP - SHapley Additive exPlanations via KernelSHAP (Lundberg & Lee, 2017)
- TreeSHAP - Exact SHAP values for tree models, 10x+ faster (Lundberg et al., 2018)
- Anchors - High-precision rule-based explanations (Ribeiro et al., 2018)
- Counterfactual - DiCE-style diverse counterfactual explanations (Mothilal et al., 2020)
Global Explainers (model-level explanations):
- Permutation Importance - Feature importance via performance degradation (Breiman, 2001)
- Partial Dependence (PDP) - Marginal feature effects (Friedman, 2001)
- ALE - Accumulated Local Effects, unbiased for correlated features (Apley & Zhu, 2020)
- SAGE - Shapley Additive Global importancE (Covert et al., 2020)
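To make the global methods concrete, here is a from-scratch sketch of the simplest one, permutation importance: shuffle one feature column and measure the drop in accuracy. This uses only scikit-learn and NumPy, not the explainiverse API.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)
baseline = model.score(X, y)

importances = []
for j in range(X.shape[1]):
    X_perm = X.copy()
    # Shuffling column j breaks its link to the labels; the accuracy
    # drop measures how much the model relied on that feature.
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    importances.append(baseline - model.score(X_perm, y))
```

In practice you would average over several shuffles and score on held-out data; explainiverse's `permutation_importance` explainer wraps this idea behind the standard interface.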
### 🔌 Extensible Plugin Registry
- Register custom explainers with rich metadata
- Filter by scope (local/global), model type, data type
- Automatic recommendations based on use case
### 📊 Evaluation Metrics
- AOPC (Area Over Perturbation Curve)
- ROAR (Remove And Retrain)
- Multiple baseline options and curve generation
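The idea behind AOPC can be sketched in a few lines: mask features in order of attributed importance and average the resulting drop in the model's predicted probability. The snippet below is an illustration with NumPy and scikit-learn, not the explainiverse metrics API, and it uses impurity-based importances as a stand-in attribution.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

x = X[0]
target = model.predict([x])[0]
p0 = model.predict_proba([x])[0, target]

# Rank features by a simple attribution proxy (impurity importance),
# and use mean-imputation as the perturbation baseline.
order = np.argsort(model.feature_importances_)[::-1]
baseline_values = X.mean(axis=0)

drops = []
x_pert = x.copy()
for j in order:
    x_pert[j] = baseline_values[j]  # "remove" the next most important feature
    p = model.predict_proba([x_pert])[0, target]
    drops.append(p0 - p)

aopc = float(np.mean(drops))  # larger = attributions identify influential features
```

ROAR follows the same perturbation idea but retrains the model on the masked data, which is slower and stricter.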
### 🧪 Standardized Interface
- Consistent `BaseExplainer` API
- Unified `Explanation` output format
- Model adapters for sklearn and PyTorch
## Installation

From PyPI:

```bash
pip install explainiverse
```

With PyTorch support (for neural network explanations):

```bash
pip install explainiverse[torch]
```

For development:

```bash
git clone https://github.com/jemsbhai/explainiverse.git
cd explainiverse
poetry install
```
## Quick Start

### Using the Registry (Recommended)
```python
from explainiverse import default_registry, SklearnAdapter
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_iris

# Train a model
iris = load_iris()
model = RandomForestClassifier().fit(iris.data, iris.target)
adapter = SklearnAdapter(model, class_names=iris.target_names.tolist())

# List available explainers
print(default_registry.list_explainers())
# ['lime', 'shap', 'treeshap', 'anchors', 'counterfactual', 'permutation_importance', 'partial_dependence', 'ale', 'sage']

# Create and use an explainer
explainer = default_registry.create(
    "lime",
    model=adapter,
    training_data=iris.data,
    feature_names=iris.feature_names,
    class_names=iris.target_names.tolist(),
)
explanation = explainer.explain(iris.data[0])
print(explanation.explanation_data["feature_attributions"])
```
### Filter Explainers by Criteria

```python
# Find local explainers for tabular data
local_tabular = default_registry.filter(scope="local", data_type="tabular")
print(local_tabular)  # ['lime', 'shap', 'treeshap', 'anchors', 'counterfactual']

# Find explainers optimized for tree models
tree_explainers = default_registry.filter(model_type="tree")
print(tree_explainers)  # ['treeshap']

# Get recommendations
recommendations = default_registry.recommend(
    model_type="any",
    data_type="tabular",
    scope_preference="local",
)
```
### TreeSHAP for Tree Models (10x+ Faster)

```python
from explainiverse.explainers import TreeShapExplainer
from sklearn.ensemble import RandomForestClassifier

# Train a tree-based model
model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

# TreeSHAP works directly with the model (no adapter needed)
explainer = TreeShapExplainer(
    model=model,
    feature_names=feature_names,
    class_names=class_names,
)

# Single-instance explanation
explanation = explainer.explain(X_test[0])
print(explanation.explanation_data["feature_attributions"])

# Batch explanations (efficient)
explanations = explainer.explain_batch(X_test[:10])

# Feature interactions
interactions = explainer.explain_interactions(X_test[0])
print(interactions.explanation_data["interaction_matrix"])
```
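For intuition about what any SHAP variant computes, here is the quantity itself, brute force: a feature's Shapley value is its average marginal contribution over all subsets of the other features. This is exponential in the number of features, which is exactly why KernelSHAP approximates it and TreeSHAP exploits tree structure to compute it in polynomial time. Sketch with scikit-learn only; "absent" features are filled with the training mean.

```python
from itertools import combinations
from math import factorial
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)
x = X[0]
background = X.mean(axis=0)
n = X.shape[1]

def value(subset):
    # Model output with only `subset` features taken from x,
    # the rest replaced by the background (mean) values.
    z = background.copy()
    z[list(subset)] = x[list(subset)]
    return model.predict_proba([z])[0, 0]

phi = np.zeros(n)
for j in range(n):
    others = [k for k in range(n) if k != j]
    for size in range(n):
        for S in combinations(others, size):
            # Shapley kernel weight for a coalition of this size
            w = factorial(size) * factorial(n - size - 1) / factorial(n)
            phi[j] += w * (value(S + (j,)) - value(S))
```

A useful sanity check is the efficiency property: the attributions sum exactly to the difference between the prediction with all features present and the all-background prediction.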
### PyTorch Adapter for Neural Networks

```python
from explainiverse import PyTorchAdapter
import torch.nn as nn

# Define a PyTorch model
model = nn.Sequential(
    nn.Linear(10, 64),
    nn.ReLU(),
    nn.Linear(64, 3),
)

# Wrap with adapter
adapter = PyTorchAdapter(
    model,
    task="classification",
    class_names=["cat", "dog", "bird"],
)

# Use with any explainer
predictions = adapter.predict(X)  # Returns numpy array

# Get gradients for attribution methods
predictions, gradients = adapter.predict_with_gradients(X)

# Access intermediate layers
activations = adapter.get_layer_output(X, layer_name="0")
```
### Using Specific Explainers

```python
# Anchors - rule-based explanations
from explainiverse.explainers import AnchorsExplainer

anchors = AnchorsExplainer(
    model=adapter,
    training_data=X_train,
    feature_names=feature_names,
    class_names=class_names,
)
explanation = anchors.explain(instance)
print(explanation.explanation_data["rules"])
# ['petal length (cm) > 2.45', 'petal width (cm) <= 1.75']

# Counterfactual - what-if explanations
from explainiverse.explainers import CounterfactualExplainer

cf = CounterfactualExplainer(
    model=adapter,
    training_data=X_train,
    feature_names=feature_names,
)
explanation = cf.explain(instance, num_counterfactuals=3)
print(explanation.explanation_data["changes"])

# SAGE - global Shapley importance
from explainiverse.explainers import SAGEExplainer

sage = SAGEExplainer(
    model=adapter,
    X=X_train,
    y=y_train,
    feature_names=feature_names,
)
explanation = sage.explain()
print(explanation.explanation_data["feature_attributions"])
```
### Explanation Suite (Multi-Explainer Comparison)

```python
from explainiverse import ExplanationSuite

suite = ExplanationSuite(
    model=adapter,
    explainer_configs=[
        ("lime", {"training_data": X_train, "feature_names": feature_names, "class_names": class_names}),
        ("shap", {"background_data": X_train[:50], "feature_names": feature_names, "class_names": class_names}),
    ],
)
results = suite.run(instance)
suite.compare()
```
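One common way to compare two explainers' outputs is to check how similarly they rank the features, e.g. with a Spearman rank correlation. The sketch below shows the idea with NumPy only; the attribution values are made up for illustration and this is not the suite's comparison API.

```python
import numpy as np

# Hypothetical per-feature attributions from two explainers
lime_attr = {"petal length": 0.42, "petal width": 0.31, "sepal length": 0.05, "sepal width": 0.02}
shap_attr = {"petal length": 0.38, "petal width": 0.35, "sepal length": 0.03, "sepal width": 0.04}

features = sorted(lime_attr)
a = np.array([lime_attr[f] for f in features])
b = np.array([shap_attr[f] for f in features])

def ranks(v):
    # Rank of each element in ascending order (0 = smallest)
    return np.argsort(np.argsort(v)).astype(float)

# Spearman rank correlation = Pearson correlation of the ranks
rho = float(np.corrcoef(ranks(a), ranks(b))[0, 1])
```

A rho near 1 means the methods agree on which features matter; large disagreement is itself a useful signal that the explanation is sensitive to method choice.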
## Registering Custom Explainers

```python
from explainiverse import default_registry, ExplainerMeta, BaseExplainer, Explanation

@default_registry.register_decorator(
    name="my_explainer",
    meta=ExplainerMeta(
        scope="local",
        model_types=["any"],
        data_types=["tabular"],
        description="My custom explainer",
        paper_reference="Author et al., 2024",
    )
)
class MyExplainer(BaseExplainer):
    def explain(self, instance, **kwargs):
        # Your implementation
        return Explanation(...)
## Running Tests

```bash
# Run all tests
poetry run pytest

# Run with coverage
poetry run pytest --cov=explainiverse

# Run a specific test file
poetry run pytest tests/test_new_explainers.py -v
```
## Roadmap

- [x] LIME, SHAP (KernelSHAP)
- [x] TreeSHAP (optimized for tree models) ✅ NEW
- [x] Anchors, Counterfactuals
- [x] Permutation Importance, PDP, ALE, SAGE
- [x] Explainer Registry with filtering
- [x] PyTorch Adapter ✅ NEW
- [ ] Integrated Gradients (gradient-based for neural nets)
- [ ] GradCAM for CNNs
- [ ] TensorFlow adapter
- [ ] Interactive visualization dashboard
## Citation

If you use Explainiverse in your research, please cite:

```bibtex
@software{explainiverse2024,
  title = {Explainiverse: A Unified Framework for Explainable AI},
  author = {Syed, Muntaser},
  year = {2024},
  url = {https://github.com/jemsbhai/explainiverse}
}
```
## License
MIT License - see LICENSE for details.