Explainiverse

Unified, extensible explainability framework supporting LIME, SHAP, Anchors, Counterfactuals, PDP, ALE, SAGE, and more.

Explainiverse is a unified, extensible Python framework for Explainable AI (XAI).
It provides a standardized interface for model-agnostic explainability with 10 state-of-the-art XAI methods, evaluation metrics, and a plugin registry for easy extensibility.


Features

🎯 Comprehensive XAI Coverage

Local Explainers (instance-level explanations):

  • LIME
  • SHAP (KernelSHAP) and TreeSHAP
  • Integrated Gradients
  • Anchors
  • Counterfactuals

Global Explainers (model-level explanations):

  • Permutation Importance
  • Partial Dependence Plots (PDP)
  • Accumulated Local Effects (ALE)
  • SAGE

🔌 Extensible Plugin Registry

  • Register custom explainers with rich metadata
  • Filter by scope (local/global), model type, data type
  • Automatic recommendations based on use case

📊 Evaluation Metrics

  • AOPC (Area Over Perturbation Curve)
  • ROAR (Remove And Retrain)
  • Multiple baseline options and curve generation
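
The AOPC idea can be sketched in a few lines of NumPy: perturb features in descending order of attributed importance and average the resulting drop in the model's score. This is a conceptual illustration only; the `aopc` function name and signature here are hypothetical, not the library's API.

```python
import numpy as np

def aopc(predict, x, attributions, baseline=0.0, k_max=None):
    """Area Over the Perturbation Curve for a single instance.

    predict: callable mapping a 1-D feature vector to a scalar score.
    attributions: per-feature importances; features are "removed" in
    descending order of importance by setting them to `baseline`.
    """
    order = np.argsort(-np.asarray(attributions))
    k_max = len(order) if k_max is None else k_max
    f0 = predict(x)
    x_pert = np.array(x, dtype=float)
    drops = []
    for k in range(k_max):
        x_pert[order[k]] = baseline   # remove the next-most-important feature
        drops.append(f0 - predict(x_pert))
    return float(np.mean(drops))

# Toy linear model: score = 3*x0 + 1*x1, so exact attributions are [3, 1]
predict = lambda v: 3 * v[0] + 1 * v[1]
x = np.array([1.0, 1.0])
print(aopc(predict, x, attributions=[3.0, 1.0]))  # 3.5
```

A faithful attribution ranking yields a steep perturbation curve (large drops early), hence a larger AOPC.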

🧪 Standardized Interface

  • Consistent BaseExplainer API
  • Unified Explanation output format
  • Model adapters for sklearn and PyTorch

Installation

From PyPI:

pip install explainiverse

With PyTorch support (for neural network explanations):

pip install explainiverse[torch]

For development:

git clone https://github.com/jemsbhai/explainiverse.git
cd explainiverse
poetry install

Quick Start

Using the Registry (Recommended)

from explainiverse import default_registry, SklearnAdapter
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_iris

# Train a model
iris = load_iris()
model = RandomForestClassifier().fit(iris.data, iris.target)
adapter = SklearnAdapter(model, class_names=iris.target_names.tolist())

# List available explainers
print(default_registry.list_explainers())
# ['lime', 'shap', 'treeshap', 'integrated_gradients', 'anchors', 'counterfactual', 'permutation_importance', 'partial_dependence', 'ale', 'sage']

# Create and use an explainer
explainer = default_registry.create(
    "lime",
    model=adapter,
    training_data=iris.data,
    feature_names=iris.feature_names,
    class_names=iris.target_names.tolist()
)
explanation = explainer.explain(iris.data[0])
print(explanation.explanation_data["feature_attributions"])

Filter Explainers by Criteria

# Find local explainers for tabular data
local_tabular = default_registry.filter(scope="local", data_type="tabular")
print(local_tabular)  # ['lime', 'shap', 'treeshap', 'integrated_gradients', 'anchors', 'counterfactual']

# Find explainers optimized for tree models
tree_explainers = default_registry.filter(model_type="tree")
print(tree_explainers)  # ['treeshap']

# Get recommendations
recommendations = default_registry.recommend(
    model_type="any",
    data_type="tabular",
    scope_preference="local"
)

TreeSHAP for Tree Models (10x+ Faster)

from explainiverse.explainers import TreeShapExplainer
from sklearn.ensemble import RandomForestClassifier

# Train a tree-based model
model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

# TreeSHAP works directly with the model (no adapter needed)
explainer = TreeShapExplainer(
    model=model,
    feature_names=feature_names,
    class_names=class_names
)

# Single instance explanation
explanation = explainer.explain(X_test[0])
print(explanation.explanation_data["feature_attributions"])

# Batch explanations (efficient)
explanations = explainer.explain_batch(X_test[:10])

# Feature interactions
interactions = explainer.explain_interactions(X_test[0])
print(interactions.explanation_data["interaction_matrix"])

PyTorch Adapter for Neural Networks

from explainiverse import PyTorchAdapter
import torch.nn as nn

# Define a PyTorch model
model = nn.Sequential(
    nn.Linear(10, 64),
    nn.ReLU(),
    nn.Linear(64, 3)
)

# Wrap with adapter
adapter = PyTorchAdapter(
    model,
    task="classification",
    class_names=["cat", "dog", "bird"]
)

# Use with any explainer
predictions = adapter.predict(X)  # Returns numpy array

# Get gradients for attribution methods
predictions, gradients = adapter.predict_with_gradients(X)

# Access intermediate layers
activations = adapter.get_layer_output(X, layer_name="0")

Integrated Gradients for Neural Networks

from explainiverse.explainers import IntegratedGradientsExplainer
from explainiverse import PyTorchAdapter

# Wrap your PyTorch model
adapter = PyTorchAdapter(model, task="classification", class_names=class_names)

# Create IG explainer
explainer = IntegratedGradientsExplainer(
    model=adapter,
    feature_names=feature_names,
    class_names=class_names,
    n_steps=50  # More steps = more accurate
)

# Explain a prediction
explanation = explainer.explain(X_test[0])
print(explanation.explanation_data["feature_attributions"])

# Check convergence (sum of attributions ≈ F(x) - F(baseline))
explanation = explainer.explain(X_test[0], return_convergence_delta=True)
print(f"Convergence delta: {explanation.explanation_data['convergence_delta']}")
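
The convergence check above follows from the completeness property of Integrated Gradients. A minimal NumPy sketch (independent of the library, using a hand-written linear model where the path integral is exact) shows why the attributions sum to F(x) - F(baseline):

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, n_steps=50):
    # Midpoint Riemann approximation of the gradient path integral
    # from baseline to x along the straight line between them.
    alphas = (np.arange(n_steps) + 0.5) / n_steps
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        total += grad_fn(baseline + a * (x - baseline))
    return (x - baseline) * total / n_steps

w = np.array([2.0, -1.0, 0.5])
f = lambda v: float(w @ v)   # F(x) = w·x
grad = lambda v: w           # gradient is constant for a linear model
x = np.array([1.0, 2.0, 4.0])
baseline = np.zeros(3)

attr = integrated_gradients(grad, x, baseline)
print(attr)                              # [ 2. -2.  2.]
print(attr.sum(), f(x) - f(baseline))    # completeness: both equal 2.0
```

For nonlinear models the sum only approximates F(x) - F(baseline), and the gap shrinks as `n_steps` grows, which is exactly what `convergence_delta` reports.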

Using Specific Explainers

# Anchors - Rule-based explanations
from explainiverse.explainers import AnchorsExplainer

anchors = AnchorsExplainer(
    model=adapter,
    training_data=X_train,
    feature_names=feature_names,
    class_names=class_names
)
explanation = anchors.explain(instance)
print(explanation.explanation_data["rules"])
# ['petal length (cm) > 2.45', 'petal width (cm) <= 1.75']

# Counterfactual - What-if explanations
from explainiverse.explainers import CounterfactualExplainer

cf = CounterfactualExplainer(
    model=adapter,
    training_data=X_train,
    feature_names=feature_names
)
explanation = cf.explain(instance, num_counterfactuals=3)
print(explanation.explanation_data["changes"])

# SAGE - Global Shapley importance
from explainiverse.explainers import SAGEExplainer

sage = SAGEExplainer(
    model=adapter,
    X=X_train,
    y=y_train,
    feature_names=feature_names
)
explanation = sage.explain()
print(explanation.explanation_data["feature_attributions"])

Explanation Suite (Multi-Explainer Comparison)

from explainiverse import ExplanationSuite

suite = ExplanationSuite(
    model=adapter,
    explainer_configs=[
        ("lime", {"training_data": X_train, "feature_names": feature_names, "class_names": class_names}),
        ("shap", {"background_data": X_train[:50], "feature_names": feature_names, "class_names": class_names}),
    ]
)

results = suite.run(instance)
suite.compare()
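
One simple way to quantify agreement between two explainers' outputs is rank correlation of their per-feature attribution vectors. This is a generic sketch, not part of the ExplanationSuite API, and the attribution values below are made up for illustration:

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation of two attribution vectors (no ties)."""
    ra = np.argsort(np.argsort(a)).astype(float)  # ranks of a
    rb = np.argsort(np.argsort(b)).astype(float)  # ranks of b
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

lime_attr = [0.42, 0.05, 0.31, 0.22]  # hypothetical LIME attributions
shap_attr = [0.40, 0.02, 0.35, 0.18]  # hypothetical SHAP attributions
print(spearman(lime_attr, shap_attr))  # 1.0 — identical feature ranking
```

High rank correlation across methods is a useful sanity check: the magnitudes may differ, but the explainers agree on which features matter most.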

Registering Custom Explainers

from explainiverse import default_registry, ExplainerMeta, BaseExplainer, Explanation

@default_registry.register_decorator(
    name="my_explainer",
    meta=ExplainerMeta(
        scope="local",
        model_types=["any"],
        data_types=["tabular"],
        description="My custom explainer",
        paper_reference="Author et al., 2024"
    )
)
class MyExplainer(BaseExplainer):
    def explain(self, instance, **kwargs):
        # Your implementation
        return Explanation(...)

Running Tests

# Run all tests
poetry run pytest

# Run with coverage
poetry run pytest --cov=explainiverse

# Run specific test file
poetry run pytest tests/test_new_explainers.py -v

Roadmap

  • LIME, SHAP (KernelSHAP) ✅
  • TreeSHAP (optimized for tree models) ✅
  • Anchors, Counterfactuals ✅
  • Permutation Importance, PDP, ALE, SAGE ✅
  • Explainer Registry with filtering ✅
  • PyTorch Adapter ✅
  • Integrated Gradients ✅ NEW
  • GradCAM for CNNs (planned)
  • TensorFlow adapter (planned)
  • Interactive visualization dashboard (planned)

Citation

If you use Explainiverse in your research, please cite:

@software{explainiverse2024,
  title = {Explainiverse: A Unified Framework for Explainable AI},
  author = {Syed, Muntaser},
  year = {2024},
  url = {https://github.com/jemsbhai/explainiverse}
}

License

MIT License - see LICENSE for details.
