Unified, extensible explainability framework supporting LIME, SHAP, Anchors, Counterfactuals, PDP, ALE, SAGE, and more


Explainiverse

Explainiverse is a unified, extensible Python framework for Explainable AI (XAI).
It provides a standardized interface for model-agnostic explainability with 8 state-of-the-art XAI methods, evaluation metrics, and a plugin registry for easy extensibility.


Features

🎯 Comprehensive XAI Coverage

Local Explainers (instance-level explanations):

  • LIME
  • SHAP (KernelSHAP)
  • Anchors
  • Counterfactuals

Global Explainers (model-level explanations):

  • Permutation Importance
  • Partial Dependence Plots (PDP)
  • Accumulated Local Effects (ALE)
  • SAGE

🔌 Extensible Plugin Registry

  • Register custom explainers with rich metadata
  • Filter by scope (local/global), model type, data type
  • Automatic recommendations based on use case

📊 Evaluation Metrics

  • AOPC (Area Over the Perturbation Curve)
  • ROAR (Remove And Retrain)
  • Multiple baseline options and curve generation
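To illustrate what AOPC measures, here is a minimal self-contained sketch in plain NumPy (the function name and signature are illustrative, not the library's API): a ranking that removes important features first should produce a larger average drop in model output than one that removes them last.

```python
import numpy as np

def aopc(model_fn, x, ranking, baseline):
    """Area Over the Perturbation Curve: mean drop in model output as
    top-ranked features are successively replaced by a baseline value.
    A higher AOPC means the ranking captured the important features."""
    original = model_fn(x)
    x_pert = x.copy()
    drops = []
    for feature_idx in ranking:
        x_pert[feature_idx] = baseline[feature_idx]
        drops.append(original - model_fn(x_pert))
    return float(np.mean(drops))

# Toy linear "model": feature 1 dominates, feature 2 barely matters.
weights = np.array([1.0, 3.0, 0.5])
model_fn = lambda x: float(weights @ x)

x = np.array([1.0, 1.0, 1.0])
baseline = np.zeros(3)

good = aopc(model_fn, x, ranking=[1, 0, 2], baseline=baseline)  # important first
bad = aopc(model_fn, x, ranking=[2, 0, 1], baseline=baseline)   # important last
print(good, bad)  # the good ranking scores higher
```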

🧪 Standardized Interface

  • Consistent BaseExplainer API
  • Unified Explanation output format
  • Model adapters for sklearn and more
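The interface shape described above can be sketched as a minimal mock (illustrative only; the real class definitions in explainiverse may differ in fields and signatures): every explainer subclasses a common base and returns the same Explanation structure.

```python
from dataclasses import dataclass, field

@dataclass
class Explanation:
    """Unified output format: which explainer ran, and its payload."""
    explainer_name: str
    explanation_data: dict = field(default_factory=dict)

class BaseExplainer:
    """Consistent API: construct with a model adapter, call explain()."""
    def __init__(self, model, **kwargs):
        self.model = model

    def explain(self, instance, **kwargs) -> Explanation:
        raise NotImplementedError

class AbsValueExplainer(BaseExplainer):
    """Trivial demo explainer: attributes by absolute feature value."""
    def explain(self, instance, **kwargs) -> Explanation:
        attributions = {f"f{i}": abs(v) for i, v in enumerate(instance)}
        return Explanation("abs_value", {"feature_attributions": attributions})

exp = AbsValueExplainer(model=None).explain([0.5, -2.0])
print(exp.explanation_data["feature_attributions"])  # {'f0': 0.5, 'f1': 2.0}
```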

Installation

From PyPI:

pip install explainiverse

For development:

git clone https://github.com/jemsbhai/explainiverse.git
cd explainiverse
poetry install

Quick Start

Using the Registry (Recommended)

from explainiverse import default_registry, SklearnAdapter
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_iris

# Train a model
iris = load_iris()
model = RandomForestClassifier().fit(iris.data, iris.target)
adapter = SklearnAdapter(model, class_names=iris.target_names.tolist())

# List available explainers
print(default_registry.list_explainers())
# ['lime', 'shap', 'anchors', 'counterfactual', 'permutation_importance', 'partial_dependence', 'ale', 'sage']

# Create and use an explainer
explainer = default_registry.create(
    "lime",
    model=adapter,
    training_data=iris.data,
    feature_names=iris.feature_names,
    class_names=iris.target_names.tolist()
)
explanation = explainer.explain(iris.data[0])
print(explanation.explanation_data["feature_attributions"])

Filter Explainers by Criteria

# Find local explainers for tabular data
local_tabular = default_registry.filter(scope="local", data_type="tabular")
print(local_tabular)  # ['lime', 'shap', 'anchors', 'counterfactual']

# Find global explainers
global_explainers = default_registry.filter(scope="global")
print(global_explainers)  # ['permutation_importance', 'partial_dependence', 'ale', 'sage']

# Get recommendations
recommendations = default_registry.recommend(
    model_type="any",
    data_type="tabular",
    scope_preference="local"
)

Using Specific Explainers

The snippets below assume X_train, y_train, feature_names, class_names, and instance come from your own dataset split.

# Anchors - Rule-based explanations
from explainiverse.explainers import AnchorsExplainer

anchors = AnchorsExplainer(
    model=adapter,
    training_data=X_train,
    feature_names=feature_names,
    class_names=class_names
)
explanation = anchors.explain(instance)
print(explanation.explanation_data["rules"])
# ['petal length (cm) > 2.45', 'petal width (cm) <= 1.75']

# Counterfactual - What-if explanations
from explainiverse.explainers import CounterfactualExplainer

cf = CounterfactualExplainer(
    model=adapter,
    training_data=X_train,
    feature_names=feature_names
)
explanation = cf.explain(instance, num_counterfactuals=3)
print(explanation.explanation_data["changes"])

# SAGE - Global Shapley importance
from explainiverse.explainers import SAGEExplainer

sage = SAGEExplainer(
    model=adapter,
    X=X_train,
    y=y_train,
    feature_names=feature_names
)
explanation = sage.explain()
print(explanation.explanation_data["feature_attributions"])
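Permutation Importance, the one global explainer not shown above, can be illustrated with a self-contained NumPy sketch (this demonstrates the technique itself, not the explainiverse API): shuffling a column that the model relies on degrades the score, while shuffling an ignored column changes nothing.

```python
import numpy as np

def permutation_importance(model_fn, X, y, metric, n_repeats=5, seed=0):
    """Mean drop in score when each column is shuffled independently;
    a bigger drop means the model relies on that feature more."""
    rng = np.random.default_rng(seed)
    base = metric(y, model_fn(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break the column's link to y
            drops.append(base - metric(y, model_fn(Xp)))
        importances[j] = np.mean(drops)
    return importances

# Toy regression: the target depends only on column 0.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0]
model_fn = lambda X: 2.0 * X[:, 0]  # a "model" that learned the true rule
r2 = lambda y, p: 1 - np.sum((y - p) ** 2) / np.sum((y - y.mean()) ** 2)

imp = permutation_importance(model_fn, X, y, metric=r2)
print(imp)  # column 0 dominates; columns 1 and 2 get zero importance
```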

Explanation Suite (Multi-Explainer Comparison)

from explainiverse import ExplanationSuite

suite = ExplanationSuite(
    model=adapter,
    explainer_configs=[
        ("lime", {"training_data": X_train, "feature_names": feature_names, "class_names": class_names}),
        ("shap", {"background_data": X_train[:50], "feature_names": feature_names, "class_names": class_names}),
    ]
)

results = suite.run(instance)
suite.compare()

Registering Custom Explainers

from explainiverse import default_registry, ExplainerMeta, BaseExplainer, Explanation

@default_registry.register_decorator(
    name="my_explainer",
    meta=ExplainerMeta(
        scope="local",
        model_types=["any"],
        data_types=["tabular"],
        description="My custom explainer",
        paper_reference="Author et al., 2024"
    )
)
class MyExplainer(BaseExplainer):
    def explain(self, instance, **kwargs):
        # Your implementation
        return Explanation(...)

Running Tests

# Run all tests
poetry run pytest

# Run with coverage
poetry run pytest --cov=explainiverse

# Run specific test file
poetry run pytest tests/test_new_explainers.py -v

Roadmap

Completed:

  • LIME, SHAP (KernelSHAP)
  • Anchors, Counterfactuals
  • Permutation Importance, PDP, ALE, SAGE
  • Explainer Registry with filtering

Planned:

  • TreeSHAP (optimized for tree models)
  • Integrated Gradients (gradient-based, for neural networks)
  • PyTorch/TensorFlow adapters
  • Interactive visualization dashboard

Citation

If you use Explainiverse in your research, please cite:

@software{explainiverse2024,
  title = {Explainiverse: A Unified Framework for Explainable AI},
  author = {Syed, Muntaser},
  year = {2024},
  url = {https://github.com/jemsbhai/explainiverse}
}

License

MIT License - see LICENSE for details.


Download files

Download the file for your platform.

Source Distribution

explainiverse-0.2.0.tar.gz (28.3 kB)


Built Distribution

explainiverse-0.2.0-py3-none-any.whl (37.3 kB)


File details

Details for the file explainiverse-0.2.0.tar.gz.

File metadata

  • Download URL: explainiverse-0.2.0.tar.gz
  • Size: 28.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.12.2 Windows/11

File hashes

Hashes for explainiverse-0.2.0.tar.gz:

  • SHA256: cf62a263bd9ce32071a00ae9a048c40489e39c5aaf0a7f0a4aeac460a1346184
  • MD5: 79c85f266005010d93b038e653613ddc
  • BLAKE2b-256: d08fc64c479a2bcafff738b14294d5bd423bb905160c4b3feb96d0adcb8a0be6


File details

Details for the file explainiverse-0.2.0-py3-none-any.whl.

File metadata

  • Download URL: explainiverse-0.2.0-py3-none-any.whl
  • Size: 37.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.8.3 CPython/3.12.2 Windows/11

File hashes

Hashes for explainiverse-0.2.0-py3-none-any.whl:

  • SHA256: 4d015409f4bea3360052b1e92fc80ca3e5fdbe153a504a3e173a94a8edebd392
  • MD5: 52ab0f04253a1fc6ca1d0f2854169b17
  • BLAKE2b-256: 3ab42359109d43a5026fab80313581e446d98d204665f92aee4696f11654a02c

