Unified XAI: An Explainable AI library for interpretable machine learning, Deep Learning and Artifical Intelligence.
Project description
Unified XAI
Production-Ready Explainable AI Library for Deep Learning
Documentation | Tutorials | API Reference
๐ฏ Overview
Unified XAI is a comprehensive, production-ready library for explaining deep learning models across multiple frameworks and modalities. It provides a unified API for various explainability methods, making it easy to understand, debug, and improve your AI models.
โจ Key Features
- ๐ Framework Agnostic: Seamless support for PyTorch, TensorFlow, Keras, and ONNX
- ๐ Multiple Modalities: Image, text, tabular, time-series, and multimodal data
- ๐จ Rich Visualizations: Interactive plots, heatmaps, and dashboards
- ๐ Comprehensive Metrics: Faithfulness, stability, complexity evaluations
- โก High Performance: Optimized implementations with caching and parallelization
- ๐ง Production Ready: Type hints, extensive testing, and robust error handling
- ๐ Easy to Use: Simple API with sensible defaults
- ๐ฆ Extensible: Plugin architecture for custom methods
๐ Quick Start
Installation
# Basic installation
pip install unified-xai
# With specific framework support
pip install unified-xai[torch] # PyTorch support
pip install unified-xai[tf] # TensorFlow support
pip install unified-xai[all] # All frameworks
# Development installation
pip install unified-xai[dev]
# With dashboard support
pip install unified-xai[dashboard]
Basic Usage
import torch
from unified_xai import XAIAnalyzer, XAIConfig
from unified_xai.config import Framework, Modality
# Load your model
model = torch.load('your_model.pth')
# Configure XAI
config = XAIConfig(
framework=Framework.PYTORCH,
modality=Modality.IMAGE
)
# Initialize analyzer
analyzer = XAIAnalyzer(model, config)
# Generate explanation
explanation = analyzer.explain(
input_data,
method='integrated_gradients',
target=class_idx
)
# Visualize
fig = analyzer.visualize(explanation, original_input=input_data)
๐ Supported Methods
Gradient-Based Methods
- โ Vanilla Gradient
- โ Integrated Gradients
- โ SmoothGrad
- โ Grad-CAM / Grad-CAM++
- โ Guided Backpropagation
- โ DeepLIFT
Perturbation-Based Methods
- โ LIME (Local Interpretable Model-agnostic Explanations)
- โ SHAP (SHapley Additive exPlanations)
- โ Occlusion Sensitivity
- โ Meaningful Perturbations
Attention-Based Methods
- โ Attention Rollout
- โ Attention Flow
- โ LRP (Layer-wise Relevance Propagation)
Example-Based Methods
- โ Influence Functions
- โ Prototype Selection
- โ Counterfactual Explanations
๐ฏ Use Cases
Computer Vision
# Explain image classification
explanation = analyzer.explain(image, method='gradcam')
# Compare multiple methods
comparison = analyzer.compare_methods(
image,
methods=['gradcam', 'integrated_gradients', 'lime'],
metrics=['faithfulness', 'complexity']
)
Natural Language Processing
# Explain text classification
config = XAIConfig(framework=Framework.PYTORCH, modality=Modality.TEXT)
analyzer = XAIAnalyzer(bert_model, config)
explanation = analyzer.explain(
text_tokens,
method='integrated_gradients'
)
Tabular Data
# Explain tabular predictions
config = XAIConfig(modality=Modality.TABULAR)
analyzer = XAIAnalyzer(model, config)
explanation = analyzer.explain(
tabular_data,
method='shap',
background_data=train_data
)
๐ Evaluation Metrics
Unified XAI provides comprehensive metrics to evaluate explanation quality:
# Evaluate explanation
metrics = analyzer.evaluator.evaluate(
explanation,
input_data,
metrics=['faithfulness', 'stability', 'complexity', 'sensitivity']
)
# Compare methods quantitatively
rankings = analyzer.compare_methods(
input_data,
methods=['gradcam', 'lime', 'shap'],
metrics=['faithfulness', 'stability']
)
๐จ Visualization Dashboard
Launch interactive dashboard for exploration:
# Command line
unified-xai dashboard --model path/to/model --port 8080
# Or in Python
from unified_xai.dashboard import launch_dashboard
launch_dashboard(model, port=8080)
๐๏ธ Architecture
unified-xai/
โโโ core/ # Core abstractions and base classes
โโโ methods/ # Explanation method implementations
โ โโโ gradient/ # Gradient-based methods
โ โโโ perturbation/ # Perturbation-based methods
โ โโโ attention/ # Attention-based methods
โ โโโ example/ # Example-based methods
โโโ frameworks/ # Framework-specific adapters
โโโ visualization/ # Visualization utilities
โโโ metrics/ # Evaluation metrics
โโโ utils/ # Helper utilities
โโโ dashboard/ # Web dashboard
๐ง Configuration
Unified XAI supports various configuration options:
# From file
config = XAIConfig.from_file('config.yaml')
# Programmatic
config = XAIConfig(
framework=Framework.PYTORCH,
modality=Modality.IMAGE,
gradient_config={
'normalize': True,
'smooth_samples': 50
},
visualization_config={
'cmap': 'RdBu_r',
'overlay': True
}
)
Example config.yaml:
framework: pytorch
modality: image
batch_size: 32
device: cuda
gradient_config:
normalize: true
smooth_samples: 50
lime_config:
num_samples: 1000
num_features: 10
visualization_config:
cmap: RdBu_r
alpha: 0.7
๐งช Testing
# Run all tests
pytest
# Run with coverage
pytest --cov=unified_xai
# Run specific test module
pytest tests/test_methods.py
# Run benchmarks
pytest tests/benchmarks/ --benchmark-only
๐ Documentation
Full documentation is available at https://unified-xai.readthedocs.io
๐ค Contributing
We welcome contributions! Please see our Contributing Guide for details.
# Setup development environment
git clone https://github.com/yourusername/unified-xai.git
cd unified-xai
pip install -e ".[dev]"
pre-commit install
# Run checks before committing
make lint
make test
make docs
๐ Benchmarks
Performance comparisons across different methods and frameworks:
| Method | PyTorch (ms) | TensorFlow (ms) | Accuracy |
|---|---|---|---|
| Integrated Gradients | 45 | 52 | 0.94 |
| LIME | 890 | 920 | 0.89 |
| SHAP | 340 | 380 | 0.91 |
| Grad-CAM | 23 | 28 | 0.87 |
๐ Citation
If you use Unified XAI in your research, please cite:
@software{unified_xai,
title = {Unified XAI: A Production-Ready Explainable AI Library},
author = {Satyam Singh},
year = {2025},
url = {https://github.com/SatyamSingh8306/unified-xai}
}
๐ License
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
๐ Acknowledgments
- Thanks to all contributors and the open-source community
- Inspired by Captum, SHAP, LIME, and other XAI libraries
- Supported by [Your Organization]
๐ฌ Contact
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Email: satyamsingh7734@gmail.com
- Twitter: @unified_xai
๐บ๏ธ Roadmap
- Support for Vision Transformers
- Additional evaluation metrics
- Model-specific explanations
- Distributed computing support
- AutoML integration
- Mobile deployment
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
Filter files by name, interpreter, ABI, and platform.
If you're not sure about the file name format, learn more about wheel file names.
Copy a direct link to the current filters
File details
Details for the file unified_xai-0.1.1.tar.gz.
File metadata
- Download URL: unified_xai-0.1.1.tar.gz
- Upload date:
- Size: 28.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.1
File hashes
| Algorithm | Hash digest | |
|---|---|---|
| SHA256 |
740d9119f1d7a4f09a062146841a1869c2f4af3899a0fda85ce98b3c02ac0eab
|
|
| MD5 |
345ff11678a38f12d8857e02d590d661
|
|
| BLAKE2b-256 |
760f27918d9de6f50c1fc13b9049decfa38cb297c91da58a2f09687f4c8e6cab
|
File details
Details for the file unified_xai-0.1.1-py3-none-any.whl.
File metadata
- Download URL: unified_xai-0.1.1-py3-none-any.whl
- Upload date:
- Size: 17.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.1
File hashes
| Algorithm | Hash digest | |
|---|---|---|
| SHA256 |
a43f1116991a89af6603cf546028a5ff7788562b208652a8d727c09f9dbf421f
|
|
| MD5 |
038d2e2b2ff412baa12c8ae406ae2acd
|
|
| BLAKE2b-256 |
c2d85efff8be016a59c6d2b5e582c50540005f258132309a7b0edd66b6656345
|