A comprehensive toolkit for evaluating and improving AI model fairness and explainability

AI Fairness and Explainability Toolkit

🌟 Overview

The AI Fairness and Explainability Toolkit is an open-source platform designed to evaluate, visualize, and improve AI models with a focus on fairness, explainability, and ethical considerations. Unlike traditional benchmarking tools that focus primarily on performance metrics, this toolkit helps developers understand and mitigate bias, explain model decisions, and ensure ethical AI deployment.

🎯 Mission

To democratize ethical AI development by providing tools that make fairness and explainability accessible to all developers, regardless of their expertise in ethics or advanced ML techniques.

✨ Key Features

  • Comprehensive Fairness Assessment: Evaluate models across different demographic groups using multiple fairness metrics
  • Bias Mitigation: Implement pre-processing, in-processing, and post-processing techniques
  • Interactive Visualization: Explore model behavior with interactive dashboards and plots
  • Model Comparison: Compare multiple models across fairness and performance metrics
  • Explainability Tools: Understand model decisions with various XAI techniques
  • Production-Ready: Easy integration with existing ML workflows
  • Extensible Architecture: Add custom metrics and visualizations
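For intuition about what a group-fairness metric measures, demographic parity can be computed in a few lines of plain NumPy. The helper below is an illustrative sketch only; the metric names and signatures in `ai_fairness_toolkit` may differ:

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Absolute gap in positive-prediction rates between the
    best- and worst-treated demographic groups (0.0 = parity)."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rates = [y_pred[sensitive == g].mean() for g in np.unique(sensitive)]
    return max(rates) - min(rates)

y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
# Group "a" is selected at rate 0.75, group "b" at 0.25
print(demographic_parity_difference(y_pred, groups))  # 0.5
```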

🚀 Quick Start

Installation

# Install from PyPI
pip install ai-fairness-toolkit

# Or install from source
pip install git+https://github.com/TaimoorKhan10/AI-Fairness-Explainability-Toolkit.git

Basic Usage

from ai_fairness_toolkit import FairnessAnalyzer, BiasMitigator, ModelExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import fetch_openml
import pandas as pd

# Load sample data (Adult census income dataset)
data = fetch_openml(data_id=1590, as_frame=True)
X, y = data.data, data.target

# One-hot encode categorical columns so the model can consume them
X_encoded = pd.get_dummies(X)

# Initialize analyzer with the sensitive attribute
analyzer = FairnessAnalyzer(sensitive_features=X['sex'])

# Train a model
model = RandomForestClassifier()
model.fit(X_encoded, y)

# Evaluate fairness
results = analyzer.evaluate(model, X_encoded, y)
print(results.fairness_metrics)

# Generate interactive report
analyzer.visualize().show()
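To make "post-processing" concrete, the sketch below applies per-group decision thresholds with plain NumPy. It is a hypothetical illustration of the technique, not the `BiasMitigator` API:

```python
import numpy as np

def apply_group_thresholds(scores, sensitive, thresholds):
    """Post-processing mitigation: apply a separate decision threshold
    per demographic group so selection rates can be equalized."""
    scores = np.asarray(scores, dtype=float)
    sensitive = np.asarray(sensitive)
    preds = np.zeros(len(scores), dtype=int)
    for group, t in thresholds.items():
        mask = sensitive == group
        preds[mask] = (scores[mask] >= t).astype(int)
    return preds

scores = np.array([0.9, 0.6, 0.4, 0.7, 0.3, 0.55])
groups = np.array(["a", "a", "a", "b", "b", "b"])
# A lower threshold for group "b" raises its selection rate
preds = apply_group_thresholds(scores, groups, {"a": 0.65, "b": 0.5})
print(preds)  # [1 0 0 1 0 1]
```

In practice the thresholds would be fitted on a validation set to satisfy a chosen fairness constraint while minimizing accuracy loss.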

๐Ÿ—๏ธ Project Structure

ai-fairness-toolkit/
├── ai_fairness_toolkit/      # Main package
│   ├── core/                 # Core functionality
│   │   ├── metrics/          # Fairness and performance metrics
│   │   ├── bias_mitigation/  # Bias mitigation techniques
│   │   ├── explainers/       # Model explainability tools
│   │   └── visualization/    # Visualization components
│   ├── examples/             # Example notebooks
│   └── utils/                # Utility functions
├── tests/                    # Test suite
├── docs/                     # Documentation
├── examples/                 # Example scripts
└── scripts/                  # Utility scripts

๐Ÿ› ๏ธ Technology Stack

  • Core: Python 3.8+
  • ML Frameworks: scikit-learn, TensorFlow, PyTorch
  • Visualization: Plotly, Matplotlib, Seaborn
  • Testing: pytest, pytest-cov
  • Documentation: Sphinx, ReadTheDocs
  • CI/CD: GitHub Actions
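Many model-agnostic explainability techniques build directly on the stack above. For example, permutation importance is available out of the box in scikit-learn; the snippet below is a generic illustration of the kind of technique the toolkit's explainers are built around, not the `ModelExplainer` API itself:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Toy dataset: only the 2 informative features should matter
X, y = make_classification(n_samples=300, n_features=5, n_informative=2,
                           n_redundant=0, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle one feature at a time and measure the drop in score:
# a simple, model-agnostic estimate of feature importance
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: {imp:.3f}")
```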

📚 Documentation

For detailed documentation, please visit ai-fairness-toolkit.readthedocs.io.

๐Ÿค How to Contribute

We welcome contributions from the community! Here's how you can help:

  1. Report bugs: Submit issues on GitHub
  2. Fix issues: Check out the good first issues
  3. Add features: Implement new metrics or visualizations
  4. Improve docs: Help enhance our documentation
  5. Share feedback: Let us know how you're using the toolkit

Development Setup

# Clone the repository
git clone https://github.com/TaimoorKhan10/AI-Fairness-Explainability-Toolkit.git
cd AI-Fairness-Explainability-Toolkit

# Create and activate virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install development dependencies
pip install -e .[dev]

# Run tests
pytest

Code Style

We use Black for code formatting and flake8 for linting. Please ensure your code passes both before submitting a PR.

# Auto-format code
black .

# Run linter
flake8

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

📬 Contact

For questions or feedback, please open an issue on our GitHub repository or contact taimoorkhaniajaznabi2@gmail.com.

๐Ÿค Contributors

This project follows the all-contributors specification. Contributions of any kind welcome!

๐Ÿ—บ๏ธ Roadmap

  • Phase 1: Core fairness metrics and basic explainability tools
  • Phase 2: Interactive dashboards and visualization components
  • Phase 3: Advanced mitigation strategies and customizable metrics
  • Phase 4: Integration with CI/CD pipelines and MLOps workflows
  • Phase 5: Domain-specific extensions for healthcare, finance, etc.



AFET (the AI Fairness and Explainability Toolkit) is currently in development. We're looking for contributors and early adopters to help shape the future of ethical AI evaluation!
