A comprehensive toolkit for evaluating and improving AI model fairness and explainability

Project description

AI Fairness and Explainability Toolkit

🌟 Overview

The AI Fairness and Explainability Toolkit is an open-source platform designed to evaluate, visualize, and improve AI models with a focus on fairness, explainability, and ethical considerations. Unlike traditional benchmarking tools that focus primarily on performance metrics, this toolkit helps developers understand and mitigate bias, explain model decisions, and ensure ethical AI deployment.

🎯 Mission

To democratize ethical AI development by providing tools that make fairness and explainability accessible to all developers, regardless of their expertise in ethics or advanced ML techniques.

✨ Key Features

  • Comprehensive Fairness Assessment: Evaluate models across different demographic groups using multiple fairness metrics (see the sketch after this list)
  • Bias Mitigation: Implement pre-processing, in-processing, and post-processing techniques
  • Interactive Visualization: Explore model behavior with interactive dashboards and plots
  • Model Comparison: Compare multiple models across fairness and performance metrics
  • Explainability Tools: Understand model decisions with various XAI techniques
  • Production-Ready: Easy integration with existing ML workflows
  • Extensible Architecture: Add custom metrics and visualizations
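
To give a concrete, toolkit-independent flavor of what the fairness-assessment feature measures, here is a minimal sketch of one common group metric, the demographic parity difference: the gap in positive-prediction rates between demographic groups. The function and variable names are illustrative, not part of this toolkit's API.

import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Gap in positive-prediction rates between groups (0 means parity)."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rates = [y_pred[sensitive == g].mean() for g in np.unique(sensitive)]
    return max(rates) - min(rates)

# Toy example: group A is predicted positive 75% of the time, group B 25%
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(demographic_parity_difference(y_pred, groups))  # 0.5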

🚀 Quick Start

Installation

# Install from PyPI
pip install ai-fairness-toolkit

# Or install from source
pip install git+https://github.com/TaimoorKhan10/AI-Fairness-Explainability-Toolkit.git

Basic Usage

from ai_fairness_toolkit import FairnessAnalyzer, BiasMitigator, ModelExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import fetch_openml
import pandas as pd

# Load sample data (the Adult census dataset, OpenML ID 1590)
data = fetch_openml(data_id=1590, as_frame=True)
X, y = data.data, data.target

# Keep the sensitive attribute, then one-hot encode the categorical
# features so the random forest can consume them
sensitive = X['sex']
X_encoded = pd.get_dummies(X)

# Initialize analyzer
analyzer = FairnessAnalyzer(sensitive_features=sensitive)

# Train a model
model = RandomForestClassifier()
model.fit(X_encoded, y)

# Evaluate fairness
results = analyzer.evaluate(model, X_encoded, y)
print(results.fairness_metrics)

# Generate interactive report
analyzer.visualize().show()
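
The quick start covers assessment only. For mitigation, the Key Features list mentions pre-, in-, and post-processing techniques exposed through BiasMitigator. As a hedged, library-independent illustration of the pre-processing idea, the sketch below implements classic reweighing (Kamiran & Calders), which weights each example by P(A=a) · P(Y=y) / P(A=a, Y=y) so the sensitive attribute and the label look statistically independent to the learner. It reuses X_encoded, y, and sensitive from the snippet above and is not the BiasMitigator API itself.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def reweighing_weights(sensitive, y):
    """Empirical w(a, y) = P(A=a) * P(Y=y) / P(A=a, Y=y)."""
    df = pd.DataFrame({"a": sensitive, "y": y})
    p_a = df["a"].value_counts(normalize=True)   # marginal of the sensitive attribute
    p_y = df["y"].value_counts(normalize=True)   # marginal of the label
    p_ay = df.value_counts(normalize=True)       # joint distribution
    return df.apply(
        lambda r: p_a[r["a"]] * p_y[r["y"]] / p_ay[(r["a"], r["y"])], axis=1
    )

# Upweight (group, label) combinations that are rarer than independence predicts
weights = reweighing_weights(sensitive, y)
mitigated_model = RandomForestClassifier()
mitigated_model.fit(X_encoded, y, sample_weight=weights)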

๐Ÿ—๏ธ Project Structure

ai-fairness-toolkit/
├── ai_fairness_toolkit/      # Main package
│   ├── core/                 # Core functionality
│   │   ├── metrics/          # Fairness and performance metrics
│   │   ├── bias_mitigation/  # Bias mitigation techniques
│   │   ├── explainers/       # Model explainability tools
│   │   └── visualization/    # Visualization components
│   ├── examples/             # Example notebooks
│   └── utils/                # Utility functions
├── tests/                    # Test suite
├── docs/                     # Documentation
├── examples/                 # Example scripts
└── scripts/                  # Utility scripts

๐Ÿ› ๏ธ Technology Stack

  • Core: Python 3.8+
  • ML Frameworks: scikit-learn, TensorFlow, PyTorch
  • Visualization: Plotly, Matplotlib, Seaborn
  • Testing: pytest, pytest-cov
  • Documentation: Sphinx, ReadTheDocs
  • CI/CD: GitHub Actions

📚 Documentation

For detailed documentation, please visit ai-fairness-toolkit.readthedocs.io.

๐Ÿค How to Contribute

We welcome contributions from the community! Here's how you can help:

  1. Report bugs: Submit issues on GitHub
  2. Fix issues: Check out the good first issues
  3. Add features: Implement new metrics or visualizations
  4. Improve docs: Help enhance our documentation
  5. Share feedback: Let us know how you're using the toolkit

Development Setup

# Clone the repository
git clone https://github.com/TaimoorKhan10/AI-Fairness-Explainability-Toolkit.git
cd AI-Fairness-Explainability-Toolkit

# Create and activate virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install development dependencies (quoted so zsh doesn't treat the extras as a glob)
pip install -e ".[dev]"

# Run tests
pytest

Code Style

We use Black for code formatting and flake8 for linting. Please ensure your code passes both before submitting a PR.

# Auto-format code
black .

# Run linter
flake8

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

📬 Contact

For questions or feedback, please open an issue on our GitHub repository or contact taimoorkhaniajaznabi2@gmail.com.

๐Ÿค Contributors

This project follows the all-contributors specification. Contributions of any kind are welcome!

๐Ÿ—บ๏ธ Roadmap

  • Phase 1: Core fairness metrics and basic explainability tools
  • Phase 2: Interactive dashboards and visualization components
  • Phase 3: Advanced mitigation strategies and customizable metrics
  • Phase 4: Integration with CI/CD pipelines and MLOps workflows
  • Phase 5: Domain-specific extensions for healthcare, finance, etc.


The AI Fairness and Explainability Toolkit (AFET) is currently in development. We're looking for contributors and early adopters to help shape the future of ethical AI evaluation!


Download files

Download the file for your platform.

Source Distribution

ai_fairness_toolkit-0.1.1.tar.gz (43.9 kB)

Built Distribution

ai_fairness_toolkit-0.1.1-py3-none-any.whl (56.7 kB)

File details

Details for the file ai_fairness_toolkit-0.1.1.tar.gz.

File metadata

  • Download URL: ai_fairness_toolkit-0.1.1.tar.gz
  • Size: 43.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.11.9

File hashes

Hashes for ai_fairness_toolkit-0.1.1.tar.gz

  • SHA256: 7b9facda9de9f26e17179942aff9fc29ef299602c5787e9b394958f5307db646
  • MD5: 38c45803d3b5bf6bc17789e2e67da9d1
  • BLAKE2b-256: 7c7aee546c3cd21a175f0468ea124b1f56026b13ec0aebf0624da48e287d1c5b
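
To check an artifact's integrity before installing, you can compare the published SHA256 above against a locally computed digest. A minimal sketch using Python's standard hashlib, assuming the sdist has already been downloaded into the current directory:

import hashlib

# Published SHA256 for the 0.1.1 source distribution (copied from the list above)
EXPECTED = "7b9facda9de9f26e17179942aff9fc29ef299602c5787e9b394958f5307db646"

# Stream the file in chunks so large archives need not fit in memory
digest = hashlib.sha256()
with open("ai_fairness_toolkit-0.1.1.tar.gz", "rb") as f:
    for chunk in iter(lambda: f.read(8192), b""):
        digest.update(chunk)

print("OK" if digest.hexdigest() == EXPECTED else "MISMATCH")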


File details

Details for the file ai_fairness_toolkit-0.1.1-py3-none-any.whl.

File hashes

Hashes for ai_fairness_toolkit-0.1.1-py3-none-any.whl

  • SHA256: 53db17549acc185329abad32a05c699b59ee58beb9256cd606000159244e2592
  • MD5: 29bfe9f569b545d53030b24253443574
  • BLAKE2b-256: f9bb67e5a20acbe4fcfafcfb85082c6fa0f1d1dcc3fe0df836d7c545ca2d3df9
