
End-to-end ML fairness library: bias detection, fairness-aware training, calibration, threshold optimization, evaluation metrics with confidence intervals, drift monitoring, CI/CD gates, and A/B testing for fairness interventions.

Project description

vfairness — AI Fairness Assessment Library

vfairness is a production-ready Python library that provides end-to-end fairness tooling across the entire ML pipeline — from pre-training bias detection through post-deployment monitoring. It helps data scientists and ML engineers measure whether models treat different groups fairly, identify sources of bias, apply mitigation strategies, and continuously monitor fairness in production.

This is a placeholder release to reserve the package name on PyPI. The first functional release is coming soon.

Modules

Preprocessing

  • Bias Detection — Historical pattern matching (43+ risk patterns across US, EU, and Swiss jurisdictions), representation bias analysis, statistical disparity testing, and proxy variable identification.
  • Feature Engineering — Fairness-aware feature transformations including correlation reduction, feature suppression, residual methods, and intersectional balancing.
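
To make the statistical disparity testing idea concrete, here is a minimal NumPy sketch of a selection-rate gap check; the helper name is hypothetical and not part of the vfairness API:

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """Largest gap in positive-prediction (selection) rates across groups.

    A value near 0 suggests demographic parity; a large value flags a
    representation or outcome disparity worth investigating further.
    """
    y_pred = np.asarray(y_pred, dtype=float)
    group = np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)
```

A real bias-detection pass would pair a point estimate like this with significance testing and proxy-variable analysis, as the bullet above describes.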

In-Processing

  • Loss Functions — PyTorch-based fairness-aware losses: demographic parity, equalized odds, equal opportunity, adversarial debiasing, and counterfactual fairness.
  • Constraints — Training-time constraint enforcement via exponentiated gradient, grid search, and threshold optimization.
  • Regularizers — Statistical parity and Hilbert-Schmidt independence regularization.
  • Wrappers — Scikit-learn compatible FairClassifier and FairRegressor estimators.
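
The demographic parity loss, for instance, penalizes the gap between group-average model scores during training. A framework-agnostic NumPy sketch of that penalty term (the real losses are PyTorch modules; this helper is illustrative only):

```python
import numpy as np

def demographic_parity_penalty(scores, group):
    """Squared gap between the mean predicted scores of each group.

    Added to the task loss with a weight lambda, this term pushes the
    model toward equal average scores across groups.
    """
    scores = np.asarray(scores, dtype=float)
    group = np.asarray(group)
    means = [scores[group == g].mean() for g in np.unique(group)]
    return (max(means) - min(means)) ** 2
```

In a PyTorch training loop the same quantity would be computed on tensors so the penalty stays differentiable and gradients flow back into the model.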

Post-Processing

  • Calibration — Group-specific probability calibration (Platt scaling, isotonic regression, beta calibration, temperature scaling, histogram binning) with fairness-calibration trade-off analysis.
  • Threshold Optimization — Single, per-group, and multi-objective threshold optimization under fairness constraints (demographic parity, equalized odds, equal opportunity, predictive parity).
  • Reweighting — Prediction reweighting, rejection option classification, calibrated equalization, and distribution matching.
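
Per-group threshold optimization under a demographic parity constraint can be as simple as picking, for each group, the score quantile that yields a common selection rate. A minimal sketch (hypothetical helper, not the vfairness API):

```python
import numpy as np

def per_group_thresholds(scores, group, target_rate):
    """Per-group decision thresholds that each select roughly
    target_rate of that group (a demographic-parity-style
    post-processing step)."""
    scores = np.asarray(scores, dtype=float)
    group = np.asarray(group)
    return {
        g: float(np.quantile(scores[group == g], 1.0 - target_rate))
        for g in np.unique(group)
    }
```

Applying `scores >= thresholds[g]` within each group then yields approximately the target selection rate for every group; a fuller optimizer would also trade this off against accuracy or other fairness constraints.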

Evaluation

  • Fairness Metrics — Classification, regression, and ranking metrics with bootstrap and Bayesian confidence intervals, effect sizes, and multiple testing corrections.
  • Intersectional Analysis — Multi-attribute group disparity analysis and automatic discovery of protected attributes and proxy features.
  • FairExplAIner — Plain-language explanations of fairness metrics with actionable recommendations.
  • Visualization — Plotly-based dashboards, radar charts, heatmaps, and confidence interval plots.
  • Robustness Testing — Permutation tests, sensitivity analysis, stress testing, and subgroup audits.
  • MLOps Integration — Native logging to MLflow and Weights & Biases, plus pytest-style fairness assertions.
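
Bootstrap confidence intervals for a fairness metric follow the usual percentile recipe: resample rows with replacement, recompute the metric, and take quantiles of the resampled values. A self-contained sketch (illustrative names, not the vfairness API):

```python
import numpy as np

def rate_gap(y_pred, group):
    """Gap in selection rates between the best- and worst-off group."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def bootstrap_ci(metric, y_pred, group, n_boot=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for a group metric."""
    rng = np.random.default_rng(seed)
    y_pred = np.asarray(y_pred, dtype=float)
    group = np.asarray(group)
    n = len(y_pred)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample rows with replacement
        stats.append(metric(y_pred[idx], group[idx]))
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)
```

Reporting the interval rather than the point estimate makes it clear when an apparent disparity could be sampling noise, which is why the metrics above ship with confidence intervals and multiple-testing corrections.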

Operations

  • CI/CD — Pre-training data validation, model fairness deployment gates, hierarchical checking, pytest integration, and pre-commit hooks.
  • Monitoring — Sliding-window fairness tracking, multi-scale drift detection (MMD), adaptive thresholds, and prioritized alerting.
  • Reporting — Tiered report generation (executive, operational, technical) in HTML, Markdown, and JSON with interactive Dash dashboards.
  • Experimentation — A/B testing framework for fairness interventions with power analysis, sequential testing (SPRT), multi-objective Pareto analysis, and causal decomposition.
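
A deployment gate ultimately reduces to comparing measured fairness metrics against configured limits and failing the pipeline on any violation. A minimal sketch of such a gate (hypothetical, not the vfairness API):

```python
def fairness_gate(metrics, thresholds):
    """Deployment gate: fail when any fairness metric exceeds its limit.

    metrics and thresholds are plain dicts mapping metric name to value.
    Returns (passed, violations) so a CI step or pytest assertion can
    report exactly which checks failed.
    """
    violations = {
        name: value
        for name, value in metrics.items()
        if value > thresholds.get(name, float("inf"))
    }
    return len(violations) == 0, violations
```

In a pytest-style check this collapses to a single assertion, e.g. `assert fairness_gate(measured, limits)[0]`, which is the shape the CI/CD integration above is built around.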

Rendering

  • SVG template engine with Jinja2 for generating publication-quality fairness report visuals.

Installation

pip install vfairness

Optional extras:

pip install "vfairness[viz]"          # matplotlib, seaborn
pip install "vfairness[dashboard]"    # plotly
pip install "vfairness[training]"     # pytorch
pip install "vfairness[mlops]"        # mlflow, wandb
pip install "vfairness[all]"          # everything

Quick Start

from vfairness.evaluation import FairnessAnalyzer

# y_test, y_pred, and df_test come from your own evaluation pipeline
analyzer = FairnessAnalyzer(
    y_true=y_test,
    y_pred=y_pred,
    sensitive_features=df_test["gender"],
)
report = analyzer.evaluate()

Requirements

  • Python 3.9+
  • NumPy, Pandas, SciPy (core)

License

MIT License — see LICENSE for details.



Download files

Download the file for your platform.

Source Distribution

vfairness-0.0.1.tar.gz (3.9 kB)


Built Distribution


vfairness-0.0.1-py3-none-any.whl (4.5 kB)


File details

Details for the file vfairness-0.0.1.tar.gz.

File metadata

  • Download URL: vfairness-0.0.1.tar.gz
  • Upload date:
  • Size: 3.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.9

File hashes

Hashes for vfairness-0.0.1.tar.gz:

  • SHA256: 6ee42b298d92a56cabaa25de4d86eb5646660351a84a69cdc46fc4a808203af3
  • MD5: e696d7d1f88386b89e3a893ac592c93d
  • BLAKE2b-256: d108d6ea8255da868e798d04003e7164d731b115d1f6800a7b21902243b3fab0


File details

Details for the file vfairness-0.0.1-py3-none-any.whl.

File metadata

  • Download URL: vfairness-0.0.1-py3-none-any.whl
  • Upload date:
  • Size: 4.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.9

File hashes

Hashes for vfairness-0.0.1-py3-none-any.whl:

  • SHA256: 3d1161592c02475ca88c256822c3bfd6b3dbfcb41394f0a7d4dedcf14f5c6a77
  • MD5: af911af5845796dd7f66a893d759bac2
  • BLAKE2b-256: ce84a2659daa16ba7a282478c420c1c7b81783a6523793a42ee4d075a2b8f6a8

