
Aequitas: Bias Auditing & Fair ML Toolkit


aequitas is an open-source bias auditing and Fair ML toolkit for data scientists, machine learning researchers, and policymakers. The objective of this package is to provide an easy-to-use and transparent tool for auditing predictors, as well as experimenting with Fair ML methods in binary classification settings.

📥 Installation

pip install aequitas

or

pip install git+https://github.com/dssg/aequitas.git

🔍 Quickstart on Bias Auditing

To perform a bias audit, you need a pandas DataFrame with the following format:

     label  score  sens_attr_1  sens_attr_2  ...  sens_attr_N
0    0      0      A            F                 Y
1    0      1      C            F                 N
2    1      1      B            T                 N
...
N    1      0      E            T                 Y

where label is the target variable of your prediction task and score is the model output. Only one sensitive attribute is required, but any number is supported; every sensitive attribute column must be of pandas Categorical dtype.
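
For illustration, such a frame can be built directly with pandas (the column names and values below are made up):

import pandas as pd

# Toy audit input: binary label, binary score, two Categorical sensitive attributes
df = pd.DataFrame(
    {
        "label": [0, 0, 1, 1],
        "score": [0, 1, 1, 0],
        "sens_attr_1": pd.Categorical(["A", "C", "B", "E"]),
        "sens_attr_2": pd.Categorical(["F", "F", "T", "T"]),
    }
)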

from aequitas import Audit

audit = Audit(df)

To obtain a summary of the bias audit, run:

# Select the fairness metric of interest for your dataset
audit.summary_plot(["tpr", "fpr", "pprev"])

We can also plot a single metric for a given sensitive attribute:

audit.disparity_plot(attribute="sens_attr_2", metrics=["fpr"])

🧪 Quickstart on Fair ML Experimenting

To perform an experiment, a dataset is required. It must have a label column, a sensitive attribute column, and features.

from aequitas.flow import DefaultExperiment

experiment = DefaultExperiment(dataset, label="label", s="sensitive_attribute")
experiment.run()

Several aspects of an experiment (e.g., algorithms, number of runs, dataset splitting) can be configured individually.
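
A hedged sketch of what such configuration might look like (the commented keyword arguments are hypothetical, named here for illustration only; consult the documentation for the actual parameter names):

experiment = DefaultExperiment(
    dataset,
    label="label",
    s="sensitive_attribute",
    # hypothetical knobs, not the documented API:
    # methods=["prevalence_sampling", "fairgbm"],
    # n_trials=20,
)
experiment.run()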

🧠 Quickstart on Method Training

Given an aequitas.flow.Dataset, you can train individual methods and use their functionality according to the type of algorithm (pre-, in-, or post-processing).

For pre-processing methods:

from aequitas.flow.methods.preprocessing import PrevalenceSampling

# Fit the sampler on the training split, then transform that same split
sampler = PrevalenceSampling()
sampler.fit(dataset.train.X, dataset.train.y, dataset.train.s)
X_sample, y_sample, s_sample = sampler.transform(dataset.train.X, dataset.train.y, dataset.train.s)

For in-processing methods:

from aequitas.flow.methods.inprocessing import FairGBM

# Train on the preprocessed sample, then score the validation and test splits
model = FairGBM()
model.fit(X_sample, y_sample, s_sample)
scores_val = model.predict_proba(dataset.validation.X, dataset.validation.y, dataset.validation.s)
scores_test = model.predict_proba(dataset.test.X, dataset.test.y, dataset.test.s)

For post-processing methods:

from aequitas.flow.methods.postprocessing import BalancedGroupThreshold

# Fit group-specific thresholds on the validation scores, then apply them to the test scores
threshold = BalancedGroupThreshold("top_pct", 0.1, "fpr")
threshold.fit(dataset.validation.X, scores_val, dataset.validation.y, dataset.validation.s)
corrected_scores = threshold.transform(dataset.test.X, scores_test, dataset.test.s)

With this sequence, we sample a dataset, train a FairGBM model, and then adjust the scores so that every group has the same FPR (achieving Predictive Equality).
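
For reference, Predictive Equality is the standard fairness criterion of equal false positive rates across groups:

$$P(\hat{Y} = 1 \mid Y = 0, S = a) = P(\hat{Y} = 1 \mid Y = 0, S = b) \quad \text{for all groups } a, b$$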

📜 Features of the Toolkit

  • Metrics: Audits based on confusion-matrix metrics, with the flexibility to select the ones most relevant to your use case.
  • Plotting options: The main outputs of bias audits and experiments come with plots suited to different user objectives.
  • Fair ML methods: Interface and implementation of several Fair ML methods, including pre-, in-, and post-processing methods.
  • Datasets: Two "families" of datasets included, named BankAccountFraud and FolkTables.
  • Extensibility: Designed to accept user-implemented methods, with intuitive interfaces and method signatures.
  • Reproducibility: Option to save artifacts of Experiments, from the transformed data to the fitted models and predictions.
  • Modularity: Fair ML methods and default datasets can be used individually or integrated into an Experiment.
  • Hyperparameter optimization: Out-of-the-box integration and abstraction of Optuna's hyperparameter optimization capabilities for experimentation (a generic Optuna sketch follows below).
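
To give a flavor of the Optuna workflow that the toolkit abstracts away, here is plain Optuna on a toy objective (this is not the aequitas integration itself, just standard Optuna usage):

import optuna

# Minimize a stand-in for a validation loss over one hyperparameter
def objective(trial):
    learning_rate = trial.suggest_float("learning_rate", 1e-4, 1e-1, log=True)
    return (learning_rate - 0.01) ** 2

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=20)
print(study.best_params)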

Fairness Metrics

aequitas provides the value of confusion matrix metrics (referred to as $\text{CM}$) for each possible value of the sensitive attribute columns. Fairness metrics are then obtained as ratios between each group and a reference group. The table below shows how the Audit class operates to obtain the metrics:

Operation | Result
--- | ---
Calculate $\text{CM}$ for every group | DataFrame with confusion matrix metrics $\text{CM}_a, \text{CM}_b, \dots, \text{CM}_N$.
Select the reference group | Either the majority group, the group with the minimum metric value, or a user-selected group, $\text{CM}_{ref}$.
Calculate disparities | DataFrame with ratios between each group and the reference group, $\frac{\text{CM}_a}{\text{CM}_{ref}}, \frac{\text{CM}_b}{\text{CM}_{ref}}, \dots, \frac{\text{CM}_N}{\text{CM}_{ref}}$.
Select the metric(s) of interest | Summaries, plots, or tables of the results.
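
To make the ratio arithmetic concrete, here is the same computation in plain pandas (an illustration of the logic only, not the aequitas API; the group names and values are made up):

import pandas as pd

# Toy data: binary labels, binary predictions, one sensitive attribute
df = pd.DataFrame({
    "label": [0, 0, 1, 0, 1, 0, 0, 1],
    "score": [1, 0, 1, 1, 0, 0, 1, 1],
    "group": ["a", "a", "b", "b", "a", "a", "b", "b"],
})

# FPR per group: share of predicted positives among actual negatives
negatives = df[df["label"] == 0]
fpr = negatives.groupby("group")["score"].mean()

# Take the group with the minimum metric as reference, then compute ratios
reference = fpr.idxmin()
disparities = fpr / fpr[reference]
print(disparities)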

Use Cases and examples

Use Case | Description
--- | ---
Auditing Predictions | Check how to do an in-depth bias audit with the COMPAS example notebook.
Auditing and correcting a trained model | Create a DataFrame to audit a specific model, and correct the predictions with group-specific thresholds in the Model correction notebook.
Running a Fair ML Experiment | Experiment with your own dataset or methods and check the results of a Fair ML experiment.

Further documentation

You can find the toolkit documentation here.

For more examples of the Python library and a deep dive into concepts of fairness in ML, see our Tutorial presented at KDD and AAAI. Also visit the Aequitas project website.

Citing Aequitas

If you use Aequitas in a scientific publication, we would appreciate citations to the following paper:

Pedro Saleiro, Benedict Kuester, Abby Stevens, Ari Anisfeld, Loren Hinkson, Jesse London, Rayid Ghani, Aequitas: A Bias and Fairness Audit Toolkit, arXiv preprint arXiv:1811.05577 (2018). (PDF)

   @article{2018aequitas,
     title={Aequitas: A Bias and Fairness Audit Toolkit},
     author={Saleiro, Pedro and Kuester, Benedict and Stevens, Abby and Anisfeld, Ari and Hinkson, Loren and London, Jesse and Ghani, Rayid},
     journal={arXiv preprint arXiv:1811.05577},
     year={2018}
   }
