# error-parity

Achieve error-rate parity between protected groups for any score-based predictor.

Fast postprocessing of any score-based predictor to meet fairness criteria. The `error-parity` package can achieve strict or relaxed fairness-constraint fulfillment, which is useful for comparing ML models at equal fairness levels.
## Installing

Install the package from PyPI:

```shell
pip install error-parity
```

Or, for development, clone the repo and install from local sources:

```shell
git clone https://github.com/AndreFCruz/error-parity.git
pip install ./error-parity
```
## Getting started

```python
from error_parity import RelaxedEqualOdds

# Given any trained model that outputs real-valued scores:
fair_clf = RelaxedEqualOdds(
    predictor=lambda X: model.predict_proba(X)[:, -1],  # for the sklearn API
    # predictor=model,  # use this for a callable model
    tolerance=0.05,     # fairness-constraint tolerance
)

# Fit the fairness adjustment on some data;
# this will find the optimal fair classifier.
fair_clf.fit(X=X, y=y, group=group)

# Now you can use `fair_clf` like any other classifier;
# you must provide group information to compute fair predictions.
y_pred_test = fair_clf(X=X_test, group=group_test)
```
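The `predictor` argument is any callable mapping features to one real-valued score per sample. A minimal sketch of this convention, using a hypothetical `DummyModel` as a stand-in for a trained model with a sklearn-style `predict_proba` (names and the toy sigmoid score are illustrative, not part of the package):

```python
import numpy as np

class DummyModel:
    """Hypothetical stand-in for any trained model with a sklearn-style predict_proba."""
    def predict_proba(self, X):
        p = 1.0 / (1.0 + np.exp(-X[:, 0]))  # toy sigmoid score on the first feature
        return np.column_stack([1.0 - p, p])  # shape (n_samples, 2)

model = DummyModel()

# Wrap the model exactly as in the snippet above: keep only positive-class scores.
predictor = lambda X: model.predict_proba(X)[:, -1]

X = np.array([[-2.0], [0.0], [2.0]])
scores = predictor(X)  # one real-valued score per sample, shape (3,)
```

This wrapped `predictor` is what would be handed to `RelaxedEqualOdds`.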
## How it works

Given a callable score-based predictor (i.e., `y_pred = predictor(X)`) and some `(X, Y, S)` data to fit, `RelaxedEqualOdds` will:

- Compute group-specific ROC curves and their convex hulls;
- Compute the `r`-relaxed optimal solution for the chosen fairness criterion (using cvxpy);
- Find the set of group-specific binary classifiers that match the optimal solution found:
    - each group-specific classifier is made up of (possibly randomized) group-specific thresholds over the given predictor;
    - if a group's ROC point is in the interior of its ROC curve, partial randomization of its predictions may be necessary.
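The randomization in the last step can be sketched in isolation: mixing two deterministic thresholds with some probability yields a classifier whose rates interpolate between the two threshold classifiers, which is how interior ROC points are reached. The threshold values and mixing probability below are arbitrary, chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
scores = rng.uniform(size=10_000)  # stand-in for predictor(X) outputs

# Two deterministic thresholds and a mixing probability (all arbitrary).
t_lo, t_hi = 0.3, 0.7
p = 0.25  # probability of applying the stricter threshold t_hi

# Randomized classifier: for each sample, pick one of the two thresholds at random.
use_hi = rng.uniform(size=scores.shape) < p
y_pred = np.where(use_hi, scores >= t_hi, scores >= t_lo).astype(int)

# Its acceptance rate interpolates between the two deterministic rates,
# so any point on the segment between the two ROC points is achievable.
rate_lo = (scores >= t_lo).mean()
rate_hi = (scores >= t_hi).mean()
expected = (1 - p) * rate_lo + p * rate_hi  # y_pred.mean() is close to this
```

Varying `p` from 0 to 1 traces the full segment between the two ROC points.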
## Implementation road-map

We welcome community contributions of cvxpy implementations of other fairness constraints.

Currently implemented fairness constraints:
- equality of odds (Hardt et al., 2016);
    - i.e., equal group-specific TPR and FPR.

Road-map:
- equal opportunity;
    - i.e., equal group-specific TPR;
- demographic parity;
    - i.e., equal group-specific predicted prevalence.
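Each criterion above compares a group-specific rate. A hypothetical helper (not part of the `error-parity` API) sketching how these gaps can be measured for a set of binary predictions:

```python
import numpy as np

def group_rate_gaps(y_true, y_pred, group):
    """Max gap across groups in TPR, FPR, and predicted prevalence.

    Illustrative helper only -- not part of the error-parity API.
    """
    tprs, fprs, prevs = [], [], []
    for g in np.unique(group):
        m = group == g
        tprs.append(y_pred[m & (y_true == 1)].mean())   # group TPR
        fprs.append(y_pred[m & (y_true == 0)].mean())   # group FPR
        prevs.append(y_pred[m].mean())                  # group predicted prevalence
    gap = lambda rates: max(rates) - min(rates)
    return {"tpr": gap(tprs), "fpr": gap(fprs), "prevalence": gap(prevs)}

# Toy example: group "b" receives more positive predictions than group "a".
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1, 0, 1])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
gaps = group_rate_gaps(y_true, y_pred, group)
```

Under this sketch, equality of odds bounds both the `tpr` and `fpr` gaps, equal opportunity bounds only the `tpr` gap, and demographic parity bounds the `prevalence` gap.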
## Citing

This repository contains code and supplementary materials for the following preprint:

André F. Cruz and Moritz Hardt. "Unprocessing Seven Years of Algorithmic Fairness." arXiv preprint, 2023.