FairGrad: Fairness Aware Gradient Descent
FairGrad is an easy-to-use, general-purpose approach to enforcing fairness in gradient-descent-based methods.
Getting started
You can get fairgrad from PyPI, which means it can be easily installed via pip:
pip install fairgrad
Documentation
The documentation can be found on Read the Docs.
Example usage
To use FairGrad, simply replace your PyTorch cross-entropy loss with the FairGrad cross-entropy loss. Alongside the regular PyTorch cross-entropy arguments, it expects the following extra arguments.
- y_train (np.asarray[int], Tensor, optional): Labels corresponding to all training examples.
- s_train (np.asarray[int], Tensor, optional): Sensitive attributes corresponding to all training examples. Multiple sensitive attributes are combined into a single one: for instance, with two binary attributes such as gender (male, female) and age (above 45, below 45), there are 4 unique sensitive attribute values in total (see the encoding sketch after this list).
- fairness_measure (string): Currently we support "equal_odds", "equal_opportunity", "accuracy_parity", and "demographic_parity". Note that demographic parity is only supported in the binary case.
- epsilon (float, optional): The slack allowed in the final fairness level.
- fairness_rate (float, optional): Parameter which intertwines the current fairness weights with the sum of the previous fairness rates.
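For illustration, here is a small sketch (hypothetical, not part of the fairgrad API) of flattening two binary sensitive attributes into the single integer-valued attribute that s_train expects:

import numpy as np

gender = np.array([0, 1, 1, 0])  # e.g. 0 = male, 1 = female
age = np.array([0, 0, 1, 1])     # e.g. 0 = below 45, 1 = above 45

# Each (gender, age) pair maps to one of 2 * 2 = 4 unique values.
s_train = gender * 2 + age       # values in {0, 1, 2, 3}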
# Note: this is a short snippet; one still needs to define the model and data iterators.
# A full worked-out example is available here - @TODO
from fairgrad.torch import CrossEntropyLoss
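# The snippet assumes `fairness_related_meta_data` bundles the extra
# arguments documented above. The exact container this release expects is
# not shown here, so the dictionary below is an illustrative assumption,
# not the confirmed API.
fairness_related_meta_data = {
    'y_train': y_train,                # labels of all training examples
    's_train': s_train,                # sensitive attribute of each training example
    'fairness_measure': 'equal_odds',  # or 'equal_opportunity', 'accuracy_parity', 'demographic_parity'
    'epsilon': 0.0,                    # slack allowed in the final fairness level
    'fairness_rate': 0.01,             # mixes current fairness weights with the sum of previous ones
}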
# define cross entropy loss
criterion = CrossEntropyLoss(fairness_related_meta_data=fairness_related_meta_data)
# Train loop
for inputs, labels, protected_attributes in train_iterator:
    model.train()
    optimizer.zero_grad()
    output = model(inputs)
    loss = criterion(output, labels, protected_attributes, mode='train')
    loss.backward()
    optimizer.step()
We highly recommend standardizing features by removing the mean and scaling to unit variance. This can be done with the StandardScaler module in scikit-learn.
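A minimal sketch of this preprocessing step with scikit-learn's StandardScaler (the toy matrix X_train here is a placeholder, not fairgrad data):

import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])  # toy feature matrix
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)  # zero mean, unit variance per feature
# At test time, reuse the statistics fitted on the training set:
# X_test = scaler.transform(X_test)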
Citation
@article{maheshwari2022fairgrad,
title={FairGrad: Fairness Aware Gradient Descent},
author={Maheshwari, Gaurav and Perrot, Micha{\"e}l},
journal={arXiv preprint arXiv:2206.10923},
year={2022}
}
File details
Details for the file fairgrad-0.2.0.tar.gz.
File metadata
- Download URL: fairgrad-0.2.0.tar.gz
- Upload date:
- Size: 8.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.1.14 CPython/3.8.12 Linux/5.15.0-47-generic
File hashes
Algorithm | Hash digest
---|---
SHA256 | 1f62b940dda49ac75ac79ee4366701a00f2188a1c82db440f48e9aa1f1403b8a
MD5 | b1e9ac813795937b2b3a671556cb0b43
BLAKE2b-256 | 75d21cf0705aca69b403634864f432280c5ae287eb255a128f7b315be4124b99
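If you want to check a downloaded archive against the digests above, here is a small Python sketch (not part of the project; the filename assumes the sdist was saved in the current directory):

import hashlib

expected = '1f62b940dda49ac75ac79ee4366701a00f2188a1c82db440f48e9aa1f1403b8a'  # SHA256 from the table above
with open('fairgrad-0.2.0.tar.gz', 'rb') as f:
    digest = hashlib.sha256(f.read()).hexdigest()
print('OK' if digest == expected else 'MISMATCH')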
File details
Details for the file fairgrad-0.2.0-py3-none-any.whl.
File metadata
- Download URL: fairgrad-0.2.0-py3-none-any.whl
- Upload date:
- Size: 9.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.1.14 CPython/3.8.12 Linux/5.15.0-47-generic
File hashes
Algorithm | Hash digest
---|---
SHA256 | 101665cae526399aeb22a2b3e4ad42f4b37f6838b4f610997779e4c5bc8aba2a
MD5 | c856a7bfd6785b70cd36db70d6847288
BLAKE2b-256 | dec41729bb9c6535f9c0564f42affb3b16e08abd3c3aac5080a3b9b76858c3ec