
Exposing Algorithmic Bias through Inverse Design



AI systems can create, propagate, support, and automate bias in decision-making processes. To mitigate biased decisions, we need both to understand the origin of the bias and to define what it means for an algorithm to make fair decisions. With Locating Unfairness through Canonical Inverse Design (LUCID), we generate a canonical set that shows the desired inputs to a model given a preferred output. The canonical set reveals the model’s internal logic and exposes potentially unethical biases by repeatedly interrogating the decision-making process. By shifting the focus towards equality of treatment and looking into the algorithm’s internal workings, LUCID is a valuable addition to the toolbox of algorithmic fairness evaluation. Read our paper on LUCID for more details.

We encourage everyone to contribute to this project by submitting an issue or a pull request!

Installation

Install canonical_sets from PyPI:

pip install canonical_sets

For a development install, see contribute.

Usage

LUCID uses gradient-based inverse design to generate canonical sets and is available for both PyTorch and TensorFlow models. It is fully customizable, but it can also be used out of the box for a wide range of models with its default settings:

from canonical_sets import LUCID

# `model` is a trained PyTorch or TensorFlow model, `outputs` is the
# preferred output, and `example_data` is an example input (often a
# part of the training data).
lucid = LUCID(model, outputs, example_data)
lucid.results.head()

It only requires a model, a preferred output, and an example input (which is often a part of the training data). The results are stored in a pd.DataFrame and can be accessed via the results attribute.
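For concreteness, here is a minimal end-to-end sketch with a small TensorFlow/Keras classifier. The LUCID call itself follows the snippet above, but the toy model and the exact formats of outputs and example_data (both assumed here to be pd.DataFrames) are assumptions; see the examples for the canonical interface.

import numpy as np
import pandas as pd
import tensorflow as tf

from canonical_sets import LUCID

# Hypothetical tabular data: two features and a binary label.
rng = np.random.default_rng(0)
example_data = pd.DataFrame(rng.random((100, 2)), columns=["x1", "x2"])
labels = (example_data["x1"] > 0.5).astype(int)

# A small two-class classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(2,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(example_data, labels, epochs=5, verbose=0)

# Preferred output: all probability mass on the positive class
# (assumed format: a one-row pd.DataFrame over the output classes).
outputs = pd.DataFrame([[0.0, 1.0]], columns=["neg", "pos"])

lucid = LUCID(model, outputs, example_data)
lucid.results.head()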

For detailed examples, see examples; for the source code, see canonical_sets. We advise starting with either the TensorFlow or the PyTorch example, and then moving on to the advanced example. If you have any remaining questions, feel free to submit an issue or a PR!

Data

canonical_sets also contains functionality to easily access data sets commonly used in the fairness literature:

from canonical_sets import Adult, Compas

# Load the Adult (census income) data set.
adult = Adult()
adult.train_data.head()

# Load the COMPAS (recidivism) data set.
compas = Compas()
compas.train_data.head()

The default settings can be customized to change the pre-processing, the splitting, etc. See the examples for details; a hedged sketch of how the data utilities feed into LUCID follows below.
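To make the connection to LUCID concrete, here is a hedged end-to-end sketch on the Adult data. The train_data attribute is documented above; train_labels and the column names of outputs are assumptions about the interface, so consult the examples for the exact names.

import pandas as pd
import tensorflow as tf

from canonical_sets import Adult, LUCID

adult = Adult()
X = adult.train_data    # documented above
y = adult.train_labels  # hypothetical attribute name for the targets

# A small classifier over the Adult features; assumes one-hot
# encoded labels for the two income classes.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(X.shape[1],)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.fit(X, y, epochs=5, verbose=0)

# Preferred output: the favorable ">50K" prediction (assumed format).
outputs = pd.DataFrame([[0.0, 1.0]], columns=["<=50K", ">50K"])

lucid = LUCID(model, outputs, X)
lucid.results.head()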

Community

If you are interested in cross-disciplinary research related to machine learning, feel free to reach out!

Disclaimer

The package and the code are provided “as-is” and there is NO WARRANTY of any kind. Use it only if the content and output files make sense to you.

Acknowledgements

This project benefited from financial support from Innoviris.

Citation

@inproceedings{mazijn_canonicalsets_2022,
  title={{Exposing Algorithmic Bias through Inverse Design}},
  author={Mazijn, Carmen and Prunkl, Carina and Algaba, Andres and Danckaert, Jan and Ginis, Vincent},
  booktitle={Workshop at International Conference on Machine Learning},
  year={2022},
}
