
FairSense

This library allows computing global sensitivity indices in the context of fairness measurements. The paper Fairness seen as Global Sensitivity Analysis bridges the gap between global sensitivity analysis (GSA) and fairness. It states that for each sensitivity analysis there is a fairness measure, and vice versa.

This library is a toolbox that eases the computation of fairness and GSA indices.

👉 The problem

Each index has its own characteristics: some can be applied to continuous variables and some cannot; some handle regression problems while others handle classification problems; some can handle groups of variables and some cannot. Finally, some can only be applied to a model's predictions, while others can be applied to the errors made by the model.

The objective is then to provide a tool to investigate the fairness of an ML problem by computing the GSA indices while avoiding the aforementioned issues.

🚀 The strategy

The library allows formulating a fairness problem, which is stated as follows:

  • a dataset describing the training distribution
  • a model, which can be a function or a machine learning model
  • a fairness objective, which indicates what should be studied: the intrinsic bias of the dataset, the bias of the model, or the bias of the model's errors

These elements are encapsulated in an object called IndicesInput.

It then becomes possible to compute GSA indices (in an interchangeable way) using the functions provided in fairsense.indices.

These functions output IndicesOutput objects that encapsulate the values of the indices. These results can finally be visualized with the functions available in the fairsense.visualization module.
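
For orientation, the sketch below strings these steps together. Only the names IndicesInput, IndicesOutput, fairsense.indices and fairsense.visualization come from the description above; everything else, notably the import paths, the IndicesInput arguments, the index function cvm_indices and the plotting helper cat_plot, is an illustrative assumption and may differ from the actual API.

    # Hypothetical end-to-end sketch; the exact paths and signatures below are assumptions.
    import numpy as np
    import pandas as pd

    from fairsense.indices import cvm_indices              # assumed index function
    from fairsense.utils.dataclasses import IndicesInput   # assumed import path
    from fairsense.visualization import cat_plot           # assumed plotting helper

    # 1. A dataset describing the training distribution (toy data for the example).
    rng = np.random.default_rng(0)
    x = pd.DataFrame({
        "age": rng.integers(18, 70, size=500),
        "gender": rng.integers(0, 2, size=500),
        "hours_per_week": rng.normal(40, 10, size=500),
    })
    y = pd.Series((x["hours_per_week"] + 5 * x["gender"] > 45).astype(int), name="income")

    # 2. A model: any callable mapping inputs to predictions (or a fitted ML model).
    def model(df: pd.DataFrame) -> np.ndarray:
        return (df["hours_per_week"] > 45).astype(int).to_numpy()

    # 3. A fairness objective: the dataset's intrinsic bias, the model's predictions,
    #    or the model's errors.
    inputs = IndicesInput(model=model, x=x, y_true=y, objective="predictions")

    # Compute GSA indices; the result is an IndicesOutput object.
    results = cvm_indices(inputs)

    # Visualize the computed indices with the fairsense.visualization module.
    cat_plot(results, plot_per="index", kind="bar")

Because the indices share the IndicesInput / IndicesOutput interface, swapping cvm_indices for another index function from fairsense.indices should leave the rest of such a pipeline unchanged.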

💻 Install fairsense

for users

pip install fairsense

for developers

After cloning the repository

pip install -e .[dev]

To clean the code, run at the root of the lib:

black .

for docs

pip install -e .[docs]

To build the rst files, run in the docs folder:

sphinx-apidoc ..\libfairness -o source

Then generate the html docs:

make html

Warning: the library must be installed to generate the doc.

👍 Contributing

Feel free to propose your ideas or come and contribute with us on the fairsense toolbox! We have a dedicated document that describes in a simple way how to make your first pull request: just here.

👀 See Also

More from the DEEL project:

  • Xplique a Python library exclusively dedicated to explaining neural networks.
  • deel-lip a Python library for training k-Lipschitz neural networks on TF.
  • Influenciae a Python toolkit dedicated to computing influence values for the discovery of potentially problematic samples in a dataset.
  • deel-torchlip a Python library for training k-Lipschitz neural networks on PyTorch.
  • DEEL White paper a summary by the DEEL team of the challenges of certifiable AI and the role of data quality, representativity and explainability for this purpose.

🙏 Acknowledgments

This project received funding from the French ”Investing for the Future – PIA3” program within the Artificial and Natural Intelligence Toulouse Institute (ANITI). The authors gratefully acknowledge the support of the DEEL project.

🗞️ Citation

If you use fairsense as part of your workflow in a scientific publication, please consider citing our paper:

    @misc{https://doi.org/10.48550/arxiv.2103.04613,
      doi = {10.48550/ARXIV.2103.04613},
      url = {https://arxiv.org/abs/2103.04613},
      author = {Bénesse, Clément and Gamboa, Fabrice and Loubes, Jean-Michel and Boissin, Thibaut},
      keywords = {Statistics Theory (math.ST), Methodology (stat.ME), FOS: Mathematics, FOS: Computer and information sciences},
      title = {Fairness seen as Global Sensitivity Analysis},
      publisher = {arXiv},
      year = {2021}
    }

📝 License

The package is released under the MIT license.

💣 Disclaimer

To the maximum extent permitted by applicable law, the authors of FairSense shall not be liable for any kind of tangible or intangible damages. In particular, the authors shall not be liable in case of incorrect computation of the indices or any biased interpretation of such indices.
