
Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems 🔎🤖🧰

Project description

Responsibly

Join the chat at https://gitter.im/ResponsiblyAI/responsibly


Responsibly is developed with practitioners and researchers in mind, but also for learners. Therefore, it is compatible with the Python data science and machine learning tools of the trade, such as NumPy, pandas, and especially scikit-learn.

The primary goal is to be a one-stop shop for auditing bias and fairness of machine learning systems; the secondary one is to mitigate bias and adjust fairness through algorithmic interventions. In addition, there is a particular focus on NLP models.

Responsibly consists of three sub-packages:

  1. responsibly.dataset

    Collection of common benchmark datasets from fairness research.

  2. responsibly.fairness

    Demographic fairness in binary classification, including metrics and algorithmic interventions.

  3. responsibly.we

    Metrics and debiasing methods for bias (such as gender and race) in word embeddings.
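To illustrate the kind of metric the fairness sub-package covers, here is a minimal NumPy sketch of the independence criterion (demographic parity) for a binary classifier. This is not Responsibly's own API — the function name `demographic_parity_gap` is hypothetical, used only to show the underlying computation:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Independence (demographic parity): the positive-prediction rate
    should be equal across groups. Returns the absolute difference in
    acceptance rates between two groups coded as 0 and 1."""
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Toy example: group 0 is accepted at rate 2/4, group 1 at rate 3/4.
y_pred = [1, 0, 1, 0, 1, 1, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(y_pred, group))  # 0.25
```

A gap of zero would mean the classifier's acceptance rate is identical across the two groups.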

For fairness, Responsibly’s functionality is aligned with the book Fairness and Machine Learning - Limitations and Opportunities by Solon Barocas, Moritz Hardt and Arvind Narayanan.

If you would like to ask for a feature or report a bug, please open a new issue or write to us on Gitter.

Requirements

  • Python 3.6+

Installation

Install responsibly with pip:

$ pip install responsibly

or directly from the source code:

$ git clone https://github.com/ResponsiblyAI/responsibly.git
$ cd responsibly
$ python setup.py install

Citation

If you have used Responsibly in a scientific publication, we would appreciate citations to the following:

@Misc{,
  author = {Shlomi Hod},
  title =  {{Responsibly}: Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems},
  year =   {2018--},
  url =    {http://docs.responsibly.ai/},
  note =   {[Online; accessed <today>]}
}

Revision History

0.1.3 (2021/04/02)

  • Fix new package dependencies

  • Switch from Travis CI to Github Actions

0.1.2 (2020/09/15)

  • Fix Travis CI issues with pipenv

  • Fix bugs with word embedding bias

0.1.1 (2019/08/04)

  • Fix a dependencies issue with smart_open

  • Change URLs to https

0.1.0 (2019/07/31)

  • Rename the project from ethically to responsibly

  • Word embedding bias

    • Improve functionality of BiasWordEmbedding

  • Threshold fairness interventions

    • Fix bugs with ROCs handling

    • Improve API and add functionality (plot_thresholds)

0.0.5 (2019/06/14)

  • Word embedding bias

    • Fix bug in computing WEAT

    • Computing and plotting factual property association to projections on a bias direction, similar to WEFAT

0.0.4 (2019/06/03)

  • Word embedding bias

    • Unrestricted most_similar

    • Unrestricted generate_analogies

    • Running specific experiments with calc_all_weat

    • Plotting clustering by classification of biased neutral words

0.0.3 (2019/04/10)

  • Fairness in Classification

    • Three demographic fairness criteria

      • Independence

      • Separation

      • Sufficiency

    • Equalized odds post-processing algorithmic interventions

    • Complete two notebook demos (FICO and COMPAS)

  • Word embedding bias

    • Measuring bias with WEAT method

  • Documentation improvements

  • Fixing security issues with dependencies

0.0.2 (2018/09/01)

  • Word embedding bias

    • Generating analogies along the bias direction

    • Standard evaluations of word embedding (word pairs and analogies)

    • Plotting indirect bias

    • Scatter plot of bias direction projections between two word embeddings

    • Improved verbose mode

0.0.1 (2018/08/17)

  • Gender debiasing for word embedding based on Bolukbasi et al.
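The debiasing approach of Bolukbasi et al. removes a word vector's component along a learned gender direction. The following is an illustrative NumPy sketch of that core projection step only — the helper name `neutralize` and the toy vectors are assumptions, and Responsibly's actual implementation in responsibly.we is broader:

```python
import numpy as np

def neutralize(v, direction):
    """Remove the component of vector v that lies along the bias
    direction, leaving v orthogonal to that direction."""
    direction = direction / np.linalg.norm(direction)
    return v - np.dot(v, direction) * direction

bias_dir = np.array([1.0, 0.0])   # toy "gender direction"
word_vec = np.array([0.8, 0.6])   # toy word vector
debiased = neutralize(word_vec, bias_dir)
print(debiased)                    # component along bias_dir is now zero
print(np.dot(debiased, bias_dir))  # ~0.0
```

After neutralization, the vector's projection onto the bias direction is zero, which is the sense in which the word becomes "gender-neutral" in the embedding space.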

Download files

Download the file for your platform.

Source Distribution

responsibly-0.1.3.tar.gz (28.1 MB)

Uploaded Source

Built Distribution

responsibly-0.1.3-py3-none-any.whl (28.2 MB)

Uploaded Python 3

File details

Details for the file responsibly-0.1.3.tar.gz.

File metadata

  • Download URL: responsibly-0.1.3.tar.gz
  • Upload date:
  • Size: 28.1 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.1 importlib_metadata/3.10.0 pkginfo/1.7.0 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.59.0 CPython/3.7.5

File hashes

Hashes for responsibly-0.1.3.tar.gz:

  • SHA256: 826e52ab6f93d6309be27bb52f504c8a7a36ef75e7491bab99dbce1ca45d47b1
  • MD5: 004268df4c3ff7c8f70cc3c9d989cf27
  • BLAKE2b-256: b18790fd2d7195f60c19b34fa84b556d397f96275a5eb29190ba60bea8784641

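To check a downloaded distribution against the hashes listed above, you can compute its digest locally. This is a generic Python sketch using only the standard library; the helper name `sha256_of` is illustrative:

```python
import hashlib

def sha256_of(path):
    """Compute the SHA256 hex digest of a file, streaming it in chunks
    so large archives do not need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare the result against the SHA256 value published on PyPI, e.g.:
# sha256_of("responsibly-0.1.3.tar.gz") == "826e52ab...45d47b1"
```

If the digest you compute does not match the one published here, the file was corrupted or tampered with in transit and should not be installed.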

File details

Details for the file responsibly-0.1.3-py3-none-any.whl.

File metadata

  • Download URL: responsibly-0.1.3-py3-none-any.whl
  • Upload date:
  • Size: 28.2 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.1 importlib_metadata/3.10.0 pkginfo/1.7.0 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.59.0 CPython/3.7.5

File hashes

Hashes for responsibly-0.1.3-py3-none-any.whl:

  • SHA256: 9959a3271fda897f0962fad4a7bcbd1146b3f60eb854e4890542ba9cb232b285
  • MD5: 8fa45bbc50f632238a038dfa49014208
  • BLAKE2b-256: 5aa51cbc6653d0fbdba238934112eeada26219e4aebec69632d15555c2546dd5

