

Project description


Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems 🔎🤖🔧

Ethically is developed with practitioners and researchers in mind, but also for learners. It is therefore compatible with the Python data science and machine learning tools of the trade, such as NumPy, pandas, and especially scikit-learn.

The primary goal is to be a one-stop shop for auditing the bias and fairness of machine learning systems; the secondary goal is to mitigate bias and adjust fairness through algorithmic interventions. There is a particular focus on NLP models.

Ethically consists of three sub-packages:

  1. ethically.dataset
    Collection of common benchmark datasets from fairness research.
  2. ethically.fairness
    Demographic fairness in binary classification, including metrics and algorithmic interventions.
  3. ethically.we
    Metrics and debiasing methods for bias (such as gender and race) in word embeddings.

For fairness, Ethically's functionality is aligned with the book Fairness and Machine Learning - Limitations and Opportunities by Solon Barocas, Moritz Hardt, and Arvind Narayanan.
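As a sketch of the kind of auditing this targets, the independence criterion (demographic parity) can be checked directly from predictions and group labels. The following is a minimal pure-Python illustration of the idea, not Ethically's actual API; the function names are made up for this example.

```python
# Minimal sketch of the "independence" fairness criterion (demographic
# parity): the positive-prediction rate should not depend on group
# membership. Plain Python for illustration, not Ethically's API.

def positive_rate(predictions):
    """Fraction of predictions that are positive (== 1)."""
    return sum(predictions) / len(predictions)

def independence_gap(y_pred, groups):
    """Largest difference in positive-prediction rates between any two groups."""
    by_group = {}
    for pred, group in zip(y_pred, groups):
        by_group.setdefault(group, []).append(pred)
    rates = {g: positive_rate(preds) for g, preds in by_group.items()}
    return max(rates.values()) - min(rates.values())

y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = independence_gap(y_pred, groups)  # group a: 0.75, group b: 0.25 -> gap 0.5
```

A gap of 0 would mean perfect independence; in practice one audits how far a classifier deviates from it.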

If you would like to request a feature or report a bug, please open a new issue or write to us on Gitter.

Requirements

  • Python 3.5+


Install ethically with pip:

$ pip install ethically

or directly from the source code:

$ git clone
$ cd ethically
$ python setup.py install


If you have used Ethically in a scientific publication, we would appreciate a citation to the following:

  @misc{ethically,
    author = {Shlomi Hod},
    title =  {{Ethically}: Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems},
    year =   {2018--},
    url = "",
    note =   {[Online; accessed <today>]}
  }

Revision History

0.0.3 (2019/04/10)

  • Fairness in Classification
    • Three demographic fairness criteria
      • Independence
      • Separation
      • Sufficiency
    • Equalized odds post-processing algorithmic interventions
    • Complete two notebook demos (FICO and COMPAS)
  • Word embeddings bias
    • Measuring bias with the WEAT method
  • Documentation improvements
  • Fixing security issues with dependencies
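The WEAT method added in this release compares how strongly two sets of target words associate with two sets of attribute words in embedding space. Below is a toy, self-contained version of the differential-association statistic on hand-made 2-D vectors; it illustrates the underlying computation only, and is not Ethically's API (which operates on trained word embeddings).

```python
import math

# Toy sketch of the WEAT differential-association statistic on
# hand-made 2-D "embeddings" -- illustration only, not Ethically's API.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def association(w, A, B):
    """s(w, A, B): mean cosine similarity to set A minus mean to set B."""
    return (sum(cosine(w, a) for a in A) / len(A)
            - sum(cosine(w, b) for b in B) / len(B))

def weat_statistic(X, Y, A, B):
    """Differential association of target sets X, Y with attribute sets A, B."""
    return (sum(association(x, A, B) for x in X)
            - sum(association(y, A, B) for y in Y))

# Target set X lies near attribute A's direction, target set Y near B's:
A = [(1.0, 0.0)]
B = [(0.0, 1.0)]
X = [(0.9, 0.1), (0.8, 0.2)]
Y = [(0.1, 0.9), (0.2, 0.8)]
stat = weat_statistic(X, Y, A, B)  # positive, since X ~ A and Y ~ B
```

The full WEAT test also reports an effect size (the statistic normalized by the standard deviation of per-word associations) and a permutation-test p-value.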

0.0.2 (2018/09/01)

  • Word embeddings bias
    • Generating analogies along the bias direction
    • Standard evaluations of word embeddings (word pairs and analogies)
    • Plotting indirect bias
    • Scatter plot of bias direction projections between two word embeddings
    • Improved verbose mode

0.0.1 (2018/08/17)

  • Gender debiasing for word embeddings based on Bolukbasi et al.
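The core of the Bolukbasi et al. approach is the neutralize step: projecting each word vector off an identified (gender) bias direction. Here is a minimal pure-Python sketch of that one step; the full method also identifies the bias direction via PCA and equalizes definitional word pairs, and the vectors below are made-up toy values, not Ethically's API.

```python
import math

# Sketch of the "neutralize" step from Bolukbasi et al. (2016):
# remove a word vector's component along a unit-length bias direction.
# Plain-Python illustration with toy vectors, not Ethically's API.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [a / n for a in v]

def neutralize(word_vec, bias_direction):
    """Project word_vec onto the subspace orthogonal to bias_direction."""
    g = normalize(bias_direction)
    coeff = dot(word_vec, g)
    return [w - coeff * gi for w, gi in zip(word_vec, g)]

gender_direction = [1.0, 0.0, 0.0]  # assumed bias axis for this toy example
doctor = [0.3, 0.5, 0.2]            # toy "doctor" vector with a gender component
debiased = neutralize(doctor, gender_direction)
# After neutralizing, the vector has no component along the bias direction.
```

After this step, gender-neutral words like "doctor" are equidistant from the two ends of the bias direction.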

Project details

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Filename (size) File type Python version
ethically-0.0.3-py3-none-any.whl (28.2 MB) Wheel py3
ethically-0.0.3.tar.gz (28.1 MB) Source None
