
Fairness Indicators

Project description

Fairness Indicators BETA


Fairness Indicators is designed to support teams in evaluating and improving models for fairness concerns, in partnership with the broader TensorFlow toolkit.

The tool is actively used internally on many of our products, and is now available in BETA for you to try on your own use cases. We would love to partner with you to understand where Fairness Indicators is most useful and where added functionality would be valuable. Please reach out at tfx@tensorflow.org. You can provide feedback on your experience, and feature requests, here.


What is Fairness Indicators?

Fairness Indicators enables easy computation of commonly identified fairness metrics for binary and multiclass classifiers.

Many existing tools for evaluating fairness concerns don’t work well on large-scale datasets and models. At Google, it is important for us to have tools that can work on billion-user systems. Fairness Indicators lets you evaluate use cases of any size.

In particular, Fairness Indicators includes the ability to:

  • Evaluate the distribution of datasets
  • Evaluate model performance, sliced across defined groups of users
    • Feel confident about your results with confidence intervals and evals at multiple thresholds
  • Dive deep into individual slices to explore root causes and opportunities for improvement
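The core idea of the sliced evaluation above can be sketched in plain Python. The example data, the slice key "group", and the metric choice here are all hypothetical, for illustration only; Fairness Indicators computes such metrics (with confidence intervals) for you at scale.

```python
from collections import defaultdict

# Toy scored examples; "group" is the slice key (hypothetical data).
examples = [
    {"group": "A", "label": 1, "score": 0.9},
    {"group": "A", "label": 0, "score": 0.4},
    {"group": "B", "label": 0, "score": 0.8},
    {"group": "B", "label": 1, "score": 0.7},
]

def false_positive_rate_by_slice(examples, threshold):
    """False positive rate per slice at a single decision threshold."""
    fp = defaultdict(int)        # false positives per slice
    negatives = defaultdict(int) # ground-truth negatives per slice
    for ex in examples:
        if ex["label"] == 0:
            negatives[ex["group"]] += 1
            if ex["score"] >= threshold:
                fp[ex["group"]] += 1
    return {g: fp[g] / negatives[g] for g in negatives}

# Evaluate at multiple thresholds, as Fairness Indicators does.
for t in (0.3, 0.5, 0.7):
    print(t, false_positive_rate_by_slice(examples, t))
```

Comparing the per-slice values across thresholds is what surfaces a disparity that a single aggregate metric would hide.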

This case study, complete with videos and programming exercises, demonstrates how Fairness Indicators can be used on one of our own products to evaluate fairness concerns over time.

The pip package download includes:

  • TensorFlow Data Validation (TFDV) [analyze the distribution of your dataset]
  • TensorFlow Model Analysis (TFMA) [analyze model performance]
    • Fairness Indicators [an addition to TFMA that adds fairness metrics and the ability to easily compare performance across slices]
  • The What-If Tool (WIT) [an interactive visual interface designed to help you probe your models]

How can I use Fairness Indicators?

TensorFlow Models

  • Access Fairness Indicators as part of the Evaluator component in TensorFlow Extended [docs]
  • Access Fairness Indicators in TensorBoard when evaluating other real-time metrics [docs]

Not using existing TensorFlow tools? No worries!

  • Download the Fairness Indicators pip package, and use TensorFlow Model Analysis as a standalone tool [docs]

Non-TensorFlow Models

  • Model Agnostic TFMA enables you to compute Fairness Indicators based on the output of any model [docs]
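As a rough illustration of the model-agnostic idea: sliced evaluation only needs each example's slice key, label, and prediction, so any model that can emit those — TensorFlow or not — can be evaluated. The CSV columns and metric below are invented for this sketch and are not the actual Model Agnostic TFMA input format.

```python
import csv
import io
from collections import defaultdict

# Hypothetical scored output from an arbitrary (non-TensorFlow) model,
# serialized as CSV rows of (slice key, label, prediction).
scored_output = io.StringIO(
    "gender,label,prediction\n"
    "f,1,1\n"
    "f,0,1\n"
    "m,1,0\n"
    "m,0,0\n"
)

def accuracy_by_slice(rows, slice_key):
    """Fraction of correct predictions per value of the slice key."""
    correct, total = defaultdict(int), defaultdict(int)
    for row in rows:
        total[row[slice_key]] += 1
        if row["label"] == row["prediction"]:
            correct[row[slice_key]] += 1
    return {k: correct[k] / total[k] for k in total}

rows = list(csv.DictReader(scored_output))
print(accuracy_by_slice(rows, "gender"))
```

The point is the interface, not the metric: once predictions are reduced to rows like these, the evaluation is independent of the framework that produced them.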

Examples

The examples directory contains several examples.

More questions?

For more information on how to think about fairness evaluation in the context of your use case, see this link.

If you have found a bug in Fairness Indicators, please file a GitHub issue with as much supporting information as you can provide.



Download files

Source Distribution

  • fairness_indicators-0.1.1.tar.gz (33.1 kB)

Built Distributions

  • fairness_indicators-0.1.1-py3-none-any.whl (48.4 kB)
  • fairness_indicators-0.1.1-py2-none-any.whl (48.4 kB)

File details

Details for the file fairness_indicators-0.1.1.tar.gz.

File metadata

  • Download URL: fairness_indicators-0.1.1.tar.gz
  • Upload date:
  • Size: 33.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.21.0 setuptools/45.1.0 requests-toolbelt/0.9.1 tqdm/4.36.1 CPython/3.7.5rc1

File hashes

Hashes for fairness_indicators-0.1.1.tar.gz

  • SHA256: 11a640d3ac954c9e00e79379ffe35920617aa292087feaf15c1898f7b50c7e64
  • MD5: 10a694e913668d7c887b7fda132d8ca3
  • BLAKE2b-256: 61a26891b71efca4f8ef049dcdad667b8538afee12ea4852113dfd4d7fc96c5d

See more details on using hashes here.
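To check a downloaded file against the digests above, a minimal sketch using Python's standard hashlib (the local filename is assumed):

```python
import hashlib

# Published SHA-256 digest of the sdist, copied from the listing above.
EXPECTED_SHA256 = "11a640d3ac954c9e00e79379ffe35920617aa292087feaf15c1898f7b50c7e64"

def sha256_of(path):
    """Compute the SHA-256 hex digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Assumes the sdist was downloaded to the current directory:
# print(sha256_of("fairness_indicators-0.1.1.tar.gz") == EXPECTED_SHA256)
```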

File details

Details for the file fairness_indicators-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: fairness_indicators-0.1.1-py3-none-any.whl
  • Upload date:
  • Size: 48.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.21.0 setuptools/45.1.0 requests-toolbelt/0.9.1 tqdm/4.36.1 CPython/3.7.5rc1

File hashes

Hashes for fairness_indicators-0.1.1-py3-none-any.whl

  • SHA256: a7ee00ee4386401d1a711dfa92f89831575278d4cd87af75f326be7e24cb0572
  • MD5: f210e9bb56426bdf47b3b89855e5160c
  • BLAKE2b-256: 5fc19da791f2f3f77d88a96d6c1a6c1389a939e009f1a45b78ad8fb9d40cf75c


File details

Details for the file fairness_indicators-0.1.1-py2-none-any.whl.

File metadata

  • Download URL: fairness_indicators-0.1.1-py2-none-any.whl
  • Upload date:
  • Size: 48.4 kB
  • Tags: Python 2
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.21.0 setuptools/45.1.0 requests-toolbelt/0.9.1 tqdm/4.36.1 CPython/3.7.5rc1

File hashes

Hashes for fairness_indicators-0.1.1-py2-none-any.whl

  • SHA256: 3cb50e37d28728e205ccb7f92bef24c82d9329bf84648615301743f1edb92918
  • MD5: 7dffd664c6c563480367fee5f9de6227
  • BLAKE2b-256: 5c5f7374da1186ca340a829dcea21bb63ada14ad36cb0333162ef91921571ba6

