
Jurity: Fairness & Evaluation Library

Jurity is a research library that provides fairness metrics, recommender system evaluations, and bias mitigation techniques. The library adheres to PEP-8 standards and is heavily tested.

Jurity is developed by the Artificial Intelligence Center of Excellence at Fidelity Investments.

Fairness Metrics

  • Average Odds
  • Disparate Impact
  • Equal Opportunity
  • False Negative Rate (FNR) Difference
  • Generalized Entropy Index
  • Predictive Equality
  • Statistical Parity (see the sketch after this list)
  • Theil Index
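
For intuition, statistical parity measures the gap in positive prediction rates between the group of interest and everyone else. Below is a minimal NumPy sketch of that textbook definition; Jurity's sign convention and edge-case handling may differ.

import numpy as np

def statistical_parity_sketch(predictions, is_member):
    # Difference in positive prediction rates between members and non-members
    predictions = np.asarray(predictions)
    is_member = np.asarray(is_member).astype(bool)
    return predictions[is_member].mean() - predictions[~is_member].mean()

# 0 indicates parity; the further from 0, the larger the disparity
print(statistical_parity_sketch([1, 1, 0, 1, 0, 0], [0, 0, 0, 1, 1, 1]))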

Binary Bias Mitigation Techniques

  • Equalized Odds

Recommenders Metrics

  • CTR: Click-through rate
  • NDCG: Normalized discounted cumulative gain
  • MAP@K: Mean Average Precision
  • Precision@K
  • Recall@K

Quick Start: Fairness Evaluation

# Import binary and multi-class fairness metrics
from jurity.fairness import BinaryFairnessMetrics, MultiClassFairnessMetrics

# Data
binary_predictions = [1, 1, 0, 1, 0, 0]
multi_class_predictions = ["a", "b", "c", "b", "a", "a"]
multi_class_multi_label_predictions = [["a", "b"], ["b", "c"], ["b"], ["a", "b"], ["c", "a"], ["c"]]
is_member = [0, 0, 0, 1, 1, 1]
classes = ["a", "b", "c"]

# Metrics (see also other available metrics)
metric = BinaryFairnessMetrics.StatisticalParity()
multi_metric = MultiClassFairnessMetrics.StatisticalParity(classes)

# Scores
print("Metric:", metric.description)
print("Lower Bound: ", metric.lower_bound)
print("Upper Bound: ", metric.upper_bound)
print("Ideal Value: ", metric.ideal_value)
print("Binary Fairness score: ", metric.get_score(binary_predictions, is_member))
print("Multi-class Fairness scores: ", multi_metric.get_scores(multi_class_predictions, is_member))
print("Multi-class multi-label Fairness scores: ", multi_metric.get_scores(multi_class_multi_label_predictions, is_member))

Quick Start: Bias Mitigation

# Import binary fairness and binary bias mitigation
from jurity.mitigation import BinaryMitigation
from jurity.fairness import BinaryFairnessMetrics

# Data
labels = [1, 1, 0, 1, 0, 0, 1, 0]
predictions = [0, 0, 0, 1, 1, 1, 1, 0]
likelihoods = [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.1]
is_member = [0, 0, 0, 0, 1, 1, 1, 1]

# Bias Mitigation
mitigation = BinaryMitigation.EqualizedOdds()

# Training: Learn mixing rates from the labeled data
mitigation.fit(labels, predictions, likelihoods, is_member)

# Testing: Mitigate bias in predictions
fair_predictions, fair_likelihoods = mitigation.transform(predictions, likelihoods, is_member)

# Scores: Fairness before and after
print("Fairness Metrics Before:", BinaryFairnessMetrics().get_all_scores(labels, predictions, is_member), '\n'+30*'-')
print("Fairness Metrics After:", BinaryFairnessMetrics().get_all_scores(labels, fair_predictions, is_member))

Quick Start: Recommenders Evaluation

# Import recommenders metrics
from jurity.recommenders import BinaryRecoMetrics, RankingRecoMetrics
import pandas as pd

# Data
actual = pd.DataFrame({"user_id": [1, 2, 3, 4], "item_id": [1, 2, 0, 3], "clicks": [0, 1, 0, 0]})
predicted = pd.DataFrame({"user_id": [1, 2, 3, 4], "item_id": [1, 2, 2, 3], "clicks": [0.8, 0.7, 0.8, 0.7]})

# Metrics
ctr = BinaryRecoMetrics.CTR(click_column="clicks")
ndcg_k = RankingRecoMetrics.NDCG(click_column="clicks", k=3)
precision_k = RankingRecoMetrics.Precision(click_column="clicks", k=2)
recall_k = RankingRecoMetrics.Recall(click_column="clicks", k=2)
map_k = RankingRecoMetrics.MAP(click_column="clicks", k=2)

# Scores
print("CTR:", ctr.get_score(actual, predicted))
print("NCDG:", ncdg_k.get_score(actual, predicted))
print("Precision@K:", precision_k.get_score(actual, predicted))
print("Recall@K:", recall_k.get_score(actual, predicted))
print("MAP@K:", map_k.get_score(actual, predicted))

Installation

Requirements

The library requires Python 3.6+ and depends on standard packages such as pandas and numpy. The requirements.txt file lists the necessary packages.
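
For example, the dependencies can be installed directly from that file:

pip install -r requirements.txt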

Install from wheel package

After installing the requirements, you can install the library from the provided wheel package using the following commands:

pip install dist/jurity-X.X.X-py3-none-any.whl

Note: Don't forget to replace X.X.X with the current version number.

Install from source code

Alternatively, you can build a wheel package on your platform from scratch using the source code:

pip install setuptools wheel # if wheel is not installed
python setup.py bdist_wheel
pip install dist/jurity-X.X.X-py3-none-any.whl

Test Your Setup

To confirm that cloning the repo was successful, run the first example in the Quick Start. To confirm that the whole installation was successful, run the tests; they should all pass.

python -m unittest discover -v tests

Upgrading the Library

To upgrade to the latest version of the library, run git pull origin master in the repo folder, and then run pip install --upgrade --no-cache-dir dist/jurity-X.X.X-py3-none-any.whl.

Support

Please submit bug reports and feature requests as issues. Additional questions or feedback are also welcome as issues.

License

Jurity is licensed under the Apache License 2.0.
