Jurity: Fairness & Evaluation Library
Jurity is a research library that provides fairness metrics, recommender system evaluations, classification metrics, and bias mitigation techniques. The library adheres to PEP-8 standards and is heavily tested.
Jurity is developed by the Artificial Intelligence Center of Excellence at Fidelity Investments. Documentation is available at fidelity.github.io/jurity.
Fairness Metrics
- Average Odds
- Disparate Impact
- Equal Opportunity
- False Negative Rate (FNR) Difference
- Generalized Entropy Index
- Predictive Equality
- Statistical Parity
- Theil Index
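As an example of what these metrics capture, Statistical Parity is the difference in positive prediction rates between the protected group and everyone else (ideal value 0). The sketch below computes it by hand on the same toy data used in the fairness quick start; it is illustrative only, not the library's implementation.

# Illustrative only, not Jurity's implementation:
# Statistical Parity = positive rate of protected group - positive rate of the rest
binary_predictions = [1, 1, 0, 1, 0, 0]
is_member = [0, 0, 0, 1, 1, 1]
group_1_rate = sum(p for p, m in zip(binary_predictions, is_member) if m == 1) / is_member.count(1)
group_0_rate = sum(p for p, m in zip(binary_predictions, is_member) if m == 0) / is_member.count(0)
print("Statistical Parity (by hand):", group_1_rate - group_0_rate)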
Binary Bias Mitigation Techniques
- Equalized Odds
Recommenders Metrics
- AUC: Area Under the Curve
- CTR: Click-through rate
- DR: Doubly robust estimation
- IPS: Inverse propensity scoring
- MAP@K: Mean Average Precision
- NDCG: Normalized discounted cumulative gain
- Precision@K
- Recall@K
- Inter-List Diversity@K
- Intra-List Diversity@K
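To make the ranking metrics concrete, Precision@K is the fraction of each user's top-K recommendations that the user actually interacted with, averaged over users. The sketch below is a hand-rolled illustration of that definition with made-up recommendation lists, not the library's implementation.

# Illustrative only: Precision@K from top-k recommendation lists and clicked-item sets
def precision_at_k(recommended, clicked, k=2):
    scores = []
    for user, items in recommended.items():
        hits = sum(1 for item in items[:k] if item in clicked.get(user, set()))
        scores.append(hits / k)
    return sum(scores) / len(scores)

recommended = {1: [10, 11], 2: [20, 21]}  # hypothetical top-k lists per user
clicked = {1: {10}, 2: {20, 21}}          # hypothetical clicked items per user
print("Precision@2 (by hand):", precision_at_k(recommended, clicked, k=2))  # (1/2 + 2/2) / 2 = 0.75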
Classification Metrics
- Accuracy
- AUC
- F1 Score
- Precision
- Recall
Quick Start: Fairness Evaluation
# Import binary and multi-class fairness metrics
from jurity.fairness import BinaryFairnessMetrics, MultiClassFairnessMetrics
# Data
binary_predictions = [1, 1, 0, 1, 0, 0]
multi_class_predictions = ["a", "b", "c", "b", "a", "a"]
multi_class_multi_label_predictions = [["a", "b"], ["b", "c"], ["b"], ["a", "b"], ["c", "a"], ["c"]]
is_member = [0, 0, 0, 1, 1, 1]
classes = ["a", "b", "c"]
# Metrics (see also other available metrics)
metric = BinaryFairnessMetrics.StatisticalParity()
multi_metric = MultiClassFairnessMetrics.StatisticalParity(classes)
# Scores
print("Metric:", metric.description)
print("Lower Bound: ", metric.lower_bound)
print("Upper Bound: ", metric.upper_bound)
print("Ideal Value: ", metric.ideal_value)
print("Binary Fairness score: ", metric.get_score(binary_predictions, is_member))
print("Multi-class Fairness scores: ", multi_metric.get_scores(multi_class_predictions, is_member))
print("Multi-class multi-label Fairness scores: ", multi_metric.get_scores(multi_class_multi_label_predictions, is_member))
Quick Start: Bias Mitigation
# Import binary fairness and binary bias mitigation
from jurity.mitigation import BinaryMitigation
from jurity.fairness import BinaryFairnessMetrics
# Data
labels = [1, 1, 0, 1, 0, 0, 1, 0]
predictions = [0, 0, 0, 1, 1, 1, 1, 0]
likelihoods = [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.1]
is_member = [0, 0, 0, 0, 1, 1, 1, 1]
# Bias Mitigation
mitigation = BinaryMitigation.EqualizedOdds()
# Training: Learn mixing rates from the labeled data
mitigation.fit(labels, predictions, likelihoods, is_member)
# Testing: Mitigate bias in predictions
fair_predictions, fair_likelihoods = mitigation.transform(predictions, likelihoods, is_member)
# Scores: Fairness before and after
print("Fairness Metrics Before:", BinaryFairnessMetrics().get_all_scores(labels, predictions, is_member), '\n'+30*'-')
print("Fairness Metrics After:", BinaryFairnessMetrics().get_all_scores(labels, fair_predictions, is_member))
Quick Start: Recommenders Evaluation
# Import recommenders metrics
from jurity.recommenders import BinaryRecoMetrics, RankingRecoMetrics, DiversityRecoMetrics
import pandas as pd
# Data
actual = pd.DataFrame({"user_id": [1, 2, 3, 4], "item_id": [1, 2, 0, 3], "clicks": [0, 1, 0, 0]})
predicted = pd.DataFrame({"user_id": [1, 2, 3, 4], "item_id": [1, 2, 2, 3], "clicks": [0.8, 0.7, 0.8, 0.7]})
item_features = pd.DataFrame({"item_id": [0, 1, 2, 3], "feature1": [1, 2, 2, 1], "feature2": [0.8, 0.7, 0.8, 0.7]})
# Metrics
auc = BinaryRecoMetrics.AUC(click_column="clicks")
ctr = BinaryRecoMetrics.CTR(click_column="clicks")
dr = BinaryRecoMetrics.CTR(click_column="clicks", estimation='dr')
ips = BinaryRecoMetrics.CTR(click_column="clicks", estimation='ips')
map_k = RankingRecoMetrics.MAP(click_column="clicks", k=2)
ndcg_k = RankingRecoMetrics.NDCG(click_column="clicks", k=3)
precision_k = RankingRecoMetrics.Precision(click_column="clicks", k=2)
recall_k = RankingRecoMetrics.Recall(click_column="clicks", k=2)
interlist_diversity_k = DiversityRecoMetrics.InterListDiversity(click_column="clicks", k=2)
intralist_diversity_k = DiversityRecoMetrics.IntraListDiversity(item_features, click_column="clicks", k=2)
# Scores
print("AUC:", auc.get_score(actual, predicted))
print("CTR:", ctr.get_score(actual, predicted))
print("Doubly Robust:", dr.get_score(actual, predicted))
print("IPS:", ips.get_score(actual, predicted))
print("MAP@K:", map_k.get_score(actual, predicted))
print("NCDG:", ncdg_k.get_score(actual, predicted))
print("Precision@K:", precision_k.get_score(actual, predicted))
print("Recall@K:", recall_k.get_score(actual, predicted))
print("Inter-List Diversity@K:", interlist_diversity_k.get_score(actual, predicted))
print("Intra-List Diversity@K:", intralist_diversity_k.get_score(actual, predicted))
Quick Start: Classification Evaluation
# Import classification metrics
from jurity.classification import BinaryClassificationMetrics
# Data
labels = [1, 1, 0, 1, 0, 0, 1, 0]
predictions = [0, 0, 0, 1, 1, 1, 1, 0]
likelihoods = [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.1]
is_member = [0, 0, 0, 0, 1, 1, 1, 1]
# Available: Accuracy, F1, Precision, Recall, and AUC
f1_score = BinaryClassificationMetrics.F1()
print('F1 score is', f1_score.get_score(predictions, labels))
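As a quick sanity check, the same score can be compared against scikit-learn's implementation (assuming scikit-learn is installed):

# Cross-check against scikit-learn's F1 implementation
from sklearn.metrics import f1_score as sk_f1_score
print('F1 (scikit-learn):', sk_f1_score(labels, predictions))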
Installation
Jurity is available to install as pip install jurity. It can also be installed by building from source by following the instructions in our documentation.
Support
Please submit bug reports and feature requests as Issues.
License
Jurity is licensed under the Apache License 2.0.