

Algorithm - Fairness Metrics Toolbox for Classification

Description

  • The Fairness Metrics Toolbox (FMT) for Classification contains a list of fairness metrics that measure how resources (e.g. opportunities, food, loans, medical help) are allocated among demographic groups (e.g. married male, married female) given a set of sensitive features (e.g. gender, marital status). This plugin is developed for classification models. The sketch below illustrates the kind of metric the toolbox computes.
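
For intuition, here is a minimal sketch of one common group fairness metric, the true positive rate (equal opportunity) gap between two groups. It is illustrative only: the records and group labels are made up, and it does not use this plugin's internal API.

# Illustrative only: compute the true-positive-rate gap between two groups.
from collections import defaultdict

# (sensitive_feature, ground_truth, prediction) triples; made-up sample data.
records = [
    ("male", 1, 1), ("male", 1, 0), ("male", 0, 0),
    ("female", 1, 1), ("female", 1, 1), ("female", 0, 1),
]

positives = defaultdict(int)       # ground-truth positives per group
true_positives = defaultdict(int)  # correctly predicted positives per group
for group, truth, pred in records:
    if truth == 1:
        positives[group] += 1
        if pred == 1:
            true_positives[group] += 1

tpr = {group: true_positives[group] / positives[group] for group in positives}
print(tpr)                               # per-group true positive rates
print(abs(tpr["male"] - tpr["female"]))  # equal opportunity gap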

License

  • Licensed under Apache Software License 2.0

Developers

  • AI Verify

Installation

Each test algorithm can now be installed via pip and run individually.

pip install aiverify-fairness-metrics-toolbox-for-classification
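
As an optional sanity check (not part of the documented workflow), you can confirm the package is importable after installation:

# Optional: verify the plugin package was installed into the active environment.
import importlib.util

spec = importlib.util.find_spec("aiverify_fairness_metrics_toolbox_for_classification")
print("installed" if spec is not None else "not installed")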

Example Usage

Run the bash script below to execute the plugin:

#!/bin/bash

root_path="<PATH_TO_FOLDER>/aiverify/stock-plugins/user_defined_files"
python -m aiverify_fairness_metrics_toolbox_for_classification \
  --data_path $root_path/data/sample_mc_toxic_data.sav \
  --model_path $root_path/model/sample_mc_toxic_sklearn_linear.LogisticRegression.sav \
  --ground_truth_path $root_path/data/sample_mc_toxic_data.sav \
  --ground_truth toxic \
  --model_type CLASSIFICATION \
  --sensitive_features_list gender

If the algorithm runs successfully, the results of the test will be saved in an output folder.
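
To inspect the saved results programmatically, the sketch below can be used; it assumes the run writes JSON to output/results.json, which is an assumption and may differ across versions:

# Minimal sketch: load and preview the generated results.
# The path "output/results.json" is an assumption and may differ by version.
import json
from pathlib import Path

results = json.loads((Path("output") / "results.json").read_text())
print(json.dumps(results, indent=2)[:500])  # preview the first 500 characters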

Develop plugin locally

Assuming aiverify-test-engine has already been installed in the virtual environment, run the following bash script to set up the environment, install the plugin, and execute a test:

#!/bin/bash

# setup virtual environment
python3 -m venv .venv
source .venv/bin/activate

# install plugin
cd aiverify/stock-plugins/aiverify.stock.fairness-metrics-toolbox-for-classification/algorithms/fairness_metrics_toolbox_for_classification/
pip install .

python -m aiverify_fairness_metrics_toolbox_for_classification \
  --data_path <data_path> \
  --model_path <model_path> \
  --ground_truth_path <ground_truth_path> \
  --ground_truth <str> \
  --model_type CLASSIFICATION \
  --run_pipeline \
  --sensitive_features_list <list[str]> \
  --annotated_labels_path <annotated_file_path> \
  --file_name_label <str>

Build Plugin

cd aiverify/stock-plugins/aiverify.stock.fairness-metrics-toolbox-for-classification/algorithms/fairness_metrics_toolbox_for_classification/
hatch build

Tests

Pytest is used as the testing framework.

Run the following commands to execute the unit and integration tests inside the tests/ folder:

cd aiverify/stock-plugins/aiverify.stock.fairness-metrics-toolbox-for-classification/algorithms/fairness_metrics_toolbox_for_classification/
pytest .

Run using Docker

In the aiverify root directory, run the command below to build the Docker image:

docker build -t aiverify-fairness-metrics-toolbox-for-classification -f stock-plugins/aiverify.stock.fairness-metrics-toolbox-for-classification/algorithms/fairness_metrics_toolbox_for_classification/Dockerfile .

Run the bash script below to execute the algorithm:

#!/bin/bash
docker run \
  -v $(pwd)/stock-plugins/user_defined_files:/input \
  -v $(pwd)/stock-plugins/aiverify.stock.fairness-metrics-toolbox-for-classification/algorithms/fairness_metrics_toolbox_for_classification/output:/app/aiverify/output \
  aiverify-fairness-metrics-toolbox-for-classification \
  --data_path /input/data/sample_mc_pipeline_toxic_data.sav \
  --model_path /input/model/sample_mc_toxic_sklearn_linear.LogisticRegression.sav \
  --ground_truth_path /input/data/sample_mc_pipeline_toxic_ytest_data.sav \
  --ground_truth toxic \
  --model_type CLASSIFICATION \
  --sensitive_features_list gender

If the algorithm runs successfully, the results of the test will be saved in an output folder in the algorithm directory.

Tests

Pytest is used as the testing framework.

Run the following command to execute the unit and integration tests inside the tests/ folder:

docker run \
  --entrypoint python3 \
  -w /app/aiverify/stock-plugins/aiverify.stock.fairness-metrics-toolbox-for-classification/algorithms/fairness_metrics_toolbox_for_classification \
  aiverify-fairness-metrics-toolbox-for-classification \
  -m pytest .


