Project description

Algorithm - Fairness Metrics Toolbox for Classification

Description

  • The Fairness Metrics Toolbox (FMT) for Classification contains a list of fairness metrics to measure how resources (e.g. opportunities, food, loan, medical help) are allocated among the demographic groups (e.g. married male, married female) given a set of sensitive feature(s) (e.g. gender, marital status). This plugin is developed for classification models.
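
To illustrate the kind of measurement such metrics perform, here is a minimal, self-contained sketch of one common fairness metric, demographic parity difference (the gap in positive-prediction rates between demographic groups). This is illustrative only and is not the toolbox's implementation; the function and variable names are our own.

```python
def demographic_parity_difference(y_pred, sensitive):
    """Gap between the highest and lowest positive-prediction
    rate across the groups defined by a sensitive feature."""
    rates = []
    for group in set(sensitive):
        preds = [p for p, s in zip(y_pred, sensitive) if s == group]
        rates.append(sum(preds) / len(preds))
    return max(rates) - min(rates)

# Toy example: the model approves 50% of one group and 100% of the other
y_pred = [1, 0, 1, 1]  # 1 = positive outcome (e.g. loan approved)
gender = ["male", "male", "female", "female"]
print(demographic_parity_difference(y_pred, gender))  # 0.5
```

A value of 0 means all groups receive positive predictions at the same rate; the toolbox reports a range of such metrics rather than this single one.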

License

  • Licensed under Apache Software License 2.0

Developers:

  • AI Verify

Installation

Each test algorithm can now be installed via pip and run individually.

pip install aiverify-fairness-metrics-toolbox-for-classification
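
After installation, you can confirm the package is importable before invoking it. This small check uses only the Python standard library; the module name is taken from the usage examples in this README.

```python
import importlib.util

def is_installed(module_name: str) -> bool:
    """Return True if the named module can be found on sys.path."""
    return importlib.util.find_spec(module_name) is not None

if is_installed("aiverify_fairness_metrics_toolbox_for_classification"):
    print("plugin installed")
else:
    print("plugin not found - run the pip install command above")
```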

Example Usage:

Run the bash script below to execute the plugin:

#!/bin/bash

root_path="<PATH_TO_FOLDER>/aiverify/stock-plugins/user_defined_files"
python -m aiverify_fairness_metrics_toolbox_for_classification \
  --data_path $root_path/data/sample_mc_toxic_data.sav \
  --model_path $root_path/model/sample_mc_toxic_sklearn_linear.LogisticRegression.sav \
  --ground_truth_path $root_path/data/sample_mc_toxic_data.sav \
  --ground_truth toxic \
  --model_type CLASSIFICATION \
  --sensitive_features_list gender

If the algorithm runs successfully, the results of the test will be saved in an output folder.
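
The exact layout of the output is not documented here, but if the run writes JSON result files, a small helper can collect them for inspection. The output directory name follows the sentence above; the assumption that results are JSON files is ours.

```python
import json
from pathlib import Path

def load_results(output_dir: str) -> dict:
    """Parse every .json file in the output directory into a dict
    keyed by file name. Returns an empty dict if none are found."""
    results = {}
    for path in Path(output_dir).glob("*.json"):
        results[path.name] = json.loads(path.read_text())
    return results

for name, content in load_results("output").items():
    print(name, list(content)[:5])  # show the top-level keys of each result
```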

Develop plugin locally

Assuming aiverify-test-engine has already been installed in the virtual environment, run the following bash script to install the plugin and execute a test:

#!/bin/bash

# setup virtual environment
python3 -m venv .venv
source .venv/bin/activate

# install plugin
cd aiverify/stock-plugins/aiverify.stock.fairness-metrics-toolbox-for-classification/algorithms/fairness_metrics_toolbox_for_classification/
pip install .

python -m aiverify_fairness_metrics_toolbox_for_classification \
  --data_path <data_path> \
  --model_path <model_path> \
  --ground_truth_path <ground_truth_path> \
  --ground_truth <str> \
  --model_type CLASSIFICATION \
  --run_pipeline \
  --sensitive_features_list <list[str]> \
  --annotated_labels_path <annotated_file_path> \
  --file_name_label <str>

Build Plugin

cd aiverify/stock-plugins/aiverify.stock.fairness-metrics-toolbox-for-classification/algorithms/fairness_metrics_toolbox_for_classification/
hatch build

Tests

Pytest is used as the testing framework.

Run the following commands to execute the unit and integration tests in the tests/ folder:

cd aiverify/stock-plugins/aiverify.stock.fairness-metrics-toolbox-for-classification/algorithms/fairness_metrics_toolbox_for_classification/
pytest .

Run using Docker

In the aiverify root directory, run the command below to build the Docker image:

docker build -t aiverify-fairness-metrics-toolbox-for-classification -f stock-plugins/aiverify.stock.fairness-metrics-toolbox-for-classification/algorithms/fairness_metrics_toolbox_for_classification/Dockerfile .

Run the bash script below to run the algorithm:

#!/bin/bash
docker run \
  -v $(pwd)/stock-plugins/user_defined_files:/input \
  -v $(pwd)/stock-plugins/aiverify.stock.fairness-metrics-toolbox-for-classification/algorithms/fairness_metrics_toolbox_for_classification/output:/app/aiverify/output \
  aiverify-fairness-metrics-toolbox-for-classification \
  --data_path /input/data/sample_mc_pipeline_toxic_data.sav \
  --model_path /input/model/sample_mc_toxic_sklearn_linear.LogisticRegression.sav \
  --ground_truth_path /input/data/sample_mc_pipeline_toxic_ytest_data.sav \
  --ground_truth toxic \
  --model_type CLASSIFICATION \
  --sensitive_features_list gender

If the algorithm runs successfully, the results of the test will be saved in an output folder in the algorithm directory.

Tests

Pytest is used as the testing framework.

Run the following command to execute the unit and integration tests in the tests/ folder:

docker run \
  --entrypoint python3 \
  -w /app/aiverify/stock-plugins/aiverify.stock.fairness-metrics-toolbox-for-classification/algorithms/fairness_metrics_toolbox_for_classification \
  aiverify-fairness-metrics-toolbox-for-classification \
  -m pytest .


File details

Hashes for the source distribution aiverify_fairness_metrics_toolbox_for_classification-2.2.1.tar.gz:

  SHA256:      ee69934e983912f656fc9fe19beb6184d4c5b7548820ad1cf59b53fc2761bae7
  MD5:         00e29544e930ec74a1bb27dc660b351c
  BLAKE2b-256: 56c794b59fcbe94f8d657705fffd58137bb336243841036cd9e94440bddf2299

Hashes for the built distribution aiverify_fairness_metrics_toolbox_for_classification-2.2.1-py3-none-any.whl:

  SHA256:      3c793595f5f31768ee26b788db67180c9a2fe714f638011d6dc703f7c8230197
  MD5:         955a74f500c09ba1c18677f554c53c8c
  BLAKE2b-256: 6884ab65f248a9c4bd74e875a00b48beb75790570480782989f2c1c1e1b4f55c
