Algorithm - Fairness Metrics Toolbox for Classification
Description
- The Fairness Metrics Toolbox (FMT) for Classification contains a list of fairness metrics that measure how resources (e.g. opportunities, food, loans, medical help) are allocated among demographic groups (e.g. married male, married female) given a set of sensitive features (e.g. gender, marital status). This plugin is developed for classification models.
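To illustrate what such a metric measures, the short sketch below computes the demographic parity difference (the gap in selection rates between groups) by hand on toy predictions. This is only an illustration of the concept; the data and column names are made up, and it is not the toolbox's internal implementation.
import pandas as pd

# Toy binary-classifier predictions with a sensitive feature column.
# All names here are illustrative.
df = pd.DataFrame({
    "gender": ["male", "male", "female", "female", "female", "male"],
    "y_true": [1, 0, 1, 1, 0, 1],
    "y_pred": [1, 0, 0, 1, 0, 1],
})

# Selection rate per group: P(y_pred = 1 | group)
selection_rate = df.groupby("gender")["y_pred"].mean()

# Demographic parity difference: gap between the highest and lowest selection rate
dp_difference = selection_rate.max() - selection_rate.min()
print(selection_rate.to_dict())
print(f"demographic parity difference: {dp_difference:.2f}")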
License
- Licensed under Apache Software License 2.0
Developers:
- AI Verify
Installation
Each test algorithm can now be installed via pip and run individually.
pip install aiverify-fairness-metrics-toolbox-for-classification==2.0.0a1
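As a quick post-install check, you can query the installed distribution's metadata from Python (a minimal sketch; the printed version simply reflects whichever release you installed).
from importlib.metadata import version

# Prints the installed plugin version, e.g. 2.0.0a1 for the pre-release above.
print(version("aiverify-fairness-metrics-toolbox-for-classification"))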
Example Usage:
Run the bash script below to execute the plugin:
#!/bin/bash
root_path="<PATH_TO_FOLDER>/aiverify/stock-plugins/user_defined_files"
python -m aiverify_fairness_metrics_toolbox_for_classification \
--data_path $root_path/data/sample_mc_pipeline_toxic_data.sav \
--model_path $root_path/pipeline/mc_tabular_toxic \
--ground_truth_path $root_path/data/sample_mc_pipeline_toxic_ytest_data.sav \
--ground_truth toxic \
--model_type CLASSIFICATION \
--run_pipeline \
--sensitive_features_list gender
If the algorithm runs successfully, the results of the test will be saved in an output folder.
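One way to inspect that output programmatically is sketched below. It assumes the run writes its results as one or more JSON files into an output/ folder; check the actual contents of the folder produced by your run.
import json
from pathlib import Path

# Assumption: the plugin run wrote JSON result files into ./output.
output_dir = Path("output")
for result_file in sorted(output_dir.glob("*.json")):
    with result_file.open() as f:
        results = json.load(f)
    print(result_file.name, list(results)[:5])  # show the top-level keys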
Develop plugin locally
Execute the bash script below in the project root
#!/bin/bash
# set up a virtual environment
python3 -m venv .venv
source .venv/bin/activate
# navigate to the algorithm directory
cd aiverify/stock-plugins/aiverify.stock.fairness-metrics-toolbox-for-classification/algorithms/fairness_metrics_toolbox_for_classification/
# install the plugin in editable mode with its development dependencies (includes aiverify-test-engine)
pip install -e '.[dev]'
# execute the plugin
python -m aiverify_fairness_metrics_toolbox_for_classification \
--data_path <data_path> \
--model_path <model_path> \
--ground_truth_path <ground_truth_path> \
--ground_truth <str> \
--model_type CLASSIFICATION \
--run_pipeline \
--sensitive_features_list <list[str]> \
--annotated_labels_path <annotated_file_path> \
--file_name_label <str>
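If you want to try the CLI on your own tabular data, one possible approach is to serialise pandas DataFrames with pickle, mirroring the bundled sample_*.sav files. This is a sketch under that assumption (whether pickled DataFrames are accepted depends on the data-reader support in aiverify-test-engine), and every column and file name below is hypothetical.
import pickle
import pandas as pd

# Hypothetical feature data with a sensitive attribute column ("gender"),
# plus a separate ground-truth frame with the label column ("approved").
features = pd.DataFrame({
    "age": [25, 40, 33, 58],
    "income": [30000, 52000, 47000, 61000],
    "gender": ["male", "female", "female", "male"],
})
ground_truth = pd.DataFrame({"approved": [1, 0, 1, 1]})

with open("my_test_data.sav", "wb") as f:
    pickle.dump(features, f)
with open("my_ground_truth.sav", "wb") as f:
    pickle.dump(ground_truth, f)
These files could then be passed as --data_path and --ground_truth_path (with --ground_truth approved and --sensitive_features_list gender), provided your model or pipeline expects matching columns.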
Build Plugin
cd aiverify/stock-plugins/aiverify.stock.fairness-metrics-toolbox-for-classification/algorithms/fairness_metrics_toolbox_for_classification/
hatch build
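hatch build writes a source distribution and a wheel (by default under dist/); either can be installed with pip for local verification or published to a package index, subject to your hatch configuration.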
Tests
Pytest is used as the testing framework.
Run the following commands to execute the unit and integration tests inside the tests/ folder:
cd aiverify/stock-plugins/aiverify.stock.fairness-metrics-toolbox-for-classification/algorithms/fairness_metrics_toolbox_for_classification/
pytest .
Run using Docker
In the aiverify root directory, run the command below to build the Docker image:
docker build -t aiverify-fairness-metrics-toolbox-for-classification:v2.0.0a1 -f stock-plugins/aiverify.stock.fairness-metrics-toolbox-for-classification/algorithms/fairness_metrics_toolbox_for_classification/Dockerfile .
Run the bash script below to execute the algorithm:
#!/bin/bash
docker run \
-v $(pwd)/stock-plugins/user_defined_files:/input \
-v $(pwd)/output:/app/aiverify/output \
aiverify-fairness-metrics-toolbox-for-classification:v2.0.0a1 \
--data_path /input/data/sample_mc_pipeline_toxic_data.sav \
--model_path /input/pipeline/mc_tabular_toxic \
--ground_truth_path /input/data/sample_mc_pipeline_toxic_ytest_data.sav \
--ground_truth toxic \
--model_type CLASSIFICATION \
--run_pipeline \
--sensitive_features_list gender
If the algorithm runs successfully, the results of the test will be saved in an output folder in the working directory.
Tests
Pytest is used as the testing framework.
Run the following command to execute the unit and integration tests inside the tests/ folder:
docker run --entrypoint python3 aiverify-fairness-metrics-toolbox-for-classification:v2.0.0a1 -m pytest .
File details
Details for the file aiverify_fairness_metrics_toolbox_for_classification-2.0.0a1.tar.gz
File metadata
- Download URL: aiverify_fairness_metrics_toolbox_for_classification-2.0.0a1.tar.gz
- Upload date:
- Size: 23.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: python-httpx/0.27.0
File hashes
Algorithm | Hash digest
---|---
SHA256 | 4aa6236a92101d7547bd0166ba40ce9a22b9ca9288e3da12de80278705f41e13
MD5 | a3b70c534ec9a1f3d61873d98bdc6850
BLAKE2b-256 | b87a9ea1a4613f11ad3e097934a66e6faa650c31b0ffb13a4d1ba4115905f21b
File details
Details for the file aiverify_fairness_metrics_toolbox_for_classification-2.0.0a1-py3-none-any.whl
File metadata
- Download URL: aiverify_fairness_metrics_toolbox_for_classification-2.0.0a1-py3-none-any.whl
- Upload date:
- Size: 25.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: python-httpx/0.27.0
File hashes
Algorithm | Hash digest
---|---
SHA256 | 73024554df852a5b39f814e58bdbfc8cdb790bd52ad26e008df6bc303b1d338b
MD5 | f85d5cd14cceb4fb9eaf071443c33dea
BLAKE2b-256 | 0acfef9fe0b07f6fc5820e097fcc0017eb0404c786c68d64d554af55791ec1e0