
The AI Verify Robustness Toolbox generates a perturbed dataset by running the boundary attack algorithm on the test dataset.

Project description

Algorithm - Robustness Toolbox

License

  • Licensed under Apache Software License 2.0

Developers:

  • AI Verify

Installation

Each test algorithm can now be installed via pip and run individually.

pip install aiverify-robustness-toolbox
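To confirm the install succeeded, a quick check is to ask the import system whether the module is visible. This is a minimal sketch, not an official verification step; `is_installed` is a helper name introduced here for illustration:

```python
import importlib.util

def is_installed(module_name):
    """Return True if the named module can be found by the import system."""
    return importlib.util.find_spec(module_name) is not None

# After `pip install aiverify-robustness-toolbox`, this should report True.
print(is_installed("aiverify_robustness_toolbox"))
```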

Example Usage

The robustness plugin supports testing of tabular and image data.

Run the following bash script to execute the plugin on tabular data:

#!/bin/bash

root_path="<PATH_TO_FOLDER>/aiverify/stock-plugins/user_defined_files"
python -m aiverify_robustness_toolbox \
  --data_path $root_path/data/sample_bc_pipeline_credit_data.sav \
  --model_path $root_path/pipeline/bc_tabular_credit \
  --ground_truth_path $root_path/data/sample_bc_pipeline_credit_ytest_data.sav \
  --ground_truth default \
  --model_type CLASSIFICATION \
  --run_pipeline

Run the following bash script to execute the plugin on image data:

#!/bin/bash

root_path="<PATH_TO_FOLDER>/aiverify/stock-plugins/user_defined_files"
python -m aiverify_robustness_toolbox \
  --data_path $root_path/data/raw_fashion_image_10/0.png \
  --model_path $root_path/pipeline/mc_image_fashion \
  --ground_truth_path $root_path/data/pickle_pandas_fashion_mnist_annotated_labels_10.sav \
  --ground_truth label \
  --model_type CLASSIFICATION \
  --run_pipeline \
  --annotated_ground_truth_path $root_path/data/pickle_pandas_fashion_mnist_annotated_labels_10.sav \
  --file_name_label file_name

If the algorithm runs successfully, the results of the test will be saved in an output folder.
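For context, the boundary attack only queries the model's predicted labels; it never needs gradients. The sketch below illustrates the core idea on a toy 1-D classifier by binary-searching along the segment between a clean point and an adversarial starting point, keeping the candidate misclassified while shrinking the perturbation. This is a conceptual illustration only, not the toolbox's actual implementation or model interface:

```python
def predict(x):
    """Toy binary classifier: class 1 iff x > 0.5 (stand-in for a real model)."""
    return 1 if x > 0.5 else 0

def boundary_attack_1d(x_orig, x_adv, steps=50):
    """Walk an adversarial point toward the original input while it stays
    misclassified, via binary search along the connecting segment."""
    target = predict(x_orig)
    lo, hi = x_orig, x_adv          # invariant: lo is clean, hi is adversarial
    for _ in range(steps):
        mid = (lo + hi) / 2
        if predict(mid) != target:  # still adversarial: tighten toward original
            hi = mid
        else:
            lo = mid
    return hi                       # minimally perturbed adversarial point

x_perturbed = boundary_attack_1d(x_orig=0.2, x_adv=0.9)
```

The real attack operates in high-dimensional input space and adds random orthogonal steps, but the label-only, walk-toward-the-boundary principle is the same.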

Develop plugin locally

Assuming aiverify-test-engine has already been installed in the virtual environment, run the following bash script to install the plugin and execute a test:

#!/bin/bash

# setup virtual environment
python3 -m venv .venv
source .venv/bin/activate

# install plugin
cd aiverify/stock-plugins/aiverify.stock.robustness-toolbox/algorithms/robustness_toolbox
pip install .

python -m aiverify_robustness_toolbox \
  --data_path <data_path> \
  --model_path <model_path> \
  --ground_truth_path <ground_truth_path> \
  --ground_truth <str> \
  --model_type CLASSIFICATION \
  --run_pipeline \
  --annotated_ground_truth_path <str> \
  --file_name_label <str>
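When invoking the plugin from another Python script, it can be convenient to assemble the argument list programmatically. The helper below is a hypothetical convenience introduced here, not part of the toolbox's API; it simply maps a config dict onto the CLI flags shown above:

```python
def build_cli_args(config):
    """Build the argv list for aiverify_robustness_toolbox from a dict.
    Boolean True values become bare flags (e.g. --run_pipeline)."""
    args = ["python", "-m", "aiverify_robustness_toolbox"]
    for key, value in config.items():
        if value is True:
            args.append(f"--{key}")
        elif value is not None:
            args.extend([f"--{key}", str(value)])
    return args

argv = build_cli_args({
    "data_path": "data/sample.sav",
    "model_path": "pipeline/model",
    "model_type": "CLASSIFICATION",
    "run_pipeline": True,
})
```

The resulting list can then be passed to `subprocess.run(argv)`.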

Build Plugin

cd aiverify/stock-plugins/aiverify.stock.robustness-toolbox/algorithms/robustness_toolbox
hatch build

Tests

Pytest is used as the testing framework.

Run the following commands to execute the unit and integration tests inside the tests/ folder:

cd aiverify/stock-plugins/aiverify.stock.robustness-toolbox/algorithms/robustness_toolbox
pytest .

Run using Docker

In the aiverify root directory, run the command below to build the Docker image:

docker build -t aiverify-robustness-toolbox -f stock-plugins/aiverify.stock.robustness-toolbox/algorithms/robustness_toolbox/Dockerfile .

Run the following bash script to run the algorithm:

#!/bin/bash
docker run \
  -v $(pwd)/stock-plugins/user_defined_files:/input \
  -v $(pwd)/stock-plugins/aiverify.stock.robustness-toolbox/algorithms/robustness_toolbox/output:/app/aiverify/output \
  aiverify-robustness-toolbox \
  --data_path /input/data/raw_fashion_image_10 \
  --model_path /input/pipeline/mc_image_fashion \
  --ground_truth_path /input/data/pickle_pandas_fashion_mnist_annotated_labels_10.sav \
  --ground_truth label \
  --model_type CLASSIFICATION \
  --run_pipeline \
  --annotated_ground_truth_path /input/data/pickle_pandas_fashion_mnist_annotated_labels_10.sav \
  --file_name_label file_name

If the algorithm runs successfully, the results of the test will be saved in an output folder in the algorithm directory.

Tests

Pytest is used as the testing framework.

Run the following command to execute the unit and integration tests inside the tests/ folder:

docker run \
  --entrypoint python3 \
  -w /app/aiverify/stock-plugins/aiverify.stock.robustness-toolbox/algorithms/robustness_toolbox \
  aiverify-robustness-toolbox \
  -m pytest .

