AI Verify Robustness Toolbox generates a perturbed dataset by running the boundary attack algorithm on the test dataset.
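To illustrate the idea behind a decision-based boundary attack, here is a minimal, self-contained sketch. It is not the toolbox's implementation: the toy `predict` classifier, the step sizes, and the acceptance loop are all illustrative assumptions. Starting from an already-misclassified point, the attack repeatedly perturbs it and contracts toward the original sample, keeping only candidates that stay misclassified:

```python
import numpy as np

# Toy binary classifier standing in for a real model: class 1 iff x0 + x1 > 0.
def predict(x):
    return int(x[0] + x[1] > 0)

def boundary_attack(x_orig, x_adv, steps=200, seed=0):
    """Minimal boundary-attack sketch (decision-based: uses labels only).

    x_orig: the correctly classified sample to attack.
    x_adv:  a starting point that is already misclassified.
    Each iteration adds noise scaled to the current distance, contracts
    toward x_orig, and accepts the candidate only if it stays adversarial.
    """
    rng = np.random.default_rng(seed)
    y_orig = predict(x_orig)
    for _ in range(steps):
        dist = np.linalg.norm(x_orig - x_adv)
        candidate = x_adv + 0.1 * dist * rng.standard_normal(x_adv.shape)
        candidate = candidate + 0.05 * (x_orig - candidate)  # contract inward
        if predict(candidate) != y_orig:  # still adversarial: accept the step
            x_adv = candidate
    return x_adv

x_orig = np.array([1.0, 1.0])    # classified as 1
x_adv0 = np.array([-3.0, -3.0])  # classified as 0, used as the adversarial start
x_adv = boundary_attack(x_orig, x_adv0)
```

The result is a point that is still misclassified but lies much closer to the original sample, i.e. near the model's decision boundary; the toolbox applies the same principle to produce its perturbed dataset.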

Project description

Algorithm - Robustness Toolbox

License

  • Licensed under Apache Software License 2.0

Developers:

  • AI Verify

Installation

Each test algorithm can now be installed via pip and run individually.

pip install aiverify-robustness-toolbox

Example Usage:

The robustness plugin supports testing of tabular and image data.

Run the following bash script to execute the plugin on tabular data:

#!/bin/bash

root_path="<PATH_TO_FOLDER>/aiverify/stock-plugins/user_defined_files"
python -m aiverify_robustness_toolbox \
  --data_path $root_path/data/sample_bc_pipeline_credit_data.sav \
  --model_path $root_path/pipeline/bc_tabular_credit \
  --ground_truth_path $root_path/data/sample_bc_pipeline_credit_ytest_data.sav \
  --ground_truth default \
  --model_type CLASSIFICATION \
  --run_pipeline

Run the following bash script to execute the plugin on image data:

#!/bin/bash

root_path="<PATH_TO_FOLDER>/aiverify/stock-plugins/user_defined_files"
python -m aiverify_robustness_toolbox \
  --data_path $root_path/data/raw_fashion_image_10/0.png \
  --model_path $root_path/pipeline/mc_image_fashion \
  --ground_truth_path $root_path/data/pickle_pandas_fashion_mnist_annotated_labels_10.sav \
  --ground_truth label \
  --model_type CLASSIFICATION \
  --run_pipeline \
  --annotated_ground_truth_path $root_path/data/pickle_pandas_fashion_mnist_annotated_labels_10.sav \
  --file_name_label file_name

If the algorithm runs successfully, the results of the test will be saved in an output folder.
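The snippet below shows one way to read such a results file back into Python for further processing. The file name `results.json` and the keys used here are purely hypothetical (the actual name and schema depend on the plugin version), so the snippet writes a stand-in file first to stay self-contained:

```python
import json
from pathlib import Path

output_dir = Path("output")
output_dir.mkdir(exist_ok=True)

# Stand-in results file; real output name and schema may differ.
sample = {"gen_info": {"status": "success"}, "results": {"num_perturbed": 10}}
(output_dir / "results.json").write_text(json.dumps(sample))

# Load the results back for inspection.
results = json.loads((output_dir / "results.json").read_text())
print(results["gen_info"]["status"])
```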

Develop plugin locally

Assuming aiverify-test-engine has already been installed in the virtual environment, run the following bash script to install the plugin and execute a test:

#!/bin/bash

# setup virtual environment
python3 -m venv .venv
source .venv/bin/activate

# install plugin
cd aiverify/stock-plugins/aiverify.stock.robustness-toolbox/algorithms/robustness_toolbox
pip install .

python -m aiverify_robustness_toolbox \
  --data_path <data_path> \
  --model_path <model_path> \
  --ground_truth_path <ground_truth_path> \
  --ground_truth <str> \
  --model_type CLASSIFICATION \
  --run_pipeline \
  --annotated_ground_truth_path <str> \
  --file_name_label <str>

Build Plugin

cd aiverify/stock-plugins/aiverify.stock.robustness-toolbox/algorithms/robustness_toolbox
hatch build

Tests

Pytest is used as the testing framework.

Run the following commands to execute the unit and integration tests inside the tests/ folder:

cd aiverify/stock-plugins/aiverify.stock.robustness-toolbox/algorithms/robustness_toolbox
pytest .

Run using Docker

In the aiverify root directory, run the command below to build the Docker image:

docker build -t aiverify-robustness-toolbox -f stock-plugins/aiverify.stock.robustness-toolbox/algorithms/robustness_toolbox/Dockerfile .

Run the bash script below to execute the algorithm:

#!/bin/bash
docker run \
  -v $(pwd)/stock-plugins/user_defined_files:/input \
  -v $(pwd)/stock-plugins/aiverify.stock.robustness-toolbox/algorithms/robustness_toolbox/output:/app/aiverify/output \
  aiverify-robustness-toolbox \
  --data_path /input/data/raw_fashion_image_10 \
  --model_path /input/pipeline/mc_image_fashion \
  --ground_truth_path /input/data/pickle_pandas_fashion_mnist_annotated_labels_10.sav \
  --ground_truth label \
  --model_type CLASSIFICATION \
  --run_pipeline \
  --annotated_ground_truth_path /input/data/pickle_pandas_fashion_mnist_annotated_labels_10.sav \
  --file_name_label file_name

If the algorithm runs successfully, the results of the test will be saved in an output folder in the algorithm directory.

Tests

Pytest is used as the testing framework.

Run the following command to execute the unit and integration tests inside the tests/ folder:

docker run \
  --entrypoint python3 \
  -w /app/aiverify/stock-plugins/aiverify.stock.robustness-toolbox/algorithms/robustness_toolbox \
  aiverify-robustness-toolbox \
  -m pytest .
