Part of AI Verify image corruption toolbox. This package includes algorithms that add environmental corruptions (rain, fog and snow) to images at different severity levels, to test the robustness of machine learning models.
Algorithm - Environment Corruptions
Description
- Robustness plugin with environment corruptions
License
- Licensed under Apache Software License 2.0
Developers
- AI Verify
Installation
Each test algorithm can be installed via pip and run individually.
pip install aiverify-environment-corruptions
Example Usage
Run the following bash script to execute the plugin
#!/bin/bash
root_path="<PATH_TO_FOLDER>/aiverify/stock-plugins/user_defined_files"
python -m aiverify_environment_corruptions \
--data_path $root_path/data/raw_fashion_image_10 \
--model_path $root_path/pipeline/multiclass_classification_image_mnist_fashion \
--model_type CLASSIFICATION \
--ground_truth_path $root_path/data/pickle_pandas_fashion_mnist_annotated_labels_10.sav \
--ground_truth label \
--file_name_label file_name \
--set_seed 10
If the algorithm runs successfully, the results of the test will be saved in an output
folder.
Including Specific Corruptions
Usage
By default, all corruption functions are applied. You can use the --corruptions
flag to specify which functions to run.
--corruptions [FUNCTION_NAME ...]
Options
- all -> Runs all environment corruption functions (default)
- snow
- fog
- rain
Example: Applying only Snow and Rain corruptions
#!/bin/bash
root_path="<PATH_TO_FOLDER>/aiverify/stock-plugins/user_defined_files"
python -m aiverify_environment_corruptions \
--data_path $root_path/data/raw_fashion_image_10 \
--model_path $root_path/pipeline/multiclass_classification_image_mnist_fashion \
--model_type CLASSIFICATION \
--ground_truth_path $root_path/data/pickle_pandas_fashion_mnist_annotated_labels_10.sav \
--ground_truth label \
--file_name_label file_name \
--set_seed 10 \
--corruptions snow rain
Customizing Parameters
To fine-tune the corruption parameters, use the Environment Corruption Playground Notebook. This notebook allows you to:
✅ Visualize the effects of different corruption functions.
✅ Experiment with different parameter values.
✅ Apply custom values in the CLI using flags like:
#!/bin/bash
root_path="<PATH_TO_FOLDER>/aiverify/stock-plugins/user_defined_files"
python -m aiverify_environment_corruptions \
--data_path $root_path/data/raw_fashion_image_10 \
--model_path $root_path/pipeline/multiclass_classification_image_mnist_fashion \
--model_type CLASSIFICATION \
--ground_truth_path $root_path/data/pickle_pandas_fashion_mnist_annotated_labels_10.sav \
--ground_truth label \
--file_name_label file_name \
--set_seed 10 \
--snow_intensity 1.0 2.0 3.0
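Flags such as --snow_intensity accept several values at once. A plausible reading (an assumption, not taken from the plugin's source) is that each value yields one corrupted variant of the dataset, evaluated independently:

```python
# Assumed semantics: one evaluation run per supplied intensity value.
intensities = [1.0, 2.0, 3.0]
runs = [{"corruption": "snow", "snow_intensity": v} for v in intensities]
# Three runs, one per intensity value.
```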
PyTorch support
To use a custom PyTorch model with this plugin, follow the steps below:

1. Install PyTorch
   Ensure you have installed a PyTorch version compatible with your model. Visit the PyTorch website for installation instructions.

2. Specify Model Path
   Use the --model_path command-line argument to specify the path to a folder containing:
   - The model class definition (e.g., model.py).
   - The model weights file (e.g., model_weights.pt).

3. Implement a predict Function
   Your model class must implement a predict function. This function should:
   - Accept a batch of image file paths as input.
   - Return a batch of predictions.

   For reference, see the sample implementation in user_defined_files/pipeline/sample_fashion_mnist_pytorch.
Example Directory Structure
<model_path>/
├── model.py # Contains the model class definition
├── model_weights.pt # Contains the trained model weights
Example predict Function
# model.py
from typing import Iterable
import numpy as np
import torch
from PIL import Image
from torchvision import transforms
class CustomModel(torch.nn.Module):
def __init__(self):
super().__init__()
# Define your model architecture here
...
def forward(self, x):
# Define the forward pass
...
def predict(self, image_paths: Iterable[str]) -> np.ndarray:
transform = transforms.Compose([
transforms.Resize((224, 224)),
...,
transforms.ToTensor(),
])
images = [Image.open(path).convert("RGB") for path in image_paths]
image_tensors = torch.stack([transform(image) for image in images])
self.eval()
with torch.no_grad():
predictions = self(image_tensors).argmax(dim=1).detach().cpu().numpy()
return predictions
By following these steps, you can integrate your custom PyTorch model into the corruption plugin.
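The predict contract can be exercised without torch or real images. The torch-free stand-in below (DummyModel is hypothetical, for shape-checking only) shows the expected input and output shapes:

```python
from typing import Iterable

import numpy as np

class DummyModel:
    """Stand-in illustrating the predict() contract the plugin expects.

    A real model would load the images and run a forward pass; this one
    just returns a fixed class label per input so the shapes are visible.
    """

    def predict(self, image_paths: Iterable[str]) -> np.ndarray:
        paths = list(image_paths)
        # One integer class label per input path, as a 1-D numpy array.
        return np.zeros(len(paths), dtype=np.int64)

model = DummyModel()
preds = model.predict(["a.png", "b.png", "c.png"])
# preds is a length-3 array of integer labels
```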
Develop plugin locally
Execute the following bash script in the project root:
#!/bin/bash
# setup virtual environment
python -m venv .venv
source .venv/bin/activate
# install plugin
cd aiverify/stock-plugins/aiverify.stock.image-corruption-toolbox/algorithms/environment_corruptions/
pip install .
python -m aiverify_environment_corruptions --data_path <data_path> --model_path <model_path> --model_type CLASSIFICATION --ground_truth_path <ground_truth_path> --ground_truth <str> --file_name_label <str> --set_seed <int>
Build Plugin
cd aiverify/stock-plugins/aiverify.stock.image-corruption-toolbox/algorithms/environment_corruptions/
hatch build
Tests
Run the following steps to execute the unit and integration tests inside the tests/
folder
cd aiverify/stock-plugins/aiverify.stock.image-corruption-toolbox/algorithms/environment_corruptions/
pytest .
Run using Docker
In the aiverify root directory, run the command below to build the Docker image:
docker build -t aiverify-environment-corruptions -f stock-plugins/aiverify.stock.image-corruption-toolbox/algorithms/environment_corruptions/Dockerfile .
Run the following bash script to run the algorithm:
#!/bin/bash
docker run \
-v $(pwd)/stock-plugins/user_defined_files:/input \
-v $(pwd)/stock-plugins/aiverify.stock.image-corruption-toolbox/algorithms/environment_corruptions/output:/app/aiverify/output \
aiverify-environment-corruptions \
--data_path /input/data/raw_fashion_image_10 \
--model_path /input/pipeline/multiclass_classification_image_mnist_fashion \
--model_type CLASSIFICATION \
--ground_truth_path /input/data/pickle_pandas_fashion_mnist_annotated_labels_10.sav \
--ground_truth label \
--file_name_label file_name \
--set_seed 10
If the algorithm runs successfully, the results of the test will be saved in an output
folder in the algorithm directory.
Tests
Run the following steps to execute the unit and integration tests inside the tests/
folder
docker run --entrypoint python3 aiverify-environment-corruptions -m pytest .
File details
Details for the file aiverify_environment_corruptions-2.0.0.tar.gz.
File metadata
- Download URL: aiverify_environment_corruptions-2.0.0.tar.gz
- Upload date:
- Size: 21.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: python-httpx/0.28.1
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 73c1460404ac4881f23018f6bf8a7274425df1d6d7e620ba38cae5bf9f214288 |
| MD5 | 395eef8c5a7aa9405f6bae19a83e117f |
| BLAKE2b-256 | ca41f058b7e3c025f589e684dc3bc93b38b4ab2fa9c233631b1c9a567b0f5b8f |
File details
Details for the file aiverify_environment_corruptions-2.0.0-py3-none-any.whl.
File metadata
- Download URL: aiverify_environment_corruptions-2.0.0-py3-none-any.whl
- Upload date:
- Size: 21.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: python-httpx/0.28.1
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | dc0f2ffcd5e2091cb5e4a4876759648ea28be3e38074a06a41e0d5c8ef884dcc |
| MD5 | f7e129c7235469a7d2edd5ba4a6ce5d4 |
| BLAKE2b-256 | 027b2d3a38e8768b99bdc9bf4bbef00e8f81ec04e7327dcd2f3eae8fa0298708 |
BLAKE2b-256 | 027b2d3a38e8768b99bdc9bf4bbef00e8f81ec04e7327dcd2f3eae8fa0298708 |