
Adversarial Insight ML (AIML)


“Why does your machine lie?”

Adversarial Insight ML (AIML) is a Python package that evaluates the robustness of image classification models against adversarial attacks. AIML automatically tests your models against generated adversarial examples and reports precise, insightful feedback based on a carefully chosen set of attack methods. Furthermore, AIML aims to be straightforward and beginner-friendly so that non-technical users can take full advantage of its functionality.

For more information, you can also visit the PyPI page and the documentation page.

Table of Contents

  • Installation
  • Usage
  • Features
  • Contributing
  • License
  • Acknowledgements
  • Contacts

Installation

To install Adversarial Insight ML, you can use pip:

pip install adversarial-insight-ml

Usage

Here's a simple overview of the usage of our package:

[Overview diagram]

You can evaluate your model with the evaluate() function:

from aiml.evaluation.evaluate import evaluate

evaluate(model, test_dataset)
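
For instance, a complete run might look like the following sketch. It assumes a PyTorch classifier and a torchvision dataset; the ResNet-18 architecture, the weights file my_cifar10_weights.pt, and CIFAR-10 are placeholders for your own model and data, not part of AIML itself:

import torch
import torchvision
import torchvision.transforms as transforms

from aiml.evaluation.evaluate import evaluate

# Placeholder model: a ResNet-18 with 10 output classes. Substitute your
# own trained classifier; the weights file below is hypothetical.
model = torchvision.models.resnet18(num_classes=10)
model.load_state_dict(torch.load("my_cifar10_weights.pt"))
model.eval()

# Test set the model is evaluated against (CIFAR-10 as an example).
test_dataset = torchvision.datasets.CIFAR10(
    root="./data",
    train=False,
    download=True,
    transform=transforms.ToTensor(),
)

evaluate(model, test_dataset)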

The evaluate() function has two required parameters:

  • input_model (str or model): The machine learning model to evaluate, either as a model object or as the name of a model (str).
  • input_test_data (str or dataset): The test dataset, either as a dataset object or as the name of a dataset (str).

The evaluate() function has the following optional parameters (a combined example follows this list):

  • input_train_data (str or dataset, optional): The training dataset, either as a dataset object or as the name of a dataset (default is None).
  • input_shape (tuple, optional): Shape of input data (default is None).
  • clip_values (tuple, optional): Range of input data values (default is None).
  • nb_classes (int, optional): Number of classes in the dataset (default is None).
  • batch_size_attack (int, optional): Batch size for attack testing (default is 64).
  • num_threads_attack (int, optional): Number of threads for attack testing (default is 0).
  • batch_size_train (int, optional): Batch size for training data (default is 64).
  • batch_size_test (int, optional): Batch size for test data (default is 64).
  • num_workers (int, optional): Number of workers to use for data loading (default is half of the available CPU cores).
  • dry (bool, optional): When True, only a single example is tested.
  • attack_para_list (list, optional): List of parameter combinations for the attack.
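
Putting some of these together, a call with optional parameters might look like the sketch below. The values are illustrative, not recommendations, and `model` and `test_dataset` are as in the Usage example above:

import torchvision
import torchvision.transforms as transforms

from aiml.evaluation.evaluate import evaluate

# Optional training split (CIFAR-10 again as a placeholder).
train_dataset = torchvision.datasets.CIFAR10(
    root="./data",
    train=True,
    download=True,
    transform=transforms.ToTensor(),
)

evaluate(
    model,
    test_dataset,
    input_train_data=train_dataset,  # training data (default: None)
    nb_classes=10,                   # classes in the dataset (default: None)
    batch_size_attack=32,            # attack batch size (default: 64)
    num_workers=4,                   # data-loading worker processes
    dry=True,                        # smoke-test a single example first
)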

See the demos in the examples/ directory for usage in action.

Features

After evaluating your model with the evaluate() function, we provide the following insights:

  • A summary of the adversarial attacks performed, written to a text file named attack_evaluation_result.txt followed by the date.
  • Samples of the adversarial images, saved in an img/ directory followed by the date.
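
Since both artifacts carry a date suffix, one convenient way to pick up the most recent report programmatically is a small glob, sketched below. The wildcard sidesteps the exact date format, and the snippet assumes at least one report already exists in the working directory:

import glob
import os

# Match whatever date suffix AIML appends to the report file name.
reports = glob.glob("attack_evaluation_result*.txt")

# Pick the most recently modified report (assumes at least one exists).
latest = max(reports, key=os.path.getmtime)

with open(latest) as f:
    print(f.read())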

Contributing

Code Style
Always adhere to the PEP 8 style guide for writing Python code, allowing up to 99 characters per line as the absolute maximum. Alternatively, just use black.

Commit Messages
When making changes to the codebase, please refer to the Documentation/SubmittingPatches document in the Git repo:

  • Write commit messages in present tense and imperative mood, e.g., "Add feature" instead of "Added feature" or "Adding feature."
  • Craft your messages as if you're giving orders to the codebase to change its behaviour.

Branching
We follow a variation of the "GitHub Flow" convention, though not strictly. For example, see the following types of branches:

  • main: This branch is always deployable and reflects the production state.
  • bugfix/*: For bug fixes.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgements

We extend our sincere appreciation to the following individuals who have been instrumental in the success of this project:

Firstly, our client, Mr. Luke Chang. His invaluable guidance and insights supported us from the beginning through every phase, ensuring our work remained aligned with practical needs. This project would not have been possible without his efforts.

We'd also like to express our gratitude to Dr. Asma Shakil, who has coordinated and provided an opportunity for us to work together on this project.

Thank you for being part of this journey.

Warm regards, Team 7

Contacts

Sungjae Jang sjan260@aucklanduni.ac.nz
Takuya Saegusa tsae032@aucklanduni.ac.nz
Haozhe Wei hwei313@aucklanduni.ac.nz
Yuming Zhou yzho739@aucklanduni.ac.nz
Terence Zhang tzha820@aucklanduni.ac.nz
