An evaluation tool for ML models' defenses against adversarial attacks
Adversarial Insight ML (AIML)
“Why does your machine lie?”
Adversarial Insight ML (AIML) is a PyPI package that evaluates the robustness of machine learning models for image classification against adversarial attacks. The package automatically tests potential adversarial attacks against each given machine learning model and gives users accurate, efficient, and robust feedback through several benchmarks we developed. Furthermore, the package is designed so that non-technical users can use it as well.
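As background (this is not AIML's own code), the canonical example of such an attack is the Fast Gradient Sign Method (FGSM). The toy sketch below flips a logistic classifier's prediction by nudging each input feature a small step in the direction that increases the loss:

```python
import math

# Toy linear classifier: p(y=1|x) = sigmoid(w . x + b)
w = [2.0, -3.0]
b = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    """Probability that x belongs to class 1."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(x, y, eps):
    """Fast Gradient Sign Method: move each feature of x by eps in the
    sign of the loss gradient, increasing the loss for true label y."""
    p = predict(x)
    # For binary cross-entropy, d(loss)/d(x_i) = (p - y) * w_i
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

x = [1.0, 0.2]            # clean input, confidently classified as class 1
x_adv = fgsm(x, y=1, eps=0.6)
print(predict(x) > 0.5)    # clean prediction: class 1
print(predict(x_adv) > 0.5)  # adversarial prediction flips to class 0
```

AIML automates this kind of probing with stronger attacks and reports the resulting accuracy drop through its benchmarks.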
For more information, you can also visit the PyPI page.
Installation
To install Adversarial Insight ML, you can use pip:
pip install adversarial-insight-ml
Usage
Call AIML's evaluation entry point on your trained image-classification model; AIML runs its suite of adversarial attacks against the model and reports benchmark feedback on its robustness.
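The exact entry point is not documented on this page, so the sketch below is only a guess at the intended workflow; the module path, the `evaluate` name, and both arguments are assumptions, not a confirmed API:

```
# Hypothetical usage -- module path and function name are assumptions
from aiml.evaluation.evaluate import evaluate

# model: a trained image classifier
# test_dataloader: a dataloader over the evaluation set
evaluate(model, test_dataloader)
```

Consult the project's repository or the PyPI page for the actual interface.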
Features
Key features of AIML:
- Evaluates the robustness of image-classification models against adversarial attacks.
- Automatically tests potential adversarial attacks against each given model.
- Reports accurate, efficient, and robust feedback through several benchmarks.
- Designed to be usable by non-technical users.
Contributing
Code Style
Always adhere to the PEP 8 style guide for writing Python code, allowing up to 99 characters per line as the absolute maximum. Alternatively, simply run black.
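If you use black, the 99-character limit can be pinned in the project's pyproject.toml via black's `line-length` option (a sketch, assuming the project uses pyproject-based tool configuration):

```toml
[tool.black]
line-length = 99
```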
Commit Messages
When making changes to the codebase, please follow Documentation/SubmittingPatches from the Git repository:
- Write commit messages in present tense and imperative mood, e.g., "Add feature" instead of "Added feature" or "Adding feature."
- Craft your messages as if you're giving orders to the codebase to change its behaviour.
Branching
We loosely follow the "GitHub Flow" convention, though not strictly. For example, we use the following types of branches:
- main: This branch is always deployable and reflects the production state.
- bugfix/*: For bug fixes.
License
This project is licensed under the MIT License - see the LICENSE file for details.
Acknowledgements
We extend our sincere appreciation to the following individuals and groups who have been instrumental in the success of this project:
Firstly, our client, Luke Chang. His invaluable guidance and insights steered us from the beginning through every phase, ensuring our work remained aligned with practical needs. This project would not have been possible without his efforts.
We'd also like to express gratitude to the teaching staff for COMPSCI 399 at The University of Auckland, including Dr Asma Shakil, who has coordinated and provided an opportunity for us to work together on this project.
Thank you for being part of this journey.
Warm regards, Team 7
Contact
Terence Zhang tzha820@aucklanduni.ac.nz
Yuming Zhou yzho739@aucklanduni.ac.nz
Sungjae Jang sjan260@aucklanduni.ac.nz
Takuya Saegusa tsae032@aucklanduni.ac.nz
Haozhe Wei hwei313@aucklanduni.ac.nz
Source Distribution: adversarial-insight-ml-0.1.0.tar.gz
Built Distribution: adversarial_insight_ml-0.1.0-py3-none-any.whl

Hashes for adversarial-insight-ml-0.1.0.tar.gz

Algorithm | Hash digest
---|---
SHA256 | 4ea71d6fa9ddf6c5cdc62a314fe053717b93c63ae29166c21a58306f2a0c55d3
MD5 | 151817f60111321dbe26d1e28f73b3ca
BLAKE2b-256 | 314494aae335c15e8c7e8b086fc1a1c7c0b5a6fa876f649b4a0544bb3953c2d5

Hashes for adversarial_insight_ml-0.1.0-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | ce77de5ee8e1114e399092ca1aa7e8c5948455af8f296ff9b21f56eea32e569f
MD5 | 7d24a990211d5ed48578fd0b38999858
BLAKE2b-256 | b987e2d63d6c62cbb8f404187b611d0182789affef34e71d82b2e23353d70099