IBM Adversarial Machine Learning Toolbox
Adversarial Robustness Toolbox (ART v0.3.0)
This is a library dedicated to adversarial machine learning. Its purpose is to allow rapid crafting and analysis of attack and defense methods for machine learning models. The Adversarial Robustness Toolbox provides implementations of many state-of-the-art methods for attacking and defending classifiers.
The library is still under development. Feedback, bug reports and extensions are highly appreciated. Get in touch with us on Slack (invite here)!
Supported attack and defense methods
The library contains implementations of the following attacks:
- DeepFool (Moosavi-Dezfooli et al., 2015)
- Fast Gradient Method (Goodfellow et al., 2014)
- Basic Iterative Method (Kurakin et al., 2016)
- Jacobian Saliency Map (Papernot et al., 2016)
- Universal Perturbation (Moosavi-Dezfooli et al., 2016)
- Virtual Adversarial Method (Miyato et al., 2015)
- C&W Attack (Carlini and Wagner, 2016)
- NewtonFool (Jang et al., 2017)
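Most of these attacks are gradient-based. As an illustration of the simplest of them, the Fast Gradient Method perturbs an input in the direction of the sign of the loss gradient. The sketch below implements that core step in plain NumPy; the function name and arguments are illustrative, not ART's actual API:

```python
import numpy as np

def fast_gradient_sign(x, loss_grad, eps=0.1, clip_min=0.0, clip_max=1.0):
    """Craft an adversarial example via the Fast Gradient (Sign) Method.

    x         : clean input (a NumPy array, e.g. an image scaled to [0, 1])
    loss_grad : gradient of the model's loss w.r.t. x
    eps       : attack strength (maximum L-infinity perturbation)
    """
    # Take a step of size eps in the direction that increases the loss
    x_adv = x + eps * np.sign(loss_grad)
    # Keep the result inside the valid input range
    return np.clip(x_adv, clip_min, clip_max)

# Toy example with a made-up gradient
x = np.array([0.2, 0.5, 0.9])
g = np.array([0.3, -0.7, 0.1])
x_adv = fast_gradient_sign(x, g, eps=0.1)
```

By construction the perturbation never exceeds eps in any coordinate, which is what makes the attack easy to reason about and a common baseline for robustness evaluations.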
The following defense methods are also supported:
- Feature squeezing (Xu et al., 2017)
- Spatial smoothing (Xu et al., 2017)
- Label smoothing (Warde-Farley and Goodfellow, 2016)
- Adversarial training (Szegedy et al., 2013)
- Virtual adversarial training (Miyato et al., 2015)
- Gaussian data augmentation (Zantedeschi et al., 2017)
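Several of these defenses are simple transformations of the inputs or the training data. For instance, feature squeezing reduces the bit depth of input features, and Gaussian data augmentation appends noisy copies of training samples. Both can be sketched in a few lines of NumPy; the function names below are illustrative, not the toolbox's API:

```python
import numpy as np

def squeeze_features(x, bit_depth=4):
    """Feature squeezing: quantize inputs in [0, 1] to the given bit depth."""
    levels = 2 ** bit_depth - 1
    return np.round(x * levels) / levels

def gaussian_augment(x, sigma=0.1, copies=1, seed=0):
    """Gaussian data augmentation: append noisy copies of the data.

    x : array of shape (n_samples, n_features)
    """
    rng = np.random.RandomState(seed)
    noisy = [x + rng.normal(scale=sigma, size=x.shape) for _ in range(copies)]
    return np.concatenate([x] + noisy, axis=0)

x = np.linspace(0.0, 1.0, 5).reshape(1, 5)
squeezed = squeeze_features(x, bit_depth=2)
augmented = gaussian_augment(x, sigma=0.05, copies=2)
```

Feature squeezing is typically used at test time to strip the fine-grained perturbations an attack relies on, while Gaussian augmentation is applied at training time to smooth the decision surface around the data.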
Setup
Installation with pip
The toolbox is designed to run with Python 2 and 3.
The library can be installed from the PyPI repository using pip:

pip install adversarial-robustness-toolbox
Manual installation
For the most recent version of the library, either download the source code or clone the repository into your directory of choice:
git clone https://github.com/IBM/adversarial-robustness-toolbox
To install ART, do the following in the project folder:
pip install .
The library comes with a basic set of unit tests. To check your install, you can run all the unit tests by calling the test script in the install folder:
bash run_tests.sh
Running ART
Some examples of how to use ART when writing your own code can be found in the examples folder. See examples/README.md for more information about what each example does. To run an example, use the following command:
python examples/<example_name>.py
The notebooks folder contains Jupyter notebooks with detailed walkthroughs of some usage scenarios.
Citing ART
If you use ART for research, please consider citing the following reference paper:
@article{art2018,
    title = {Adversarial Robustness Toolbox v0.3.0},
    author = {Nicolae, Maria-Irina and Sinn, Mathieu and Tran, Minh~Ngoc and Rawat, Ambrish and Wistuba, Martin and Zantedeschi, Valentina and Baracaldo, Nathalie and Chen, Bryant and Ludwig, Heiko and Molloy, Ian and Edwards, Ben},
    journal = {CoRR},
    volume = {1807.01069},
    year = {2018},
    url = {https://arxiv.org/pdf/1807.01069}
}
Hashes for Adversarial Robustness Toolbox-0.3.0.tar.gz

Algorithm | Hash digest
---|---
SHA256 | baff214812e7584bdfaff6976d88020c0d40532a7fc9b10c8a09cc12970c6bd6
MD5 | 638c1f2a3c5af0b94e0868c441746e49
BLAKE2b-256 | 6d57ba51b81094fd2769bcd6978ebe9aed7363a4c31a081b64ee9482fde27cdd

Hashes for Adversarial_Robustness_Toolbox-0.3.0-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | 468ab1e5ecfe382e2699a6903321d66c1a78dbfd8b42384157db4ee6b3a3caa5
MD5 | 59b109834d726190655a10f907719278
BLAKE2b-256 | 3194aabfafdf4ce5b0995a8a49765255886a798f663b4ee0b63e24b7e2c33b69