IBM Adversarial Machine Learning Toolbox
Adversarial Robustness Toolbox (ART v0.6.0)
This is a library dedicated to adversarial machine learning. Its purpose is to allow rapid crafting and analysis of attack and defence methods for machine learning models. The Adversarial Robustness Toolbox provides implementations of many state-of-the-art methods for attacking and defending classifiers.
The library is still under development. Feedback, bug reports and extensions are highly appreciated. Get in touch with us on Slack (invite here)!
Supported attacks, defences and metrics
The library contains implementations of the following evasion attacks (a usage sketch follows the list):
- DeepFool (Moosavi-Dezfooli et al., 2015)
- Fast gradient method (Goodfellow et al., 2014)
- Basic iterative method (Kurakin et al., 2016)
- Projected gradient descent (Madry et al., 2017)
- Jacobian saliency map (Papernot et al., 2016)
- Universal perturbation (Moosavi-Dezfooli et al., 2016)
- Virtual adversarial method (Miyato et al., 2015)
- C&W L_2 and L_inf attacks (Carlini and Wagner, 2016)
- NewtonFool (Jang et al., 2017)
- Elastic net attack (Chen et al., 2017)
- Spatial transformations attack (Engstrom et al., 2017)
- Query-efficient black-box attack (Ilyas et al., 2017)
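As a quick illustration of the attack API, here is a minimal sketch that crafts Fast Gradient Method adversarial examples against a toy Keras model. The wrapper arguments (`model`, `clip_values`) are assumptions based on the v0.x API, which has changed between releases, so treat this as a sketch rather than a definitive recipe:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

from art.attacks import FastGradientMethod
from art.classifiers import KerasClassifier

# Toy data and model, stand-ins for a real dataset and architecture
x_train = np.random.rand(100, 20).astype(np.float32)
y_train = np.eye(2)[np.random.randint(0, 2, 100)]

model = Sequential([Dense(32, activation="relu", input_shape=(20,)),
                    Dense(2, activation="softmax")])
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.fit(x_train, y_train, epochs=3, verbose=0)

# Wrap the model for ART; constructor arguments are an assumption and
# may differ across ART versions
classifier = KerasClassifier(model=model, clip_values=(0.0, 1.0))

# Craft adversarial examples with an L_inf perturbation budget of 0.1
attack = FastGradientMethod(classifier, eps=0.1)
x_adv = attack.generate(x=x_train)

# Compare accuracy on clean and adversarial inputs
acc_clean = np.mean(classifier.predict(x_train).argmax(axis=1) == y_train.argmax(axis=1))
acc_adv = np.mean(classifier.predict(x_adv).argmax(axis=1) == y_train.argmax(axis=1))
print("accuracy clean: %.2f, adversarial: %.2f" % (acc_clean, acc_adv))
```

The other attacks in the list follow the same pattern: construct the attack from a wrapped classifier, then call `generate`.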
The following defence methods are also supported (see the sketch after the list):
- Feature squeezing (Xu et al., 2017)
- Spatial smoothing (Xu et al., 2017)
- Label smoothing (Warde-Farley and Goodfellow, 2016)
- Adversarial training (Szegedy et al., 2013)
- Virtual adversarial training (Miyato et al., 2015)
- Gaussian data augmentation (Zantedeschi et al., 2017)
- Thermometer encoding (Buckman et al., 2018)
- Total variance minimization (Guo et al., 2018)
- JPEG compression (Dziugaite et al., 2016)
- PixelDefend (Song et al., 2017)
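Defences follow a similar pattern. Below is a minimal sketch of adversarial training, reusing `classifier`, `x_train` and `y_train` from the attack example above; the `AdversarialTrainer` constructor and `fit` arguments are assumptions based on the v0.x API and may differ in detail:

```python
# Minimal sketch of adversarial training: the trainer augments training
# with adversarial examples crafted on the fly by the given attack.
# Constructor/fit arguments are assumptions and may vary between versions.
from art.attacks import FastGradientMethod
from art.defences import AdversarialTrainer

attack = FastGradientMethod(classifier, eps=0.1)  # `classifier` from the sketch above
trainer = AdversarialTrainer(classifier, attacks=attack)
trainer.fit(x_train, y_train, nb_epochs=5, batch_size=64)
```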
ART also implements methods for detecting adversarial samples (a sketch follows the list):
- Basic detector based on inputs
- Detector trained on the activations of a specific layer
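A minimal sketch of the input-based detector follows: it wraps a second classifier that is trained to discriminate benign from adversarial samples. The module path and argument names (`art.detection.BinaryInputDetector`) are assumptions based on the v0.x API, and `detector_model` stands for a hypothetical second wrapped classifier with two output classes:

```python
import numpy as np
from art.detection import BinaryInputDetector

# Label benign samples 0 and adversarial samples 1 (one-hot encoded);
# x_train and x_adv come from the attack sketch above
x_mixed = np.concatenate([x_train, x_adv])
labels = np.concatenate([np.zeros(len(x_train), dtype=int),
                         np.ones(len(x_adv), dtype=int)])
y_mixed = np.eye(2)[labels]

# `detector_model` is a hypothetical wrapped classifier (e.g. a KerasClassifier)
# with the same input shape as the data and two output classes
detector = BinaryInputDetector(detector_model)
detector.fit(x_mixed, y_mixed, nb_epochs=5, batch_size=64)
is_adversarial = detector.predict(x_mixed).argmax(axis=1)
```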
The following detector for poisoning attacks is also supported:
- Detector based on activations analysis (Chen et al., 2018)
The following robustness metrics are implemented (a usage sketch follows the list):
- CLEVER (Weng et al., 2018)
- Empirical robustness (Moosavi-Dezfooli et al., 2015)
- Loss sensitivity (Arpit et al., 2017)
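As a sketch of the metrics API, empirical robustness estimates the average minimal perturbation an attacker needs in order to change the classifier's predictions. The function and argument names below are assumptions based on the v0.x `art.metrics` module:

```python
# Minimal sketch: estimate empirical robustness under FGSM.
# Function/argument names are assumptions and may vary between versions.
from art.metrics import empirical_robustness

er = empirical_robustness(classifier, x_train, attack_name="fgsm")
print("Empirical robustness under FGSM:", er)
```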
Setup
Installation with pip
The toolbox is designed to run with Python 2 and 3.
ART can be installed from the PyPI repository using pip:
pip install adversarial-robustness-toolbox
Manual installation
For the most recent version of the library, either download the source code or clone the repository into a directory of your choice:
git clone https://github.com/IBM/adversarial-robustness-toolbox
To install ART, do the following in the project folder:
pip install .
The library comes with a basic set of unit tests. To check your installation, you can run all the unit tests by calling the test script in the project folder:
bash run_tests.sh
Running ART
Some examples of how to use ART when writing your own code can be found in the examples folder. See examples/README.md for more information about what each example does. To run an example, use the following command:
python examples/<example_name>.py
The notebooks folder contains Jupyter notebooks with detailed walkthroughs of some usage scenarios.
Contributing
Adding new features, improving documentation, fixing bugs, or writing tutorials are all examples of helpful contributions. Furthermore, if you are publishing a new attack or defense, we strongly encourage you to add it to the Adversarial Robustness Toolbox so that others may evaluate it fairly in their own work.
Bug fixes can be initiated through GitHub pull requests. When making code contributions to the Adversarial Robustness Toolbox, we ask that you follow the PEP 8 coding standard and that you provide unit tests for the new features.
This project uses DCO. Be sure to sign off your commits using the -s flag or by adding Signed-off-by: Name <Email> to the commit message.
Example
git commit -s -m 'Add new feature'
Citing ART
If you use ART for research, please consider citing the following reference paper:
@article{art2018,
title = {Adversarial Robustness Toolbox v0.6.0},
author = {Nicolae, Maria-Irina and Sinn, Mathieu and Tran, Minh~Ngoc and Rawat, Ambrish and Wistuba, Martin and Zantedeschi, Valentina and Baracaldo, Nathalie and Chen, Bryant and Ludwig, Heiko and Molloy, Ian and Edwards, Ben},
journal = {CoRR},
volume = {1807.01069},
year = {2018},
url = {https://arxiv.org/pdf/1807.01069}
}
Download files
Source distribution: Adversarial_Robustness_Toolbox-0.6.0.tar.gz

Algorithm | Hash digest
---|---
SHA256 | 4a841a88e4b0acd6a34603a4ce4b7294c929a9480d1daf01fe11af90b1fae821
MD5 | 509faa4f54bec935585a004d28fcd911
BLAKE2b-256 | 54ed3cfd92ec8f4110d01d36afdb916b9f4e010b9bb301885ea7e53c88d5fc4d

Built distribution: Adversarial_Robustness_Toolbox-0.6.0-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | 9b7f7fb3c62752590b4b86a25db9dad586d0f3e8458fc9c3cad8a84c8fc91c5c
MD5 | a73b4636c8538e90e4183cd9e82ae3e6
BLAKE2b-256 | 3a4f8b1545bdf7e01be4a55139a1cf1ca8b3c6bb2f03d07b069c5685e7d3d3e2