
IBM Adversarial machine learning toolbox

Project description

Adversarial Robustness Toolbox (ART v0.7.0)


This is a library dedicated to adversarial machine learning. Its purpose is to allow rapid crafting and analysis of attack and defence methods for machine learning models. ART provides implementations of many state-of-the-art methods for attacking and defending classifiers.

The library is still under development. Feedback, bug reports and extensions are highly appreciated. Get in touch with us on Slack (invite here)!

Supported attacks, defences and metrics

The library contains implementations of the following evasion attacks:

The following defence methods are also supported:

ART also implements detection methods for adversarial samples:

  • Basic detector based on inputs
  • Detector trained on the activations of a specific layer

The following detector of poisoning attacks is also supported:

Robustness metrics:

Setup

Installation with pip

The toolbox is designed to run with Python 2 and 3. ART can be installed from the PyPI repository using pip:

pip install adversarial-robustness-toolbox
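
Once installed, the toolbox should be importable under the art namespace; a quick sanity check:

import art
print(art.__file__)  # prints the location where the toolbox was installed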

Manual installation

For the most recent version of the library, either download the source code or clone the repository into your directory of choice:

git clone https://github.com/IBM/adversarial-robustness-toolbox

To install ART, run the following command in the project folder:

pip install .

The library comes with a basic set of unit tests. To check your installation, you can run all the unit tests by calling the test script in the install folder:

bash run_tests.sh

Running ART

Some examples of how to use ART when writing your own code can be found in the examples folder. See examples/README.md for more information about what each example does. To run an example, use the following command:

python examples/<example_name>.py

The notebooks folder contains Jupyter notebooks with detailed walkthroughs of some usage scenarios.
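
As a rough illustration of what such code looks like, here is a minimal sketch that wraps a small Keras model and crafts adversarial test images with the Fast Gradient Method. The class and parameter names used here (KerasClassifier, clip_values, FastGradientMethod, eps) are assumed to match this version of ART; consult examples/README.md and the documentation for the exact API.

import numpy as np
from keras.datasets import mnist
from keras.layers import Dense, Flatten
from keras.models import Sequential

from art.attacks import FastGradientMethod
from art.classifiers import KerasClassifier

# Load a small slice of MNIST and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, y_train = x_train[:1000].astype("float32") / 255.0, np.eye(10)[y_train[:1000]]
x_test, y_test = x_test[:100].astype("float32") / 255.0, y_test[:100]

# A deliberately small model, trained just enough to demonstrate the attack.
model = Sequential([Flatten(input_shape=(28, 28)),
                    Dense(64, activation="relu"),
                    Dense(10, activation="softmax")])
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=64, verbose=0)

# Wrap the model so ART attacks can query its predictions and gradients
# (parameter names are assumed; check the KerasClassifier docstring).
classifier = KerasClassifier(model=model, clip_values=(0.0, 1.0))

# Craft adversarial test images with FGSM and compare accuracy before and after.
attack = FastGradientMethod(classifier, eps=0.1)
x_test_adv = attack.generate(x=x_test)

acc_clean = np.mean(np.argmax(classifier.predict(x_test), axis=1) == y_test)
acc_adv = np.mean(np.argmax(classifier.predict(x_test_adv), axis=1) == y_test)
print("Accuracy on clean test images:      ", acc_clean)
print("Accuracy on adversarial test images:", acc_adv)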

Contributing

Adding new features, improving documentation, fixing bugs, or writing tutorials are all examples of helpful contributions. Furthermore, if you are publishing a new attack or defence, we strongly encourage you to add it to the Adversarial Robustness Toolbox so that others may evaluate it fairly in their own work.

Bug fixes can be initiated through GitHub pull requests. When making code contributions to the Adversarial Robustness Toolbox, we ask that you follow the PEP 8 coding standard and that you provide unit tests for the new features.

This project uses the Developer Certificate of Origin (DCO). Be sure to sign off your commits, either with the -s flag or by adding Signed-off-by: Name <email> to the commit message.

Example

git commit -s -m 'Add new feature'

Citing ART

If you use ART for research, please consider citing the following reference paper:

@article{art2018,
    title = {Adversarial Robustness Toolbox v0.7.0},
    author = {Nicolae, Maria-Irina and Sinn, Mathieu and Tran, Minh~Ngoc and Rawat, Ambrish and Wistuba, Martin and Zantedeschi, Valentina and Baracaldo, Nathalie and Chen, Bryant and Ludwig, Heiko and Molloy, Ian and Edwards, Ben},
    journal = {CoRR},
    volume = {1807.01069},
    year = {2018},
    url = {https://arxiv.org/pdf/1807.01069}
}

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

Adversarial Robustness Toolbox-0.7.0.tar.gz (135.1 kB)

Built Distribution

Adversarial_Robustness_Toolbox-0.7.0-py3-none-any.whl (214.3 kB)

File details

Details for the file Adversarial Robustness Toolbox-0.7.0.tar.gz.

File metadata

  • Download URL: Adversarial Robustness Toolbox-0.7.0.tar.gz
  • Upload date:
  • Size: 135.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.13.0 pkginfo/1.5.0.1 requests/2.21.0 setuptools/40.8.0 requests-toolbelt/0.9.1 tqdm/4.31.1 CPython/3.6.0

File hashes

Hashes for Adversarial Robustness Toolbox-0.7.0.tar.gz
  • SHA256: 1999a49c873e42718bfc5aa27bd9605be11aacf1b9180c093873551d77472d22
  • MD5: 31e247723e16e5157ce4aaebfce789c4
  • BLAKE2b-256: 6077714cedd86ae584b2c65585d346de4b3b04e3c6e0149815f08a71a0aa70e8
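
As a quick check, the digest of a downloaded archive can be compared against the SHA256 value listed above; a minimal sketch (the local file path is hypothetical and should point at your download):

import hashlib

path = "Adversarial Robustness Toolbox-0.7.0.tar.gz"  # adjust to the downloaded file's location
expected = "1999a49c873e42718bfc5aa27bd9605be11aacf1b9180c093873551d77472d22"

with open(path, "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("SHA256 matches:", digest == expected)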


File details

Details for the file Adversarial_Robustness_Toolbox-0.7.0-py3-none-any.whl.

File metadata

  • Download URL: Adversarial_Robustness_Toolbox-0.7.0-py3-none-any.whl
  • Upload date:
  • Size: 214.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.13.0 pkginfo/1.5.0.1 requests/2.21.0 setuptools/40.8.0 requests-toolbelt/0.9.1 tqdm/4.31.1 CPython/3.6.0

File hashes

Hashes for Adversarial_Robustness_Toolbox-0.7.0-py3-none-any.whl
  • SHA256: c1023195085571af7de584d2c0dfd6400cdde7cfeaaf73e939f23cd27fcc4262
  • MD5: 3a502f519b152285e4cae6d91345ce45
  • BLAKE2b-256: ca989b7de5636dc6681181f83b618ebcf5cce4563a5809a13a15e0dd10746965

