
IBM Adversarial Machine Learning Toolbox


Adversarial Robustness 360 Toolbox (ART) v1.0



Click here for the Chinese README

Adversarial Robustness 360 Toolbox (ART) is a Python library supporting developers and researchers in defending Machine Learning models (Deep Neural Networks, Gradient Boosted Decision Trees, Support Vector Machines, Random Forests, Logistic Regression, Gaussian Processes, Decision Trees, Scikit-learn Pipelines, etc.) against adversarial threats, and helps make AI systems more secure and trustworthy. Machine Learning models are vulnerable to adversarial examples: inputs (images, text, tabular data, etc.) deliberately modified to produce a desired response from the Machine Learning model. ART provides tools to build and deploy defences and to test them with adversarial attacks.

Defending Machine Learning models involves certifying and verifying model robustness, as well as model hardening with approaches such as pre-processing inputs, augmenting training data with adversarial samples, and leveraging runtime detection methods to flag any inputs that might have been modified by an adversary. The attacks implemented in ART allow crafting adversarial examples against Machine Learning models, which is required to test defences with state-of-the-art threat models.
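
The following is a minimal sketch of this attack loop using the scikit-learn wrapper. The import paths shown (art.attacks.FastGradientMethod, art.classifiers.SklearnClassifier) are assumed for the v1.0 API; later releases reorganise them (e.g. into art.attacks.evasion and art.estimators.classification), so check the documentation for your installed version.

import numpy as np
from sklearn.linear_model import LogisticRegression

from art.attacks import FastGradientMethod     # assumed v1.0 import path
from art.classifiers import SklearnClassifier  # assumed v1.0 import path

# Toy data standing in for a real dataset: 200 samples, 4 features, 2 classes.
rng = np.random.RandomState(0)
x_train = rng.rand(200, 4).astype(np.float32)
y_train = (x_train.sum(axis=1) > 2.0).astype(int)

# Train a plain scikit-learn model, then wrap it so ART attacks can query it.
model = LogisticRegression().fit(x_train, y_train)
classifier = SklearnClassifier(model=model)

# Craft adversarial examples with the Fast Gradient Method (an evasion attack).
attack = FastGradientMethod(classifier, eps=0.1)
x_adv = attack.generate(x=x_train)

# Accuracy typically drops on the perturbed inputs.
print('Accuracy on clean inputs:      ', model.score(x_train, y_train))
print('Accuracy on adversarial inputs:', model.score(x_adv, y_train))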

Documentation of ART: https://adversarial-robustness-toolbox.readthedocs.io

Get started with examples and tutorials

The library is under continuous development. Feedback, bug reports and contributions are very welcome. Get in touch with us on Slack (invite here)!

Supported Machine Learning Libraries and Applications

Implemented Attacks, Defences, Detections, Metrics, Certifications and Verifications

Evasion Attacks:

Poisoning Attacks:

Defences:

Robustness metrics, certifications and verifications:

Detection of adversarial samples:

  • Basic detector based on inputs (a conceptual sketch of this idea follows the list)
  • Detector trained on the activations of a specific layer
  • Detector based on Fast Generalized Subset Scan (Speakman et al., 2018)
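
The basic input detector in the first item reduces to a simple idea: train a binary classifier to separate clean inputs from adversarial ones. The sketch below illustrates that idea with plain scikit-learn rather than ART's own detector class (whose import path differs across versions); x_clean and x_adv are assumed to be arrays of clean and adversarial samples, e.g. produced as in the attack example above.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def train_input_detector(x_clean, x_adv):
    # Label clean samples 0 and adversarial samples 1, then fit a classifier
    # that predicts whether a new input looks adversarial.
    x = np.concatenate([x_clean, x_adv])
    y = np.concatenate([np.zeros(len(x_clean)), np.ones(len(x_adv))])
    x_tr, x_te, y_tr, y_te = train_test_split(x, y, test_size=0.2, random_state=0)
    detector = RandomForestClassifier(n_estimators=100).fit(x_tr, y_tr)
    print('Detector accuracy:', detector.score(x_te, y_te))
    return detector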

Detection of poisoning attacks:

Setup

Installation with pip

The toolbox is designed and tested to run with Python 3. ART can be installed from the PyPI repository using pip:

pip install adversarial-robustness-toolbox

Manual installation

The most recent version of ART can be downloaded or cloned from this repository:

git clone https://github.com/IBM/adversarial-robustness-toolbox

Install ART with the following command from the project folder art:

pip install .

ART provides unit tests that can be run with the following command:

bash run_tests.sh

Get Started with ART

Examples of using ART can be found in examples, and examples/README.md provides an overview and additional information. There is a minimal example for each machine learning framework. All examples can be run with the following command:

python examples/<example_name>.py

More detailed examples and tutorials are located in notebooks, and notebooks/README.md provides an overview and more information.

Contributing

Adding new features, improving documentation, fixing bugs, or writing tutorials are all examples of helpful contributions. Furthermore, if you are publishing a new attack or defence, we strongly encourage you to add it to the Adversarial Robustness 360 Toolbox so that others may evaluate it fairly in their own work.

Bug fixes can be initiated through GitHub pull requests. When making code contributions to the Adversarial Robustness 360 Toolbox, we ask that you follow the PEP 8 coding standard and that you provide unit tests for the new features.

This project uses the Developer Certificate of Origin (DCO). Be sure to sign off your commits, either by using the -s flag or by adding Signed-off-by: Name <Email> to the commit message.

Example

git commit -s -m 'Add new feature'

Citing ART

If you use ART for research, please consider citing the following reference paper:

@article{art2018,
    title = {Adversarial Robustness Toolbox v1.0.1},
    author = {Nicolae, Maria-Irina and Sinn, Mathieu and Tran, Minh~Ngoc and Buesser, Beat and Rawat, Ambrish and Wistuba, Martin and Zantedeschi, Valentina and Baracaldo, Nathalie and Chen, Bryant and Ludwig, Heiko and Molloy, Ian and Edwards, Ben},
    journal = {CoRR},
    volume = {1807.01069},
    year = {2018},
    url = {https://arxiv.org/pdf/1807.01069}
}

