
Toolbox for adversarial machine learning.


Adversarial Robustness 360 Toolbox (ART) v1.1



For the README in Chinese, please click here.

Adversarial Robustness 360 Toolbox (ART) is a Python library supporting developers and researchers in defending Machine Learning models (Deep Neural Networks, Gradient Boosted Decision Trees, Support Vector Machines, Random Forests, Logistic Regression, Gaussian Processes, Decision Trees, Scikit-learn Pipelines, etc.) against adversarial threats (including evasion, extraction and poisoning) and helps make AI systems more secure and trustworthy. Machine Learning models are vulnerable to adversarial examples: inputs (images, text, tabular data, etc.) deliberately crafted to make the model produce a response the adversary desires. ART provides the tools to build and deploy defences and to test them with adversarial attacks.
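To make the notion of an adversarial example concrete, here is a minimal, self-contained sketch in plain NumPy (not ART's API; the toy linear model, weights, and perturbation budget are all invented for illustration) in which a small, bounded perturbation flips a classifier's prediction:

```python
import numpy as np

# Toy linear classifier: predicts class 1 when w.x + b > 0.
w = np.array([1.0, -2.0])
b = 0.5

def predict(x):
    return int(w @ x + b > 0)

x = np.array([1.0, 1.0])      # clean input: w.x + b = -0.5 -> class 0
eps = 0.6                     # L-infinity perturbation budget

# Fast-gradient-sign-style step: nudge each feature in the direction
# that raises the classifier's score, staying within the budget.
x_adv = x + eps * np.sign(w)  # [1.6, 0.4]

print(predict(x))             # 0
print(predict(x_adv))         # w.x_adv + b = 1.6 - 0.8 + 0.5 = 1.3 -> 1
```

The perturbation is small in every coordinate (at most eps), yet it is enough to cross the decision boundary; ART implements this idea at scale for real models and frameworks.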

Defending Machine Learning models involves certifying and verifying model robustness, as well as model hardening with approaches such as pre-processing inputs, augmenting training data with adversarial examples, and leveraging runtime detection methods to flag inputs that might have been modified by an adversary. ART also implements attacks for testing defences under state-of-the-art threat models.
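As one concrete instance of input pre-processing, bit-depth reduction (a feature-squeezing defence) quantises inputs so that perturbations smaller than half a quantisation step vanish before they reach the model. A minimal sketch, assuming inputs scaled to [0, 1] (illustrative only, not ART's implementation; the example values are invented):

```python
import numpy as np

def squeeze(x, bits=3):
    """Reduce bit depth: quantise values in [0, 1] to 2**bits levels,
    erasing perturbations smaller than half a quantisation step."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

x = np.array([0.50, 0.25, 0.75])                  # clean input
x_adv = x + np.array([0.03, -0.03, 0.02])         # small adversarial noise

# After squeezing, clean and perturbed inputs map to identical values,
# so the downstream model sees the same input either way.
print(squeeze(x, 3))
print(squeeze(x_adv, 3))
```

The trade-off is a loss of input precision, so the bit depth has to be chosen so that legitimate inputs are still classified correctly.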

Documentation of ART: https://adversarial-robustness-toolbox.readthedocs.io

Get started with examples and tutorials

The library is under continuous development. Feedback, bug reports and contributions are very welcome. Get in touch with us on Slack (invite here)!

Supported Machine Learning Libraries and Applications

Implemented Attacks, Defences, Detections, Metrics, Certifications and Verifications

Evasion Attacks:

Extraction Attacks:

Poisoning Attacks:

Defences:

Extraction Defences:

Robustness Metrics, Certifications and Verifications:

Detection of Adversarial Examples:

  • Basic detector based on inputs
  • Detector trained on the activations of a specific layer
  • Detector based on Fast Generalized Subset Scan (Speakman et al., 2018)
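A basic input-space detector can be sketched as a simple outlier test: fit a statistic on clean training inputs and flag anything that falls outside it. The synthetic data, distance statistic, and threshold below are all invented for illustration; ART's detectors are trained models, not this exact rule:

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean inputs cluster around a common mean; adversarially perturbed
# copies carry extra noise and therefore sit further from that mean.
clean = rng.normal(0.0, 1.0, size=(200, 10))
adversarial = clean[:50] + rng.normal(0.0, 1.5, size=(50, 10))

center = clean.mean(axis=0)
# Flag anything beyond the 95th percentile of clean distances-to-center.
threshold = np.percentile(np.linalg.norm(clean - center, axis=1), 95)

def flag(x):
    """Return True when the input looks out-of-distribution."""
    return np.linalg.norm(x - center) > threshold

clean_rate = np.mean([flag(x) for x in clean])        # ~5% by construction
adv_rate = np.mean([flag(x) for x in adversarial])    # substantially higher
```

The detectors listed above refine this idea by learning the decision rule from data, either on raw inputs or on internal activations of the model.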

Detection of Poisoning Attacks:

Setup

Installation with pip

The toolbox is designed and tested to run with Python 3. ART can be installed from the PyPI repository using pip:

pip install adversarial-robustness-toolbox

Manual installation

The most recent version of ART can be downloaded or cloned from this repository:

git clone https://github.com/IBM/adversarial-robustness-toolbox

Install ART with the following command from the project folder art:

pip install .

ART provides unit tests that can be run with the following command:

bash run_tests.sh

Get Started with ART

Examples of using ART can be found in examples, and examples/README.md provides an overview and additional information. There is a minimal example for each supported machine learning framework. All examples can be run with the following command:

python examples/<example_name>.py

More detailed examples and tutorials are located in notebooks, and notebooks/README.md provides an overview and more information.

Contributing

Adding new features, improving documentation, fixing bugs, or writing tutorials are all examples of helpful contributions. Furthermore, if you are publishing a new attack or defence, we strongly encourage you to add it to the Adversarial Robustness 360 Toolbox so that others may evaluate it fairly in their own work.

Bug fixes can be initiated through GitHub pull requests. When making code contributions to the Adversarial Robustness 360 Toolbox, we ask that you follow the PEP 8 coding standard and that you provide unit tests for the new features.

This project uses the Developer Certificate of Origin (DCO). Be sure to sign off your commits using the -s flag or by adding Signed-off-by: Name <Email> to the commit message.

Example

git commit -s -m 'Add new feature'

Citing ART

If you use ART for research, please consider citing the following reference paper:

@article{art2018,
    title = {Adversarial Robustness Toolbox v1.1.0},
    author = {Nicolae, Maria-Irina and Sinn, Mathieu and Tran, Minh~Ngoc and Buesser, Beat and Rawat, Ambrish and Wistuba, Martin and Zantedeschi, Valentina and Baracaldo, Nathalie and Chen, Bryant and Ludwig, Heiko and Molloy, Ian and Edwards, Ben},
    journal = {CoRR},
    volume = {1807.01069},
    year = {2018},
    url = {https://arxiv.org/pdf/1807.01069}
}


Download files

Source Distribution

adversarial_robustness_toolbox-1.1.0.tar.gz (268.5 kB)

Built Distribution

adversarial_robustness_toolbox-1.1.0-py3-none-any.whl (436.7 kB)

File details

Details for the file adversarial_robustness_toolbox-1.1.0.tar.gz.

File metadata

  • Download URL: adversarial_robustness_toolbox-1.1.0.tar.gz
  • Size: 268.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.22.0 setuptools/42.0.2 requests-toolbelt/0.9.1 tqdm/4.39.0 CPython/3.6.8

File hashes

  • SHA256: 2e31191c81c7468006b60fd753e190f0239b4d99042d389a06ed97502d704a43
  • MD5: c6b11f1eca229e0713330d2715af78fc
  • BLAKE2b-256: 911edbea0e0616d6c68c29e2fd648f3891d058a96f89694207dd8b44c6b41f44

File details

Details for the file adversarial_robustness_toolbox-1.1.0-py3-none-any.whl.

File metadata

  • Download URL: adversarial_robustness_toolbox-1.1.0-py3-none-any.whl
  • Size: 436.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.22.0 setuptools/42.0.2 requests-toolbelt/0.9.1 tqdm/4.39.0 CPython/3.6.8

File hashes

  • SHA256: 6c615cc23e582155ba9b1e390d5eedc71bdaa460e4344220fbdc388727daa7ce
  • MD5: 51d0df017e56b5d5d65f0b6646cdccef
  • BLAKE2b-256: bac20d0e5887b76a506d3b805c6a9c3032a14b874b31a693fd30b0c7170f4ffc
