Toolbox for adversarial machine learning.
Adversarial Robustness Toolbox (ART) v1.5
Adversarial Robustness Toolbox (ART) is a Python library for Machine Learning Security. ART provides tools that enable developers and researchers to defend and evaluate Machine Learning models and applications against the adversarial threats of Evasion, Poisoning, Extraction, and Inference. ART supports all popular machine learning frameworks (TensorFlow, Keras, PyTorch, MXNet, scikit-learn, XGBoost, LightGBM, CatBoost, GPy, etc.), all data types (images, tables, audio, video, etc.) and machine learning tasks (classification, object detection, speech recognition, generation, certification, etc.).
- Technical Documentation
- Slack (invitation)
The library is under continuous development. Feedback, bug reports, and contributions are very welcome!
This material is partially based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001120C0013. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Defense Advanced Research Projects Agency (DARPA).
| Filename | Size | File type | Python version |
|---|---|---|---|
| adversarial_robustness_toolbox-1.5.1-py3-none-any.whl | 890.7 kB | Wheel | py3 |
| adversarial-robustness-toolbox-1.5.1.tar.gz | 1.7 MB | Source | None |