SecML-Torch: A Library for Robustness Evaluation of Deep Learning Models
SecML-Torch (SecMLT) is an open-source Python library designed to facilitate research in the area of Adversarial Machine Learning (AML) and robustness evaluation. The library provides a simple yet powerful interface for generating various types of adversarial examples, as well as tools for evaluating the robustness of machine learning models against such attacks.
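To make "robustness evaluation" concrete: robust accuracy is the fraction of samples a model still classifies correctly after an attack has perturbed them, and comparing it to clean accuracy quantifies the model's degradation. A minimal illustration with made-up labels and predictions (illustrative only, not SecMLT output):

```python
import numpy as np

# hypothetical labels and predictions, for illustration only
labels     = np.array([0, 1, 1, 0, 1])
clean_pred = np.array([0, 1, 1, 0, 0])  # model predictions on clean inputs
adv_pred   = np.array([0, 0, 1, 0, 0])  # predictions on adversarially perturbed inputs

clean_accuracy  = (clean_pred == labels).mean()   # 4 of 5 correct -> 0.8
robust_accuracy = (adv_pred == labels).mean()     # 3 of 5 correct -> 0.6
print(clean_accuracy, robust_accuracy)
```

The gap between the two numbers (0.8 vs. 0.6 here) is exactly what a robustness evaluation reports.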
Installation
You can install SecMLT via pip:
pip install secml-torch
This will install the core version of SecMLT, including only the main functionalities, such as native implementations of attacks and PyTorch wrappers.
Install with extras
The library can be installed together with optional plugins that enable further functionality:
- Foolbox, a Python toolbox to create adversarial examples.
- Tensorboard, a visualization toolkit for machine learning experimentation.
- Adversarial Library, a library of various adversarial attack implementations in PyTorch.
Install one or more extras with the command:
pip install secml-torch[foolbox,tensorboard,adv_lib]
Key Features
- Built for Deep Learning: SecMLT is compatible with the popular machine learning framework PyTorch.
- Various types of adversarial attacks: SecMLT supports a wide range of attack methods (evasion, poisoning, ...), including implementations imported from popular AML libraries (Foolbox, Adversarial Library).
- Customizable attacks: SecMLT offers several levels of analysis for the models, including modular implementations of existing attacks that can be extended with different loss functions, optimizers, and more.
- Attack debugging: Built-in debugging of evaluations by logging events and metrics during the attack runs (including TensorBoard support).
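The attack-debugging feature can be pictured as a tracker that records metrics at every attack iteration and forwards them to a logger such as TensorBoard. A toy sketch of the idea (the function and names here are illustrative, not SecMLT's tracker API):

```python
def run_attack_with_tracking(loss_fn, x0, step_size=0.1, num_steps=5):
    """Toy 'attack' loop that records the loss at every step."""
    history = []
    x = x0
    for _ in range(num_steps):
        x = x + step_size           # stand-in for a real attack update
        history.append(loss_fn(x))  # a tracker would log this, e.g. to TensorBoard
    return x, history

# maximizing x**2 starting from 0: the recorded loss grows at every step
x_final, history = run_attack_with_tracking(lambda x: x ** 2, 0.0)
print(history)
```

Inspecting such per-step traces helps diagnose failed evaluations, e.g. an attack whose loss plateaus too early.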
Usage
Here's a brief example of using SecMLT to evaluate the robustness of a trained classifier:
from secmlt.adv.evasion.pgd import PGD
from secmlt.metrics.classification import Accuracy
from secmlt.models.pytorch.base_pytorch_nn import BasePytorchClassifier

model = ...  # your trained PyTorch model
torch_data_loader = ...  # your PyTorch data loader

# wrap the model
model = BasePytorchClassifier(model)

# create and run the attack
attack = PGD(
    perturbation_model="l2",
    epsilon=0.4,
    num_steps=100,
    step_size=0.01,
)
adversarial_loader = attack(model, torch_data_loader)

# test accuracy on adversarial examples
robust_accuracy = Accuracy()(model, adversarial_loader)
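Under the hood, PGD with an L2 perturbation model repeats a normalized gradient step followed by a projection back onto the epsilon-ball around the original input. A self-contained NumPy sketch of that loop on a toy linear model (a conceptual illustration, not SecMLT's implementation):

```python
import numpy as np

def pgd_l2(x, y, grad_fn, epsilon=0.4, num_steps=100, step_size=0.01):
    """Projected gradient ascent on the loss, within an L2 ball around x."""
    x_adv = x.copy()
    for _ in range(num_steps):
        g = grad_fn(x_adv, y)
        x_adv = x_adv + step_size * g / (np.linalg.norm(g) + 1e-12)  # normalized step
        delta = x_adv - x
        d_norm = np.linalg.norm(delta)
        if d_norm > epsilon:                   # project back onto the epsilon-ball
            delta = delta * (epsilon / d_norm)
        x_adv = x + delta
    return x_adv

# toy linear "model": loss = -y * (w . x), so the gradient w.r.t. x is -y * w
w = np.array([1.0, -2.0, 0.5])
grad_fn = lambda x, y: -y * w
x = np.array([0.3, 0.1, -0.2])
x_adv = pgd_l2(x, 1.0, grad_fn)
print(np.linalg.norm(x_adv - x))  # perturbation norm is capped at epsilon (0.4)
```

The parameters mirror the PGD call above (L2 perturbation model, epsilon=0.4, 100 steps, step size 0.01).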
For more detailed usage instructions and examples, please refer to the official documentation or to the examples.
Contributing
We welcome contributions from the research community to expand the library's capabilities or add new features. If you would like to contribute to SecMLT, please follow our contribution guidelines.
Contributors
- maurapintor
- zangobot
- lucascionis
Acknowledgements
SecMLT has been partially developed with the support of the European Union's ELSA – European Lighthouse on Secure and Safe AI, Horizon Europe, grant agreement No. 101070617, and Sec4AI4Sec – Cybersecurity for AI-Augmented Systems, Horizon Europe, grant agreement No. 101120393.
File details
Details for the file secml_torch-1.2.2.tar.gz.
File metadata
- Download URL: secml_torch-1.2.2.tar.gz
- Upload date:
- Size: 27.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/5.1.1 CPython/3.12.7
File hashes
Algorithm | Hash digest
---|---
SHA256 | 0a4910970b23e5b7a0a07ec58bf220fd5dac199a0c9af611319b6d1312ce0a79
MD5 | 175ef15aa88f958b7ec61944fb39242a
BLAKE2b-256 | c27f86e3626f5aa51e1ecbdfb5ebb5028562e6862de3a923fce96058ece5976b
File details
Details for the file secml_torch-1.2.2-py3-none-any.whl.
File metadata
- Download URL: secml_torch-1.2.2-py3-none-any.whl
- Upload date:
- Size: 43.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/5.1.1 CPython/3.12.7
File hashes
Algorithm | Hash digest
---|---
SHA256 | 45e3e68ad3c3d6150f6c9add9d32cca2cb91345821bcd03a8e589acdc2098aa5
MD5 | d2f65220f174f42b3dde6ac92e7b4f13
BLAKE2b-256 | 489f1cf9261f18cb1a972f226b4c65eb86692971dc2212445675484432d97955