
SecML-Torch Library

Project description


SecML-Torch: A Library for Robustness Evaluation of Deep Learning Models


SecML-Torch (SecMLT) is an open-source Python library designed to facilitate research in the area of Adversarial Machine Learning (AML) and robustness evaluation. The library provides a simple yet powerful interface for generating various types of adversarial examples, as well as tools for evaluating the robustness of machine learning models against such attacks.

Installation

You can install SecMLT via pip:

pip install secml-torch

This installs the core version of SecMLT, which includes only the main functionalities, such as the native attack implementations and the PyTorch model wrappers.

Install with extras

The library can also be installed together with optional plugins that enable additional functionality:

  • Foolbox, a Python toolbox to create adversarial examples.
  • Tensorboard, a visualization toolkit for machine learning experimentation.
  • Adversarial Library, a collection of adversarial attack implementations in PyTorch.

Install one or more extras with the command:

pip install secml-torch[foolbox,tensorboard,adv_lib]

Key Features

SecML-Torch (SecMLT) is a PyTorch-native toolkit for evaluating and improving adversarial robustness. It provides:

  • Built-in support for evaluating PyTorch models.
  • Efficient native PyTorch implementations of common evasion attacks (e.g., PGD, FMN), as well as poisoning/backdoor attacks.
  • Wrappers for external libraries (Foolbox, Adversarial Library) so you can run and compare attacks from a single interface.
  • Modular design for building adaptive/custom attacks: swap losses, optimizers, and perturbation models, or add EoT (Expectation over Transformation), with a few lines of code.
  • Attack ensembling modules to obtain worst-case per-sample robustness evaluations.
  • Robustness evaluation tools including metrics, logging, and trackers to ensure reproducibility and easy reporting.
  • Attack debugging support, such as built-in hooks and TensorBoard integration, to monitor and inspect attack behavior.

Check out the tutorials to see SecML-Torch in action.
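The worst-case ensembling idea mentioned above can be sketched in a few lines: a sample counts as robust only if the model stays correct under *every* attack in the ensemble, so the ensemble's robust accuracy lower-bounds each individual attack's. This is an illustrative, standalone sketch, not SecMLT's ensembling module; the function name and list-based inputs are hypothetical.

```python
# Illustration only (not SecMLT's API): worst-case per-sample robustness
# over an ensemble of attacks.

def worst_case_robust_accuracy(true_labels, per_attack_predictions):
    """per_attack_predictions: one prediction list per attack, aligned by sample."""
    n = len(true_labels)
    robust = 0
    for i in range(n):
        # A sample is robust only if all attacks fail to flip its prediction.
        if all(preds[i] == true_labels[i] for preds in per_attack_predictions):
            robust += 1
    return robust / n

labels = [0, 1, 1, 0]
preds_attack_a = [0, 1, 0, 0]   # attack A fools sample 2
preds_attack_b = [0, 0, 1, 0]   # attack B fools sample 1
print(worst_case_robust_accuracy(labels, [preds_attack_a, preds_attack_b]))  # 0.5
```

Each attack alone leaves 3/4 samples correct, but the worst case over both is 2/4, which is why ensembling gives a tighter robustness estimate.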

| Category | Attack / Attack Type | Native implementation in SecML-Torch | Wrapped / imported / backend alternatives |
| --- | --- | --- | --- |
| | | *Advantages: GPU-native, efficient, modular/customizable, debugging tools* | *Advantages: expands the attack catalogue, easy cross-checks* |
| Test time | PGD (fixed-epsilon, iterative attack) | ✔ Native implementation | ✔ Also via backend wrappers (Foolbox, Adversarial Library) |
| Test time | FMN (minimum-norm, iterative attack) | ✔ Native implementation | ✔ Also via backend wrappers (Foolbox, Adversarial Library) |
| Test time | DDN (minimum-norm, iterative attack) | ✔ Native implementation | ✔ Also via backend wrappers (Foolbox, Adversarial Library) |
| Test time | Other evasion attacks | Work in progress | ✔ Available via backend wrappers (Foolbox, Adversarial Library) |
| Training time | Backdoor | ✔ Native implementation | - |
| Training time | Label flip poisoning | ✔ Native implementation | - |
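The label-flip poisoning listed above is conceptually simple: corrupt a fraction of the training labels so the model learns from wrong supervision. The following standalone sketch illustrates the idea only; the function name and its signature are hypothetical and do not reflect SecMLT's implementation.

```python
import random

# Illustration only (not SecMLT's implementation): flip a fraction of
# training labels to a different, randomly chosen class.

def label_flip(labels, num_classes, flip_fraction, seed=0):
    rng = random.Random(seed)
    labels = list(labels)
    n_flip = int(len(labels) * flip_fraction)
    for i in rng.sample(range(len(labels)), n_flip):
        # Always pick a class different from the true one.
        choices = [c for c in range(num_classes) if c != labels[i]]
        labels[i] = rng.choice(choices)
    return labels

clean = [0, 1, 2, 0, 1, 2, 0, 1, 2, 0]
poisoned = label_flip(clean, num_classes=3, flip_fraction=0.3)
print(sum(c != p for c, p in zip(clean, poisoned)))  # 3 labels flipped
```

Backdoor attacks differ in that they also stamp a trigger pattern onto the poisoned inputs, so the model misbehaves only when the trigger is present.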

Check out what's cooking! Have a look at our roadmap!

Usage

Here's a brief example of using SecMLT to evaluate the robustness of a trained classifier:

from secmlt.adv.evasion.pgd import PGD
from secmlt.metrics.classification import Accuracy
from secmlt.models.pytorch.base_pytorch_nn import BasePyTorchClassifier


model = ...
torch_data_loader = ...

# Wrap the model
model = BasePyTorchClassifier(model)

# Create and run the attack
attack = PGD(
    perturbation_model="l2",
    epsilon=0.4,
    num_steps=100,
    step_size=0.01,
)

adversarial_loader = attack(model, torch_data_loader)

# Compute accuracy on the adversarial examples
robust_accuracy = Accuracy()(model, adversarial_loader)

For more detailed usage instructions and examples, please refer to the official documentation or to the examples.
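To make the PGD attack used above less of a black box: conceptually, a fixed-epsilon PGD iteration takes a step in the direction of the loss gradient's sign and then projects the perturbation back onto the epsilon-ball. The sketch below is an illustration only, not SecMLT's internals: it uses an L-infinity ball (the snippet above uses "l2") and a toy linear score whose gradient is known in closed form, so no autograd is needed.

```python
# Conceptual sketch of a fixed-epsilon PGD iteration (illustration only):
# gradient-sign ascent step, then projection onto the L-infinity ball.

def pgd_linf_linear(x, w, epsilon, step_size, num_steps):
    """Attack a linear score f(x) = sum(w_i * x_i); its gradient w.r.t. x is w."""
    x_adv = list(x)
    for _ in range(num_steps):
        for i in range(len(x_adv)):
            grad_sign = 1.0 if w[i] > 0 else -1.0 if w[i] < 0 else 0.0
            x_adv[i] += step_size * grad_sign            # gradient-sign step
            delta = max(-epsilon, min(epsilon, x_adv[i] - x[i]))
            x_adv[i] = x[i] + delta                      # project onto the ball
    return x_adv

x = [0.2, 0.5, 0.8]
w = [1.0, -1.0, 0.5]
x_adv = pgd_linf_linear(x, w, epsilon=0.1, step_size=0.05, num_steps=10)
print([round(v - xi, 2) for v, xi in zip(x_adv, x)])  # [0.1, -0.1, 0.1]
```

With enough steps the perturbation saturates the ball in the direction that increases the score, which is exactly the worst case the epsilon constraint allows; minimum-norm attacks such as FMN and DDN instead search for the smallest perturbation that flips the prediction.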

Contributing

We welcome contributions from the research community to expand the library's capabilities or add new features. If you would like to contribute to SecMLT, please follow our contribution guidelines.

Acknowledgements

SecML-Torch has been partially developed with the support of the European Union’s ELSA – European Lighthouse on Secure and Safe AI, Horizon Europe, grant agreement No. 101070617, Sec4AI4Sec – Cybersecurity for AI-Augmented Systems, Horizon Europe, grant agreement No. 101120393, and CoEvolution – A Comprehensive Trustworthy Framework for Connected Machine Learning and Secure Interconnected AI Solutions, Horizon Europe, grant agreement No. 101168560, and by the project SERICS (PE00000014) under the MUR National Recovery and Resilience Plan funded by the European Union – NextGenerationEU.


