A set of adversarial attacks implemented in PyTorch

Project description


🛡 torchattack - A curated list of adversarial attacks in PyTorch, with a focus on transferable black-box attacks.

pip install torchattack
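
To verify the installation, import the package and print its version (a quick sanity check; this assumes torchattack exposes a `__version__` attribute, as most packages do):

import torchattack

print(torchattack.__version__)  # assumed attribute, shown for verification only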

Usage

import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

Load a pretrained model to attack from either torchvision or timm.

from torchattack import AttackModel

# Load a model with `AttackModel`
model = AttackModel.from_pretrained(model_name='resnet50', device=device)
# `AttackModel` automatically attaches the model's `transform` and `normalize` functions
transform, normalize = model.transform, model.normalize

# Additionally, to explicitly specify where to load the pretrained model from (timm or torchvision),
# prepend the model name with 'timm/' or 'tv/' respectively, or use the `from_timm` argument, e.g.
vit_b16 = AttackModel.from_pretrained(model_name='timm/vit_base_patch16_224', device=device)
inv_v3 = AttackModel.from_pretrained(model_name='tv/inception_v3', device=device)
pit_b = AttackModel.from_pretrained(model_name='pit_b_224', device=device, from_timm=True)
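
As an example, here's how one might prepare a single input with the model's own `transform` (a minimal sketch: the image path is a placeholder, and `transform` is assumed to accept a PIL image, as torchvision and timm transforms do):

from PIL import Image

# Placeholder image path; any RGB image works
img = Image.open('example.png').convert('RGB')

# `transform` produces an unnormalized tensor in [0, 1] -- normalization is
# passed to the attack separately, so perturbations are applied in pixel space
x = transform(img).unsqueeze(0).to(device)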

Initialize an attack by importing its attack class.

from torchattack import FGSM, MIFGSM

# Initialize an attack
attack = FGSM(model, normalize, device)

# Initialize an attack with extra params
attack = MIFGSM(model, normalize, device, eps=0.03, steps=10, decay=1.0)

Alternatively, initialize an attack by its name with `create_attack()`.

from torchattack import create_attack

# Initialize FGSM attack with create_attack
attack = create_attack('FGSM', model, normalize, device)

# Initialize PGD attack with specific eps with create_attack
attack = create_attack('PGD', model, normalize, device, eps=0.03)

# Initialize MI-FGSM attack with extra args passed as a dict via `attack_args`
attack_args = {'steps': 10, 'decay': 1.0}
attack = create_attack('MIFGSM', model, normalize, device, eps=0.03, attack_args=attack_args)

Check out `torchattack.eval.runner` for a full example.
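
For reference, a minimal end-to-end sketch with a random stand-in batch (it assumes attacks are callable on `(x, y)` and that `AttackModel` instances forward calls to the wrapped model; swap in a real dataloader for actual use):

import torch
from torchattack import AttackModel, MIFGSM

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = AttackModel.from_pretrained(model_name='resnet50', device=device)
transform, normalize = model.transform, model.normalize

# Random stand-in for a batch of transformed images in [0, 1] and their labels
x = torch.rand(4, 3, 224, 224, device=device)
y = torch.randint(0, 1000, (4,), device=device)

# Craft adversarial examples
attack = MIFGSM(model, normalize, device, eps=0.03, steps=10, decay=1.0)
x_adv = attack(x, y)

# Compare clean vs. adversarial predictions
with torch.no_grad():
    clean_pred = model(normalize(x)).argmax(dim=1)
    adv_pred = model(normalize(x_adv)).argmax(dim=1)
print(f'{(clean_pred != adv_pred).float().mean().item():.2%} of predictions flipped')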

Attacks

| Name | Class Name | Publication | Paper (Open Access) |
| --- | --- | --- | --- |
| **Gradient-based attacks** | | | |
| FGSM | `FGSM` | ICLR 2015 | Explaining and Harnessing Adversarial Examples |
| PGD | `PGD` | ICLR 2018 | Towards Deep Learning Models Resistant to Adversarial Attacks |
| PGD (L2) | `PGDL2` | ICLR 2018 | Towards Deep Learning Models Resistant to Adversarial Attacks |
| MI-FGSM | `MIFGSM` | CVPR 2018 | Boosting Adversarial Attacks with Momentum |
| DI-FGSM | `DIFGSM` | CVPR 2019 | Improving Transferability of Adversarial Examples with Input Diversity |
| TI-FGSM | `TIFGSM` | CVPR 2019 | Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks |
| NI-FGSM | `NIFGSM` | ICLR 2020 | Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks |
| SI-NI-FGSM | `SINIFGSM` | ICLR 2020 | Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks |
| DR | `DR` | CVPR 2020 | Enhancing Cross-Task Black-Box Transferability of Adversarial Examples With Dispersion Reduction |
| VMI-FGSM | `VMIFGSM` | CVPR 2021 | Enhancing the Transferability of Adversarial Attacks through Variance Tuning |
| VNI-FGSM | `VNIFGSM` | CVPR 2021 | Enhancing the Transferability of Adversarial Attacks through Variance Tuning |
| Admix | `Admix` | ICCV 2021 | Admix: Enhancing the Transferability of Adversarial Attacks |
| FIA | `FIA` | ICCV 2021 | Feature Importance-aware Transferable Adversarial Attacks |
| PNA-PatchOut | `PNAPatchOut` | AAAI 2022 | Towards Transferable Adversarial Attacks on Vision Transformers |
| NAA | `NAA` | CVPR 2022 | Improving Adversarial Transferability via Neuron Attribution-Based Attacks |
| SSA | `SSA` | ECCV 2022 | Frequency Domain Model Augmentation for Adversarial Attack |
| TGR | `TGR` | CVPR 2023 | Transferable Adversarial Attacks on Vision Transformers with Token Gradient Regularization |
| ILPD | `ILPD` | NeurIPS 2023 | Improving Adversarial Transferability via Intermediate-level Perturbation Decay |
| DeCoWA | `DeCoWA` | AAAI 2024 | Boosting Adversarial Transferability across Model Genus by Deformation-Constrained Warping |
| VDC | `VDC` | AAAI 2024 | Improving the Adversarial Transferability of Vision Transformers with Virtual Dense Connection |
| **Generative attacks** | | | |
| CDA | `CDA` | NeurIPS 2019 | Cross-Domain Transferability of Adversarial Perturbations |
| LTP | `LTP` | NeurIPS 2021 | Learning Transferable Adversarial Perturbations |
| BIA | `BIA` | ICLR 2022 | Beyond ImageNet Attack: Towards Crafting Adversarial Examples for Black-box Domains |
| **Others** | | | |
| DeepFool | `DeepFool` | CVPR 2016 | DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks |
| GeoDA | `GeoDA` | CVPR 2020 | GeoDA: A Geometric Framework for Black-box Adversarial Attacks |
| SSP | `SSP` | CVPR 2020 | A Self-supervised Approach for Adversarial Robustness |
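
Because each entry's class name doubles as its `create_attack` identifier, sweeping several attacks over one model is straightforward. A sketch with a random stand-in batch (class names are taken from the "Class Name" column above):

import torch
from torchattack import AttackModel, create_attack

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = AttackModel.from_pretrained(model_name='resnet50', device=device)
x = torch.rand(4, 3, 224, 224, device=device)  # stand-in images in [0, 1]
y = torch.randint(0, 1000, (4,), device=device)  # stand-in labels

for name in ['FGSM', 'PGD', 'MIFGSM', 'DIFGSM']:
    attack = create_attack(name, model, model.normalize, device, eps=0.03)
    x_adv = attack(x, y)
    # ...evaluate fooling rate / transferability per attack here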

Development

# Create a virtual environment
python -m venv .venv
source .venv/bin/activate

# Install deps with dev extras
python -m pip install -r requirements.txt
python -m pip install -e ".[dev]"

License

MIT

Download files

Download the file for your platform.

Source Distribution

torchattack-1.1.0.tar.gz (46.0 kB)

Uploaded Source

Built Distribution

torchattack-1.1.0-py3-none-any.whl (71.8 kB)

Uploaded Python 3

File details

Details for the file torchattack-1.1.0.tar.gz.

File metadata

  • Download URL: torchattack-1.1.0.tar.gz
  • Size: 46.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.1.1 CPython/3.12.7

File hashes

Hashes for torchattack-1.1.0.tar.gz:

  • SHA256: 4cf4f738221e138d003ce425e95dcd9907a301fe0f29e4e16354fa5e10a3c252
  • MD5: b99a65186d6d79196872e21b6ad942bd
  • BLAKE2b-256: 35c6595595e3208282864aba66f1f915100b6372af29e0efd79081e8fb717167

Provenance

The following attestation bundles were made for torchattack-1.1.0.tar.gz:

Publisher: pypi-publish.yml on spencerwooo/torchattack

File details

Details for the file torchattack-1.1.0-py3-none-any.whl.

File metadata

  • Download URL: torchattack-1.1.0-py3-none-any.whl
  • Size: 71.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.1.1 CPython/3.12.7

File hashes

Hashes for torchattack-1.1.0-py3-none-any.whl:

  • SHA256: 2a48a5ea60063ad909683fe95a35d45fc36024117ea8f4e2ce64b62809bd2642
  • MD5: 74f7a4bcc25a3c90dcb1af9683eb807f
  • BLAKE2b-256: aac4cfa4441805b44d68c236c3ed250b75e27d5b4fa6ea6cc5f123818b64a8c7

Provenance

The following attestation bundles were made for torchattack-1.1.0-py3-none-any.whl:

Publisher: pypi-publish.yml on spencerwooo/torchattack
