Adversarial Attacks for PyTorch
Adversarial-Attacks-Pytorch
This is a lightweight repository of adversarial attacks for PyTorch.
Torchattacks is a PyTorch library that provides adversarial attacks to generate adversarial examples and to verify the robustness of deep learning models.
Usage
:clipboard: Dependencies
- torch 1.2.0
- python 3.6
:hammer: Installation
pip install torchattacks
or
git clone https://github.com/Harry24k/adversairal-attacks-pytorch

import torchattacks
atk = torchattacks.PGD(model, eps=8/255, alpha=2/255, steps=4)
adversarial_images = atk(images, labels)
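The PGD call above performs iterative sign-gradient ascent with projection back into an eps-ball. As a rough illustration of that update rule (not the library's implementation), here is a minimal NumPy sketch, where grad_fn stands in for the gradient of the loss with respect to the input:

```python
import numpy as np

def pgd_linf(x, grad_fn, eps=8/255, alpha=2/255, steps=4):
    """Illustrative L-inf PGD update: repeated sign-gradient ascent,
    projected back into the eps-ball around the original input and
    clipped to the valid [0, 1] image range."""
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv)                        # loss gradient w.r.t. the input
        x_adv = x_adv + alpha * np.sign(g)        # ascent step of size alpha
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # stay in the valid image range
    return x_adv
```

Each step moves alpha in the sign of the gradient, then projects back within eps of the original image, which is why the resulting perturbation never exceeds eps in the L-inf norm.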
:warning: Precautions
- All images should be scaled to [0, 1] with transforms.ToTensor() before being passed to an attack. To keep the attacks easy to use, reverse normalization is not included in the attack process. To apply input normalization, add a normalization layer to the model. Please refer to the demo.
- All models should return ONLY ONE vector of shape (N, C), where N is the number of inputs and C is the number of classes. Since most models in torchvision.models return one (N, C) vector, torchattacks supports only this form of output. Please check the shape of the model's output carefully.
- Some operations are non-deterministic with float tensors on GPU [discuss]. If you want to get the same adversarial examples from the same inputs with a fixed random seed, run torch.backends.cudnn.deterministic = True [ref].
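The normalization precaution can be pictured as a wrapper: the attack operates on raw [0, 1] images while the model normalizes internally. A minimal NumPy illustration of that pattern (the MEAN/STD values below are the commonly used ImageNet statistics, chosen here only as an example; in PyTorch this would be a small nn.Module prepended to the model):

```python
import numpy as np

# Example per-channel statistics only (the common ImageNet values).
MEAN = np.array([0.485, 0.456, 0.406])
STD = np.array([0.229, 0.224, 0.225])

def with_normalization(model):
    """Return a callable that accepts raw [0, 1] inputs and normalizes
    them internally, so attacks can work directly in image space."""
    def wrapped(x01):
        return model((x01 - MEAN) / STD)
    return wrapped
```

Because normalization lives inside the wrapped model, the attack's clipping to [0, 1] stays meaningful and no reverse normalization is needed.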
Attacks and Papers
The adversarial attacks from the papers below are implemented; the distance measure used by each attack is shown in parentheses.
- Explaining and harnessing adversarial examples (Dec 2014): Paper
- FGSM (Linf)
- DeepFool: a simple and accurate method to fool deep neural networks (Nov 2015): Paper
- DeepFool (L2)
- Adversarial Examples in the Physical World (Jul 2016): Paper
- BIM, or iterative FGSM (Linf)
- Towards Evaluating the Robustness of Neural Networks (Aug 2016): Paper
- CW (L2)
- Ensemble Adversarial Training: Attacks and Defenses (May 2017): Paper
- RFGSM (Linf)
- Towards Deep Learning Models Resistant to Adversarial Attacks (Jun 2017): Paper
- PGD (Linf)
- Boosting Adversarial Attacks with Momentum (Oct 2017): Paper
- MIFGSM (Linf) - :heart_eyes: Contributor zhuangzi926
- Theoretically Principled Trade-off between Robustness and Accuracy (Jan 2019): Paper
- TPGD (Linf)
- Comment on "Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network" (Jul 2019): Paper
- APGD or EOT + PGD (Linf)
- Fast is better than free: Revisiting adversarial training (Jan 2020): Paper
- FFGSM (Linf)
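Several of the Linf attacks above share the same building block: FGSM, the first attack listed, takes a single sign-gradient step, which BIM and PGD then iterate. A hedged NumPy sketch of that single step (illustrative, not the library code):

```python
import numpy as np

def fgsm(x, grad, eps=8/255):
    """Single-step FGSM: perturb each input in the sign of the loss
    gradient by eps, then clip to the valid [0, 1] image range."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)
```

For example, entries with a positive gradient move up by eps, entries with a negative gradient move down by eps, and zero-gradient entries are left unchanged.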
(Figure: side-by-side comparison of clean images and their adversarial counterparts.)
Documentation
:book: ReadTheDocs
The documentation for this package is available on ReadTheDocs.
:bell: Citation
If you want to cite this package, please use the following BibTeX entry:
@article{kim2020torchattacks,
title={Torchattacks: A Pytorch Repository for Adversarial Attacks},
author={Kim, Hoki},
journal={arXiv preprint arXiv:2010.01950},
year={2020}
}
:rocket: Demos
- White-Box Attack with ImageNet (code, nbviewer): Using torchattacks to make adversarial examples with the ImageNet dataset to fool Inception v3.
- Black-Box Attack with CIFAR10 (code, nbviewer): This demo provides an example of a black-box attack with two different models. First, adversarial datasets are generated from a holdout model on CIFAR10 and saved as a torch dataset. Second, the adversarial datasets are used to attack a target model.
- Adversarial Training with MNIST (code, nbviewer): This demo shows how to do adversarial training with this repository, using the MNIST dataset and a custom model. Adversarial training is performed with PGD, and then FGSM is applied to evaluate the model.
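The adversarial training loop in that last demo alternates between generating adversarial examples against the current model and updating the model on them. As a toy NumPy sketch of the idea (logistic regression standing in for the model, an FGSM-style perturbation standing in for PGD; not the demo's code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adv_train_logreg(X, y, eps=0.1, lr=0.5, epochs=200, seed=0):
    """Adversarial training sketch: each epoch perturbs the inputs by
    eps * sign(dLoss/dx) against the current weights, then takes a
    gradient step on the perturbed batch."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        # Gradient of the logistic loss w.r.t. inputs: (p - y) * w per example.
        p = sigmoid(X @ w + b)
        grad_x = np.outer(p - y, w)
        X_adv = X + eps * np.sign(grad_x)      # worst-case inputs for this model
        # Standard gradient step, but on the adversarial batch.
        p_adv = sigmoid(X_adv @ w + b)
        w -= lr * X_adv.T @ (p_adv - y) / len(y)
        b -= lr * np.mean(p_adv - y)
    return w, b
```

Training on the perturbed batch pushes the decision boundary to keep a margin of at least eps around the data, which is the intuition behind PGD adversarial training.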
:anchor: Update Records
Update records can be found here.
Contribution
Contribution is always welcome! Use pull requests :blush:
Recommended Sites and Packages
- Adversarial Attack Packages:
- https://github.com/IBM/adversarial-robustness-toolbox: Adversarial attack and defense package made by IBM. TensorFlow, Keras, and PyTorch supported.
- https://github.com/bethgelab/foolbox: Adversarial attack package made by Bethge Lab. TensorFlow and PyTorch supported.
- https://github.com/tensorflow/cleverhans: Adversarial attack package made by Google Brain. TensorFlow supported.
- https://github.com/BorealisAI/advertorch: Adversarial attack package made by BorealisAI. PyTorch supported.
- https://github.com/DSE-MSU/DeepRobust: Adversarial attack package (especially on GNNs) made by the DSE lab at MSU. PyTorch supported.
- Adversarial Defense Leaderboard:
- Adversarial Attack and Defense Papers:
- https://nicholas.carlini.com/writing/2019/all-adversarial-example-papers.html: A Complete List of All (arXiv) Adversarial Example Papers made by Nicholas Carlini.
- https://github.com/chawins/Adversarial-Examples-Reading-List: Adversarial Examples Reading List made by Chawin Sitawarin.
Hashes for torchattacks-2.6-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | 6d2b025a224cbc917e523a2c7601e0a17188a0f8fea565fc855511164c27fe3a
MD5 | 1f0cbf63def8de650b5b34933cdb6f1f
BLAKE2b-256 | e4b4df1f545c6cc86607f2bb7254f372550ee8a03f93d3ab49e71430116f9ab5