Adversarial Attacks for PyTorch
This is a lightweight repository of adversarial attacks for PyTorch. It contains frequently used attack methods and some utilities. The aim is to make generating adversarial images easy and hassle-free.
Requirements
- torch 1.0.0
- python 3.6
```
pip install torchattacks
```
or
```
git clone https://github.com/HarryK24/adversairal-attacks-pytorch
```
```python
import attacks

pgd_attack = attacks.PGD(model, eps=4/255, alpha=8/255)
adversarial_images = pgd_attack(images, labels)
```
Attacks and Papers
The papers and the methods proposed in each paper, with a brief summary and an example. All methods in this repository are provided as classes, but the methods in each referenced repo are NOT classes.
White Box Attack with ImageNet (code): This demo makes adversarial examples from ImageNet data to fool Inception v3. Because the full ImageNet dataset is too large, the demo uses only the 'Giant Panda' class, but users are free to add other ImageNet images.
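A white-box attack like the one in this demo can be sketched in plain PyTorch with an L-infinity PGD loop (a minimal illustration, not the library's implementation; `pgd_attack`, the step counts, and the epsilon values here are placeholders, and any differentiable classifier can stand in for Inception v3):

```python
import torch
import torch.nn as nn

def pgd_attack(model, images, labels, eps=4/255, alpha=1/255, steps=10):
    """L-infinity PGD: repeatedly step along the sign of the loss
    gradient, projecting back into the eps-ball around the input."""
    adv = images.clone().detach()
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        # gradient ascent step on the loss, then project into the eps-ball
        adv = adv.detach() + alpha * grad.sign()
        adv = images + torch.clamp(adv - images, -eps, eps)
        adv = adv.clamp(0, 1)  # keep a valid image range
    return adv
```

The projection step is what distinguishes PGD from simply repeating FGSM: each iterate stays within `eps` of the original image.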
Black Box Attack with CIFAR10 (code): This demo shows a black-box attack with two different models. First, adversarial datasets are made from a holdout model with CIFAR10. Second, the datasets are used to attack a target model. Accuracy dropped from 77.77% to 5.1%. This code also contains a 'Save & Load' example.
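The two-step transfer attack described above can be sketched as follows (an assumption-laden sketch: `fgsm` and `transfer_attack` are illustrative helpers, not part of this repository, and single-step FGSM stands in for whatever attack the demo actually uses to craft the dataset):

```python
import torch
import torch.nn as nn

def fgsm(model, images, labels, eps=8/255):
    # single-step FGSM used here as the crafting attack (illustrative choice)
    images = images.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(images), labels)
    grad = torch.autograd.grad(loss, images)[0]
    return (images + eps * grad.sign()).clamp(0, 1).detach()

def transfer_attack(holdout, target, images, labels):
    # Step 1: craft adversarial images against the holdout (white-box) model.
    adv = fgsm(holdout, images, labels)
    # Step 2: evaluate the target model, which never saw its own gradients used.
    with torch.no_grad():
        preds = target(adv).argmax(dim=1)
    return (preds == labels).float().mean().item()  # target accuracy under attack
```

The attack is black-box with respect to the target model: only the holdout model's gradients are used, and the drop in the returned accuracy measures how well the adversarial examples transfer.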
Adversarial Training with MNIST (code): This demo shows how to do adversarial training with this repository. MNIST and a custom model are used. Adversarial training is performed with the PGD attack, and the FGSM attack is applied to test the model. Accuracy on clean images is 96.37%, and accuracy under the FGSM attack is 96.11%.
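The core of PGD-based adversarial training can be sketched as a single training step (a minimal sketch, not the demo's code; `adv_train_step` and all hyperparameter values are hypothetical):

```python
import torch
import torch.nn as nn

def adv_train_step(model, optimizer, images, labels, eps=0.3, alpha=0.1, steps=3):
    """One adversarial-training step: craft PGD examples against the
    current model, then update the weights on the adversarial batch."""
    loss_fn = nn.CrossEntropyLoss()
    # inner maximization: PGD on the current model parameters
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        grad = torch.autograd.grad(loss_fn(model(adv), labels), adv)[0]
        adv = adv.detach() + alpha * grad.sign()
        adv = images + torch.clamp(adv - images, -eps, eps)
        adv = adv.clamp(0, 1)
    # outer minimization: standard gradient step on the adversarial batch
    optimizer.zero_grad()
    loss = loss_fn(model(adv), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training on PGD examples and then evaluating under FGSM, as the demo does, tests robustness against a weaker attack than the one used for training, which is why the clean and attacked accuracies end up so close.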