Buddhu is an adversarial example generation library
moorkh : Adversarial Attacks in PyTorch
moorkh is a PyTorch library for generating adversarial examples, with full support for batches of images in all attacks.
About the name
The name moorkh is a Hindi word meaning "fool" in English, which is what we make neural networks look like by generating adversarial examples. Although we also use those examples to make the networks more robust.
Usage
Installation
pip install moorkh

or

git clone https://github.com/akshay-gupta123/moorkh
import torch.nn as nn
import moorkh

# Wrap the model with a normalization layer so attacks operate
# directly on images in the [0, 1] range
norm_layer = moorkh.Normalize(mean, std)
model = nn.Sequential(
    norm_layer,
    model
)
model.eval()
attack = moorkh.FGSM(model)
adversarial_images = attack(images, labels)
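To make the FGSM call above concrete, here is a from-scratch sketch of what a Fast Gradient Sign Method attack does: perturb each input along the sign of the loss gradient. This is an illustrative implementation with a toy model, not moorkh's actual code; the function name `fgsm_attack` and all parameters are assumptions for the example.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, images, labels, eps=0.03):
    """FGSM sketch: x_adv = clamp(x + eps * sign(grad_x loss), 0, 1).
    Not moorkh's actual implementation."""
    images = images.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(images), labels)
    loss.backward()
    # Step in the direction that increases the loss, then keep pixels valid
    adv = images + eps * images.grad.sign()
    return adv.clamp(0, 1).detach()

# Toy model and random "images" just to exercise the sketch
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10)).eval()
images = torch.rand(4, 3, 8, 8)          # batch of 4 images in [0, 1]
labels = torch.randint(0, 10, (4,))
adv_images = fgsm_attack(model, images, labels, eps=0.03)
```

Note that the attack operates on whole batches at once, which is the batch support the library advertises.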
Implemented Attacks
EXPLAINING AND HARNESSING ADVERSARIAL EXAMPLES: FGSM
ADVERSARIAL EXAMPLES IN THE PHYSICAL WORLD: IFGSM
ON THE LIMITATION OF CONVOLUTIONAL NEURAL NETWORKS IN RECOGNIZING NEGATIVE IMAGES: Semantic
ADDING NOISE: Noise
TOWARDS DEEP LEARNING MODELS RESISTANT TO ADVERSARIAL ATTACKS: PGD
ENSEMBLE ADVERSARIAL TRAINING: ATTACKS AND DEFENSES: RFGSM
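The PGD attack from the list above iterates small FGSM-style steps and projects the result back into an eps-ball around the original images. The sketch below assumes an L-infinity threat model; the function name `pgd_attack` and its parameters are illustrative, not moorkh's actual API.

```python
import torch
import torch.nn as nn

def pgd_attack(model, images, labels, eps=0.03, alpha=0.01, steps=5):
    """PGD sketch: repeat signed-gradient steps of size alpha, then
    project back into the eps-ball and the valid pixel range."""
    orig = images.clone().detach()
    adv = orig.clone()
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(model(adv), labels)
        grad, = torch.autograd.grad(loss, adv)
        adv = adv.detach() + alpha * grad.sign()
        adv = orig + (adv - orig).clamp(-eps, eps)  # project to eps-ball
        adv = adv.clamp(0, 1)                       # keep pixels valid
    return adv.detach()

# Toy model and data to exercise the sketch
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10)).eval()
images = torch.rand(2, 3, 8, 8)
labels = torch.randint(0, 10, (2,))
adv_images = pgd_attack(model, images, labels)
```

With steps=1 and alpha=eps this reduces to FGSM, which is why PGD is often described as the iterated version of that attack.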
To-Do's
- Adding more attacks
- Writing documentation
- Adding demo notebooks
- Adding summaries of the implemented papers (for my own understanding)
Contribution
This library is developed as part of my learning; if you find any bug, feel free to create a PR. All kinds of contributions are always welcome!
References
- Adversarial-Robustness-Toolbox by IBM.
- Foolbox by Bethge Lab.
- Cleverhans by Google Brain.
- Reliable and Interpretable Artificial Intelligence, an ETH Zurich course.
- Adversarial Robustness - Theory and Practice, a tutorial by Zico Kolter and Aleksander Madry.