
Buddhu is an adversarial example generation library


moorkh: Adversarial Attacks in PyTorch

moorkh is a PyTorch library for generating adversarial examples, with full support for batches of images in all attacks.

About the name

The name moorkh is a Hindi word meaning fool in English, which is what adversarial examples make of neural networks. Of course, the same examples can also be used to make networks more robust.
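
To make the "fooling" concrete, here is a minimal FGSM-style sketch in plain PyTorch, independent of moorkh's API; model, images, labels, and eps are placeholder names for a classifier, a batch of images in [0, 1], their integer labels, and a perturbation budget:

import torch
import torch.nn.functional as F

def fgsm_example(model, images, labels, eps=8/255):
    # Track gradients with respect to the input pixels
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step each pixel in the direction that increases the loss
    adv_images = images + eps * images.grad.sign()
    # Keep the result a valid image in [0, 1]
    return adv_images.clamp(0, 1).detach()

A perturbation this small is usually imperceptible to a human, yet it is often enough to flip the model's prediction.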

Usage

Installation

  • pip install moorkh or
  • git clone https://github.com/akshay-gupta123/moorkh
import torch.nn as nn

import moorkh

# Fold normalization into the model so the attack operates on unnormalized images
norm_layer = moorkh.Normalize(mean, std)
model = nn.Sequential(
    norm_layer,
    model
)
model.eval()

# Build the attack and generate adversarial examples for a batch of images
attack = moorkh.FGSM(model)
adversarial_images = attack(images, labels)
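
Once the adversarial batch is generated, robust accuracy can be checked by running the wrapped model on it. A minimal sketch, assuming the images, labels, and model variables from the snippet above (they are placeholders, not part of the moorkh API):

import torch

with torch.no_grad():
    clean_acc = (model(images).argmax(dim=1) == labels).float().mean().item()
    robust_acc = (model(adversarial_images).argmax(dim=1) == labels).float().mean().item()

print(f"clean accuracy:  {clean_acc:.3f}")
print(f"robust accuracy: {robust_acc:.3f}")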

Implemented Attacks

To-Do's

  • Adding more attacks
  • Writing documentation
  • Adding demo notebooks
  • Adding summaries of the implemented papers (for my own understanding)

Contribution

This library is developed as part of my learning; if you find any bug, feel free to create a PR. All kinds of contributions are always welcome!

