
Python toolbox to create adversarial examples that fool neural networks

Project description


Foolbox

Foolbox is a Python toolbox to create adversarial examples that fool neural networks. It requires Python, NumPy and SciPy.

Installation

pip install foolbox

We test using Python 2.7, 3.5 and 3.6. Other Python versions might work as well. We recommend using Python 3!

Documentation

Documentation is available on readthedocs: http://foolbox.readthedocs.io/

Example

import foolbox
import keras
from keras.applications.resnet50 import ResNet50, preprocess_input

# instantiate model
keras.backend.set_learning_phase(0)
kmodel = ResNet50(weights='imagenet')
fmodel = foolbox.models.KerasModel(kmodel, bounds=(0, 255), preprocess_fn=preprocess_input)

# get source image and label
image, label = foolbox.utils.imagenet_example()

# apply attack on source image
attack = foolbox.attacks.FGSM(fmodel)
adversarial = attack(image, label)
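If the attack succeeds, adversarial is a NumPy array with the same shape as the source image. A minimal, framework-free sketch of how one might measure the size of the resulting perturbation (the arrays here are hypothetical stand-ins, not ImageNet data):

```python
import numpy as np

# hypothetical stand-ins for the source image and the attack's output
image = np.zeros((4, 4), dtype=np.float32)
adversarial = image + 0.5  # a uniform step, as an FGSM-style attack applies

# the maximum absolute pixel change is the L-infinity size of the perturbation
perturbation = adversarial - image
print(np.abs(perturbation).max())  # 0.5
```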

Interfaces for a range of other deep learning packages such as TensorFlow, PyTorch, Theano, Lasagne and MXNet are available, e.g.

model = foolbox.models.TensorFlowModel(images, logits, bounds=(0, 255))
model = foolbox.models.PyTorchModel(torchmodel, bounds=(0, 255), num_classes=1000)
# etc.
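Whatever the backend, the wrappers present a common surface to the attacks: pixel bounds, a class count, and per-image predictions. A hypothetical, framework-free model illustrating that shared interface (ConstantModel and its fixed logits are inventions for illustration, not part of the Foolbox API):

```python
import numpy as np

# a hypothetical minimal "model" exposing the shared wrapper surface:
# pixel bounds, the number of classes, and per-image predictions
class ConstantModel:
    def __init__(self, bounds, num_classes):
        self._bounds = bounds
        self._num_classes = num_classes

    def bounds(self):
        return self._bounds

    def num_classes(self):
        return self._num_classes

    def predictions(self, image):
        # stands in for a real forward pass through a framework model
        logits = np.zeros(self._num_classes)
        logits[0] = 1.0
        return logits

model = ConstantModel(bounds=(0, 255), num_classes=1000)
print(np.argmax(model.predictions(np.zeros((224, 224, 3)))))  # 0
```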

Different adversarial criteria, such as Top-k misclassification, specific target classes, or target probability values for the original or target class, can be passed to the attack, e.g.

criterion = foolbox.criteria.TargetClass(22)
attack    = foolbox.attacks.FGSM(fmodel, criterion)
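A criterion is simply a rule that decides whether a perturbed input counts as adversarial; for a target-class criterion, the model's top prediction must equal the requested class. A sketch of that check on hypothetical logits:

```python
import numpy as np

target_class = 22

# hypothetical logits for a 1000-class model; class 22 now scores highest
logits = np.zeros(1000)
logits[target_class] = 5.0

# TargetClass-style check: the input is adversarial only if the
# model's top prediction equals the requested target class
is_adversarial = np.argmax(logits) == target_class
print(is_adversarial)  # True
```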

Feature requests and bug reports

We welcome feature requests and bug reports. Just create a new issue on GitHub.

Questions

Depending on the nature of your question feel free to post it as an issue on GitHub, or post it as a question on Stack Overflow using the foolbox tag. We will try to monitor that tag but if you don’t get an answer don’t hesitate to contact us.

Contributions welcome

Foolbox is a work in progress and any input is welcome.

In particular, we encourage users of deep learning frameworks for which we do not yet have built-in support, e.g. Caffe, Caffe2 or CNTK, to contribute the necessary wrappers. Don’t hesitate to contact us if we can be of any help.

Moreover, attack developers are encouraged to share their reference implementations using Foolbox so that they are available to everyone.

Citation

If you find Foolbox useful for your scientific work, please consider citing it in resulting publications. We will soon publish a technical paper and will provide the citation here.

Authors

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

foolbox-0.7.0.tar.gz (202.4 kB)

Uploaded Source

File details

Details for the file foolbox-0.7.0.tar.gz.

File metadata

  • Download URL: foolbox-0.7.0.tar.gz
  • Upload date:
  • Size: 202.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No

File hashes

Hashes for foolbox-0.7.0.tar.gz
  • SHA256: 475c736922b4db8c978c7a189add117bcc8caddb37e016d0a41039c4eaebe2ca
  • MD5: ced9b1ffe663a621294d0c1fb0ba9464
  • BLAKE2b-256: cbd94b9cbac97a662cb8f574f56847ae85f68f6e6e16366006b5d120c10b9b18

