Foolbox
Foolbox is a Python toolbox to create adversarial examples that fool neural networks. It requires Python 3, NumPy and SciPy.
Installation
```shell
pip install foolbox
```
Documentation
Documentation is available on readthedocs: http://foolbox.readthedocs.io/
Example
```python
import foolbox
import keras
from keras.applications.resnet50 import ResNet50, preprocess_input

# instantiate the model
keras.backend.set_learning_phase(0)
kmodel = ResNet50(weights='imagenet')
fmodel = foolbox.models.KerasModel(kmodel, bounds=(0, 255),
                                   preprocess_fn=preprocess_input)

# get a source image and its label
image, label = foolbox.utils.imagenet_example()

# apply the attack to the source image
attack = foolbox.attacks.FGSM(fmodel)
adv_img = attack(image=image, label=label)
```
Interfaces for a range of other deep learning packages such as TensorFlow, PyTorch and Lasagne are available, e.g.

```python
model = foolbox.models.PyTorchModel(torchmodel)
```
Different adversarial criteria such as Top-k, specific target classes or target probability levels can be passed to the attack, e.g.
```python
criterion = foolbox.criteria.TargetClass(22)
attack = foolbox.attacks.FGSM(fmodel, criterion)
```
Development
Foolbox is a work in progress, and feedback and contributions are welcome.
Project details
Download files
Source Distribution
foolbox-0.3.5.tar.gz (199.4 kB)
File details
Details for the file foolbox-0.3.5.tar.gz.
File metadata
- Download URL: foolbox-0.3.5.tar.gz
- Upload date:
- Size: 199.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 4aa38254a925add776dc8e7f088fdb6e239b29321d4371c8d4576720c68dd3b6 |
| MD5 | 4aad5b9f098b08fc6c000848f9e1db3b |
| BLAKE2b-256 | 8712ef0f661aa9cdf7a009e4bf8970e65f542445b54f7c1b89c878140d160d7f |