
An open-source framework for backdoor learning and defense in multimodal contexts

Project description

BackdoorMBTI

BackdoorMBTI is an open-source project that extends unimodal backdoor learning to the multimodal context. We hope BackdoorMBTI can facilitate the analysis and development of backdoor defense methods in multimodal settings.

Main features:

  • poisoned dataset generation (a minimal sketch follows this list)
  • backdoor model generation
  • attack training
  • defense training
  • backdoor evaluation
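
To make "poisoned dataset generation" concrete, here is a minimal BadNets-style sketch. This is not BackdoorMBTI's actual API: the trigger size, target label, and dataset choice are illustrative assumptions (the 0.1 poison ratio matches the default pratio used later in this README).

```python
# Minimal BadNets-style poisoning sketch (illustrative, not BackdoorMBTI's API).
# Assumptions: CIFAR10 via torchvision, a 3x3 white corner patch as the trigger,
# target label 0, and a 10% poison ratio.
import random
from torchvision import datasets, transforms

def poison_dataset(dataset, target_label=0, pratio=0.1, patch_size=3):
    """Stamp a white patch on a random subset of images and relabel them."""
    poison_ids = set(random.sample(range(len(dataset)), int(pratio * len(dataset))))
    poisoned = []
    for i, (img, label) in enumerate(dataset):
        if i in poison_ids:
            img = img.clone()
            img[:, -patch_size:, -patch_size:] = 1.0  # visible, local trigger
            label = target_label
        poisoned.append((img, label))
    return poisoned

train_set = datasets.CIFAR10("data", train=True, download=True,
                             transform=transforms.ToTensor())
poisoned_train = poison_dataset(train_set)
```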

The framework: [framework overview diagram]

Tasks Supported

| Task | Dataset | Modality |
| --- | --- | --- |
| Object Classification | CIFAR10 | Image |
| Object Classification | TinyImageNet | Image |
| Traffic Sign Recognition | GTSRB | Image |
| Facial Recognition | CelebA | Image |
| Sentiment Analysis | SST-2 | Text |
| Sentiment Analysis | IMDb | Text |
| Topic Classification | DBpedia | Text |
| Topic Classification | AG's News | Text |
| Speech Command Recognition | SpeechCommands | Audio |
| Music Genre Classification | GTZAN | Audio |
| Speaker Identification | VoxCeleb1 | Audio |

Backdoor Attacks Supported

| Modality | Attack | Visible | Pattern | Add | Sample Specific | Paper |
| --- | --- | --- | --- | --- | --- | --- |
| Image | AdaptiveBlend | Invisible | Global | Yes | No | Revisiting the Assumption of Latent Separability for Backdoor Defenses |
| Image | BadNets | Visible | Local | Yes | No | BadNets: Evaluating Backdooring Attacks on Deep Neural Networks |
| Image | Blend (under test) | Invisible | Global | Yes | Yes | Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning |
| Image | Blind (under test) | Visible | Local | Yes | Yes | Blind Backdoors in Deep Learning Models |
| Image | BPP | Invisible | Global | Yes | No | BppAttack: Stealthy and Efficient Trojan Attacks Against Deep Neural Networks via Image Quantization and Contrastive Adversarial Learning |
| Image | DynaTrigger | Visible | Local | Yes | Yes | Dynamic Backdoor Attacks Against Machine Learning Models |
| Image | EMBTROJAN (under test) | Invisible | Local | Yes | No | An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks |
| Image | LC | Invisible | Global | No | Yes | Label-Consistent Backdoor Attacks |
| Image | Lowfreq | Invisible | Global | Yes | Yes | Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective |
| Image | PNoise | Invisible | Global | Yes | Yes | Use Procedural Noise to Achieve Backdoor Attack |
| Image | Refool | Invisible | Global | Yes | No | Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks |
| Image | SBAT | Invisible | Global | No | Yes | Stealthy Backdoor Attack with Adversarial Training |
| Image | SIG | Invisible | Global | Yes | No | A New Backdoor Attack in CNNs by Training Set Corruption Without Label Poisoning |
| Image | SSBA | Invisible | Global | No | Yes | Invisible Backdoor Attack with Sample-Specific Triggers |
| Image | TrojanNN (under test) | Visible | Local | Yes | Yes | Trojaning Attack on Neural Networks |
| Image | UBW (under test) | Invisible | Global | Yes | No | Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection |
| Image | WaNet | Invisible | Global | No | Yes | WaNet: Imperceptible Warping-Based Backdoor Attack |
| Text | AddSent | Visible | Local | Yes | No | A Backdoor Attack Against LSTM-Based Text Classification Systems |
| Text | BadNets | Visible | Local | Yes | No | BadNets: Evaluating Backdooring Attacks on Deep Neural Networks |
| Text | BITE | Invisible | Local | Yes | Yes | Textual Backdoor Attacks with Iterative Trigger Injection |
| Text | LWP | Visible | Local | Yes | No | Backdoor Attacks on Pre-trained Models by Layerwise Weight Poisoning |
| Text | STYLEBKD | Visible | Global | No | Yes | Mind the Style of Text! Adversarial and Backdoor Attacks Based on Text Style Transfer |
| Text | SYNBKD | Invisible | Global | No | Yes | Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger |
| Audio | Baasv (under test) | - | Global | Yes | No | Backdoor Attack Against Speaker Verification |
| Audio | Blend | - | Local | Yes | No | Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning |
| Audio | DABA | - | Global | Yes | No | Opportunistic Backdoor Attacks: Exploring Human-Imperceptible Vulnerabilities on Speech Recognition Systems |
| Audio | GIS | - | Global | No | No | Going in Style: Audio Backdoors Through Stylistic Transformations |
| Audio | UltraSonic | - | Local | Yes | No | Can You Hear It? Backdoor Attacks via Ultrasonic Triggers |
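
To make the Visible/Pattern taxonomy above concrete: BadNets-style attacks stamp a visible, local patch (as sketched earlier), while Blend-style attacks mix a trigger image into the entire input at low opacity, giving an invisible, global pattern. A minimal sketch, where the blend ratio alpha is an illustrative assumption:

```python
# Blend-style trigger: alpha-blend a fixed trigger image over the whole input.
# alpha controls the stealth/effectiveness trade-off; 0.1 is illustrative.
import torch

def blend_trigger(img: torch.Tensor, trigger: torch.Tensor, alpha: float = 0.1) -> torch.Tensor:
    """img and trigger are CxHxW tensors with values in [0, 1]."""
    return (1 - alpha) * img + alpha * trigger
```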

Backdoor Defenses Supported

| Defense | Modality | Input | Stage | Output | Paper |
| --- | --- | --- | --- | --- | --- |
| STRIP | Audio, Image, Text | backdoor model, clean dataset | post-training | clean dataset | STRIP: A Defence Against Trojan Attacks on Deep Neural Networks |
| AC | Audio, Image, Text | backdoor model, clean dataset, poison dataset | post-training | clean model, clean dataset | Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering |
| FT | Audio, Image, Text | backdoor model, clean dataset | in-training | clean model | Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks |
| FP | Audio, Image, Text | backdoor model, clean dataset | post-training | clean model | Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks |
| ABL | Audio, Image, Text | backdoor model, poison dataset | in-training | clean model | Anti-Backdoor Learning: Training Clean Models on Poisoned Data |
| CLP | Audio, Image, Text | backdoor model | post-training | clean model | Data-Free Backdoor Removal Based on Channel Lipschitzness |
| NC | Image | backdoor model, clean dataset | post-training | clean model, trigger pattern | Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks |
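
For intuition about how a detection defense such as STRIP works: it superimposes clean images onto a suspect input and measures the entropy of the model's predictions; a trigger tends to dominate the perturbed input, so persistently confident (low-entropy) predictions flag the input as trojaned. A minimal sketch, assuming a PyTorch classifier and a pool of clean images; the number of overlays, blend ratio, and detection threshold are illustrative:

```python
# STRIP-style detection sketch (assumes `model` is a PyTorch classifier and
# `clean_images` is a list of clean CxHxW tensors; parameters are illustrative).
import torch
import torch.nn.functional as F

@torch.no_grad()
def strip_entropy(model, x, clean_images, n=16, alpha=0.5):
    """Mean prediction entropy of x blended with random clean images.
    Low entropy suggests a trigger that survives heavy perturbation."""
    entropies = []
    for _ in range(n):
        overlay = clean_images[torch.randint(len(clean_images), (1,)).item()]
        p = F.softmax(model(((1 - alpha) * x + alpha * overlay).unsqueeze(0)), dim=1)
        entropies.append(-(p * p.clamp_min(1e-12).log()).sum().item())
    return sum(entropies) / n

# Flag as trojaned when the entropy falls below a calibrated threshold:
# is_trojaned = strip_entropy(model, x, clean_images) < threshold
```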

Installation

To set up the virtual environment:

```
conda create -n bkdmbti python=3.10
conda activate bkdmbti
pip install -r requirements.txt
```

Quick Start

Download Data

Download the data manually if it cannot be downloaded automatically. Download scripts for some datasets are provided in the scripts folder.
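
For the common benchmarks, a few lines of torchvision/torchaudio are usually enough for a manual download. A sketch (paths are illustrative, and the scripts folder remains the authoritative source for datasets these libraries do not cover):

```python
# Manual download sketch using torchvision/torchaudio (paths are illustrative).
import os
from torchvision import datasets
import torchaudio

datasets.CIFAR10("data/cifar10", train=True, download=True)
datasets.GTSRB("data/gtsrb", split="train", download=True)
os.makedirs("data/speechcommands", exist_ok=True)
torchaudio.datasets.SPEECHCOMMANDS("data/speechcommands", download=True)
```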

Backdoor Attack

Here we provide an example to get started quickly with the attack experiments and reproduce the BadNets backdoor attack results. We use ResNet-18 as the default model and 0.1 as the default poison ratio.

```
cd scripts
python atk_train.py --data_type image --dataset cifar10 --attack_name badnet --model resnet18 --pratio 0.1 --num_workers 4 --epochs 100
python atk_train.py --data_type audio --dataset speechcommands --attack_name blend --model audiocnn --pratio 0.1 --num_workers 4 --epochs 100 --add_noise true
python atk_train.py --data_type text --dataset sst2 --attack_name addsent --model bert --pratio 0.1 --num_workers 4 --epochs 100 --mislabel true
```

Use the arguments --add_noise true and --mislabel true to add perturbations to the data. After the experiment, the metrics ACC (Accuracy), ASR (Attack Success Rate), and RA (Robustness Accuracy) are collected in the attack phase. To learn more about the attack command, run python atk_train.py -h.
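
For reference, ASR is conventionally the fraction of triggered samples (excluding those already belonging to the target class) that the model classifies as the attacker's target label, and RA is the fraction it still classifies correctly despite the trigger. A sketch of both metrics, assuming a PyTorch model and a poisoned test loader that yields triggered inputs paired with their original labels (names are illustrative, not BackdoorMBTI's internals):

```python
# Sketch of ASR and RA computation (illustrative, not BackdoorMBTI's code).
import torch

@torch.no_grad()
def asr_and_ra(model, poison_loader, target_label=0):
    hit = correct = total = 0
    for x, y in poison_loader:          # x: triggered inputs, y: original labels
        pred = model(x).argmax(dim=1)
        mask = y != target_label        # exclude samples already in the target class
        hit += (pred[mask] == target_label).sum().item()
        correct += (pred[mask] == y[mask]).sum().item()
        total += mask.sum().item()
    return hit / total, correct / total  # (ASR, RA)
```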

Backdoor Defense

Here we provide a defense example. It depends on the backdoor model generated in the attack phase, so run the corresponding attack experiment before the defense phase.

```
cd scripts
python def_train.py --data_type image --dataset cifar10 --attack_name badnet --pratio 0.1 --defense_name finetune --num_workers 4 --epochs 10
python def_train.py --data_type audio --dataset speechcommands --attack_name blend --model audiocnn --pratio 0.1 --defense_name fineprune --num_workers 4 --epochs 1 --add_noise true
python def_train.py --data_type text --dataset sst2 --attack_name addsent --model bert --pratio 0.1 --defense_name strip --num_workers 4 --epochs 1 --mislabel true
```

To learn more about the defense command, run python def_train.py -h. In the defense phase, detection accuracy is collected if the defense is a detection method; the sanitized dataset is then used to retrain the model, and the ACC, ASR, and RA metrics are collected after retraining.
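
As an illustration of the simplest defense above, fine-tuning (FT) simply continues training the backdoored model on a small clean subset, which tends to weaken the trigger association while preserving clean accuracy. A minimal sketch; the optimizer, learning rate, and data loader are illustrative assumptions:

```python
# Fine-tuning (FT) defense sketch: keep training the backdoored model on a
# small clean subset. Hyperparameters are illustrative assumptions.
import torch
import torch.nn.functional as F

def finetune(model, clean_loader, epochs=10, lr=1e-3):
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for x, y in clean_loader:
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
    return model
```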

Results

More results can be found in results.md.

Download files

Download the file for your platform.

Source Distribution

backdoormbti-0.2.1.tar.gz (5.5 MB)

Uploaded Source

Built Distribution

backdoormbti-0.2.1-py3-none-any.whl (11.3 MB)

Uploaded Python 3

File details

Details for the file backdoormbti-0.2.1.tar.gz.

File metadata

  • Download URL: backdoormbti-0.2.1.tar.gz
  • Size: 5.5 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.10.15

File hashes

Hashes for backdoormbti-0.2.1.tar.gz

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 7212fbc9e67a82720c81e159b75952350be62a847f031d1983f47880726447b4 |
| MD5 | 311331d819818c2e1917d82ee917c563 |
| BLAKE2b-256 | ab31290c2f955ce595f6d2de2a3dbf3ff0ec210ab1cc6103ada559f4c89e0422 |
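
To verify a downloaded file against the hashes above, a standard check with Python's hashlib:

```python
# Verify the source distribution against the SHA256 hash listed above.
import hashlib

with open("backdoormbti-0.2.1.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()
assert digest == "7212fbc9e67a82720c81e159b75952350be62a847f031d1983f47880726447b4"
```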


File details

Details for the file backdoormbti-0.2.1-py3-none-any.whl.

File metadata

  • Download URL: backdoormbti-0.2.1-py3-none-any.whl
  • Size: 11.3 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.10.15

File hashes

Hashes for backdoormbti-0.2.1-py3-none-any.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 0a6554a60b2f7b252c1e5580fd4810c2d7e6b9e4f37659934475f455a5c065e8 |
| MD5 | 37f073f2e4dac1663fb766744bb86baf |
| BLAKE2b-256 | 2abaa9360e4d758d82533ffb79c4e238021c10af3e4313495608a94934f904c2 |

