
Minimal code implementation for our USENIX paper `On the Security Risks of AutoML`.

Project description

This project is a minimal runnable subset cut from trojanzoo, which contains more datasets, models, attacks, and defenses. This repo will not be maintained.

This is a minimal code implementation of our USENIX'22 paper On the Security Risks of AutoML.


Abstract

The artifact demonstrates the vulnerability gap between manually designed models and AutoML models against various kinds of attacks (adversarial, poisoning, backdoor, extraction, and membership inference) in the image classification domain. It implements all datasets, models, and attacks used in our paper.
We expect the artifact to support the paper's claim that AutoML models are more vulnerable than manual models against these attacks, which can be explained by their small gradient variance.

Checklist

  • Binary: available on PyPI for any platform.
  • Model: our pretrained models are available on Google Drive (link). Place them following the path convention {model_dir}/image/{dataset}/{model}.pth.
  • Data set: CIFAR10, CIFAR100 and ImageNet32.
    Use the --download flag to download them automatically on the first run.
    ImageNet32 requires manual set-up on its website for legal reasons.
  • Run-time environment:
    Any platform (Windows and Ubuntu tested).
    PyTorch and torchvision required (CUDA 11.3 recommended).
    adversarial-robustness-toolbox required for the extraction and membership attacks.
  • Hardware: GPU with CUDA support is recommended.
  • Execution: model training and the backdoor attack are time-consuming; they take more than half a day on an NVIDIA Quadro RTX 6000.
  • Metrics: Model accuracy, attack success rate, clean accuracy drop and cross entropy.
  • Output: console output and saved model files (.pth).
  • Experiments: shell scripts. We recommend running each script 3-5 times to reduce the randomness of the experiments.
  • How much disk space is required (approximately):
    less than 5GB.
  • How much time is needed to prepare workflow (approximately): within 1 hour.
  • How much time is needed to complete experiments (approximately): 3-4 days.
  • Publicly available: on GitHub.
  • Code licenses: GPL-3.
  • Archived: GitHub commit ff315234561602203615d11166f8f346b4f29dd4.

Description

How to access

Hardware Dependencies

We recommend a GPU with CUDA 11.3 and cuDNN 8.0.
Less than 5 GB of disk space is needed.

Software Dependencies

You need to manually install python==3.9, pytorch==1.10.x, and torchvision==0.11.x.

ART (IBM) is required for the extraction and membership attacks.
pip install adversarial-robustness-toolbox
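If the extraction or membership scripts fail with an import error, a quick stdlib-only check can confirm ART is present (a sketch; the only assumption is that adversarial-robustness-toolbox installs the top-level module `art`, which it does):

```python
# Sanity-check that the optional ART dependency is importable before
# launching the extraction / membership attack scripts.
import importlib.util

def has_art() -> bool:
    # adversarial-robustness-toolbox provides the top-level module "art"
    return importlib.util.find_spec("art") is not None

if not has_art():
    print("Missing dependency: pip install adversarial-robustness-toolbox")
```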

Data set

We use CIFAR10, CIFAR100 and ImageNet32 datasets.
Use the --download flag to download them automatically on the first run.
ImageNet32 requires manual set-up on its website for legal reasons.

Models

Our pretrained models are available on Google Drive (link). Place them following the path convention {model_dir}/image/{dataset}/{model}.pth.
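As a sketch of that path convention (the helper name is ours, not part of the package), the expected checkpoint location can be built as:

```python
from pathlib import Path

def pretrained_model_path(model_dir: str, dataset: str, model: str) -> Path:
    # {model_dir}/image/{dataset}/{model}.pth
    return Path(model_dir) / "image" / dataset / f"{model}.pth"

# e.g. a DARTS checkpoint for CIFAR10:
print(pretrained_model_path("./models", "cifar10", "darts").as_posix())
# → models/image/cifar10/darts.pth
```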

Installation

(optional) Config Path

You can edit the config files to customize the data storage location and many other default settings. See /configs_example for an example config setting.
We support 3 config locations (in ascending priority):

  • package:
    (DO NOT MODIFY)
    autovul/base/configs/*.yml
    autovul/vision/configs/*.yml
  • user:
    ~/.autovul/configs/base/*.yml
    ~/.autovul/configs/vision/*.yml
  • workspace:
    ./configs/base/*.yml
    ./configs/vision/*.yml
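The ascending-priority merge can be sketched as follows (the keys shown are hypothetical stand-ins for settings parsed from the *.yml files above):

```python
# Later sources override earlier ones: package < user < workspace.
def merge_configs(package: dict, user: dict, workspace: dict) -> dict:
    merged: dict = {}
    for cfg in (package, user, workspace):
        merged.update(cfg)
    return merged

cfg = merge_configs(
    {"data_dir": "./data", "batch_size": 128},  # package defaults
    {"data_dir": "~/autovul_data"},             # user config
    {"batch_size": 96},                         # workspace config
)
# cfg == {"data_dir": "~/autovul_data", "batch_size": 96}
```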

Experiment Workflow

Bash Files

Check the bash files under /bash to reproduce our paper results.

Train Models

You first need to run /bash/train.sh to get pretrained models.
If you run it for the first time, add the --download flag to download the dataset:
bash ./bash/train.sh "--download"

Training all models takes a relatively long time, so we provide our pretrained models on Google Drive (link). Place them following the path convention {model_dir}/image/{dataset}/{model}.pth.

Run Attacks

/bash/adv_attack.sh
/bash/poison.sh
/bash/backdoor.sh
/bash/extraction.sh
/bash/membership.sh
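The attack scripts can be driven in sequence with a small wrapper, sketched below (the wrapper is ours; it assumes only the script paths listed above):

```python
import subprocess
from pathlib import Path

ATTACK_SCRIPTS = ["adv_attack.sh", "poison.sh", "backdoor.sh",
                  "extraction.sh", "membership.sh"]

def attack_commands(bash_dir: str = "./bash") -> list:
    # One "bash <script>" invocation per attack.
    return [["bash", str(Path(bash_dir) / name)] for name in ATTACK_SCRIPTS]

if __name__ == "__main__":
    for cmd in attack_commands():
        subprocess.run(cmd, check=True)  # stop on the first failing script
```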

Run Other Exps

/bash/grad_var.sh
/bash/mitigation_backdoor.sh
/bash/mitigation_extraction.sh

For mitigation experiments, the architecture names in our paper map to:

  • darts-i: diy_deep
  • darts-ii: diy_no_skip
  • darts-iii: diy_deep_noskip

These are the 3 options for --model_arch {arch} (used together with --model darts).
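The mapping above can be captured in a small helper (the function name is ours) that produces the corresponding command-line arguments:

```python
# Paper architecture names -> --model_arch values (used with --model darts).
PAPER_TO_ARCH = {
    "darts-i":   "diy_deep",
    "darts-ii":  "diy_no_skip",
    "darts-iii": "diy_deep_noskip",
}

def mitigation_args(paper_name: str) -> list:
    return ["--model", "darts", "--model_arch", PAPER_TO_ARCH[paper_name]]

print(mitigation_args("darts-iii"))
# → ['--model', 'darts', '--model_arch', 'diy_deep_noskip']
```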

Evaluation and Expected Result

Our paper claims that AutoML models are more vulnerable than manual models against various kinds of attacks, which can be explained by their low gradient variance.

Training

Most models reach around 96%-97% accuracy on CIFAR10.

Attack

For AutoML models on CIFAR10 (relative to manual models),

  • adversarial
    higher attack success rate, by around 10% (±4%).
  • poison
    lower accuracy drop, by around 5% (±2%).
  • backdoor
    higher attack success rate, by around 2% (±1%); lower accuracy drop, by around 1% (±1%).
  • extraction
    lower inference cross entropy, by around 0.3 (±0.1).
  • membership
    higher AUC, by around 0.04 (±0.01).

Others

  • gradient variance
    AutoML models have lower gradient variance (around 2.2).
  • mitigation architecture
    deep architectures (darts-i, darts-iii) show larger cross entropy under the extraction attack (around 0.5) and a higher accuracy drop under the poisoning attack (around 7%).

Experiment Customization

Use the -h or --help flag with the example Python files to check available arguments.

Download files


Source Distribution

autovul-1.0.2.tar.gz (199.2 kB)

Uploaded Source

Built Distribution

autovul-1.0.2-py3-none-any.whl (16.7 kB)

Uploaded Python 3

File details

Details for the file autovul-1.0.2.tar.gz.

File metadata

  • Download URL: autovul-1.0.2.tar.gz
  • Upload date:
  • Size: 199.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.2 importlib_metadata/4.8.1 pkginfo/1.7.1 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.3 CPython/3.9.7

File hashes

Hashes for autovul-1.0.2.tar.gz

  • SHA256: 8f7319c62df6b08672a86958285a5528abc7840a93d2b1f05d1e7bdffcf3262b
  • MD5: b7d5952011e400c32502fd6100735f48
  • BLAKE2b-256: 5e2825124a36e7c740960d715be3b85374579a67d2bed242ee6028402a02c1db


File details

Details for the file autovul-1.0.2-py3-none-any.whl.

File metadata

  • Download URL: autovul-1.0.2-py3-none-any.whl
  • Upload date:
  • Size: 16.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.2 importlib_metadata/4.8.1 pkginfo/1.7.1 requests/2.26.0 requests-toolbelt/0.9.1 tqdm/4.62.3 CPython/3.9.7

File hashes

Hashes for autovul-1.0.2-py3-none-any.whl

  • SHA256: d03d84de3b96dd2f23c37999fd29e2cbbcafbb345a3728588ae0e63f26fe7aff
  • MD5: 1765d40ac31efd5cca2c1d2ee48f6993
  • BLAKE2b-256: b2723fa3efc801be528f48c097389d05966b274fe87b225a09cec4297e00e9d3

