
A PyTorch model zoo consisting of robust (adversarially trained) image classifiers and otherwise equivalent normal models for research purposes.

Project description

Robust Models are less Over-Confident

Julia Grabinski, Paul Gavrikov, Janis Keuper, Margret Keuper

Presented at: Thirty-sixth Conference on Neural Information Processing Systems (NeurIPS)

Paper | ArXiv | HQ Poster | Talk

We empirically show that adversarially robust models are less over-confident than their non-robust counterparts.

Abstract: Despite the success of convolutional neural networks (CNNs) on many academic benchmarks of computer vision tasks, their application in the real world still faces fundamental challenges, such as the inherent lack of robustness unveiled by adversarial attacks. These attacks aim to manipulate the network's prediction by adding a small amount of noise to the input. In turn, adversarial training (AT) aims to achieve robustness against such attacks by including adversarial samples in the training set. However, a general analysis of the reliability and model calibration of these robust models beyond adversarial robustness is still pending. In this paper, we analyze a variety of adversarially trained models that achieve high robust accuracies against state-of-the-art attacks, and we show that AT has an interesting side effect: it leads to models that are significantly less over-confident in their decisions, even on clean data, than non-robust models. Further, our analysis of robust models shows that not only AT but also the model's building blocks (activation functions and pooling) have a strong influence on the models' confidence.
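The "small amount of noise" attack described above can be illustrated with a minimal sketch. This is not the paper's evaluation code; it is a toy FGSM-style step on a binary logistic-regression model (chosen so the input gradient has a closed form), just to show how a tiny, gradient-aligned perturbation can flip a prediction:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One FGSM-style step for binary logistic regression.

    For p = sigmoid(w.x + b) with cross-entropy loss, the gradient of
    the loss w.r.t. the input is (p - y) * w, so the attack reduces to
    a signed step of size eps on each input dimension.
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    grad = (p - y) * w                  # dL/dx for cross-entropy loss
    return x + eps * np.sign(grad)      # move input in loss-increasing direction

# toy point, correctly classified as class 1 (score > 0)
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])

x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.6)

clean_score = x @ w + b      # positive: predicted class 1
adv_score = x_adv @ w + b    # pushed across the decision boundary
```

With this toy geometry the perturbed score drops below zero, i.e. the prediction flips even though each input coordinate moved by at most 0.6.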

Model Zoo
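The paper's central quantity is how confident a model is in its top prediction. As a hedged, package-independent sketch (the zoo's own loading API is not shown here, and these logits are made-up stand-ins, not real model outputs), one can compare a "normal" and a "robust" model by the mean maximum softmax probability over a batch of logits:

```python
import numpy as np

def mean_max_confidence(logits):
    """Mean maximum softmax probability over a batch of logits.

    `logits` has shape (N, num_classes). Lower values on clean data
    indicate a less over-confident model.
    """
    z = logits - logits.max(axis=1, keepdims=True)   # stabilize exp
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1).mean()

# illustrative logits only: the "normal" model produces much sharper
# (more peaked) logits than the "robust" one for the same inputs.
normal_logits = np.array([[8.0, 0.5, 0.2],
                          [7.5, 1.0, 0.1]])
robust_logits = np.array([[2.0, 0.5, 0.2],
                          [1.5, 1.0, 0.1]])

print(mean_max_confidence(normal_logits))   # close to 1.0
print(mean_max_confidence(robust_logits))   # noticeably lower
```

In practice one would feed the same clean images through a robust model and its otherwise-equivalent normal counterpart from the zoo and compare these statistics.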

Citation

If you would like to reference our evaluation, please consider citing our paper:

@inproceedings{grabinski2022robust,
  title     = {Robust Models are less Over-Confident},
  author    = {Grabinski, Julia and Gavrikov, Paul and Keuper, Janis and Keuper, Margret},
  booktitle = {Advances in Neural Information Processing Systems},
  publisher = {Curran Associates, Inc.},
  year      = {2022},
  volume    = {35},
  url       = {}
}

Legal

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. Please note that the robust and normal ImageNet models are not provided by us and are licensed differently.

Project details


Download files

Download the file for your platform.

Source Distribution

robustvsnormalzoo-1.0.0.tar.gz (4.3 kB view details)

Uploaded Source

File details

Details for the file robustvsnormalzoo-1.0.0.tar.gz.

File metadata

  • Download URL: robustvsnormalzoo-1.0.0.tar.gz
  • Upload date:
  • Size: 4.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.1 importlib_metadata/4.6.1 pkginfo/1.7.1 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.61.1 CPython/3.8.10

File hashes

Hashes for robustvsnormalzoo-1.0.0.tar.gz

  • SHA256: 4fe6360e31e7b0f985e2ced5f4bfe3a6627d19086f46c84d11c2a0ea118ff2f5
  • MD5: c583cfa2dbeb992188d6569d964b825e
  • BLAKE2b-256: d455b71cd9b80a690b63de00e8de390931273f017d5eb5a4c6568f628900d28a

