A Library for Out-of-Distribution Detection with PyTorch
Out-of-Distribution (OOD) Detection with Deep Neural Networks based on PyTorch.
The library provides:
Out-of-Distribution Detection Methods
Loss Functions
Datasets
Neural Network Architectures as well as pretrained weights
Useful Utilities
and is designed to be compatible with frameworks like pytorch-lightning and pytorch-segmentation-models. The library also covers some methods from closely related fields such as Open-Set Recognition, Novelty Detection, Confidence Estimation and Anomaly Detection.
📚 Documentation
The documentation is available here.
NOTE: An important convention adopted in pytorch-ood is that OOD detectors predict outlier scores that are larger for outliers than for inliers. If the scores predicted by a detector do not match the formulas in the corresponding publication, we may have multiplied them by negative one to comply with this convention.
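For example, the maximum softmax probability is large for inliers, so a detector following this convention would report its negation. A minimal sketch of the idea (not the library's internal implementation):

import torch

def outlier_score_from_softmax(logits: torch.Tensor) -> torch.Tensor:
    # The max softmax probability is high for in-distribution inputs,
    # so we negate it to obtain a score that is larger for outliers.
    return -torch.softmax(logits, dim=1).max(dim=1).values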
⏳ Quick Start
Load a model pre-trained on CIFAR-10 with the Energy-Bounded Learning Loss [6], predict on some dataset data_loader using Energy-based Out-of-Distribution Detection [6], and calculate common OOD detection metrics:
from pytorch_ood.model import WideResNet
from pytorch_ood.detector import EnergyBased
from pytorch_ood.utils import OODMetrics
# Create Neural Network
model = WideResNet(pretrained="er-cifar10-tune").eval().cuda()
# Create detector
detector = EnergyBased(model)
# Evaluate
metrics = OODMetrics()
for x, y in data_loader:
    metrics.update(detector(x.cuda()), y)
print(metrics.compute())
You can find more examples in the documentation.
🛠️ Installation
The package can be installed via PyPI:
pip install pytorch-ood
Dependencies
torch
torchvision
scipy
torchmetrics
Optional Dependencies
📦 Implemented
Detectors:
Detector | Description | Year
---|---|---
OpenMax | Implementation of the OpenMax layer as proposed in the paper Towards Open Set Deep Networks. | 2016
Monte Carlo Dropout | Implements Monte Carlo Dropout. | 2016
Maximum Softmax Probability | Implements the softmax baseline for OOD and error detection. | 2017
ODIN | A preprocessing method for inputs that aims to increase the discriminability of the softmax outputs for in- and out-of-distribution data. | 2018
Mahalanobis | Implements the Mahalanobis method. | 2018
Energy-Based OOD Detection | Implements the energy score of Energy-based Out-of-distribution Detection. | 2020
Entropy | Uses entropy to detect OOD inputs. | 2021
Maximum Logit | Implements the MaxLogit method. | 2022
KL-Matching | Implements the KL-Matching method for multi-class classification. | 2022
ViM | Implements Virtual Logit Matching. | 2022
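All detectors expose the same call interface, so switching methods usually only means changing the constructor. A minimal sketch, assuming the MaxSoftmax detector from pytorch_ood.detector and reusing the model and data_loader from the quick start:

from pytorch_ood.detector import MaxSoftmax
from pytorch_ood.utils import OODMetrics

# Maximum Softmax Probability baseline; other detectors are constructed analogously.
detector = MaxSoftmax(model)

metrics = OODMetrics()
for x, y in data_loader:
    metrics.update(detector(x.cuda()), y)

print(metrics.compute())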
Objective Functions:
Objective Function | Description | Year
---|---|---
Objectosphere | Implementation of the paper Reducing Network Agnostophobia. | 2016
Center Loss | Generalized version of the Center Loss from the paper A Discriminative Feature Learning Approach for Deep Face Recognition. | 2016
Outlier Exposure | Implementation of the paper Deep Anomaly Detection With Outlier Exposure. | 2018
Deep SVDD | Implementation of the Deep Support Vector Data Description from the paper Deep One-Class Classification. | 2018
Energy Regularization | Adds a regularization term to the cross-entropy that aims to increase the energy gap between in-distribution and OOD samples. | 2020
CAC Loss | Class Anchor Clustering loss from Class Anchor Clustering: A Distance-based Loss for Training Open Set Classifiers. | 2021
Entropy Maximization | Entropy maximization and meta-classification for OOD detection in semantic segmentation. | 2021
II Loss | Implementation of the II loss function from Learning a Neural Network-based Representation for Open Set Recognition. | 2022
MCHAD Loss | Implementation of the MCHAD loss from the paper Multi-Class Hypersphere Anomaly Detection. | 2022
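Most of these objectives can be dropped into a standard PyTorch training loop; several of them expect batches that mix in-distribution samples with auxiliary OOD samples marked by negative target labels. A minimal training-loop sketch, assuming OutlierExposureLoss is importable from pytorch_ood.loss and that OOD samples in train_loader carry labels < 0 (both assumptions):

import torch
from pytorch_ood.loss import OutlierExposureLoss  # assumed import path

criterion = OutlierExposureLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

model.train()
for x, y in train_loader:  # assumed: batches mix ID samples and OOD samples with labels < 0
    optimizer.zero_grad()
    logits = model(x.cuda())
    loss = criterion(logits, y.cuda())  # ID and OOD samples are distinguished by the sign of the label
    loss.backward()
    optimizer.step()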
Image Datasets:
Dataset | Description | Year
---|---|---
TinyImages | The TinyImages dataset is often used as auxiliary OOD training data. However, its use is discouraged. | 2012
Textures | Textures dataset, also known as DTD, often used as OOD examples. | 2013
FoolingImages | OOD images generated to fool certain deep neural networks. | 2014
TinyImages300k | A cleaned version of the TinyImages dataset with 300,000 images, often used as auxiliary OOD training data. | 2018
MNIST-C | Corrupted version of MNIST. | 2019
CIFAR10-C | Corrupted version of CIFAR-10. | 2019
CIFAR100-C | Corrupted version of CIFAR-100. | 2019
ImageNet-C | Corrupted version of ImageNet. | 2019
ImageNet-A, -O, -R | Different outlier variants for ImageNet. | 2019
MVTech-AD | MVTech anomaly segmentation dataset. | 2021
StreetHazards | Anomaly segmentation dataset. | 2022
PixMix | PixMix image augmentation method. | 2022
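These image datasets follow the torchvision dataset interface, so an OOD test set can be concatenated with an in-distribution test set to build an evaluation loader like the one used in the quick start. A minimal sketch, assuming the Textures dataset lives in pytorch_ood.dataset.img and that ToUnknown from pytorch_ood.utils relabels OOD samples with negative targets (both assumptions):

from torch.utils.data import ConcatDataset, DataLoader
from torchvision.datasets import CIFAR10
import torchvision.transforms as T

from pytorch_ood.dataset.img import Textures  # assumed import path
from pytorch_ood.utils import ToUnknown       # assumed: assigns a negative (unknown) label

trans = T.Compose([T.Resize((32, 32)), T.ToTensor()])

# In-distribution test data keeps its class labels (>= 0).
dataset_in = CIFAR10(root="data", train=False, transform=trans, download=True)
# OOD data gets negative labels so the metrics can tell the two apart.
dataset_out = Textures(root="data", transform=trans, target_transform=ToUnknown(), download=True)

data_loader = DataLoader(ConcatDataset([dataset_in, dataset_out]), batch_size=128)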
Text Datasets:
Dataset | Description | Year
---|---|---
Multi30k | Multi-30k dataset, as used by Hendrycks et al. in the OOD baseline paper. | 2016
WikiText2 | Texts from Wikipedia, often used as auxiliary OOD training data. | 2016
WikiText103 | Texts from Wikipedia, often used as auxiliary OOD training data. | 2016
NewsGroup20 | Texts from different newsgroups, as used by Hendrycks et al. in the OOD baseline paper. |
🤝 Contributing
We encourage everyone to contribute to this project by adding implementations of OOD detection methods, datasets, etc., or by checking the existing implementations for bugs.
📝 Citing
pytorch-ood was presented at a CVPR Workshop in 2022. If you use it in a scientific publication, please consider citing:
@InProceedings{kirchheim2022pytorch,
  author    = {Kirchheim, Konstantin and Filax, Marco and Ortmeier, Frank},
  title     = {PyTorch-OOD: A Library for Out-of-Distribution Detection Based on PyTorch},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2022},
  pages     = {4351-4360}
}
🛡️ License
The code is licensed under Apache 2.0. We have taken care to ensure that any third-party code included or adapted has a compatible (permissive) license, such as MIT or BSD. The legal implications of using pre-trained models in commercial services are, to our knowledge, not fully understood.
🔗 References