Calculate common OOD detection metrics
Project description
OOD Detection Metrics
Functions for computing metrics commonly used in the field of out-of-distribution (OOD) detection.
Installation
With pip
pip install ood-metrics
With Conda
conda install -c conda-forge ood-metrics
Metrics functions
AUROC
Calculate and return the area under the ROC curve using unthresholded predictions on the data and binary true labels.
from ood_metrics import auroc
labels = [0, 0, 0, 1, 0]
scores = [0.1, 0.3, 0.6, 0.9, 1.3]
assert auroc(scores, labels) == 0.75
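For reference, the same value can be reproduced with scikit-learn (a sketch assuming scikit-learn is installed; it uses the same higher-score-means-positive convention):
from sklearn.metrics import roc_auc_score
labels = [0, 0, 0, 1, 0]
scores = [0.1, 0.3, 0.6, 0.9, 1.3]
# scikit-learn takes (labels, scores); the result agrees with auroc above.
assert roc_auc_score(labels, scores) == 0.75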
AUPR
Calculate and return the area under the Precision-Recall curve using unthresholded predictions on the data and binary true labels.
from ood_metrics import aupr
labels = [0, 0, 0, 1, 0]
scores = [0.1, 0.3, 0.6, 0.9, 1.3]
assert aupr(scores, labels) == 0.25
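The value corresponds to the area under the precision-recall curve integrated over recall. A minimal cross-check with scikit-learn (an illustration of that integration, not necessarily how this package computes it internally):
from sklearn.metrics import precision_recall_curve, auc
labels = [0, 0, 0, 1, 0]
scores = [0.1, 0.3, 0.6, 0.9, 1.3]
precision, recall, _ = precision_recall_curve(labels, scores)
# Trapezoidal area under the precision-recall curve, integrated over recall.
assert auc(recall, precision) == 0.25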
FPR @ 95% TPR
Return the FPR when TPR is at least 95%.
from ood_metrics import fpr_at_95_tpr
labels = [0, 0, 0, 1, 0]
scores = [0.1, 0.3, 0.6, 0.9, 1.3]
assert fpr_at_95_tpr(scores, labels) == 0.25
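Conceptually, this is the false positive rate at the first ROC operating point whose TPR reaches 95%. A sketch of that reading using scikit-learn's roc_curve (an illustration, not necessarily the package's exact implementation):
import numpy as np
from sklearn.metrics import roc_curve
labels = [0, 0, 0, 1, 0]
scores = [0.1, 0.3, 0.6, 0.9, 1.3]
fpr, tpr, _ = roc_curve(labels, scores)
# FPR at the first threshold where TPR is at least 95%.
assert fpr[np.argmax(tpr >= 0.95)] == 0.25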
Detection Error
Return the misclassification probability when TPR is 95%.
from ood_metrics import detection_error
labels = [0, 0, 0, 1, 0]
scores = [0.1, 0.3, 0.6, 0.9, 1.3]
assert detection_error(scores, labels) == 0.05
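The 0.05 in this example is consistent with a detection error defined as the class-ratio-weighted sum of the miss rate and false positive rate, minimized over thresholds with TPR of at least 95%. The sketch below reproduces the value under that assumption; check the package source for the exact definition:
import numpy as np
from sklearn.metrics import roc_curve
labels = [0, 0, 0, 1, 0]
scores = [0.1, 0.3, 0.6, 0.9, 1.3]
fpr, tpr, _ = roc_curve(labels, scores)
pos_ratio = np.mean(np.array(labels) == 1)  # fraction of positives (0.2)
neg_ratio = 1 - pos_ratio                   # fraction of negatives (0.8)
# Weighted error at each operating point with TPR >= 95%; keep the minimum.
errors = [neg_ratio * (1 - t) + pos_ratio * f for f, t in zip(fpr, tpr) if t >= 0.95]
assert min(errors) == 0.05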
Calculate all stats
Using predictions and labels, return a dictionary containing all novelty detection performance statistics.
from ood_metrics import calc_metrics
labels = [0, 0, 0, 1, 0]
scores = [0.1, 0.3, 0.6, 0.9, 1.3]
assert calc_metrics(scores, labels) == {
'fpr_at_95_tpr': 0.25,
'detection_error': 0.05,
'auroc': 0.75,
'aupr_in': 0.25,
'aupr_out': 0.94375
}
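Here 'aupr_in' is the AUPR computed with the given labels as the positive class, while 'aupr_out' treats the other class as positive, which is equivalent to flipping the labels and negating the scores. The 0.94375 above can be reproduced under that assumed convention (it matches the asserted values, but verify against the package source):
from sklearn.metrics import precision_recall_curve, auc
labels = [0, 0, 0, 1, 0]
scores = [0.1, 0.3, 0.6, 0.9, 1.3]
# 'aupr_out': treat the other class as positive by flipping labels and negating scores.
flipped_labels = [1 - l for l in labels]
negated_scores = [-s for s in scores]
precision, recall, _ = precision_recall_curve(flipped_labels, negated_scores)
assert round(auc(recall, precision), 6) == 0.94375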
Plotting functions
Plot ROC
Plot an ROC curve based on unthresholded predictions and true binary labels.
from ood_metrics import plot_roc
labels = [0, 0, 0, 1, 0]
scores = [0.1, 0.3, 0.6, 0.9, 1.3]
plot_roc(scores, labels)
# Generates a Matplotlib plot of the ROC curve
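Because the plotting helpers draw with Matplotlib, the result can be displayed or saved with the usual Matplotlib calls (a sketch assuming the curve is drawn on the current figure; the output filename is arbitrary):
import matplotlib.pyplot as plt
from ood_metrics import plot_roc
labels = [0, 0, 0, 1, 0]
scores = [0.1, 0.3, 0.6, 0.9, 1.3]
plot_roc(scores, labels)
plt.savefig("roc_curve.png")  # or plt.show() in an interactive session
The same pattern applies to plot_pr and plot_barcode below.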
Plot PR
Plot a Precision-Recall curve based on unthresholded predictions and true binary labels.
from ood_metrics import plot_pr
labels = [0, 0, 0, 1, 0]
scores = [0.1, 0.3, 0.6, 0.9, 1.3]
plot_pr(scores, labels)
# Generates a Matplotlib plot of the Precision-Recall curve
Plot Barcode
Plot a visualization showing inliers and outliers sorted by their predicted novelty scores.
from ood_metrics import plot_barcode
labels = [0, 0, 0, 1, 0]
scores = [0.1, 0.3, 0.6, 0.9, 1.3]
plot_barcode(scores, labels)
# Shows a visualization of the labels sorted according to their scores
Download files
Download the file for your platform.
Source Distribution: ood_metrics-1.1.2.tar.gz
Built Distribution: ood_metrics-1.1.2-py3-none-any.whl
File details
Details for the file ood_metrics-1.1.2.tar.gz.
File metadata
- Download URL: ood_metrics-1.1.2.tar.gz
- Upload date:
- Size: 4.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.6.1 CPython/3.10.13 Linux/5.15.0-1041-azure
File hashes
Algorithm | Hash digest
---|---
SHA256 | 5ca0ac4dbb9e2b5dfbc7022000f581eabc46466e0722a487caa221628dca6f5b
MD5 | c9c684e5a73d4024a24f590115f0045e
BLAKE2b-256 | 4165f80a7f4e2213ce0f9ea0dc45f1542492d820b841c626818cbd1b3bd8e1fb
File details
Details for the file ood_metrics-1.1.2-py3-none-any.whl.
File metadata
- Download URL: ood_metrics-1.1.2-py3-none-any.whl
- Upload date:
- Size: 6.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.6.1 CPython/3.10.13 Linux/5.15.0-1041-azure
File hashes
Algorithm | Hash digest
---|---
SHA256 | 1247819d025032ee632fbc15340517d2faadcc0333a17ce15953901dfb00e5b0
MD5 | 5267e176c8cf59db11f1f0a33b3348d5
BLAKE2b-256 | 1e07ad1c339c1673704446aaec3ff98ea61d0599bdaad4c97346efd227d23355