Algorithms for outlier and adversarial instance detection, concept drift and metrics.
alibi-detect is an open source Python library focused on outlier, adversarial and concept drift detection. The package aims to cover both online and offline detectors for tabular data, images and time series. The outlier detection methods should allow the user to identify global, contextual and collective outliers.
Installation and Usage
alibi-detect can be installed from PyPI:
pip install alibi-detect
We will use the VAE outlier detector to illustrate the API.
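The detector wraps user-defined tf.keras encoder and decoder networks. A minimal sketch for 32x32x3 image inputs is shown first; the architecture and layer sizes here are illustrative, not prescribed by the library:

import tensorflow as tf
from tensorflow.keras.layers import Conv2D, Conv2DTranspose, Dense, InputLayer, Reshape

latent_dim = 1024

# encoder: maps (32, 32, 3) images to a convolutional feature map
encoder_net = tf.keras.Sequential([
    InputLayer(input_shape=(32, 32, 3)),
    Conv2D(64, 4, strides=2, padding='same', activation='relu'),
    Conv2D(128, 4, strides=2, padding='same', activation='relu'),
    Conv2D(512, 4, strides=2, padding='same', activation='relu')
])

# decoder: maps latent vectors of size latent_dim back to image space
decoder_net = tf.keras.Sequential([
    InputLayer(input_shape=(latent_dim,)),
    Dense(4 * 4 * 128),
    Reshape(target_shape=(4, 4, 128)),
    Conv2DTranspose(256, 4, strides=2, padding='same', activation='relu'),
    Conv2DTranspose(64, 4, strides=2, padding='same', activation='relu'),
    Conv2DTranspose(3, 4, strides=2, padding='same', activation='sigmoid')
])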
from alibi_detect.od import OutlierVAE
from alibi_detect.utils.saving import save_detector, load_detector
# initialize and fit detector
od = OutlierVAE(threshold=0.1, encoder_net=encoder_net, decoder_net=decoder_net, latent_dim=1024)
od.fit(X_train)
# make predictions
preds = od.predict(X_test)
# save and load detectors
filepath = './my_detector/'
save_detector(od, filepath)
od = load_detector(filepath)
The predictions are returned in a dictionary with keys meta and data. meta contains the detector's metadata, while data is itself a dictionary with the actual predictions: it holds the outlier, adversarial or drift scores as well as the predictions of whether instances are e.g. outliers or not. The exact details can vary slightly from method to method, so we encourage the reader to become familiar with the types of algorithms supported.
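For the VAE detector above, the returned dictionary can be inspected along these lines (the keys under data shown here, is_outlier and instance_score, follow the outlier detector convention; other detector types use e.g. is_adversarial or is_drift):

preds = od.predict(X_test)
print(preds['meta'])                        # detector metadata, e.g. name and data type
print(preds['data']['is_outlier'][:5])      # binary outlier flags per instance
print(preds['data']['instance_score'][:5])  # outlier scores per instance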
The save and load functionality for the Prophet time series outlier detector is currently experiencing issues in Python 3.6 but works in Python 3.7.
Supported Algorithms
The following tables show the advised use cases for each algorithm. The column Feature Level indicates whether the detection can be done at the feature level, e.g. per pixel for an image. Check the algorithm reference list for more information with links to the documentation and original papers as well as examples for each of the detectors.
Outlier Detection
Detector | Tabular | Image | Time Series | Text | Categorical Features | Online | Feature Level |
---|---|---|---|---|---|---|---|
Isolation Forest | ✔ | ✘ | ✘ | ✘ | ✔ | ✘ | ✘ |
Mahalanobis Distance | ✔ | ✘ | ✘ | ✘ | ✔ | ✔ | ✘ |
AE | ✔ | ✔ | ✘ | ✘ | ✘ | ✘ | ✔ |
VAE | ✔ | ✔ | ✘ | ✘ | ✘ | ✘ | ✔ |
AEGMM | ✔ | ✔ | ✘ | ✘ | ✘ | ✘ | ✘ |
VAEGMM | ✔ | ✔ | ✘ | ✘ | ✘ | ✘ | ✘ |
Likelihood Ratios | ✔ | ✔ | ✔ | ✘ | ✔ | ✘ | ✔ |
Prophet | ✘ | ✘ | ✔ | ✘ | ✘ | ✘ | ✘ |
Spectral Residual | ✘ | ✘ | ✔ | ✘ | ✘ | ✔ | ✔ |
Seq2Seq | ✘ | ✘ | ✔ | ✘ | ✘ | ✘ | ✔ |
Adversarial Detection
Detector | Tabular | Image | Time Series | Text | Categorical Features | Online | Feature Level |
---|---|---|---|---|---|---|---|
Adversarial AE | ✔ | ✔ | ✘ | ✘ | ✘ | ✘ | ✘ |
Drift Detection
Detector | Tabular | Image | Time Series | Text | Categorical Features | Online | Feature Level |
---|---|---|---|---|---|---|---|
Kolmogorov-Smirnov | ✔ | ✔ | ✘ | ✔ | ✔ | ✔ | ✔ |
Maximum Mean Discrepancy | ✔ | ✔ | ✘ | ✔ | ✔ | ✘ | ✘ |
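As an illustration of the drift detection API, a minimal sketch using the Kolmogorov-Smirnov detector; the argument names p_val and X_ref and the is_drift key are assumed to match this release and may differ in other versions:

import numpy as np
from alibi_detect.cd import KSDrift

X_ref = np.random.randn(1000, 10)   # reference data the detector compares against
X = np.random.randn(100, 10)        # new batch to test for drift

cd = KSDrift(p_val=0.05, X_ref=X_ref)
preds = cd.predict(X)
print(preds['data']['is_drift'])    # 1 if drift is detected in the batch, 0 otherwise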
Reference List
Outlier Detection
- Isolation Forest (FT Liu et al., 2008)
  - Example: Network Intrusion
- Mahalanobis Distance (Mahalanobis, 1936)
  - Example: Network Intrusion
- Auto-Encoder (AE)
  - Example: CIFAR10
- Variational Auto-Encoder (VAE) (Kingma et al., 2013)
  - Examples: Network Intrusion, CIFAR10
- Auto-Encoding Gaussian Mixture Model (AEGMM) (Zong et al., 2018)
  - Example: Network Intrusion
- Variational Auto-Encoding Gaussian Mixture Model (VAEGMM)
  - Example: Network Intrusion
- Likelihood Ratios (Ren et al., 2019)
  - Examples: Genome, Fashion-MNIST vs. MNIST
- Prophet Time Series Outlier Detector (Taylor et al., 2018)
  - Example: Weather Forecast
- Spectral Residual Time Series Outlier Detector (Ren et al., 2019)
  - Example: Synthetic Dataset
- Sequence-to-Sequence (Seq2Seq) Outlier Detector (Sutskever et al., 2014; Park et al., 2017)
  - Examples: ECG, Synthetic Dataset
Adversarial Detection

- Adversarial Auto-Encoder
  - Example: CIFAR10

Drift Detection

- Kolmogorov-Smirnov
  - Example: CIFAR10, movie reviews
- Maximum Mean Discrepancy (Gretton et al., 2012)
  - Example: CIFAR10, movie reviews
Datasets
The package also contains functionality in alibi_detect.datasets to easily fetch a number of datasets for different modalities. For each dataset, either the data and labels or a Bunch object with the data, labels and optional metadata are returned. Example:
from alibi_detect.datasets import fetch_ecg
(X_train, y_train), (X_test, y_test) = fetch_ecg(return_X_y=True)
Sequential Data and Time Series
- Genome Dataset: fetch_genome
  - Bacteria genomics dataset for out-of-distribution detection, released as part of Likelihood Ratios for Out-of-Distribution Detection. From the original TL;DR: The dataset contains genomic sequences of 250 base pairs from 10 in-distribution bacteria classes for training, 60 OOD bacteria classes for validation, and another 60 different OOD bacteria classes for test. There are respectively 1, 7 and again 7 million sequences in the training, validation and test sets. For detailed info on the dataset check the README. Example:

from alibi_detect.datasets import fetch_genome
(X_train, y_train), (X_val, y_val), (X_test, y_test) = fetch_genome(return_X_y=True)
- ECG 5000: fetch_ecg
  - 5000 ECGs, originally obtained from Physionet.
- NAB: fetch_nab
  - Any univariate time series in a DataFrame from the Numenta Anomaly Benchmark. A list with the available time series can be retrieved using alibi_detect.datasets.get_list_nab().
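A usage sketch, assuming fetch_nab follows the same return_X_y convention as the other fetch functions:

from alibi_detect.datasets import fetch_nab, get_list_nab
ts_names = get_list_nab()                        # names of the available NAB time series
X, y = fetch_nab(ts_names[0], return_X_y=True)   # series values and outlier labels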
Images
- CIFAR-10-C: fetch_cifar10c
  - CIFAR-10-C (Hendrycks & Dietterich, 2019) contains the test set of CIFAR-10, but corrupted and perturbed by various types of noise, blur, brightness etc. at different levels of severity, leading to a gradual decline in a classification model's performance trained on CIFAR-10. fetch_cifar10c allows you to pick any severity level or corruption type. The list with available corruption types can be retrieved with alibi_detect.datasets.corruption_types_cifar10c(). The dataset can be used in research on robustness and drift. The original data can be found here. Example:

from alibi_detect.datasets import fetch_cifar10c
corruption = ['gaussian_noise', 'motion_blur', 'brightness', 'pixelate']
X, y = fetch_cifar10c(corruption=corruption, severity=5, return_X_y=True)
- Adversarial CIFAR-10: fetch_attack
  - Load adversarial instances on a ResNet-56 classifier trained on CIFAR-10. Available attacks: Carlini-Wagner ('cw') and SLIDE ('slide'). Example:

from alibi_detect.datasets import fetch_attack
(X_train, y_train), (X_test, y_test) = fetch_attack('cifar10', 'resnet56', 'cw', return_X_y=True)
Tabular
- KDD Cup '99: fetch_kdd
  - Dataset with different types of computer network intrusions. fetch_kdd allows you to select a subset of network intrusions as targets or pick only specified features. The original data can be found here.
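A usage sketch (keyword arguments for selecting intrusion targets or feature subsets are omitted; the return_X_y convention is assumed to match the other fetch functions):

from alibi_detect.datasets import fetch_kdd
X, y = fetch_kdd(return_X_y=True)   # network traffic features and outlier labels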
Models
Models and/or building blocks that can be useful outside of outlier, adversarial or drift detection can be found under alibi_detect.models. Main implementations:
- PixelCNN++: alibi_detect.models.pixelcnn.PixelCNN
- Variational Autoencoder: alibi_detect.models.autoencoder.VAE
- Sequence-to-sequence model: alibi_detect.models.autoencoder.Seq2Seq
- ResNet: alibi_detect.models.resnet
  - Pre-trained ResNet-20/32/44 models on CIFAR-10 can be found on our Google Cloud Bucket and can be fetched as follows:

from alibi_detect.utils.fetching import fetch_tf_model
model = fetch_tf_model('cifar10', 'resnet32')
Integrations
The integrations folder contains various wrapper tools that allow the alibi-detect algorithms to be used in production machine learning systems, with examples of how to deploy outlier and adversarial detectors with KFServing.
Dependencies
creme
dask[array]
matplotlib
numpy
pandas
opencv-python
Pillow
scipy
scikit-image
scikit-learn
tensorflow>=2.0.0
tensorflow_probability>=0.8
transformers>=2.10.0
Extra dependencies for OutlierProphet (install via pip install alibi-detect[prophet]):
fbprophet>=0.5,<0.7
holidays==0.9.11
Citations
If you use alibi-detect in your research, please consider citing it.
BibTeX entry:
@software{alibi-detect,
title = {{Alibi-Detect}: Algorithms for outlier and adversarial instance detection, concept drift and metrics.},
author = {Van Looveren, Arnaud and Vacanti, Giovanni and Klaise, Janis and Coca, Alexandru},
url = {https://github.com/SeldonIO/alibi-detect},
version = {0.4.3},
date = {2020-10-08},
year = {2019}
}