
Algorithms for outlier detection, concept drift and metrics.


Alibi Detect is an open source Python library focused on outlier, adversarial and drift detection. The package aims to cover both online and offline detectors for tabular data, text, images and time series. The outlier detection methods should allow the user to identify global, contextual and collective outliers.

For more background on the importance of monitoring outliers and distributions in a production setting, check out this talk from the Challenges in Deploying and Monitoring Machine Learning Systems ICML 2020 workshop, based on the paper Monitoring and explainability of models in production and referencing Alibi Detect.

Installation and Usage

alibi-detect can be installed from PyPI:

pip install alibi-detect

We will use the VAE outlier detector to illustrate the API.
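
encoder_net and decoder_net below are user-defined tf.keras models passed to the detector. A minimal convolutional sketch, assuming for illustration 32x32x3 image inputs, could look like this:

import tensorflow as tf
from tensorflow.keras.layers import Conv2D, Conv2DTranspose, Dense, Flatten, InputLayer, Reshape

latent_dim = 1024

# encoder: maps (32, 32, 3) images to a flat feature vector
encoder_net = tf.keras.Sequential([
    InputLayer(input_shape=(32, 32, 3)),
    Conv2D(64, 4, strides=2, padding='same', activation=tf.nn.relu),
    Conv2D(128, 4, strides=2, padding='same', activation=tf.nn.relu),
    Flatten()
])

# decoder: maps latent samples back to image space
decoder_net = tf.keras.Sequential([
    InputLayer(input_shape=(latent_dim,)),
    Dense(8 * 8 * 128, activation=tf.nn.relu),
    Reshape((8, 8, 128)),
    Conv2DTranspose(64, 4, strides=2, padding='same', activation=tf.nn.relu),
    Conv2DTranspose(3, 4, strides=2, padding='same', activation='sigmoid')
])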

from alibi_detect.od import OutlierVAE
from alibi_detect.utils.saving import save_detector, load_detector

# initialize and fit detector
od = OutlierVAE(threshold=0.1, encoder_net=encoder_net, decoder_net=decoder_net, latent_dim=1024)
od.fit(X_train)

# make predictions
preds = od.predict(X_test)

# save and load detectors
filepath = './my_detector/'
save_detector(od, filepath)
od = load_detector(filepath)

The predictions are returned in a dictionary with keys meta and data. meta contains the detector's metadata while data is itself a dictionary with the actual predictions: it contains the outlier, adversarial or drift scores as well as the predictions of whether instances are e.g. outliers or not. The exact details can vary slightly from method to method, so we encourage the reader to become familiar with the types of algorithms supported.
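
For the VAE detector above, for instance, the data dictionary contains instance-level outlier scores and binary outlier flags (the key names shown here are those returned by the outlier detectors; other detector types use analogous keys such as is_drift):

print(preds['meta'])                      # detector metadata (name, detector type, data type)
scores = preds['data']['instance_score']  # outlier score per instance
outliers = preds['data']['is_outlier']    # 0/1 outlier flag per instance, based on the threshold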

The save and load functionality for the Prophet time series outlier detector is currently experiencing issues in Python 3.6 but works in Python 3.7.

Supported Algorithms

The following lists show the advised use cases for each algorithm. Each detector targets one or more data modalities (tabular, image, time series, text); some additionally handle categorical features, support online detection, or can score at the feature level, e.g. per pixel for an image. Check the algorithm reference list for more information, with links to the documentation and original papers as well as examples for each of the detectors.

Outlier Detection

  • Isolation Forest
  • Mahalanobis Distance
  • AE
  • VAE
  • AEGMM
  • VAEGMM
  • Likelihood Ratios
  • Prophet
  • Spectral Residual
  • Seq2Seq

Adversarial Detection

  • Adversarial AE
  • Model distillation

Drift Detection

  • Kolmogorov-Smirnov
  • Maximum Mean Discrepancy
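
As a quick illustration of the drift detection API, the sketch below initialises a Kolmogorov-Smirnov detector on a reference set X_ref and tests a batch X of new data. The keyword names (p_val, X_ref) reflect the 0.4.x API and may differ in later releases; the returned dictionary follows the meta/data structure described above.

from alibi_detect.cd import KSDrift

# initialise the detector with reference data and a p-value threshold
cd = KSDrift(p_val=.05, X_ref=X_ref)

# check a batch of new data for drift
preds = cd.predict(X)
print(preds['data']['is_drift'])  # 1 if drift was detected, 0 otherwise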

Reference List

Outlier Detection

Adversarial Detection

Drift Detection

Datasets

The package also contains functionality in alibi_detect.datasets to easily fetch a number of datasets for different modalities. For each dataset, either the data and labels or a Bunch object with the data, labels and optional metadata is returned. Example:

from alibi_detect.datasets import fetch_ecg

(X_train, y_train), (X_test, y_test) = fetch_ecg(return_X_y=True)

Sequential Data and Time Series

  • Genome Dataset: fetch_genome

    • Bacteria genomics dataset for out-of-distribution detection, released as part of Likelihood Ratios for Out-of-Distribution Detection. From the original TL;DR: the dataset contains genomic sequences of 250 base pairs from 10 in-distribution bacteria classes for training, 60 OOD bacteria classes for validation, and another 60 different OOD bacteria classes for test. The training, validation and test sets contain 1 million, 7 million and 7 million sequences respectively. For detailed info on the dataset check the README.
    from alibi_detect.datasets import fetch_genome
    
    (X_train, y_train), (X_val, y_val), (X_test, y_test) = fetch_genome(return_X_y=True)
    
  • ECG 5000: fetch_ecg

    • 5,000 ECGs, originally obtained from Physionet.
  • NAB: fetch_nab

    • Any univariate time series in a DataFrame from the Numenta Anomaly Benchmark. A list with the available time series can be retrieved using alibi_detect.datasets.get_list_nab().
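    Example (a minimal sketch fetching the first available series; both functions are listed above):
    from alibi_detect.datasets import fetch_nab, get_list_nab
    
    ts_names = get_list_nab()      # names of the available NAB time series
    data = fetch_nab(ts_names[0])  # fetch one univariate time series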

Images

  • CIFAR-10-C: fetch_cifar10c

    • CIFAR-10-C (Hendrycks & Dietterich, 2019) contains the test set of CIFAR-10, but corrupted and perturbed by various types of noise, blur, brightness etc. at different levels of severity, leading to a gradual decline in the performance of a classification model trained on CIFAR-10. fetch_cifar10c allows you to pick any severity level or corruption type. The list with available corruption types can be retrieved with alibi_detect.datasets.corruption_types_cifar10c(). The dataset can be used in research on robustness and drift. The original data can be found here. Example:
    from alibi_detect.datasets import fetch_cifar10c
    
    corruption = ['gaussian_noise', 'motion_blur', 'brightness', 'pixelate']
    X, y = fetch_cifar10c(corruption=corruption, severity=5, return_X_y=True)
    
  • Adversarial CIFAR-10: fetch_attack

    • Load adversarial instances on a ResNet-56 classifier trained on CIFAR-10. Available attacks: Carlini-Wagner ('cw') and SLIDE ('slide'). Example:
    from alibi_detect.datasets import fetch_attack
    
    (X_train, y_train), (X_test, y_test) = fetch_attack('cifar10', 'resnet56', 'cw', return_X_y=True)
    

Tabular

  • KDD Cup '99: fetch_kdd
    • Dataset with different types of computer network intrusions. fetch_kdd allows you to select a subset of network intrusions as targets or pick only specified features. The original data can be found here.
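    Example (a minimal sketch using the return_X_y convention described in the Datasets section):
    from alibi_detect.datasets import fetch_kdd
    
    X, y = fetch_kdd(return_X_y=True)  # network intrusion features and target labels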

Models

Models and/or building blocks that can be useful outside of outlier, adversarial or drift detection can be found under alibi_detect.models. Main implementations:

  • PixelCNN++: alibi_detect.models.pixelcnn.PixelCNN

  • Variational Autoencoder: alibi_detect.models.autoencoder.VAE

  • Sequence-to-sequence model: alibi_detect.models.autoencoder.Seq2Seq

  • ResNet: alibi_detect.models.resnet

    • Pre-trained ResNet-20/32/44 models on CIFAR-10 can be found on our Google Cloud Bucket and can be fetched as follows:
    from alibi_detect.utils.fetching import fetch_tf_model
    
    model = fetch_tf_model('cifar10', 'resnet32')
    

Integrations

The integrations folder contains various wrapper tools to allow the alibi-detect algorithms to be used in production machine learning systems, with examples of how to deploy outlier and adversarial detectors with KFServing.

Dependencies

creme
dask[array]
matplotlib
numpy
pandas
opencv-python
Pillow
scipy
scikit-image
scikit-learn
tensorflow>=2.0.0
tensorflow_probability>=0.8
transformers>=2.10.0

Extra dependencies for OutlierProphet (install via pip install alibi-detect[prophet]):

fbprophet>=0.5,<0.7
holidays==0.9.11
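
A minimal OutlierProphet sketch once these extras are installed; the threshold keyword and the DataFrames with 'ds' (timestamp) and 'y' (value) columns are assumptions based on the underlying fbprophet conventions:

from alibi_detect.od import OutlierProphet

# df_train / df_test: pandas DataFrames with Prophet-style 'ds' and 'y' columns
od = OutlierProphet(threshold=.8)
od.fit(df_train)
preds = od.predict(df_test)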

Citations

If you use alibi-detect in your research, please consider citing it.

BibTeX entry:

@software{alibi-detect,
  title = {{Alibi-Detect}: Algorithms for outlier and adversarial instance detection, concept drift and metrics.},
  author = {Van Looveren, Arnaud and Vacanti, Giovanni and Klaise, Janis and Coca, Alexandru},
  url = {https://github.com/SeldonIO/alibi-detect},
  version = {0.4.4},
  date = {2020-10-08},
  year = {2019}
}
