Fast model predictions for ML Endpoints

Project description

pure-predict speeds up and slims down machine learning prediction applications. It is a foundational tool for serverless inference or small batch prediction with popular machine learning frameworks like scikit-learn and fasttext. It implements the predict methods of these frameworks in pure Python.

Primary Use Cases

The primary use case for pure-predict is the following scenario:

  1. A model is trained in an environment without strong container footprint constraints. Perhaps a long running “offline” job on one or many machines where installing a number of python packages from PyPI is not at all problematic.

  2. At prediction time the model needs to be served behind an API. Typical access patterns are to request a prediction for one “record” (one “row” in a numpy array or one string of text to classify) per request or a mini-batch of records per request.

  3. Preferred infrastructure for the prediction service is either serverless (AWS Lambda) or a container service where the memory footprint of the container is constrained.

  4. The fitted model object’s artifacts needed for prediction (coefficients, weights, vocabulary, decision tree artifacts, etc.) are relatively small (10s to 100s of MBs).

In this scenario, a container service with a large dependency footprint can be overkill for a microservice, particularly if the access patterns favor the pricing model of a serverless application. Additionally, for smaller models and single-record predictions per request, the numpy and scipy functionality in the prediction methods of popular machine learning frameworks works against the application in terms of latency, underperforming pure python in some cases.

Check out the blog post for more information on the motivation and use cases of pure-predict.

Package Details

It is a Python package for machine learning prediction distributed under the Apache 2.0 software license. It contains multiple subpackages which mirror their open source counterparts (scikit-learn, fasttext, etc.). Each subpackage has utilities to convert a fitted machine learning model into a custom object with prediction methods that mirror their native counterparts but are implemented in pure python. Additionally, all relevant model artifacts needed for prediction are converted to pure python.

A pure-predict model object can then be pickled and later unpickled without any 3rd party dependencies other than pure-predict.

This eliminates the need to have large dependency packages installed in order to make predictions with fitted machine learning models using popular open source packages for training models. These dependencies (numpy, scipy, scikit-learn, fasttext, etc.) are large in size and not always necessary to make fast and accurate predictions. Additionally, they rely on C extensions that may not be ideal for serverless applications with a python runtime.
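To make "converted to pure python" concrete: a converted linear classifier keeps its coefficients in plain Python lists and computes predictions with ordinary arithmetic. The class below is an illustrative sketch of the idea, not the actual pure_sklearn implementation:

# Illustrative sketch only; the real pure_sklearn classes differ in detail.
# The point: coefficients live in plain lists and predict needs no numpy.
class PurePythonLinearClassifier:
    def __init__(self, coef, intercept, classes):
        self.coef_ = coef            # list of lists, one row per class
        self.intercept_ = intercept  # list of floats
        self.classes_ = classes      # list of class labels

    def _scores(self, row):
        # decision scores for a single record: dot products via zip/sum
        return [
            sum(c * x for c, x in zip(coefs, row)) + b
            for coefs, b in zip(self.coef_, self.intercept_)
        ]

    def predict(self, X):
        preds = []
        for row in X:
            scores = self._scores(row)
            best = max(range(len(scores)), key=scores.__getitem__)
            preds.append(self.classes_[best])
        return preds

# toy usage with made-up coefficients
clf = PurePythonLinearClassifier(
    coef=[[0.4, -1.2], [-0.4, 1.2]],
    intercept=[0.1, -0.1],
    classes=["neg", "pos"],
)
print(clf.predict([[2.0, 0.1], [0.2, 2.0]]))
['neg', 'pos']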

Quick Start Example

In a python environment with scikit-learn and its dependencies installed:

import pickle

from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_iris
from pure_sklearn.map import convert_estimator

# fit sklearn estimator
X, y = load_iris(return_X_y=True)
clf = RandomForestClassifier()
clf.fit(X, y)

# convert to pure python estimator
clf_pure_predict = convert_estimator(clf)
with open("model.pkl", "wb") as f:
    pickle.dump(clf_pure_predict, f)

# make prediction with sklearn estimator
y_pred = clf.predict([[0.25, 2.0, 8.3, 1.0]])
print(y_pred)
[2]

In a python environment with only pure-predict installed:

import pickle

# load pickled model
with open("model.pkl", "rb") as f:
    clf = pickle.load(f)

# make prediction with pure-predict object
y_pred = clf.predict([[0.25, 2.0, 8.3, 1.0]])
print(y_pred)
[2]
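In a serverless setting such as AWS Lambda, the pickled object can be loaded once per container and reused across invocations. The handler below is a rough sketch; the bucket name, object key, event shape, and use of boto3 are illustrative assumptions, not part of pure-predict:

import json
import pickle

import boto3  # assumed to be available in the Lambda runtime

# Load the converted model once at import time (cold start) and reuse it
# across invocations. Bucket and key names are illustrative.
_s3 = boto3.client("s3")
_obj = _s3.get_object(Bucket="my-model-bucket", Key="model.pkl")
clf = pickle.loads(_obj["Body"].read())

def handler(event, context):
    # assumes an API Gateway proxy event with body {"records": [[...], ...]}
    body = json.loads(event["body"])
    y_pred = clf.predict(body["records"])
    return {"statusCode": 200, "body": json.dumps({"predictions": y_pred})}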

Subpackages

pure_sklearn

Prediction in pure python for a subset of scikit-learn estimators and transformers.

  • estimators
    • linear models - supports the majority of linear models for classification

    • trees - decision trees, random forests, gradient boosting and xgboost

    • naive bayes - a number of popular naive bayes classifiers

    • svm - linear SVC

  • transformers
    • preprocessing - normalization and onehot/ordinal encoders

    • impute - simple imputation

    • feature extraction - text (tfidf, count vectorizer, hashing vectorizer) and dictionary vectorization

    • pipeline - pipelines and feature unions

Sparse data - a custom pure python sparse data object is supported; sparse data is handled as would be expected by the relevant transformers and estimators
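For example, a fitted text classification pipeline converts the same way as a single estimator. The training data below is a toy example for illustration:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from pure_sklearn.map import convert_estimator

# fit a standard sklearn text pipeline on toy data
texts = ["great product", "terrible service", "loved it", "awful experience"]
labels = [1, 0, 1, 0]
pipe = Pipeline([("tfidf", TfidfVectorizer()), ("clf", LogisticRegression())])
pipe.fit(texts, labels)

# convert the whole pipeline (vectorizer + classifier) to pure python
pipe_pure = convert_estimator(pipe)
print(pipe_pure.predict(["loved the product"]))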

pure_fasttext

Prediction in pure python for fasttext.

  • supervised - predicts labels for supervised models; no support for quantized models (blocked by this issue)

  • unsupervised - lookup of word or sentence embeddings given input text
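The conversion flow mirrors pure_sklearn. The sketch below assumes a converter named convert_fasttext in pure_fasttext.map, by analogy with convert_estimator; check the package's examples for the exact entry point. The training file path is illustrative:

import pickle

import fasttext

# NOTE: convert_fasttext and its location are assumed by analogy with
# pure_sklearn.map.convert_estimator; see the pure_fasttext examples for
# the exact entry point.
from pure_fasttext.map import convert_fasttext

# train a small supervised fasttext model (train.txt uses fasttext's
# __label__ format; the path is illustrative)
model = fasttext.train_supervised(input="train.txt")

# convert to pure python and pickle for the prediction service
model_pure = convert_fasttext(model)
with open("fasttext_model.pkl", "wb") as f:
    pickle.dump(model_pure, f)

# predict labels for a piece of text, mirroring fasttext's predict
print(model_pure.predict("which label does this text get?"))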

Installation

Dependencies

pure-predict requires:

  • Python (>= 3.6)

Dependency Notes

  • pure_sklearn has been tested with scikit-learn versions >= 0.20 – certain functionality may work with lower versions but is not guaranteed. Some functionality is explicitly not supported for certain scikit-learn versions, and exceptions will be raised as appropriate.

  • xgboost requires version >= 0.82 for support with pure_sklearn.

  • pure-predict is not supported with Python 2.

  • fasttext versions <= 0.9.1 have been tested.

User Installation

The easiest way to install pure-predict is with pip:

pip install --upgrade pure-predict

You can also download the source code:

git clone https://github.com/Ibotta/pure-predict.git

Testing

With pytest installed, you can run tests locally:

pytest pure-predict

Examples

The package contains examples demonstrating how to use pure-predict in practice.

Calls for Contributors

Contributions to pure-predict are welcome from anyone. Specific calls for contribution are as follows:

  1. Examples, tests and documentation – particularly more detailed examples with performance testing of various estimators under various constraints.

  2. Adding more pure_sklearn estimators. The scikit-learn package is extensive and only partially covered by pure_sklearn. Regression estimators in particular are missing from pure_sklearn. Clustering, dimensionality reduction, nearest neighbors, feature selection, non-linear SVM, and more are also omitted and would be good candidates for extending pure_sklearn.

  3. General efficiency. There is likely low-hanging fruit for improving the efficiency of the numpy and scipy functionality that has been ported to pure-predict.

  4. Threading could be considered to improve performance – particularly for making predictions with multiple records (see the sketch after this list).

  5. A public AWS lambda layer containing pure-predict.
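On item 4, a starting point might look like the sketch below: chunk a batch of records and run each chunk's predict call in a thread pool. Note that CPython's GIL limits gains for CPU-bound pure python code, so benchmarking would be the first step; the function and parameter names here are illustrative.

from concurrent.futures import ThreadPoolExecutor

def predict_batch_threaded(clf, records, n_threads=4, chunk_size=64):
    # split the batch into chunks and predict each chunk in its own thread;
    # purely illustrative -- whether this helps depends on the estimator,
    # record sizes, and the GIL
    chunks = [records[i:i + chunk_size] for i in range(0, len(records), chunk_size)]
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        results = pool.map(clf.predict, chunks)
    return [pred for chunk in results for pred in chunk]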

Background

The project was started at Ibotta Inc. on the machine learning team and open sourced in 2020. It is currently maintained by the machine learning team at Ibotta.

Acknowledgements

Thanks to David Mitchell and Andrew Tilley for internal review before open source. Thanks to James Foley for logo artwork.
