
A Python framework for serving and operating machine learning models



From a model in a Jupyter notebook to a production API service in 5 minutes

BentoML

Getting Started | Documentation | Examples | Contributing | Releases | License | Blog

BentoML is a platform for serving and deploying machine learning models, making it easy to productionize trained models.

BentoML framework provides:

  • BentoService: High-level APIs for defining an ML service and packaging its trained model artifacts, preprocessing source code, dependencies, and configurations into a standard file format called a "Bento", which can be deployed as a containerized REST API server, a PyPI package, a CLI tool, or a batch/streaming inference job.

  • Yatai: A stateful server that provides a Web UI and APIs for accessing the model registry on top of cloud storage, and manages model serving deployments on cloud platforms such as AWS, Azure, and GCP.

Check out the 5-minute quick start notebook on Google Colab, which uses BentoML to productionize a scikit-learn model and deploy it to AWS Lambda.


Getting Started

Installation with pip:

pip install bentoml

Defining a machine learning service with BentoML:

import bentoml
from bentoml.artifact import SklearnModelArtifact
from bentoml.handlers import DataframeHandler

# You can also import your own Python module here and BentoML will automatically
# figure out the dependency chain and package all those Python modules
import my_preprocessing_lib

@bentoml.artifacts([SklearnModelArtifact('model')])
@bentoml.env(pip_dependencies=["scikit-learn"])
class IrisClassifier(bentoml.BentoService):

    @bentoml.api(DataframeHandler)
    def predict(self, df):
        # Preprocess the prediction request - DataframeHandler parses a REST API
        # request or CLI args into a pandas DataFrame that can be easily processed
        # into feature vectors ready for the trained model
        df = my_preprocessing_lib.process(df)

        # Access the serialized trained model artifact via self.artifacts
        return self.artifacts.model.predict(df)

After training your ML model, you can pack it with the prediction service IrisClassifier defined above, and save them together as a Bento to the file system:

from sklearn import svm
from sklearn import datasets

clf = svm.SVC(gamma='scale')
iris = datasets.load_iris()
X, y = iris.data, iris.target
clf.fit(X, y)

# Packaging trained model for serving in production:
iris_classifier_service = IrisClassifier.pack(model=clf)

# Save the prediction service to a versioned file archive
saved_path = iris_classifier_service.save()

A Bento is a versioned archive containing the BentoService you defined, along with its trained model artifacts, dependencies, configurations, and more. The BentoML library can then load a Bento and turn it back into a high-performance prediction service.
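
For instance, here is a minimal sketch of loading a saved Bento back into Python. It assumes bentoml.load takes the saved path and returns the service instance; check the documentation for the exact API in your version:

import bentoml

# Load the saved BentoService from the path returned by save()
loaded_svc = bentoml.load(saved_path)

# The loaded service exposes the same predict API defined in IrisClassifier
print(loaded_svc.predict([[5.1, 3.5, 1.4, 0.2]]))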

You can also start a REST API server based on the saved Bento:

bentoml serve {saved_path}

Visit http://127.0.0.1:5000 in your browser to play around with the REST API model server's Web UI and send test requests from the UI, or try sending a prediction request with curl from the CLI:

curl -i \
  --header "Content-Type: application/json" \
  --request POST \
  --data '[[5.1, 3.5, 1.4, 0.2]]' \
  http://localhost:5000/predict
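
The same request can also be sent from Python; below is a minimal sketch using the third-party requests library (not part of BentoML, shown purely for illustration):

import requests

# POST a JSON list of feature rows to the running model server
response = requests.post(
    "http://localhost:5000/predict",
    json=[[5.1, 3.5, 1.4, 0.2]],
)
print(response.json())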

The saved archive can also be used directly from the CLI:

bentoml predict {saved_path} --input='[[5.1, 3.5, 1.4, 0.2]]'

# alternatively:
bentoml predict {saved_path} --input='./iris_test_data.csv'

A saved Bento can also be installed and used as a Python PyPI package:

pip install {saved_path}
# Your BentoService class name becomes the package name
import IrisClassifier

installed_svc = IrisClassifier.load()
installed_svc.predict([[5.1, 3.5, 1.4, 0.2]])

You can also build a Docker image for this API server, with all dependencies and environment configured automatically by BentoML, and share the Docker image with your DevOps team for deployment in production:

docker build -t my_api_server {saved_path}
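
Once the image is built, you can run it locally to verify the server before handing it off; a minimal sketch (5000 is the API server's default port, as used above):

docker run -p 5000:5000 my_api_server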

Try out the full getting started notebook on Google Colab.

Examples

FastAI

Scikit-Learn

PyTorch

TensorFlow Keras

XGBoost

H2O

Visit the bentoml/gallery repository for more example projects demonstrating how to use BentoML.


Feature Highlights

  • Multiple Distribution Formats - Easily package your machine learning models and preprocessing code into a format that works best with your inference scenario:

    • Docker Image - deploy as containers running a REST API server
    • PyPI Package - integrate into your Python applications seamlessly
    • CLI tool - put your model into an Airflow DAG or CI/CD pipeline
    • Spark UDF - run batch serving on large datasets with Spark
    • Serverless Function - host your model on serverless platforms such as AWS Lambda
  • Multiple Framework Support - BentoML supports a wide range of ML frameworks out of the box, including TensorFlow, PyTorch, Keras, Scikit-Learn, XGBoost, H2O, and FastAI, and can be easily extended to work with new or custom frameworks (see the sketch after this list).

  • Deploy Anywhere - BentoML-bundled ML services can be easily deployed with platforms such as Docker, Kubernetes, Serverless, Airflow, and Clipper, on cloud platforms including AWS, Google Cloud, and Azure.

  • Custom Runtime Backend - Easily integrate your Python preprocessing code with high-performance deep learning runtime backends, such as TensorFlow Serving.
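
As a sketch of the multi-framework support above, switching frameworks is largely a matter of swapping the artifact class. The example below assumes a PytorchModelArtifact analogous to the SklearnModelArtifact used earlier; check the documentation for the exact artifact names available in your version:

import torch
import bentoml
from bentoml.artifact import PytorchModelArtifact  # assumed name, for illustration
from bentoml.handlers import DataframeHandler

@bentoml.artifacts([PytorchModelArtifact('model')])
@bentoml.env(pip_dependencies=["torch"])
class PytorchIrisClassifier(bentoml.BentoService):

    @bentoml.api(DataframeHandler)
    def predict(self, df):
        # Convert the incoming DataFrame to a tensor before calling the model;
        # the service API shape is otherwise identical to the scikit-learn example
        input_tensor = torch.from_numpy(df.to_numpy(dtype="float32"))
        return self.artifacts.model(input_tensor).detach().numpy()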

Documentation

Full documentation and API references can be found at bentoml.readthedocs.io

Usage Tracking

The BentoML library reports basic usage statistics via Amplitude by default. This helps the BentoML authors understand how people use the tool and improve it over time. You can easily opt out by running the following command from the terminal:

bentoml config set usage_tracking=false

Or from your Python code:

import bentoml
bentoml.config.set('core', 'usage_tracking', 'false')

We also collect example notebook page views to help us understand community interests. To opt out of this tracking, delete the ![Impression](http... line in the first markdown cell of our example notebooks.

Contributing

Have questions or feedback? Post a new GitHub issue or join our Slack chat room.

Want to help build BentoML? Check out our contributing guide and the development guide.

To make sure you have a pleasant experience, please read the code of conduct. It outlines core values and beliefs and will make working together a happier experience.

Happy hacking!

Releases

BentoML is under active development and is evolving rapidly. It is currently a beta release, and we may change APIs in future releases.

Read more about the latest features and changes in BentoML on the releases page, and follow the BentoML Community Calendar.

Watch the BentoML GitHub repo for future releases.

License

Apache License 2.0
