An open-source platform for machine learning model serving.

Project description

BentoML

BentoML is an open-source platform for high-performance ML model serving.

What does BentoML do?

  • Turn a trained ML model into a production API endpoint with a few lines of code
  • Support all major machine learning training frameworks
  • End-to-end model serving solution with DevOps best practices baked-in
  • Micro-batching support, bringing the advantage of batch processing to online serving
  • Model management for teams, providing CLI access and Web UI dashboard
  • Flexible model deployment orchestration supporting Docker, Kubernetes, AWS Lambda, SageMaker, Azure ML and more

👉 Join BentoML Slack to follow the latest development updates and roadmap discussions.


Why BentoML

Getting Machine Learning models into production is hard. Data Scientists are not experts in building production services or applying DevOps best practices. The trained models produced by a Data Science team are hard to test and hard to deploy. This often leads to a time-consuming and error-prone workflow, where a pickled model or weights file is handed over to a software engineering team.

BentoML is an end-to-end solution for model serving, making it possible for Data Science teams to build production-ready model serving endpoints, with common DevOps best practices and performance optimizations baked in.

Check out the Frequently Asked Questions page for how BentoML compares to TensorFlow Serving, Clipper, AWS SageMaker, MLflow, and others.

Getting Started

Before starting, make sure your Python version is 3.6 or above, and install BentoML with pip:

pip install bentoml
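
To confirm the installation, print the installed version (a quick sanity check, assuming the bentoml CLI is now on your PATH):

# Check the installed BentoML version
bentoml --version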

A minimal prediction service in BentoML looks something like this:

# https://github.com/bentoml/BentoML/blob/master/guides/quick-start/iris_classifier.py
from bentoml import env, artifacts, api, BentoService
from bentoml.handlers import DataframeHandler
from bentoml.artifact import SklearnModelArtifact

@env(auto_pip_dependencies=True)
@artifacts([SklearnModelArtifact('model')])
class IrisClassifier(BentoService):

    @api(DataframeHandler)
    def predict(self, df):
        # Optional pre-processing, post-processing code goes here
        return self.artifacts.model.predict(df)

This code defines a prediction service that bundles a scikit-learn model and exposes a prediction API. The API is the entry point for accessing the service: an API defined with DataframeHandler converts an HTTP JSON request into a pandas.DataFrame object before passing it to the user-defined API function for inference.
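
For intuition, the conversion performed by DataframeHandler is roughly equivalent to the sketch below (a simplification for illustration, not BentoML's actual implementation):

# Roughly what DataframeHandler does with a JSON request body
import json

import pandas as pd

body = '[[5.1, 3.5, 1.4, 0.2]]'      # raw HTTP request body
df = pd.DataFrame(json.loads(body))  # one row, four feature columns
# df is then passed to the user-defined predict(self, df) function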

The following code trains a scikit-learn model and bundles the trained model with an IrisClassifier instance. The IrisClassifier instance is then saved to disk in the BentoML SavedBundle format, a versioned file archive that is ready for production model serving deployments.

# https://github.com/bentoml/BentoML/blob/master/guides/quick-start/main.py
from sklearn import svm
from sklearn import datasets

from iris_classifier import IrisClassifier

if __name__ == "__main__":
    # Load training data
    iris = datasets.load_iris()
    X, y = iris.data, iris.target

    # Model Training
    clf = svm.SVC(gamma='scale')
    clf.fit(X, y)

    # Create an iris classifier service instance
    iris_classifier_service = IrisClassifier()

    # Pack the newly trained model artifact
    iris_classifier_service.pack('model', clf)

    # Save the prediction service to disk for model serving
    saved_path = iris_classifier_service.save()
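
The saved service can be loaded back into Python for a quick local test (a minimal sketch; bentoml.load restores a BentoService from a SavedBundle path, and saved_path is the value returned by save() above):

import bentoml
import pandas as pd

# Load the BentoService back from the SavedBundle directory
svc = bentoml.load(saved_path)

# Call the API function directly, without going through HTTP
print(svc.predict(pd.DataFrame([[5.1, 3.5, 1.4, 0.2]])))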

By default, BentoML stores SavedBundle files under the ~/bentoml directory. Users can also customize BentoML to use a different directory or cloud storage like AWS S3. BentoML also comes with a model management component YataiService, which provides advanced model management features including a dashboard web UI:

[Screenshots: BentoML YataiService Bento Repository page and Bento Details page]

To start a REST API server with the saved IrisClassifier service, use the bentoml serve command:

bentoml serve IrisClassifier:latest

The IrisClassifier model is now served at localhost:5000. Use the curl command to send a prediction request:

curl -i \
  --header "Content-Type: application/json" \
  --request POST \
  --data '[[5.1, 3.5, 1.4, 0.2]]' \
  http://localhost:5000/predict
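
The same request can be sent from Python using the requests library (a short sketch; requests is a third-party HTTP client, not part of BentoML):

import requests

# POST the same JSON payload the curl example sends
response = requests.post(
    "http://localhost:5000/predict",
    json=[[5.1, 3.5, 1.4, 0.2]],
)
print(response.status_code)  # 200 on success
print(response.json())       # the model's prediction, e.g. [0]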

The BentoML API server also provides a web UI for accessing predictions and debugging the server. Visit http://localhost:5000 in the browser and use the web UI to send a prediction request.

BentoML provides a convenient way to containerize the model API server with Docker:

  1. Find the SavedBundle directory with the bentoml get command

  2. Run docker build with the SavedBundle directory, which contains a generated Dockerfile

  3. Run the generated docker image to start a docker container serving the model

saved_path=$(bentoml get IrisClassifier:latest -q | jq -r ".uri.uri")

docker build -t {docker_username}/iris-classifier $saved_path

docker run -p 5000:5000 -e BENTOML_ENABLE_MICROBATCH=True {docker_username}/iris-classifier

This makes it possible to deploy BentoML-bundled ML models with platforms such as Kubeflow, Knative, and Kubernetes, which provide advanced model deployment features such as auto-scaling, A/B testing, scale-to-zero, canary rollouts, and multi-armed bandits.

BentoML can also deploy a SavedBundle directly to cloud services such as AWS Lambda or AWS SageMaker using the bentoml CLI:

> bentoml get IrisClassifier
BENTO_SERVICE                         CREATED_AT        APIS                       ARTIFACTS
IrisClassifier:20200121114004_360ECB  2020-01-21 19:40  predict<DataframeHandler>  model<SklearnModelArtifact>
IrisClassifier:20200120082658_4169CF  2020-01-20 16:27  predict<DataframeHandler>  clf<PickleArtifact>
...

> bentoml lambda deploy test-deploy -b IrisClassifier:20200121114004_360ECB
...

> bentoml deployment list
NAME           NAMESPACE    PLATFORM    BENTO_SERVICE                         STATUS    AGE
test-deploy    dev          aws-lambda  IrisClassifier:20200121114004_360ECB  running   2 days and 11 hours
...

Check out the deployment guides and other deployment options with BentoML here.

Documentation

BentoML full documentation: https://docs.bentoml.org/

Examples

Visit the bentoml/gallery repository for more examples and tutorials.

FastAI

Scikit-Learn

PyTorch

TensorFlow Keras

TensorFlow 2.0

XGBoost

LightGBM

H2O

Contributing

Have questions or feedback? Post a new GitHub issue or discuss in our Slack channel: join BentoML Slack

Want to help build BentoML? Check out our contributing guide and the development guide.

Releases

BentoML is under active development and evolving rapidly. It is currently a beta release; we may change APIs in future releases.

Read more about the latest features and changes in BentoML from the releases page.

Usage Tracking

BentoML by default collects anonymous usage data using Amplitude. It only tracks the BentoML library's own actions and parameters; no user or model data is collected. Here is the code that does it.

This helps the BentoML team understand how the community is using the tool and what to build next. You can easily opt out of usage tracking with either of the following:

# From terminal:
bentoml config set usage_tracking=false
# From python:
import bentoml
bentoml.config().set('core', 'usage_tracking', 'False')

License

Apache License 2.0

Project details


Release history

This version

0.7.5

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

BentoML-0.7.5.tar.gz (2.6 MB)

Uploaded Source

Built Distribution

BentoML-0.7.5-py3-none-any.whl (3.0 MB)

Uploaded Python 3

File details

Details for the file BentoML-0.7.5.tar.gz.

File metadata

  • Download URL: BentoML-0.7.5.tar.gz
  • Upload date:
  • Size: 2.6 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.22.0 setuptools/42.0.2.post20191203 requests-toolbelt/0.9.1 tqdm/4.41.0 CPython/3.7.5

File hashes

Hashes for BentoML-0.7.5.tar.gz

  • SHA256: 09378cf75a8432f66fbf32971dc669d3f86bffb10ce727fff2d38636666f9009
  • MD5: 4171e6b460aa03ec5895080125d970d5
  • BLAKE2b-256: 5b102ac0c9a8847246362ea93a7ebad5be9c637ae64cd9a8ea76cd1574dfba16

See more details on using hashes here.

File details

Details for the file BentoML-0.7.5-py3-none-any.whl.

File metadata

  • Download URL: BentoML-0.7.5-py3-none-any.whl
  • Upload date:
  • Size: 3.0 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.22.0 setuptools/42.0.2.post20191203 requests-toolbelt/0.9.1 tqdm/4.41.0 CPython/3.7.5

File hashes

Hashes for BentoML-0.7.5-py3-none-any.whl

  • SHA256: f22eadb9078e7f3a6a39a43e21b0e7072361c55e531461630e40a1965627b7b6
  • MD5: c028997a7f2afaf8cb714a7b7245267a
  • BLAKE2b-256: 011fcb7bed35eb3b94ee8eff8394cecae4c7b22622fc847b15a24af55f788790

See more details on using hashes here.
