BentoML

BentoML is an open-source platform for high-performance ML model serving.

What does BentoML do?

  • Turn your ML model into a production API endpoint with just a few lines of code
  • Support for all major machine learning training frameworks
  • End-to-end model serving solution with DevOps best practices baked in
  • Model server with adaptive micro-batching support, bringing the advantages of batch processing to online serving
  • Model management for teams, providing CLI access and a Web UI dashboard
  • Flexible model deployment orchestration with support for Docker, Kubernetes, KFServing, AWS Lambda, SageMaker, Azure and more

👉 Join the BentoML Slack community to hear about the latest development updates.


Why BentoML

Getting Machine Learning models into production is hard. Data Scientists are not experts in building production services or in applying DevOps best practices. The trained models produced by a Data Science team are hard to test and hard to deploy. This often leads to a time-consuming and error-prone workflow, where a pickled model or weights file is handed over to a software engineering team.

BentoML is an end-to-end solution for model serving, making it possible for Data Science teams to build production-ready model serving endpoints, with common DevOps best practices and performance optimizations baked in.

👉 Check out the Frequently Asked Questions

Getting Started

Install BentoML with pip:

pip install bentoml
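
The examples in this guide were written against BentoML 0.7.4; to reproduce them exactly, you can pin that version:

pip install bentoml==0.7.4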

A minimal prediction service in BentoML looks something like this:

from bentoml import env, artifacts, api, BentoService
from bentoml.handlers import DataframeHandler
from bentoml.artifact import SklearnModelArtifact

@env(auto_pip_dependencies=True)  # infer required PyPI packages automatically
@artifacts([SklearnModelArtifact('model')])  # declare a scikit-learn model artifact named 'model'
class IrisClassifier(BentoService):

    @api(DataframeHandler)  # expose predict() as an API accepting a pandas.DataFrame
    def predict(self, df):
        return self.artifacts.model.predict(df)

This code defines a prediction service that requires a scikit-learn model, and asks BentoML to figure out the required PyPI packages automatically. It also defines an API, which is the entry point for accessing this prediction service; the API expects a pandas.DataFrame object as its input data.
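
To make the input format concrete, here is a minimal sketch (plain pandas, not a BentoML API) of the DataFrame that DataframeHandler builds from the JSON body shown in the curl example further below:

import pandas as pd

# The JSON body '[[5.1, 3.5, 1.4, 0.2]]' sent by a client becomes a
# DataFrame with one row and four feature columns before reaching predict()
df = pd.DataFrame([[5.1, 3.5, 1.4, 0.2]])
print(df.shape)  # (1, 4)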

Now you are ready to train a model and serve it with the IrisClassifier service defined above. Save the code above to a new file named iris_classifier.py and run the following script:

from sklearn import svm
from sklearn import datasets

from iris_classifier import IrisClassifier

if __name__ == "__main__":
    # Load training data
    iris = datasets.load_iris()
    X, y = iris.data, iris.target

    # Model Training
    clf = svm.SVC(gamma='scale')
    clf.fit(X, y)

    # Create an iris classifier service instance
    iris_classifier_service = IrisClassifier()

    # Pack the newly trained model artifact
    iris_classifier_service.pack('model', clf)

    # Save the prediction service to disk for model serving
    saved_path = iris_classifier_service.save()

You've just created a BentoService SavedBundle: a versioned file archive that is ready for production deployment. It contains the BentoService class you defined, all of its Python code and PyPI dependencies, and the trained scikit-learn model. By default, BentoML saves these files and related metadata under the ~/bentoml directory, but this is easily customizable to a different directory or to cloud storage such as Amazon S3.
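
Before deploying, you can load the SavedBundle back into Python for a quick sanity check. This is a minimal sketch, assuming bentoml.load is given the saved_path returned by the training script above:

import bentoml
import pandas as pd

# Load the SavedBundle back as a BentoService instance
# (saved_path comes from iris_classifier_service.save() above)
svc = bentoml.load(saved_path)

# Call the predict API directly with a one-row DataFrame
print(svc.predict(pd.DataFrame([[5.1, 3.5, 1.4, 0.2]])))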

You can now start a REST API server by specifying the BentoService's name and version, or by providing the file path to the saved bundle:

bentoml serve IrisClassifier:latest
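
The latest tag resolves to the most recently saved version. You can also pin an exact version, using one of the tags shown in the bentoml get output later in this guide:

bentoml serve IrisClassifier:20200121114004_360ECB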

Alternatively, from the bash command line, you can get the absolute path to the saved BentoService from the JSON output of the bentoml get command:

saved_path=$(bentoml get IrisClassifier:latest -q | jq -r ".uri.uri")
bentoml serve $saved_path

The REST API server provides a web UI for testing and debugging the server. If you are running this command on your local machine, visit http://127.0.0.1:5000 in your browser and try sending API requests to the server. You can also send a prediction request with curl from the command line:

curl -i \
  --header "Content-Type: application/json" \
  --request POST \
  --data '[[5.1, 3.5, 1.4, 0.2]]' \
  http://localhost:5000/predict
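
The same request can be sent from Python. Here is a minimal sketch using the third-party requests library against the /predict endpoint shown above:

import requests

# POST one row of iris features to the local model server
response = requests.post(
    "http://localhost:5000/predict",
    json=[[5.1, 3.5, 1.4, 0.2]],
)
print(response.status_code, response.text)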

The BentoService SavedBundle directory is structured to work as a Docker build context, so it can be used directly to build an API server Docker container image:

docker build -t my-org/iris-classifier:v1 $saved_path

docker run -p 5000:5000 -e BENTOML_ENABLE_MICROBATCH=True my-org/iris-classifier:v1
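
The BENTOML_ENABLE_MICROBATCH environment variable turns on the adaptive micro-batching model server inside the container. Standard docker run options apply if you want different settings; for example (the host port 8080 here is just an illustration):

# Serve without micro-batching, mapping the API to host port 8080
docker run -p 8080:5000 my-org/iris-classifier:v1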

You can also deploy your BentoService directly to cloud services such as AWS Lambda with the bentoml CLI's deployment management commands:

> bentoml get IrisClassifier
BENTO_SERVICE                         CREATED_AT        APIS                       ARTIFACTS
IrisClassifier:20200121114004_360ECB  2020-01-21 19:40  predict<DataframeHandler>  model<SklearnModelArtifact>
IrisClassifier:20200120082658_4169CF  2020-01-20 16:27  predict<DataframeHandler>  clf<PickleArtifact>
...

> bentoml lambda deploy test-deploy -b IrisClassifier:20200121114004_360ECB
...

> bentoml deployment list
NAME           NAMESPACE    PLATFORM    BENTO_SERVICE                         STATUS    AGE
test-deploy    dev          aws-lambda  IrisClassifier:20200121114004_360ECB  running   2 days and 11 hours
...
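
Once the deployment status is running, the deployment tooling reports an endpoint URL that can be called just like the local server. The URL below is hypothetical, for illustration only:

# Hypothetical endpoint URL; substitute the one reported for your deployment
curl -i \
  --header "Content-Type: application/json" \
  --request POST \
  --data '[[5.1, 3.5, 1.4, 0.2]]' \
  https://example.execute-api.us-west-2.amazonaws.com/predict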

Documentation

The full BentoML documentation can be found at https://docs.bentoml.org/

Examples

Visit the bentoml/gallery repository for more examples and tutorials, covering:

  • FastAI
  • Scikit-Learn
  • PyTorch
  • Tensorflow Keras
  • Tensorflow 2.0
  • XGBoost
  • LightGBM
  • H2O

Deployment guides for the supported platforms (Docker, Kubernetes, AWS Lambda, SageMaker, Azure and more) are also available in the documentation.

Contributing

Have questions or feedback? Post a new GitHub issue or discuss in the BentoML Slack channel.

Want to help build BentoML? Check out our contributing guide and the development guide.

Releases

BentoML is under active development and is evolving rapidly. It is currently a beta release; APIs may change in future releases.

Read more about the latest features and changes in BentoML from the releases page.

Usage Tracking

BentoML collects anonymous usage data by default, using Amplitude. It only collects the BentoML library's own actions and parameters; no user or model data is collected. Here is the code that does it.

This helps the BentoML team understand how the community is using the tool and what to build next. You can easily opt out of usage tracking in either of the following ways:

# From the terminal:
bentoml config set usage_tracking=false

# From Python:
import bentoml
bentoml.config().set('core', 'usage_tracking', 'False')

License

Apache License 2.0
