# ML Serving [![PyPI version](https://badge.fury.io/py/mlserving.svg)](https://badge.fury.io/py/mlserving) [![Build](https://github.com/orlevii/mlserving/workflows/build/badge.svg)]()
mlserving is a framework for developing realtime model-inference services. It lets you easily set up an inference endpoint for your ML model.

mlserving emphasizes high performance and allows easy integration with other model servers, such as TensorFlow Serving.

Docs can be found here: https://mlserving.readthedocs.io/en/latest/
## Motivation

Data scientists usually struggle with integrating their ML models into production.

mlserving is here to make the development of model servers easy for everyone.
## Installation

```bash
$ pip install mlserving
```
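Once installed, setting up an endpoint might look like the sketch below. This is a hypothetical example: the class and method names (`ServingApp`, `RESTPredictor`, `pre_process`, `predict`, `add_inference_handler`) are assumptions based on common model-server patterns, so check the docs linked above for the actual API.

```python
# Hypothetical sketch -- verify names against the mlserving docs before use.
from mlserving import ServingApp
from mlserving.predictors import RESTPredictor


class MyPredictor(RESTPredictor):
    def pre_process(self, input_data, req):
        # Extract and validate features from the incoming request payload
        return input_data['features']

    def predict(self, processed_data, req):
        # Run your actual model here; a constant response is used for illustration
        return {'score': 0.5}


app = ServingApp()
app.add_inference_handler('/api/v1/predict', MyPredictor())

if __name__ == '__main__':
    app.run(port=5000)
```

The endpoint would then accept POST requests at `/api/v1/predict` and return the model's prediction as JSON.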