ML Serving
mlserving is a framework for developing a realtime model-inference service. It allows you to easily set up an inference endpoint for your ML model.
mlserving emphasizes high performance and allows easy integration with other model servers, such as TensorFlow Serving.
Docs can be found here: https://mlserving.readthedocs.io/en/latest/
Motivation
Data scientists often struggle to integrate their ML models into production.
mlserving is here to make the development of model-servers easy for everyone.
Installation
$ pip install mlserving
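To illustrate what a realtime inference endpoint does in general, here is a minimal sketch using only the Python standard library. This is not mlserving's API (see the docs linked above for the real interface), and `model_predict` is a hypothetical stand-in for a trained model:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

def model_predict(features):
    # Hypothetical stand-in for a trained ML model:
    # here it just averages the input features.
    return sum(features) / len(features)

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Only serve the /predict route.
        if self.path != '/predict':
            self.send_error(404)
            return
        # Read and parse the JSON request body.
        length = int(self.headers.get('Content-Length', 0))
        payload = json.loads(self.rfile.read(length))
        # Run inference and return the result as JSON.
        prediction = model_predict(payload['features'])
        body = json.dumps({'prediction': prediction}).encode()
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Silence per-request console logging.
        pass

def serve(port=8000):
    # Start the server on a background thread and return it,
    # so the caller can shut it down cleanly.
    server = HTTPServer(('127.0.0.1', port), InferenceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

A framework like mlserving wraps this request-handling plumbing for you, so you only supply the model and any pre/post-processing.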