mlserving is a framework for developing realtime model-inference services.
It lets you easily set up an inference endpoint for your ML model.
mlserving emphasizes high performance and integrates easily with other model servers such as TensorFlow Serving.
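To illustrate the inference-handler pattern such serving frameworks typically use, here is a minimal sketch. The class and method names (`pre_process`, `predict`, `post_process`) are hypothetical illustrations, not mlserving's actual API; see the docs linked below for the real interface.

```python
# Hypothetical inference-handler sketch; names are illustrative,
# not mlserving's actual API.

class MyPredictor:
    def pre_process(self, features, req):
        # Transform raw request features into model input.
        return [float(v) for v in features["values"]]

    def predict(self, processed, req):
        # Stand-in "model": the sum of the inputs.
        return sum(processed)

    def post_process(self, prediction, req):
        # Shape the response payload.
        return {"prediction": prediction}
```

A serving framework would typically call these three hooks in order for each incoming request, so the handler only has to describe the transformation at each stage.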
Docs can be found here: https://mlserving.readthedocs.io/en/latest/
Data scientists often struggle to integrate their ML models into production.
mlserving is here to make developing model servers easy for everyone.
$ pip install mlserving
| Filename | Size | File type | Python version |
|---|---|---|---|
| mlserving-0.2.0-py3-none-any.whl | 14.8 kB | Wheel | py3 |
| mlserving-0.2.0.tar.gz | 8.7 kB | Source | None |