mlserving: Serving ML Models

A framework for developing a realtime model-inference service.
Lets you easily set up an inference endpoint for your ML model.

Docs can be found here: https://orlevii.github.io/mlserving/
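To illustrate what "setting up an inference endpoint" involves, here is a minimal sketch of the predictor pattern that serving frameworks like mlserving build on: a predictor object with pre-process, predict, and post-process hooks that the framework invokes for each request. The class and function names below are illustrative assumptions, not mlserving's actual API; consult the project docs for the real interface.

```python
class ExamplePredictor:
    """Hypothetical predictor wrapping a trained model behind
    pre/post-processing hooks (names are illustrative, not mlserving's API)."""

    def pre_process(self, input_data):
        # Turn the raw JSON payload into model-ready features.
        return [float(x) for x in input_data["features"]]

    def predict(self, features):
        # Placeholder "model": a real predictor would call model.predict()
        # on a loaded scikit-learn / PyTorch / etc. model here.
        return sum(features) / len(features)

    def post_process(self, prediction):
        # Shape the model output into a JSON-serializable response.
        label = "positive" if prediction >= 0.5 else "negative"
        return {"score": prediction, "label": label}


def handle_request(predictor, payload):
    """Roughly what a serving framework does for each POST
    to the inference endpoint."""
    features = predictor.pre_process(payload)
    prediction = predictor.predict(features)
    return predictor.post_process(prediction)


response = handle_request(ExamplePredictor(), {"features": [0.2, 0.9, 0.7]})
print(response)
```

In a real deployment the framework, not your code, would route HTTP requests to these hooks; you would only implement the predictor class and register it against a URL path.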
Motivation
Data scientists often struggle to integrate their ML models into production. mlserving is here to make developing model servers easy for everyone.