# ServeIt
[![Build Status](https://travis-ci.org/rtlee9/serveit.svg?branch=master)](https://travis-ci.org/rtlee9/serveit)
[![Codacy Grade Badge](https://api.codacy.com/project/badge/Grade/2af32a3840d5441e815f3956659b091f)](https://www.codacy.com/app/ryantlee9/serveit)
[![Codacy Coverage Badge](https://api.codacy.com/project/badge/Coverage/2af32a3840d5441e815f3956659b091f)](https://www.codacy.com/app/ryantlee9/serveit)
[![PyPI version](https://badge.fury.io/py/ServeIt.svg)](https://badge.fury.io/py/ServeIt)
ServeIt lets you easily serve model predictions and supplementary information from a RESTful API. Current features include:
1. Model inference serving via RESTful API endpoint
1. Extensible library for inference-time data loading, preprocessing, input validation, and postprocessing
1. Supplementary information endpoint creation
1. Automatic JSON serialization of responses
1. Configurable request and response logging (work in progress)
## Installation
ServeIt supports Python 2.7 and Python 3.6. Install it with pip: `pip install serveit`
## Usage:
Deploy your model `clf` to an API endpoint with as little as one line of code:
```python
from serveit.server import ModelServer
# initialize server with a model and a method to use for predictions
# then start serving predictions
ModelServer(clf, clf.predict).serve()
```
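`ModelServer` needs only a model object and a prediction callable, so any of the supported frameworks (or none at all) will work. Here is a runnable sketch with a hypothetical hand-rolled model. `ThresholdModel` is purely an illustration, not part of ServeIt, and `serve()` blocks the process, so the serving lines are left commented:

```python
# ThresholdModel is a hypothetical stand-in, not part of ServeIt;
# ModelServer only needs a model object and a prediction callable.
class ThresholdModel:
    """Toy classifier: predicts 1 if a row's mean exceeds the threshold, else 0."""

    def __init__(self, threshold):
        self.threshold = threshold

    def predict(self, rows):
        return [int(sum(row) / len(row) > self.threshold) for row in rows]

model = ThresholdModel(threshold=1.5)
print(model.predict([[1.0, 2.0, 3.0], [0.0, 1.0, 0.5]]))  # prints [1, 0]

# With ServeIt installed, serving is still one line (serve() blocks):
# from serveit.server import ModelServer
# ModelServer(model, model.predict).serve()
```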
Your new API is now accepting `POST` requests at `localhost:5000/predictions`! Please see the [examples](examples) directory for additional usage examples.
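Once the server is running, the endpoint can be queried with any HTTP client. A minimal client sketch using only the standard library; the flat list-of-rows payload shape and the feature values are assumptions, so check the examples directory for the exact format your model expects (the actual request is commented out since it requires a running server):

```python
import json
from urllib.request import Request, urlopen

# Hypothetical feature rows; match the input your model's predict method expects.
payload = json.dumps([[5.1, 3.5, 1.4, 0.2]]).encode("utf-8")

req = Request(
    "http://localhost:5000/predictions",
    data=payload,
    headers={"Content-Type": "application/json"},
)

# With the server running:
# with urlopen(req) as resp:
#     print(json.loads(resp.read()))
```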
### Supported libraries
* Scikit-Learn
* Keras
* PyTorch
### Coming soon:
* TensorFlow
## Building
You can build and install from source with: `python setup.py install`
## License
[MIT](LICENSE.md)