fastDeploy
Deploy DL/ML inference pipelines with minimal extra code.
Installation:
pip install --upgrade fastdeploy
Usage:
# Invoke fastdeploy
fastdeploy --help
# or
python -m fastdeploy --help
# Start prediction "loop" for recipe "deepsegment"
fastdeploy --recipe ./deepsegment --mode loop
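# A recipe is just a directory containing your existing inference code.
# As a minimal sketch (the file name predictor.py, the predictor signature,
# and the DeepSegment usage below are illustrative assumptions, not the
# library's documented contract):
# predictor.py -- lives inside the recipe directory, e.g. ./deepsegment/
from deepsegment import DeepSegment

segmenter = DeepSegment("en")  # load the model once, at import time

def predictor(inputs=[], batch_size=1):
    # inputs is a batch of raw strings; return one prediction per input
    return [segmenter.segment(text) for text in inputs]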
# Start REST APIs for recipe "deepsegment"
fastdeploy --recipe ./deepsegment --mode rest
# Run json prediction using curl
curl -d '{"data": ["I was hungry i ordered a pizza"]}'\
-H "Content-Type: application/json" -X POST http://localhost:8080/infer
# Run file prediction using curl
curl -F "image_1=@image_1.png" -F "image_2=@image_2.png" http://localhost:8080/infer
# Run file prediction using python
python -c 'import requests; print(requests.post("http://localhost:8080/infer", \
files={"image_1": open("image_1.png", "rb"), "image_2": open("image_2.png", "rb")}).json())'
# Run json prediction using python
python -c 'import requests; print(requests.post("http://localhost:8080/infer", \
json={"data": ["I was hungry i ordered a pizza"]}).json())'
# Response
{'prediction': [['I was hungry', 'i ordered a pizza']], 'success': True}
# Auto generate dockerfile and build docker image. --base is the docker base image
fastdeploy --recipe ./recipes/deepsegment/ \
--mode build_rest --base tensorflow/tensorflow:1.14.0-py3
# fastdeploy_deepsegment built!
# Run docker image
docker run -it -p8080:8080 fastdeploy_deepsegment
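# The containerized service exposes the same /infer endpoint; for example,
# remapping to host port 9090 (the port choice here is just an illustration):
docker run -it -p 9090:8080 fastdeploy_deepsegment
curl -d '{"data": ["I was hungry i ordered a pizza"]}' \
-H "Content-Type: application/json" -X POST http://localhost:9090/infer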
Features:
- Minimal extra code: No model exporting/conversion/freezing required. fastDeploy is the easiest way to serve and/or dockerize your existing inference code with minimal work.
- Fully configurable dynamic batching: fastDeploy dynamically batches concurrent requests for optimal resource usage (see the sketch after this list).
- Containerization with no extra code: fastDeploy auto-generates an optimized dockerfile and builds the image with no extra code.
- One consumer, multiple producers: (Coming soon) A single fastDeploy loop (consumer) can simultaneously be connected to multiple (types of) producers (REST, websocket, file).
- One producer, multiple consumers: Distribute one producer's workload across multiple consumers running on multiple nodes (assuming common storage is available for the queues).
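Dynamic batching can be observed by firing concurrent requests at a running REST endpoint. The sketch below reuses the /infer endpoint and payload format from the usage section above; the level of concurrency (eight parallel requests) is an arbitrary choice for illustration:
import concurrent.futures
import requests

URL = "http://localhost:8080/infer"  # endpoint from the usage examples above

def infer(text):
    # one single-item request; concurrent requests can be batched server-side
    return requests.post(URL, json={"data": [text]}).json()

texts = ["I was hungry i ordered a pizza"] * 8

with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
    for result in pool.map(infer, texts):
        print(result)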