Deploy DL/ML inference pipelines with minimal extra code.
fastDeploy
easy and performant micro-services for Python Deep Learning inference pipelines
- Deploy any Python inference pipeline with minimal extra code
- Auto-batching of concurrent inputs is enabled out of the box
- No changes to inference code (unlike tf-serving etc.); the entire pipeline runs as-is
- Prometheus metrics (OpenMetrics) are exposed for monitoring
- Auto-generates clean Dockerfiles and Kubernetes health-check and scaling friendly APIs
- Sequentially chained inference pipelines are supported out of the box
- Can be queried from any language via easy-to-use REST APIs
- Easy to understand (simple consumer-producer architecture) and simple code base
Installation:
pip install --upgrade fastdeploy fdclient
# fdclient is optional, only needed if you want to use the Python client
CLI explained
Start fastDeploy server on a recipe:
# Invoke fastdeploy
python -m fastdeploy --help
# or
fastdeploy --help
# Start prediction "loop" for recipe "echo"
fastdeploy --loop --recipe recipes/echo
# Start rest apis for recipe "echo"
fastdeploy --rest --recipe recipes/echo
Send a request and get predictions:
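A minimal sketch using the optional fdclient Python client; it assumes the REST server started above is listening on localhost:8080, and that FDClient and its infer method accept a list of inputs, as in the project README:

# client.py
from fdclient import FDClient

client = FDClient('http://localhost:8080')

# inputs are sent as a list; the "echo" recipe returns them unchanged
print(client.infer(['hello', 'world']))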
Auto-generate a Dockerfile and build a docker image:
# Writes the Dockerfile for recipe "echo"
# and builds the docker image if docker is installed
# base image defaults to python:3.8-slim
fastdeploy --build --recipe recipes/echo
# Run docker image
docker run -it -p 8080:8080 fastdeploy_echo
Serving your model (recipe):
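A recipe is a directory containing the code fastDeploy runs. Below is a minimal sketch of a recipe's predictor.py, modeled on the bundled "echo" recipe; the predictor name and the (inputs, batch_size) signature follow the recipe convention, while the directory name recipes/upper and the toy "model" are hypothetical placeholders:

# recipes/upper/predictor.py
# load models/heavy resources once, at module level, so the prediction
# loop pays the startup cost a single time

def predictor(inputs, batch_size=1):
    # receives a list of inputs (possibly an auto-batched group of
    # concurrent requests) and must return one output per input
    return [str(i).upper() for i in inputs]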
Where to use fastDeploy?
- To deploy any model that is not ultra-lightweight, i.e. most DL models with >50 ms inference time per example
- If the model/pipeline benefits from batch inference, fastDeploy is perfect for your use-case
- If you are going to have individual inputs (e.g. a user's search query that needs to be vectorized, or an image to be classified)
- In the case of individual inputs, requests coming in at close intervals are batched together and sent to the model as a single batch (see the sketch after this list)
- Perfect for creating internal micro-services that separate your model and pre/post-processing from business logic
- Since the prediction loop and the inference endpoints are separate and connected via a SQLite-backed queue, they can be scaled independently
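The batching behaviour is easiest to see with concurrent clients. A hypothetical illustration, assuming the "echo" recipe server from above on localhost:8080 and the fdclient usage shown earlier:

# concurrent_clients.py
import threading
from fdclient import FDClient

def ask(text):
    # one client per thread to avoid sharing connection state
    client = FDClient('http://localhost:8080')
    print(client.infer([text]))

# requests arriving within a short interval are picked up together by
# the prediction loop and run through the predictor as a single batch
for text in ('hello', 'world'):
    threading.Thread(target=ask, args=(text,)).start()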
Where not to use fastDeploy?
- Models that are not CPU/GPU heavy and are better off running in parallel rather than in batches
- If your predictor calls an external API, uploads to S3, etc. in a blocking way
- I/O-heavy, non-batching use cases (e.g. querying Elasticsearch or a database for each input)
- For these cases it is better to serve directly from the REST API code (instead of through the consumer-producer mechanism) so that high concurrency can be achieved
Project details
Download files
Download the file for your platform.
Source Distribution
fastdeploy-3.0.16.tar.gz (15.6 kB)
Built Distribution
fastdeploy-3.0.16-py3-none-any.whl (15.2 kB)
File details
Details for the file fastdeploy-3.0.16.tar.gz
File metadata
- Download URL: fastdeploy-3.0.16.tar.gz
- Upload date:
- Size: 15.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/5.1.1 CPython/3.12.7
File hashes
Algorithm | Hash digest
---|---
SHA256 | 0850aaa8bf8240cc6a54587dca5ae27cc0d6916c8a17330f7ce3a26c5f99ebdd
MD5 | 0c66ac6fb9c7a0106234764af66a385d
BLAKE2b-256 | aea7380d1c7c166bd3a0e07696724ee5e95c49aa54b4b10d59c25899a33dcd61
File details
Details for the file fastdeploy-3.0.16-py3-none-any.whl
File metadata
- Download URL: fastdeploy-3.0.16-py3-none-any.whl
- Upload date:
- Size: 15.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/5.1.1 CPython/3.12.7
File hashes
Algorithm | Hash digest
---|---
SHA256 | ac03c7024cb2ffc3ac32111635c62060d933c261a1dd9071277c7366f4f774a5
MD5 | 8343f8f2bd1a6a593e93bf79f2fe9516
BLAKE2b-256 | c8714a7a8a363c8f6fd1fe00f96360bcc463ed29a3c8ae86530176f0b52f6355