
Deploy DL/ML inference pipelines with minimal extra code.

Project description

fastDeploy

easy and performant micro-services for Python Deep Learning inference pipelines

  • Deploy any Python inference pipeline with minimal extra code
  • Auto-batching of concurrent inputs is enabled out of the box
  • No changes to inference code (unlike tf-serving etc.); the entire pipeline runs as-is
  • Prometheus metrics (OpenMetrics) are exposed for monitoring
  • Auto-generates clean Dockerfiles and Kubernetes health-check and scaling-friendly APIs
  • Sequentially chained inference pipelines are supported out of the box
  • Can be queried from any language via easy-to-use REST APIs
  • Easy to understand (simple consumer-producer architecture) and a simple code base

Installation:

pip install --upgrade fastdeploy fdclient
# fdclient is optional, only needed if you want to use the Python client

CLI explained

Start fastDeploy server on a recipe:

# Invoke fastdeploy 
python -m fastdeploy --help
# or
fastdeploy --help

# Start prediction "loop" for recipe "echo"
fastdeploy --loop --recipe recipes/echo

# Start rest apis for recipe "echo"
fastdeploy --rest --recipe recipes/echo

Send a request and get predictions:
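The REST server listens on port 8080 by default. Below is a minimal sketch using plain HTTP from Python; the /infer path and the request/response shapes are assumptions for illustration (the optional fdclient package wraps the same call for Python users):

import requests

# Assumption: the REST API exposes an inference endpoint (shown here as
# /infer) that accepts a JSON list of inputs and returns one prediction
# per input, preserving order.
inputs = ["hello", "world"]
response = requests.post("http://localhost:8080/infer", json=inputs)
print(response.json())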

Auto-generate a Dockerfile and build the Docker image:

# Writes the Dockerfile for recipe "echo"
# and builds the Docker image if Docker is installed
# the base image defaults to python:3.8-slim
fastdeploy --build --recipe recipes/echo

# Run the Docker image
docker run -it -p 8080:8080 fastdeploy_echo

Serving your model (recipe):
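A recipe is a plain directory containing your inference code. The sketch below follows the layout of the bundled echo recipe; the predictor.py filename, the predictor() signature, and the example.json warm-up file mentioned after it are assumptions here, so confirm them against the recipes/ directory in the repository:

# recipes/my_model/predictor.py
# Assumption: fastDeploy imports this module and calls predictor() with a
# list of inputs (concurrent requests are auto-batched into this list),
# and expects a list of outputs of the same length and order.

def predictor(inputs, batch_size=4):
    # Replace this with your model call, e.g. model.predict(inputs).
    return [str(x).upper() for x in inputs]

An example.json with sample inputs typically sits alongside predictor.py (assumption based on the bundled recipes). With the recipe in place, the same CLI shown above applies: fastdeploy --loop --recipe recipes/my_model in one process and fastdeploy --rest --recipe recipes/my_model in another.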

Where to use fastDeploy?

  • To deploy any non-ultra-lightweight model, i.e. most DL models with >50 ms inference time per example
  • If the model/pipeline benefits from batch inference, fastDeploy is perfect for your use case
  • If you receive individual inputs (e.g. a user's search query that needs to be vectorized, or an image to be classified)
  • In the case of individual inputs, requests arriving at close intervals are batched together and sent to the model as one batch
  • Perfect for creating internal micro-services that separate your model and pre-/post-processing from business logic
  • Since the prediction loop and the REST endpoints are separate processes connected via an SQLite-backed queue, they can be scaled independently

Where not to use fastDeploy?

  • Non-CPU/GPU-heavy models that are better off running in parallel rather than in batches
  • If your predictor calls some external API, uploads to S3, etc. in a blocking way
  • I/O-heavy, non-batching use cases (e.g. querying Elasticsearch or a database for each input)
  • For these cases it is better to serve directly from the REST API code (instead of the consumer-producer mechanism) so that high concurrency can be achieved; see the sketch after this list
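For the I/O-bound cases above, an async REST handler serving each request directly gives high concurrency without a batching queue. The sketch below is illustration only and bypasses fastDeploy entirely; FastAPI, httpx, and the example URL are arbitrary choices, not part of the project:

# Illustration only: per-request external I/O served straight from an
# async endpoint; the event loop interleaves the awaited calls, so no
# consumer-producer queue is needed.
import httpx
from fastapi import FastAPI

app = FastAPI()

@app.get("/lookup")
async def lookup(query: str):
    async with httpx.AsyncClient() as client:
        resp = await client.get("https://example.com/search", params={"q": query})
    return resp.json()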


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

fastdeploy-3.0.20.tar.gz (16.8 kB)

Uploaded Source

Built Distribution

fastdeploy-3.0.20-py3-none-any.whl (16.7 kB)

Uploaded Python 3

File details

Details for the file fastdeploy-3.0.20.tar.gz.

File metadata

  • Download URL: fastdeploy-3.0.20.tar.gz
  • Upload date:
  • Size: 16.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.1.1 CPython/3.12.7

File hashes

Hashes for fastdeploy-3.0.20.tar.gz

  • SHA256: 8d1c6a6f16d20064d4ff085bb2d92145698c09440940a99b4b719c0fd66d26fb
  • MD5: 9edcc0f0c5bfe4fc188c8c12db5a8b83
  • BLAKE2b-256: 5c67dc4fa6deac1516ee0b5db4f2953529edc5c868644a5fcd8d5e8e0e9af67f

See more details on using hashes here.

File details

Details for the file fastdeploy-3.0.20-py3-none-any.whl.

File metadata

  • Download URL: fastdeploy-3.0.20-py3-none-any.whl
  • Upload date:
  • Size: 16.7 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.1.1 CPython/3.12.7

File hashes

Hashes for fastdeploy-3.0.20-py3-none-any.whl

  • SHA256: e7f3cb1952f3ef59f9cbafe488a497cf707afba4c376aafcc139b4e87d1b1866
  • MD5: 0e602f924c52018aca0cf7b31185585e
  • BLAKE2b-256: 8fc45f60bec11f02e97d867c50c54fcaaed33f4163b4c302bda38420e9788599

See more details on using hashes here.
