fastDeploy
Deploy DL/ML inference pipelines with minimal extra code.
Installation:
pip install --upgrade fastdeploy
Usage:
# Invoke fastdeploy
fastdeploy --help
# or
python -m fastdeploy --help
# Start prediction "loop" for recipe "echo_json"
fastdeploy --recipe ./echo_json --mode loop
# Start REST APIs for recipe "echo_json"
fastdeploy --recipe ./echo_json --mode rest
# Auto-generate a Dockerfile and build the Docker image. --base is the Docker base image
fastdeploy --recipe ./recipes/echo_json/ \
--mode build_rest --base python:3.6-slim
# fastdeploy_echo_json built!
# Run docker image
docker run -it -p8080:8080 fastdeploy_echo_json
fastDeploy monitor
- available at localhost:8080 (or the port set with --port)
- Writing a recipe for your prediction script: https://github.com/notAI-tech/fastDeploy/blob/master/recipe.md
- cURL and Python inference commands: https://github.com/notAI-tech/fastDeploy/blob/master/inference.md
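A recipe wraps your prediction script behind a single entry-point function that fastDeploy calls with a batch of inputs. A minimal sketch of what an "echo_json"-style predictor might look like (the `predictor` name follows the recipe docs linked above; the exact file layout and the `batch_size` parameter are assumptions, not confirmed from this page):

```python
# Hypothetical sketch of a recipe's predictor.py for an "echo_json"-style recipe.
# The `predictor` function name follows the recipe docs linked above;
# the `batch_size` keyword is an assumption for illustration only.

def predictor(inputs, batch_size=1):
    """Echo each JSON-serializable input back unchanged.

    fastDeploy collects incoming requests into a list and expects
    one output element per input element, in the same order.
    """
    return [inp for inp in inputs]
```

Under these assumptions, fastDeploy would invoke `predictor` with the batched request payloads and map each returned element back to the request that produced it.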