Introduction
This repo serves as a model release template. It covers everything from ONNX conversion up to releasing the model into production, with all the necessary configs.
Getting Started
To get started, fork this repo, work through it, and adapt it to your use case as described below.
- Makefile
- Dependency management
- Configs
- Package code
- ONNX conversion
- TRT conversion
- Handlers
- Testing
- Flask app
1. Makefile
The Makefile is the interface developers use to perform any task. Each command is described below; example invocations follow the list:
- download-saved-model: Download artifacts stored on MLflow at a certain epoch. Make sure to fill configs/config.yaml.
- download-registered-model: Pull artifacts from the model registry. Pass DEST as the directory to store them in. Make sure to fill configs/config.yaml.
- convert-to-onnx: Run the convert-to-ONNX script.
- convert-trt: Build and run a container that performs TRT conversion and writes the result to artifacts/trt_converted. Pass FP (floating point), BS (batch size), DEVICE (GPU device), ONNX_PATH (path to ONNX weights).
- trt-exec: Command executed from inside the TRT container; performs the conversion and copies the model out of the container.
- predict-onnx: Predict using ONNX weights. Pass DATA_DIR (directory of data to predict), ONNX_PATH (path to ONNX weights), CONFIG_PATH (model config path), OUTPUT (output directory).
- predict-triton: Predict by sending requests to a hosted Triton server. Pass DATA_DIR (directory of data to predict), IP (server IP), PORT (Triton server port), MODEL_NAME (model name on Triton), CONFIG_PATH (model config path), OUTPUT (output directory).
- evaluate: Evaluate predicted results and write out metrics. Pass MODEL_PREDS (model predictions directory), GT (ground truth directory), OUTPUT (output path).
- python-unittest: Run the defined Python tests.
- bash-unittest: Run the defined bash tests.
- quick-host-onnx: Set up the Triton folder structure by copying the necessary files, then host a Triton server container. Pass ONNX_PATH.
- quick-host-trt: Set up the Triton folder structure by copying the necessary files, then host a Triton server container. Pass FP (floating point), BS (batch size), DEVICE (GPU device).
- host-endpoint: Perform quick-host-trt, then build and start the Flask container. Pass FP (floating point), BS (batch size), DEVICE (GPU device).
- setup-flask-app: Command executed from inside the Flask container.
- push-model: Push the model to the model registry.
- build-publish-pypi: Build the package folder into a PyPI package and push it to the registry.
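For example, a typical conversion, hosting, and evaluation flow might look like the following. The argument values (paths, IP, port, model name) are placeholders for illustration, not defaults shipped with this template:

    make convert-to-onnx
    make convert-trt FP=16 BS=8 DEVICE=0 ONNX_PATH=artifacts/onnx_converted/model.onnx
    make quick-host-trt FP=16 BS=8 DEVICE=0
    make predict-triton DATA_DIR=data/samples IP=127.0.0.1 PORT=8001 MODEL_NAME=lpr_model CONFIG_PATH=configs/config.yaml OUTPUT=artifacts/predictions
    make evaluate MODEL_PREDS=artifacts/predictions GT=data/ground_truth OUTPUT=artifacts/metrics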
2. Dependency Management
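A minimal sketch, assuming the project manages its dependencies with Poetry (the published lpr_pkg distributions were built with poetry 1.4.2); the commands below are standard Poetry usage rather than targets defined in this repo:

    poetry install            # install the locked dependencies into a virtual environment
    poetry add onnxruntime    # add a new dependency (package name is illustrative)
    poetry build              # build the sdist and wheel that build-publish-pypi pushes to the registry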