fastapi mlflow

Deploy mlflow models as JSON APIs using FastAPI with minimal new code.
Installation
pip install fastapi-mlflow
For running the app in production, you will also need an ASGI server, such as Uvicorn or Hypercorn.
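For example, to install the package together with Uvicorn:

pip install fastapi-mlflow uvicorn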
Install on Apple Silicon (ARM / M1)
If you experience problems installing on a newer-generation Apple silicon device, this solution from Stack Overflow has been found to help when applied before retrying the install:
brew install openblas gfortran
export OPENBLAS="$(brew --prefix openblas)"
License
Copyright © 2022-23 Auto Trader Group plc.
Examples
Simple
Create
Create a file main.py containing:
from fastapi_mlflow.applications import build_app
from mlflow.pyfunc import load_model

# Load a model from local disk and wrap it in a FastAPI application
model = load_model("/Users/me/path/to/local/model")
app = build_app(model)
Run
Run the server with:
uvicorn main:app
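During development, Uvicorn's --reload flag restarts the server whenever main.py changes:

uvicorn main:app --reload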
Check
Open your browser at http://127.0.0.1:8000/docs
You should see the automatically generated docs for your model, and be able to test it out using the Try it out button in the UI.
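Once the server is running you can also call the endpoint programmatically. The sketch below is illustrative only: the endpoint path (here assumed to be /predictions) and the request body fields are generated from your model's signature, so check the interactive docs for the exact schema.

# Illustrative client call. The path "/predictions" and the payload
# fields below are assumptions; consult http://127.0.0.1:8000/docs for
# the schema generated from your model's signature.
import requests

response = requests.post(
    "http://127.0.0.1:8000/predictions",
    json={"data": [{"feature_one": 1.0, "feature_two": 2.0}]},
)
print(response.json())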
Serve multiple models
It should be possible to host multiple models (assuming that they have compatible dependencies) by leveraging FastAPI's sub-applications:
from fastapi import FastAPI
from fastapi_mlflow.applications import build_app
from mlflow.pyfunc import load_model

app = FastAPI()

# Mount each model's generated app under its own path prefix
model1 = load_model("/Users/me/path/to/local/model1")
model1_app = build_app(model1)
app.mount("/model1", model1_app)

model2 = load_model("/Users/me/path/to/local/model2")
model2_app = build_app(model2)
app.mount("/model2", model2_app)
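Each mounted sub-application also serves its own interactive documentation, so with the layout above the docs should be reachable at /model1/docs and /model2/docs.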
Custom routing
If you want more control over where and how the prediction endpoint is mounted in your API, you can build the predictor function directly and use it as you need:
from inspect import signature

from fastapi import FastAPI
from fastapi_mlflow.predictors import build_predictor
from mlflow.pyfunc import load_model

model = load_model("/Users/me/path/to/local/model")
predictor = build_predictor(model)

app = FastAPI()
app.add_api_route(
    "/classify",
    predictor,
    response_model=signature(predictor).return_annotation,
    methods=["POST"],
)
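As a quick sanity check, you could exercise the custom route in-process with FastAPI's TestClient (requires httpx). The request body below is a placeholder; the real schema is derived from your model's signature.

# Minimal sketch using FastAPI's TestClient. The payload fields are
# placeholders -- the actual request schema is generated from the model.
from fastapi.testclient import TestClient

client = TestClient(app)
response = client.post("/classify", json={"data": [{"feature_one": 1.0}]})
print(response.status_code, response.json())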
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
File details
Details for the file fastapi_mlflow-0.6.3.tar.gz.
File metadata
- Download URL: fastapi_mlflow-0.6.3.tar.gz
- Upload date:
- Size: 8.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.11.4
File hashes
Algorithm | Hash digest
---|---
SHA256 | 341cc5f984ab6d02bc082c8a5a010b2fb0ba981acd1672d19b4d10c336d37d5e
MD5 | 499625784f9628b7bd4c142fb73e0d88
BLAKE2b-256 | ca32f7ff58851bd6d679946a12f138b37116f11b4eaadf23ba501ad736098a12
File details
Details for the file fastapi_mlflow-0.6.3-py3-none-any.whl.
File metadata
- Download URL: fastapi_mlflow-0.6.3-py3-none-any.whl
- Upload date:
- Size: 10.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.11.4
File hashes
Algorithm | Hash digest
---|---
SHA256 | 0b24249ada19de85207ebbc2b17cbe16a7f24f55ba4f89a16dce54eb6fdc743d
MD5 | 85028a1d830fd9fe25f5ebd160aa6727
BLAKE2b-256 | f040aa161a2928e4bf7323a83054e20fc3e1b18092cfbe1438843b4a82091e8c