REST Model Service
rest-model-service is a package for building RESTful services for hosting machine learning models.
This package helps you quickly build RESTful services for your ML models by handling many low-level details, such as:
- Documentation, using pydantic and OpenAPI
- Logging configuration
- Status Check Endpoints
- Metrics
This package also allows you to extend the functionality of your deployed models by following the Decorator Pattern.
Installation
The package can be installed from PyPI:

```bash
pip install rest_model_service
```
Usage
To use the service you must first have a working model class that uses the MLModel base class from the ml_base package. The MLModel base class is designed to provide a consistent interface around model prediction logic, which allows the rest_model_service package to deploy any model that implements it. Examples of how to create MLModel classes for your model can be found in the ml_base documentation.
You can then set up a configuration file that points at the model class of the model you want to host. The configuration file should look like this:
```yaml
service_title: "REST Model Service"
models:
  - class_path: tests.mocks.IrisModel
    create_endpoint: true
```
The "class_path" should contain the full path to the class: the package names, module name, and class name, separated by periods. The "create_endpoint" option covers cases where you want to load a model but not create an endpoint for it. If it is set to "false", the model will be loaded and available for use within the service, but no endpoint will be defined for it. A reference to the model object will be available from the ModelManager singleton.
The config file should be YAML, named "rest_config.yaml", and placed in the current working directory. However, the service can be pointed at a configuration file with a different name or location if needed.
The service can host many models; all that is needed is to add more entries to the "models" array.
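For example, a configuration hosting two models might look like this (the second class path is hypothetical):

```yaml
service_title: "REST Model Service"
models:
  - class_path: tests.mocks.IrisModel
    create_endpoint: true
  - class_path: tests.mocks.AnotherModel
    create_endpoint: false
```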
Configuration options can also be passed to the models hosted by the service. To do this, add a configuration key to the model entry in the "models" array:
```yaml
service_title: "REST Model Service"
models:
  - class_path: tests.mocks.IrisModel
    create_endpoint: true
    configuration:
      parameter1: true
      parameter2: string_value
      parameter3: 123
```
The key-value pairs are passed directly into the model class's __init__() method at instantiation time as keyword arguments. The model can then use the parameters to configure itself.
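Schematically, the service does something equivalent to the following plain-Python snippet (the IrisModel class here is a hypothetical stand-in, not the real model class):

```python
# configuration dictionary parsed from the YAML file
configuration = {
    "parameter1": True,
    "parameter2": "string_value",
    "parameter3": 123,
}


class IrisModel:
    """Hypothetical model class; a real one would subclass MLModel."""

    def __init__(self, parameter1=False, parameter2="", parameter3=0):
        self.parameter1 = parameter1
        self.parameter2 = parameter2
        self.parameter3 = parameter3


# the key-value pairs from the configuration become keyword arguments
model = IrisModel(**configuration)
```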
Adding Service Information
We can add several details to the configuration file that are useful when building OpenAPI specifications.
```yaml
service_title: "REST Model Service"
description: "Service description"
version: "1.1.0"
models:
  - class_path: tests.mocks.IrisModel
    create_endpoint: true
```
The service title, description, and version are passed into the application and used to build the OpenAPI specification. Details for how to build the OpenAPI document for your model service are below.
Adding a Decorator to a Model
The rest_model_service package also supports the decorator pattern. Decorators are defined and explained in the ml_base package's documentation. A decorator can be added to a model by adding the "decorators" key to the model's configuration:
```yaml
service_title: REST Model Service With Decorators
models:
  - class_path: tests.mocks.IrisModel
    create_endpoint: true
    decorators:
      - class_path: tests.mocks.PredictionIDDecorator
```
The PredictionIDDecorator will be instantiated and added to the IrisModel instance when the service starts up.
Keyword arguments can also be provided to the decorator's __init__() method by adding a "configuration" key to the decorator's entry like this:
```yaml
service_title: REST Model Service With Decorators
models:
  - class_path: tests.mocks.IrisModel
    create_endpoint: true
    decorators:
      - class_path: tests.mocks.PredictionIDDecorator
        configuration:
          parameter1: "asdf"
          parameter2: "zxcv"
```
The configuration dictionary will be passed to the decorator class as keyword arguments.
Many decorators can be added to a single model; each new decorator wraps the one that was previously attached to the model. This creates a "stack" of decorators, each of which handles the prediction request before the model's prediction is created.
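The pattern itself can be sketched in plain Python. This is a generic illustration of a decorator that adds a prediction ID, not the ml_base decorator API; the class and parameter names are hypothetical.

```python
import uuid


class Model:
    """Stand-in for an MLModel instance."""

    def predict(self, data):
        return {"species": "setosa"}


class PredictionIDDecorator:
    """Wraps a model and adds a unique ID to each prediction result."""

    def __init__(self, model, prefix="prediction"):
        self._model = model
        self._prefix = prefix

    def predict(self, data):
        # delegate to the wrapped model, then augment the result
        result = self._model.predict(data)
        result["prediction_id"] = "{}-{}".format(self._prefix, uuid.uuid4())
        return result


# decorators stack: the outermost decorator handles the request first
model = PredictionIDDecorator(Model(), prefix="iris")
result = model.predict({"sepal_length": 5.1})
```

Because the decorator exposes the same predict() interface as the model, further decorators can wrap it in exactly the same way.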
Adding Logging
The service also optionally accepts logging configuration through the YAML configuration file:
```yaml
service_title: REST Model Service With Logging
models:
  - class_path: tests.mocks.IrisModel
    create_endpooint: true
logging:
  version: 1
  disable_existing_loggers: true
  formatters:
    formatter:
      class: logging.Formatter
      format: "%(asctime)s %(pathname)s %(lineno)s %(levelname)s %(message)s"
  handlers:
    stdout:
      level: INFO
      class: logging.StreamHandler
      stream: ext://sys.stdout
      formatter: formatter
  loggers:
    root:
      level: INFO
      handlers:
        - stdout
      propagate: false
```
The YAML needs to be formatted so that it deserializes to a dictionary that matches the logging package's configuration dictionary schema.
Adding Metrics
This package allows you to create an endpoint that exposes metrics to a Prometheus server. The metrics endpoint is disabled by default and must be enabled in the configuration file.
Using this aspect of the service requires installing the "metrics" optional dependencies:

```bash
pip install rest_model_service[metrics]
```
To enable metrics collection, set the "enabled" key under the "metrics" key to "true" in the YAML configuration file:
```yaml
service_title: "REST Model Service"
description: "Service description"
version: "1.1.0"
metrics:
  enabled: true
models:
  - class_path: tests.mocks.IrisModel
    create_endpoint: true
```
The default metrics are:
- http_requests_total: a counter of the number of requests made to the service.
- http_request_size_bytes: a summary of the sizes of requests made to the service.
- http_response_size_bytes: a summary of the sizes of responses returned by the service.
- http_request_duration_seconds: a histogram of request durations, with only a few buckets to keep cardinality low.
- http_request_duration_highr_seconds: a histogram of request durations, with a large number of buckets (more than 20).
The configuration allows more complex options to be passed to the Prometheus client library. To do this, add keys to the metrics configuration:
```yaml
service_title: "REST Model Service"
description: "Service description"
version: "1.1.0"
metrics:
  enabled: true
  should_group_status_codes: true
  should_ignore_untemplated: false
  should_group_untemplated: true
  should_round_latency_decimals: false
  should_respect_env_var: false
  should_instrument_requests_inprogress: false
  excluded_handlers: []
  body_handlers: []
  round_latency_decimals: 4
  env_var_name: "ENABLE_METRICS"
  inprogress_name: "http_requests_inprogress"
  inprogress_labels: false
models:
  - class_path: tests.mocks.IrisModel
    create_endpoint: true
```
The options are passed directly into the Prometheus instrumentor library; they are explained in that library's documentation.
Creating an OpenAPI Contract
An OpenAPI contract can be generated dynamically for the models hosted within the REST model service. To create the contract and save it, execute this command:

```bash
generate_openapi
```
The command looks for a "rest_config.yaml" in the current working directory and creates the application from it. The command then saves the resulting OpenAPI document to a file named "openapi.yaml" in the current working directory.
You can provide a path to the configuration file like this:

```bash
generate_openapi --configuration_file=examples/rest_config.yaml
```

You can also provide the desired path for the OpenAPI document that will be created:

```bash
generate_openapi --output_file=example.yaml
```

Both options can be used together:

```bash
generate_openapi --configuration_file=examples/rest_config.yaml --output_file=example.yaml
```
An example rest_config.yaml file is provided in the examples directory of the project. It points at an MLModel class in the tests package.
Using Status Check Endpoints
The service supports three status check endpoints:
- "/api/health" indicates whether the service process is running. This endpoint returns a 200 status once the service has started.
- "/api/health/ready" indicates whether the service is ready to respond to requests. This endpoint returns a 200 status only if all the models and decorators have been instantiated without errors. Once the models and decorators are loaded, the readiness check will always return an ACCEPTING_TRAFFIC state.
- "/api/health/startup" indicates whether the service has started. This endpoint returns a 200 status only if all the models and decorators have been instantiated without errors.
Running the Service
To start the service in development mode, execute this command:

```bash
uvicorn rest_model_service.main:app --reload
```
The service should be able to find your configuration file, but if you did not place it in the current working directory, you can point the service at the right path like this:

```bash
export REST_CONFIG='examples/rest_config.yaml'
uvicorn rest_model_service.main:app --reload
```
Common Errors
If you get an error saying that a module or class cannot be found, you may need to update your PYTHONPATH environment variable:

```bash
export PYTHONPATH=./
```
The service relies on being able to find the model and decorator classes in the Python environment in order to load and instantiate them. If the Python interpreter cannot find the classes, the service won't be able to instantiate the models, create endpoints for them, or generate an OpenAPI document for them.
Development
Download the source code with these commands:

```bash
git clone https://github.com/schmidtbri/rest-model-service
cd rest-model-service
```
Then create a virtual environment and activate it:

```bash
make venv

# on Macs
source venv/bin/activate
```
Install the dependencies:

```bash
make dependencies
```
Testing
To run the unit test suite, execute these commands:

```bash
# first install the test dependencies
make test-dependencies

# run the test suite
make test

# clean up the unit tests
make clean-test
```