MLOps Framework

This is an MLOps framework that deploys models as functions in Cognite Data Fusion or as APIs in Google Cloud Run.

User Guide

Getting Started:

Follow these steps:

  • Install package: pip install akerbp.mlops
  • Set up pipeline file bitbucket-pipelines.yml and config file mlops_settings.yaml by running this command from your repo's root folder:
    python -m akerbp.mlops.deployment.setup
    
  • Fill in user settings and then validate them by running this (from repo root):
    from akerbp.mlops.core.config import validate_user_settings
    validate_user_settings()
    
    alternatively, run the setup again:
    python -m akerbp.mlops.deployment.setup
    
  • Commit the pipeline and settings files to your repo
  • Become familiar with the model template (see folder model_code) and make sure your model follows the same interface and file structure (described later)
  • Follow or request the Bitbucket setup (described later)

At this point, every git push to the master branch will trigger a deployment in the test environment. More information about the deployment pipelines is provided later.

Updating MLOps

Follow these steps:

  • Install a new version using pip, e.g. pip install akerbp.mlops==x
  • Run this command from your repo's root folder:
    python -m akerbp.mlops.deployment.setup
    
    That will update the pipeline and validate your settings. Commit changes and you're ready to go!

General Guidelines

Users should consider the following general guidelines:

  • Model artifacts should not be committed to the repo. The model_artifact folder does store artifacts for the model defined in model_code, but only to help users understand the framework
  • Follow the recommended file and folder structure (described later)
  • There can be several models in your repo: each needs to be registered in the settings and have its own model and test files
  • Follow the import guidelines (described later)
  • Make sure the prediction service gets access to model artifacts (described later)

Configuration

MLOps configuration is stored in mlops_settings.yaml. Example for a project with a single model:

model_name: model1
model_file: model_code/model1.py
req_file: model_code/requirements.model
artifact_folder: model_artifact
test_file: model_code/test_model1.py
platform: cdf
info:
    prediction: &desc
        description: 'Description prediction service, model1'
        owner: data@science.com
    training:
        << : *desc
        description: 'Description training service, model1'

Field description:

  • model_name: a suitable name for your model. No spaces or dashes ("-") allowed.
  • model_file: model file path relative to the repo's root folder. All required model code should be under the top folder in that path (model_code in the example above).
  • req_file: model requirement file. Do not use .txt extension!
  • artifact_folder: model artifact folder. It can be the name of an existing local folder (note that it should not be committed to the repo). In that case it will be used in local deployment. It still needs to be uploaded/promoted with the model manager so that it can be used in Test or Prod. If the folder does not exist locally, the framework will try to create that folder and download the artifacts there. Set to null if there is no model artifact.
  • test_file: test file to use. Set to null for no testing before deployment (not recommended).
  • platform: deployment platform, either cdf (Cognite) or gc (Google).
  • info: description and owner information for the prediction and training services. The training field can be omitted if there's no such service. Note: all paths should be Unix style, regardless of the platform.

If there are multiple models, model configuration should be separated using ---. Example:

model_name: model1
model_file: model_code/model1.py
(...)
--- # <- this separates model1 and model2 :)
model_name: model2
model_file: model_code/model2.py
(...)

Files and Folders Structure

All the model code and files should be under a single folder, e.g. model_code. Required files in this folder:

  • model.py: implements the standard model interface
  • test_model.py: tests to verify that the model code is correct and to verify correct deployment
  • requirements.model: libraries needed (with specific version numbers), can't be called requirements.txt. Add the MLOps framework like this:
    # requirements.model
    (...) # your other reqs
    akerbp.mlops==MLOPS_VERSION
    
    During deployment MLOPS_VERSION will be automatically replaced by the specific version that you have installed.

The following structure is recommended for projects with multiple models:

  • model_code/model1/
  • model_code/model2/
  • model_code/common_code/

This is because, when deploying a model (e.g. model1), the top folder in the path (model_code in the example above) is copied and deployed, so the common_code folder (assumed to be needed by model1) is included. Note that the model2 folder would also be deployed (assumed unnecessary but harmless).

Import Guidelines

The repo's root folder is the base folder when importing. For example, assume you have these files in the folder with model code:

  • model_code/model.py
  • model_code/helper.py
  • model_code/data.csv

If model.py needs to import helper.py, use: import model_code.helper. If model.py needs to read data.csv, the right path is os.path.join('model_code', 'data.csv').
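
For instance, model.py could look like this (a minimal sketch based on the example files above):

# model_code/model.py: minimal sketch of the import guidelines
import os

import model_code.helper  # resolves from the repo root, not from model_code/

data_path = os.path.join('model_code', 'data.csv')  # same rule for data files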

It's of course possible to import from the MLOps package, e.g. its logger:

from akerbp.mlops.core import logger 
logging = logger.get_logger("logger_name")
logging.debug("This is a debug log")

Services

We consider two types of services: prediction and training.

Deployed services can be called with

from akerbp.mlops.xx.helpers import call_function
output = call_function(function_name, data)

where xx is either cdf or gc, and function_name follows the structure model-service-env:

  • model: model name given by the user (settings file)
  • service: either training or prediction
  • env: either dev, test or prod (depending on the deployment environment)

The output has a status field (ok or error). If the status is ok, the output also has a prediction or training field (depending on the type of service). The former is determined by the model's predict method, while the latter combines artifact metadata and model metadata produced by the train function. Prediction service outputs also include a model_id field to keep track of which model was used to predict.
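
For example, a prediction service's response could be handled like this (a sketch; the payload format is defined by your model, and the function name follows the model-service-env structure above):

from akerbp.mlops.cdf.helpers import call_function

data = {"x": [1, -1]}  # payload format defined by your model
output = call_function('model1-prediction-test', data)
if output['status'] == 'ok':
    prediction = output['prediction']  # produced by the model's predict method
    model_id = output['model_id']  # which model artifact was used
else:
    raise RuntimeError("Prediction service returned an error")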

Deployment Platform

Model services (described above) can be deployed to either CDF or GCR, independently.

CDF Specific Features

CDF Functions include metadata when they are called. This information can be used to redeploy a function (specifically, the file_id field). Example:

import akerbp.mlops.cdf.helpers as cdf
cdf.set_up_cdf_client('deploy')
cdf.redeploy_function(
  'function_name',
  file_id,  # taken from the call metadata mentioned above
  'Description', 
  'your@email.com'
)

The function name cannot be the same as that of a currently deployed function.

It's possible to query available functions (can be filtered by environment and/or tags). Example:

import akerbp.mlops.cdf.helpers as cdf
cdf.set_up_cdf_client('deploy')
all_functions = cdf.list_functions()
test_functions = cdf.list_functions(env="test")
tag_functions = cdf.list_functions(tags=["well_interpretation"])

Functions can be deleted. Example:

import akerbp.mlops.cdf.helpers as cdf
cdf.set_up_cdf_client('deploy')
cdf.delete_service("my_model-prediction-test")

Functions can be called in parallel. Example:

from akerbp.mlops.cdf.helpers import call_function_parallel
function_name = 'my_function-prediction-prod'
data = [dict(data='data_call_1'), dict(data='data_call_2')]
response1, response2 = call_function_parallel(function_name, data)

Model Artifacts for the Prediction Service

Prediction services are deployed together with model artifacts so that they are available at prediction time (downloading them at prediction time would add waiting time, and files written at runtime consume RAM).

Model artifacts are segregated by environment (e.g. only production artifacts can be deployed to production). Model artifacts are versioned and stored in CDF Files together with user-defined metadata. Uploading a new model increases the version count by 1 for that model and environment. When deploying a model service, the latest model version is chosen (however, we can discuss the possibility of deploying specific versions or filtering by metadata).

The general rule is that model artifacts have to be uploaded manually to the test (or dev) environment before deployment. Code example:

import akerbp.mlops.model_manager as mm

metadata = train(model_dir, secrets) # or define it directly
mm.setup()
folder_info = mm.upload_new_model_version(
  model_name, 
  env,
  folder_path, 
  metadata
)

If there are multiple models, you need to do this one at a time.

Model artifacts need to be promoted to the production environment after they have been deployed successfully to the test environment.

# After a model's version has been successfully deployed to test
import akerbp.mlops.model_manager as mm

mm.setup()
mm.promote_model('model', 'version')

This requires the COGNITE_API_KEY_* environment variables (see next section), or you can give a suitable key to the model manager setup function. Note that model_name corresponds to one of the model_name values defined in mlops_settings.yaml, env is the target environment (where the model should be available), folder_path is the local model artifact folder, and metadata is a dictionary with artifact metadata, e.g. performance, git commit, etc. Each model update adds a new version (environment dependent); note that updating a model doesn't modify the models used in existing prediction services.
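
For instance, the metadata dictionary could look like this (illustrative fields):

metadata = {
    'performance': 'rmse=0.12',  # illustrative metric
    'git_commit': 'abc1234',
}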

Recommended process to update a model (steps 2 and 6 are sketched in code after the list):

  1. New model features implemented in a feature branch
  2. New artifact generated and uploaded to test environment
  3. Feature branch merged with master
  4. Test deployment is triggered automatically: prediction service is deployed to test environment with the latest artifact version
  5. Prediction service in test is verified
  6. Artifact version is promoted manually from command line whenever suitable
  7. Production deployment is triggered manually from Bitbucket: prediction service is deployed to prod with the latest artifact version
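
Steps 2 and 6 correspond to the model manager calls shown earlier; as a sketch (model name, folder, version and metadata are placeholders):

import akerbp.mlops.model_manager as mm

mm.setup()
# Step 2: upload a new artifact version to the test environment
metadata = {'git_commit': 'abc1234'}  # illustrative
mm.upload_new_model_version('model1', 'test', 'model_artifact', metadata)
# Step 6: after the test service is verified, promote the version to production
mm.promote_model('model1', 'version')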

Note that it's possible to get an overview of the model artifacts managed by the MLOps framework. Some examples (see function documentation for other possible queries):

import akerbp.mlops.model_manager as mm
mm.setup()
# all artifacts
folder_info = mm.get_model_version_overview() 
# all artifacts for a given model
folder_info = mm.get_model_version_overview(model_name='xx')

If the overview shows model artifacts that are not needed, it is possible to remove them. For example if artifact "my_model/dev/5" is not needed:

import akerbp.mlops.model_manager as mm
mm.setup()
model_to_remove = "my_model/dev/5"
mm.delete_model_version(model_to_remove)

The model manager will by default show information on the artifact to delete and ask for user confirmation before proceeding. It's possible (but not recommended) to disable this check. There's no identity check, so it's possible to delete any model artifact (including those from other data scientists). Be careful!

Other notes:

  • In projects with a training service, you can rely on it to upload a first version of the model. The first prediction service deployment will fail, but you can deploy again after the training service has produced a model.
  • When you deploy from the development environment (covered later in this document), the model artifacts in the settings file can point to existing local folders. These will then be used for the deployment. Version is then fixed to model_name/dev/1. Note that these artifacts are not uploaded to CDF Files.

Local Testing and Deployment

It's possible to test the functions locally, which can help you debug errors quickly. This is recommended before deployment.

Define the following environment variables (e.g. in .bashrc):

export ENV=dev
export COGNITE_API_KEY_PERSONAL=xxx
export COGNITE_API_KEY_FUNCTIONS=$COGNITE_API_KEY_PERSONAL
export COGNITE_API_KEY_DATA=$COGNITE_API_KEY_PERSONAL
export COGNITE_API_KEY_FILES=$COGNITE_API_KEY_PERSONAL
export GOOGLE_PROJECT_ID=xxx # If deploying to Google Cloud Run

From your repo's root folder:

  • python -m pytest model_code (replace model_code with your model code folder name)
  • deploy_prediction_service.sh
  • deploy_training_service.sh (if there's a training service)

The first command runs your model tests. The last two run the model tests as well as the service tests implemented in the framework, and simulate deployment.

If you really want to deploy from your development environment, you can run this: LOCAL_DEPLOYMENT=True deploy_prediction_service.sh

Note that, in case of emergency, it's possible to deploy to test or production from your local environment, e.g.: ENV=test deploy_prediction_service.sh

Automated Deployments from Bitbucket

Deployments to the test environment are triggered by commits (you need to push them). Deployments to the production environment are enabled manually from the Bitbucket pipeline dashboard. Branches that match 'deploy/*' behave as master.

It is assumed that most projects won't include a training service. A branch that matches 'mlops/*' deploys both prediction and training services. If a project includes both services, the pipeline file could instead be edited so that master deploys both services.

It is possible to schedule the training service in CDF, in which case it can also make sense to schedule the deployment pipeline of the model service (as often as new models are trained).

Bitbucket Setup

The following environments need to be defined in repository settings > deployments:

  • test deployments: test-prediction and test-training, each with ENV=test
  • production deployments: production-prediction and production-training, each with ENV=prod

The following need to be defined in repository settings > repository variables: COGNITE_API_KEY_DATA, COGNITE_API_KEY_FUNCTIONS, COGNITE_API_KEY_FILES (these should be CDF keys with access to data, functions and files). If deployment to GCR is needed, you also need: ENABLE_GC_DEPLOYMENT (set to True), GOOGLE_SERVICE_ACCOUNT_FILE (content of the service account id file) and GOOGLE_PROJECT_ID (name of the project).

The pipeline needs to be enabled.

Developer/Admin Guide

MLOps Files and Folders

These are the files and folders in the MLOps repo:

  • src contains the MLOps framework package
  • mlops_settings.yaml contains the user settings for the dummy model
  • model_code is a model template included to show the model interface. It is not needed by the framework, but it is recommended to become familiar with it.
  • model_artifact stores the artifacts for the model shown in model_code. This is to help to test the model and learn the framework.
  • bitbucket-pipelines.yml describes the deployment pipeline in Bitbucket
  • build.sh is the script to build and upload the package
  • setup.py is used to build the package
  • LICENSE is the package's license

Build and Upload Package

Create an account on PyPI, then create a token and a $HOME/.pypirc file. Edit the setup.py file and note the following:

  • Dependencies need to be registered
  • Bash scripts will be installed in a bin folder in the PATH.

The pipeline is set up to build the library from Bitbucket, but it's possible to build and upload the library from the development environment as well:

bash build.sh

In general this is required before LOCAL_DEPLOYMENT=True bash deploy_xxx_service.sh. The exception is if local changes affect only the deployment part of the library, and the library has been installed in developer mode with:

pip install -e .

In this mode, the installed package links to the source code, so it can be modified without the need to reinstall.

Bitbucket Setup

In addition to the user setup, the following is needed to build the package:

  • test-pypi: ENV=test, TWINE_USERNAME=__token__ and TWINE_PASSWORD (token generated from pypi)
  • prod-pypi: ENV=prod, TWINE_USERNAME=__token__ and TWINE_PASSWORD (token generated from pypi, can be the same as above)

Google Cloud Setup

In order to deploy to Google Cloud Run, you need to create a service account with the following rights:

  • Cloud Build Service Account
  • Service Account Admin
  • Service Account User
  • Cloud Run Admin
  • Viewer

You also need to create the CDF secret mlops-cdf-keys. It's a string that can be evaluated in Python to get a dictionary (the same one used in the cdf helpers file). This is needed because:

  • It needs to be passed to the prediction and training services
  • The model registry stores model artifacts in CDF Files
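
As an illustration, the secret's value could look like this (the key names here are hypothetical; the actual fields must match what akerbp.mlops.cdf.helpers expects):

# Hypothetical value of the mlops-cdf-keys secret: a string that
# evaluates to a dictionary of CDF API keys (key names illustrative).
secret_value = "{'data': 'key1', 'functions': 'key2', 'files': 'key3'}"
cdf_keys = eval(secret_value)  # evaluated in Python to get the dictionary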

Calling FastAPI Services

Bash: install httpie, then:

http -v POST http://127.0.0.1:8000/train data='{"x": [1,-1],"y":[1,0]}'

Python: posting nested JSON with requests is challenging. This works:

import requests, json
model_api = "http://127.0.0.1:8000/train"  # service endpoint, as in the httpie example
data = {"x": [1, -1], "y": [1, 0]}
requests.post(model_api, json={'data': json.dumps(data)})

Notes on the code

Service testing happens in an independent process (subprocess library) to avoid setup problems:

  • When deploying multiple models, the service has to be reloaded before testing it; otherwise it would still be the first model's service. Model initialization in the prediction service is designed to load artifacts only once per process
  • If the model and the MLOps framework rely on different versions of the same library, the version would be changed at runtime, but the upgraded/downgraded version would not be available in the current process
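
As a minimal sketch of this pattern (not the framework's actual code), a service test can be run in a fresh process like this:

import subprocess
import sys

# Run the service test in a fresh interpreter so that module state
# (loaded artifacts, library versions) doesn't leak between models.
subprocess.run(
    [sys.executable, '-m', 'pytest', 'model_code/test_model1.py'],
    check=True,
)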
