MLOps Framework

This is a framework for MLOps. It deploys models as Cognite Data Fusion Functions or Google Cloud Run APIs.

Getting Started:

Follow these steps:

  • Install package: pip install akerbp.mlops
  • Define the ENV and COGNITE_* environment variables as described in https://akerbp.atlassian.net/wiki/spaces/SIMDev/pages/1181384729/MLOps
  • Become familiar with the model template
  • Set up pipeline file in your repo (from your repo's root folder):
    from akerbp.mlops.core.setup import setup_pipeline
    setup_pipeline()
    
  • Copy config file mlops_settings.py from MLOps repo to your repo
  • Fill in user settings.
  • Model artifacts should not be committed to the repo.
  • Follow the file and folder structure (described later)
  • It's possible to have several models per repo: each needs to be registered in the settings and to have its own model and test files.
  • Follow the import guidelines (described later)
  • Make sure the prediction service gets access to model artifacts (described later)
  • Add the new files to your repository, commit and push
  • Follow or request the Bitbucket setup (described later)

At this point, every git push to the master branch will trigger a deployment in the test environment. More information about the deployment pipelines is provided later.

User Guide

MLOps Files and Folders

These are the files and folders from the MLOps framework:

  • mlops_settings.py contains the user settings
  • Folder model_code is a model template included to show the model interface. It is not needed by the framework, but it is recommended to become familiar with it.
  • model_artifact stores the artifacts for the model shown in model_code. This helps you test the model and learn the framework.
  • mlops contains deployment code
  • bitbucket-pipelines.yml describes the deployment pipeline in Bitbucket

Import Guidelines

The repo's root folder is the base folder when importing. For example, assume you have these files in the model code folder: model_code/model.py, model_code/helper.py and model_code/data.csv. If model.py needs to import helper.py, use: import model_code.helper. If model.py needs to read data.csv, the right path is os.path.join('model_code', 'data.csv').
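For example, model.py written according to these guidelines could contain the following (a minimal sketch based on the files listed above):

# model_code/model.py
import os

# Imports are resolved from the repo's root folder, not from model_code/
import model_code.helper

# File paths are relative to the repo's root folder as well
data_path = os.path.join('model_code', 'data.csv')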

The MLOps library can of course be imported as well, e.g. its logger:

from akerbp.mlops.core import logger

logging = logger.get_logger()
logging.debug("This is a debug log")

Files and Folders Structure

All the model code and files should be under a single folder. Required files:

  • model.py: implements our standard model interface (a rough sketch follows this list)
  • test_model.py: tests to verify that the model code is correct and that the deployment works
  • requirements.model: libraries needed (with specific version numbers); it can't be called requirements.txt
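As a rough, hypothetical sketch of the model interface (the authoritative definition is the template in the model_code folder; the predict signature below is an assumption, while train mirrors the train(model_dir, secrets) call shown later in this document):

# model_code/model.py -- hypothetical sketch, not the official template
def train(folder_path, secrets):
    # Train the model, write artifacts to folder_path and return model metadata
    return {"accuracy": 0.93}  # illustrative metadata

def predict(data, artifact_folder, secrets):
    # Assumed signature: load artifacts from artifact_folder and return predictions
    return [0 for _ in data["x"]]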

Projects with multiple models: models can be in different folders, but if there are common files, the following structure should be used (when deploying a model, the top folder of the path is chosen as the parent model code folder):

  • models/model1/
  • models/model2/
  • models/common_code/

Services

We consider two types of services: prediction and training. Prediction services are deployed together with the model artifacts so that these are available at prediction time (downloading them would add waiting time, and files written at run time consume RAM). Service output has a status field ('ok' or 'error'). If the status is 'ok', the output also has a 'prediction' or 'training' field: the former is determined by the predict method of the model, while the latter combines artifact metadata and model metadata produced by the train function. The prediction service output also has a 'model_id' field to keep track of which model was used to predict.
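As an illustration, service responses could look roughly like the following (the values and the 'model_id' format are assumptions; only the field names described above come from the framework):

# Prediction service, successful call (illustrative values)
{"status": "ok", "prediction": [0.2, 0.8], "model_id": "model1/test/3"}
# Training service, successful call
{"status": "ok", "training": {"accuracy": 0.93}}
# Any service, failed call
{"status": "error"}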

Model Artifacts for the Prediction Service

Deployment of a prediction service requires a model artifact folder. Model artifacts are segregated by environment (e.g. only production models can be deployed to production). Model artifacts are versioned and stored in CDF Files together with user-defined metadata. Uploading a new model increases the version count by 1 for that model and environment. It's important not to delete model files manually, since that would mess with the model manager. When deploying a model service, the latest model version is chosen (however, we can discuss the possibility of deploying specific versions or filtering by metadata).

The general rule is that model artifacts have to be uploaded manually before deployment. If there are multiple models, you need to do this one at a time. Code example:

from akerbp.mlops.cdf.helpers import set_up_cdf_client
from akerbp.mlops.cdf.helpers import upload_new_model_version 

set_up_cdf_client()
metadata = train(model_dir, secrets) # or define it directly

folder_info = upload_new_model_version(
  model_name, 
  env,
  folder_path, 
  metadata
)

Note that model_name corresponds to one of the model names defined in mlops_settings.py, env is the target environment (where the model should be available), folder_path is the local model artifact folder and metadata is a dictionary with artifact metadata, e.g. performance, git commit, etc. Each model update adds a new version (per environment). Note that updating a model doesn't modify the models used in existing prediction services.
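For instance, a concrete call could look like this (all values are illustrative only):

folder_info = upload_new_model_version(
  "model1",          # model_name: one of the model names in mlops_settings.py
  "test",            # env: "test" here, "prod" for a production upload
  "model_artifact",  # folder_path: local folder with the artifact files
  {"accuracy": 0.93, "git_commit": "a1b2c3d"}  # metadata
)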

Recommended process to update a model:

  1. New model features implemented in a feature branch
  2. New artifact generated and uploaded to test environment
  3. Feature branch merged with master
  4. Test deployment is triggered automatically: prediction service is deployed to test with the latest artifacts
  5. Prediction service in test is verified; if things go well:
  6. New artifact uploaded to prod environment
  7. Production deployment is triggered manually: prediction service is deployed to prod with the latest artifacts

However, in projects with a training service, you can rely on it to upload a first version of the model. The first prediction service deployment will fail, but you can deploy again after the training service has produced a model.

Another exception is that, when you deploy from the development environment (covered later in this document), the model artifacts in the settings file can point to existing local folders. These will then be used for the deployment. Version is then fixed to model_name/dev/1. Note that these artifacts are not uploaded to CDF Files.

Local Testing and Deployment

It's possible to test the functions locally, which can help you debug errors quickly. This is recommended before a deployment. From your repo's root folder:

  • python -m pytest model_code (replace model_code by your model code folder name)
  • bash deploy_prediction_service.sh
  • bash deploy_training_service.sh (if there's a training service)

The first command runs your model tests. The last two run the model tests as well as the service tests implemented in the framework, and simulate a deployment.

If you really want to deploy from your development environment, you can run this: LOCAL_DEPLOYMENT=True bash deploy_prediction_service.sh

Automated Deployments from Bitbucket

Deployments to the test environment are triggered by commits (you need to push them). Deployments to the production environment are enabled manually from the Bitbucket pipeline dashboard. Branches that match 'deploy/*' behave as master.

It is assumed that most projects won't include a training service. A branch that matches 'trainpred/*' deploys both prediction and training services. If a project includes both services, the pipeline file could instead be edited so that master deploys both services.

It is possible to schedule the training service in CDF, and then it can make sense to also schedule the deployment pipeline of the model service (as often as new models are trained).

Bitbucket Setup

The following environments need to be defined in repository settings > deployments:

  • test deployments: test-prediction and test-training, each with ENV=test
  • production deployments: production-prediction and production-training, each with ENV=prod

The following need to be defined in repository settings > repository variables: COGNITE_API_KEY_DATA, COGNITE_API_KEY_FUNCTIONS, COGNITE_API_KEY_FILES

The pipeline needs to be enabled.

Developer Guide

Build and Upload Package

Edit the setup.py file and note the following:

  • Register dependencies
  • Bash scripts will be installed in a bin folder in the PATH

Create an account in PyPI, then create a token and a $HOME/.pypirc file. It's possible to build and upload the library from the development environment:

bash build.sh

However, there's no need to do that, since the pipeline is set up to run that script before the service steps.

The library can be installed locally in developer mode (installed package links to the source code, so that it can be modified without the need to reinstall). From the package folder:

pip install -e .

Calling FastAPI services

Bash: install httpie, then:

http -v POST http://127.0.0.1:8000/train data='{"x": [1,-1],"y":[1,0]}'

Python: posting nested JSON with requests can be challenging. This works:

import requests, json

# model_api is the URL of the service endpoint (e.g. the /train URL above)
data = {"x": [1, -1], "y": [1, 0]}
# Nested data must be serialized to a JSON string and sent in the 'data' field
requests.post(model_api, json={'data': json.dumps(data)})
