MLOps Framework
This is a framework for MLOps that deploys models as functions in Cognite Data Fusion or as APIs in Google Cloud Run.
User Guide
Reference Guide
This assumes you are already familiar with the framework, and acts as a quick reference guide for deploying models using the prediction service, i.e. when model training is performed outside of the MLOps framework.
- Train model to generate model artifacts
- Manually upload artifacts to your test environment
  - This includes the model artifacts generated during training, the mapping and settings files for the model, the scaler object, etc.: everything that is needed to preprocess the data and make predictions using the trained model (a code sketch of the upload and promotion steps follows this list)
- Deploy prediction service to test
- This is handled by the CI/CD pipeline in Bitbucket
- Manually promote model artifacts from test to production
- Manually trigger deployment of model to production
- Trigger in the CI/CD pipeline
- Call deployed model
- See section "Calling a deployed model prediction service hosted in CDF" below
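For quick reference, the manual upload and promotion steps above map to Model Manager calls along these lines. This is a minimal sketch: the model name, artifact folder and metadata are placeholders, and Model Manager is covered in detail later in this document.

```python
import akerbp.mlops.model_manager as mm

mm.setup()
# Upload artifacts (trained outside the MLOps framework) to the test environment
mm.upload_new_model_version("model1", "test", "model_artifact", {"git_commit": "abc123"})
# Once the test deployment is verified, promote the artifacts to production
mm.promote_model("model1", "1")
```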
Getting Started
Follow these steps:
- Install the package:

  ```bash
  pip install akerbp.mlops
  ```

- Set up the pipeline file `bitbucket-pipelines.yml` and the config file `mlops_settings.yaml` by running this command from your repo's root folder:

  ```bash
  python -m akerbp.mlops.deployment.setup
  ```

- Fill in the user settings and then validate them by running this (from the repo root):

  ```python
  from akerbp.mlops.core.config import validate_user_settings
  validate_user_settings()
  ```

  Alternatively, run the setup again:

  ```bash
  python -m akerbp.mlops.deployment.setup
  ```
- Commit the pipeline and settings files to your repo
- Become familiar with the model template (see the `model_code` folder) and make sure your model follows the same interface and file structure (see Files and Folders Structure)
- Follow or request the Bitbucket setup (described later)
At this point, every git push to the master branch will trigger a deployment in the test environment. More information about the deployment pipelines is provided later.
Updating MLOps
Follow these steps:
- Install a new version using pip, e.g. `pip install akerbp.mlops==x`, or upgrade your existing version to the latest release by running `pip install --upgrade akerbp.mlops`
- Run this command from your repo's root folder:

  ```bash
  python -m akerbp.mlops.deployment.setup
  ```

This will update the Bitbucket pipeline with the newest release of akerbp.mlops and validate your settings. Once the settings are validated, commit the changes and you're ready to go!
General Guidelines
Users should consider the following general guidelines:
- Model artifacts should not be committed to the repo. The folder `model_artifact` does store model artifacts for the model defined in `model_code`, but this is just to help users understand the framework (see this section on how to handle model artifacts)
- Follow the recommended file and folder structure (see this section)
- There can be several models in your repo: they need to be registered in the settings, and then they need to have their own model and test files
- Follow the import guidelines (see this section)
- Make sure the prediction service gets access to model artifacts (see this section)
Configuration
MLOps configuration is stored in `mlops_settings.yaml`. Example for a project with a single model:
```yaml
model_name: model1
human_friendly_model_name: 'My First Model'
model_file: model_code/model1.py
req_file: model_code/requirements.model
artifact_folder: model_artifact
test_file: model_code/test_model1.py
platform: cdf
dataset: mlops
info:
  prediction: &desc
    description: 'Description prediction service, model1'
    metadata: '{
      "required_input": "[ACS, RDEP, DEN]",
      "training_wells": "[3/1-4]",
      "input_types": "[int, float, string]",
      "units": "[s/ft, 1, kg/m3]",
      "output_curves": "[AC]",
      "output_unit": "[s/ft]",
      "petrel_exposure": "False",
      "imputed": "True",
      "num_filler": "-999.15",
      "cat_filler": "UNKNOWN"
    }'
    owner: data@science.com
  training:
    <<: *desc
    description: 'Description training service, model1'
    metadata: '{
      "training_wells": ["3/1-4"],
      "required_input": ["ACS", "RDEP", "DEN"],
      "output_curves": ["AC"],
      "hyperparameters": {"learning_rate": 1e-3, "batch_size": 100, "epochs": 10}
    }'
```
Field | Description
---|---
model_name | a suitable name for your model. No spaces or dashes are allowed
human_friendly_model_name | name of the function (in CDF)
model_file | model file path relative to the repo's root folder. All required model code should be under the top folder in that path (model_code in the example above).
req_file | model requirements file. Do not use the .txt extension!
artifact_folder | model artifact folder. It can be the name of an existing local folder (note that it should not be committed to the repo). In that case it will be used in local deployment. It still needs to be uploaded/promoted with the Model Manager so that it can be used in test or prod. If the folder does not exist locally, the framework will try to create that folder and download the artifacts there. Set to null if there is no model artifact.
test_file | test file to use. Set to null for no testing before deployment (not recommended).
platform | deployment platform, either cdf (Cognite) or gc (Google).
dataset | CDF dataset used to read/write model artifacts (see Model Manager). Set to null if there is no dataset (not recommended).
info | description, metadata and owner information for the prediction and training services. The training field can be discarded if there's no such service.
Note: all paths should be Unix style, regardless of the platform.
Notes on metadata: the metadata under info must be specified as a dictionary with strings as both keys and values, since CDF only allows strings for now. We are also limited by the following restrictions:
- Keys can contain at most 16 characters
- Values can contain at most 512 characters
- At most 16 key-value pairs
- Maximum size of entire metadata field is 512 bytes
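As a sanity check before filling in the settings, something like the following can catch violations of these limits early. This is a hypothetical helper, not part of the framework:

```python
def check_cdf_metadata(metadata: dict) -> None:
    """Check the CDF metadata restrictions listed above."""
    assert len(metadata) <= 16, "at most 16 key-value pairs"
    total_size = 0
    for key, value in metadata.items():
        assert isinstance(key, str) and isinstance(value, str), "strings only"
        assert len(key) <= 16, f"key exceeds 16 characters: {key}"
        assert len(value) <= 512, f"value exceeds 512 characters: {key}"
        total_size += len(key.encode()) + len(value.encode())
    assert total_size <= 512, "metadata exceeds 512 bytes in total"
```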
If there are multiple models, the model configurations should be separated using `---`. Example:
```yaml
model_name: model1
human_friendly_model_name: 'My First Model'
model_file: model_code/model1.py
(...)
--- # <- this separates model1 and model2 :)
model_name: model2
human_friendly_model_name: 'My Second Model'
model_file: model_code/model2.py
(...)
```
Files and Folders Structure
All the model code and files should be under a single folder, e.g. `model_code`. Required files in this folder:

- `model.py`: implements the standard model interface
- `test_model.py`: tests to verify that the model code is correct and to verify correct deployment
- `requirements.model`: libraries needed (with specific version numbers), can't be called `requirements.txt`. Add the MLOps framework like this:

  ```text
  # requirements.model
  (...) # your other requirements
  akerbp.mlops==MLOPS_VERSION
  ```

During deployment, `MLOPS_VERSION` will be automatically replaced by the specific version that you have installed locally. Make sure you have the latest release on your local machine prior to model deployment.
For the prediction service, we require the model interface to have the following functions and class:

- `initialization()`, with required arguments:
  - path to artifact folder
  - secrets
  - These arguments can safely be set to None, and the framework will handle everything under the hood.
  - Only set the path to the artifact folder to None if not using any artifacts.
- `predict()`, with required arguments:
  - data
  - init_object (output from the `initialization()` function)
  - secrets
  - You can safely set the secrets argument to None, and the framework will handle the secrets under the hood.
- a `ModelException` class that inherits from the `Exception` base class
For the training service, we require the model interface to have the following function and class:

- `train()`, with required argument:
  - folder_path: path to store model artifacts to be consumed by the prediction service
- a `ModelException` class that inherits from the `Exception` base class

A minimal sketch of such an interface follows.
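This sketch combines both interfaces. The artifact file name and the pickled estimator are assumptions made for illustration, not framework requirements:

```python
# model_code/model.py - illustrative sketch of the required interface
import os
import pickle


class ModelException(Exception):
    """Raised when the model code fails, as required by the framework."""


def initialization(artifact_folder, secrets):
    # Both arguments can be None; return the object needed by predict().
    if artifact_folder is None:
        return None  # only when the model uses no artifacts
    with open(os.path.join(artifact_folder, "model.pkl"), "rb") as f:  # assumed artifact name
        return pickle.load(f)


def predict(data, init_object, secrets):
    try:
        return init_object.predict(data)
    except Exception as error:
        raise ModelException(str(error)) from error


def train(folder_path, secrets):
    # Fit your model here, then persist the artifacts for the prediction service
    model = ...  # placeholder for the actual training logic
    with open(os.path.join(folder_path, "model.pkl"), "wb") as f:
        pickle.dump(model, f)
```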
The following structure is recommended for projects with multiple models:

```text
model_code/model1/
model_code/model2/
model_code/common_code/
```

This is because when deploying a model, e.g. `model1`, the top folder in the path (`model_code` in the example above) is copied and deployed, i.e. the `common_code` folder (assumed to be needed by `model1`) is included. Note that the `model2` folder would also be deployed (this is assumed to be unnecessary but harmless).
Import Guidelines
The repo's root folder is the base folder when importing. For example, assume you have these files in the folder with model code:

```text
model_code/model.py
model_code/helper.py
model_code/data.csv
```

If `model.py` needs to import `helper.py`, use `import model_code.helper`. If `model.py` needs to read `data.csv`, the right path is `os.path.join('model_code', 'data.csv')`.
It's of course possible to import from the MLOps package, e.g. its logger:

```python
from akerbp.mlops.core import logger

logging = logger.get_logger("logger_name")
logging.debug("This is a debug log")
```
Services
We consider two types of services: prediction and training.
Deployed services can be called with:

```python
from akerbp.mlops.xx.helpers import call_function

output = call_function(external_id, data)
```

where `xx` is either `cdf` or `gc`, and `external_id` follows the structure `model-service-env`:

- `model`: model name given by the user (settings file)
- `service`: either `training` or `prediction`
- `env`: either `dev`, `test` or `prod` (depending on the deployment environment)
The output has a `status` field (`ok` or `error`). If the status is `ok`, the output also has a `prediction` and `prediction_file` field, or a `training` field (depending on the type of service). The former is determined by the `predict` method of the model, while the latter combines artifact metadata and model metadata produced by the `train` function. Prediction services also have a `model_id` field to keep track of which model was used to predict.
See below for more details on how to call prediction services hosted in CDF.
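For example, handling the output of a prediction service could look like this (a sketch; the external id is a placeholder, and the fields are those described above):

```python
from akerbp.mlops.cdf.helpers import call_function

# data_dict built as described in the calling section below
output = call_function("model1-prediction-test", data_dict)
if output["status"] == "ok":
    predictions = output["prediction"]  # produced by the model's predict method
    model_id = output["model_id"]       # which model was used to predict
```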
Deployment Platform
Model services (described below) can be deployed to either CDF or GCR, independently.
CDF Specific Features
CDF Functions include metadata when they are called. This information can be used to redeploy a function (specifically, the `file_id` field). Example:

```python
import akerbp.mlops.cdf.helpers as cdf

human_readable_name = "My model"
external_id = "my_model-prediction-test"

cdf.set_up_cdf_client('deploy')
cdf.redeploy_function(
    human_readable_name,
    external_id,
    file_id,  # taken from the function's metadata, as described above
    'Description',
    'your@email.com'
)
```
Note that the external id of a function needs to be unique, as it is used to distinguish functions between services and hosting environments.
It's possible to query available functions (can be filtered by environment and/or tags). Example:
```python
import akerbp.mlops.cdf.helpers as cdf

cdf.set_up_cdf_client('deploy')
all_functions = cdf.list_functions()
test_functions = cdf.list_functions(env="test")
tag_functions = cdf.list_functions(tags=["well_interpretation"])
```
Functions can be deleted. Example:
```python
import akerbp.mlops.cdf.helpers as cdf

cdf.set_up_cdf_client('deploy')
cdf.delete_service("my_model-prediction-test")
```
Functions can be called in parallel. Example:
```python
from akerbp.mlops.cdf.helpers import call_function_parallel

function_name = 'my_function-prediction-prod'
data = [dict(data='data_call_1'), dict(data='data_call_2')]
response1, response2 = call_function_parallel(function_name, data)
```
Model Manager
Model Manager is the module dedicated to managing the model artifacts used by prediction services (and generated by training services). This module uses CDF Files as backend.
Model artifacts are versioned and stored together with user-defined metadata. Uploading a new model increases the version count by 1 for that model and environment. When deploying a prediction service, the latest model version is chosen. It would be possible to extend the framework to allow deploying specific versions or filtering by metadata.
Model artifacts are segregated by environment (e.g. only production artifacts can be deployed to production). Model artifacts have to be uploaded manually to test (or dev) environment before deployment. Code example:
```python
import akerbp.mlops.model_manager as mm

metadata = train(model_dir, secrets)  # or define it directly

mm.setup()
folder_info = mm.upload_new_model_version(
    model_name,
    env,
    folder_path,
    metadata
)
```
If there are multiple models, you need to do this one at a time. Note that `model_name` corresponds to one of the model names defined in `mlops_settings.yaml`, `env` is the target environment (where the model should be available), `folder_path` is the local model artifact folder and `metadata` is a dictionary with artifact metadata, e.g. performance, git commit, etc.
Model artifacts need to be promoted to the production environment (i.e. after they have been deployed successfully to the test environment) so that a prediction service can be deployed in production.
```python
# After a model's version has been successfully deployed to test
import akerbp.mlops.model_manager as mm

mm.setup()
mm.promote_model('model', 'version')
```
Each model artifact upload/promotion adds a new version (environment dependent) available in Model Manager. However, note that this doesn't modify the model artifacts used in existing prediction services (i.e. nothing changes in CDF Functions).
Recommended process to update a model artifact and prediction service:
- New model features implemented in a feature branch
- New artifact generated and uploaded to test environment
- Feature branch merged with master
- Test deployment is triggered automatically: prediction service is deployed to test environment with the latest artifact version (in test)
- Prediction service in test is verified
- Artifact version is promoted manually from command line whenever suitable
- Production deployment is triggered manually from Bitbucket: prediction service is deployed to production with the latest artifact version (in prod)
It's possible to get an overview of the model artifacts managed by Model Manager. Some examples (see the `get_model_version_overview` documentation for other possible queries):

```python
import akerbp.mlops.model_manager as mm

mm.setup()
# all artifacts
folder_info = mm.get_model_version_overview()
# all artifacts for a given model
folder_info = mm.get_model_version_overview(model_name='xx')
```
If the overview shows model artifacts that are not needed, it is possible to remove them. For example if artifact "my_model/dev/5" is not needed:
```python
model_to_remove = "my_model/dev/5"
mm.delete_model_version(model_to_remove)
```
Model Manager will by default show information on the artifact to delete and ask for user confirmation before proceeding. It's possible (but not recommended) to disable this check. There's no identity check, so it's possible to delete any model artifact (including those from other data scientists). Be careful!
It's possible to download a model artifact (e.g. to verify its content). For example:
```python
mm.download_model_version('model_name', 'test', 'artifact_folder', version=5)
```
If no version is specified, the latest one is downloaded by default.
By default, Model Manager assumes artifacts are stored in the `mlops` dataset. If your project uses a different one, you need to specify it during setup (see the `setup` function).
Further information:

- Model Manager requires `COGNITE_API_KEY_*` environment variables (see next section) or a suitable key passed to the `setup` function.
- In projects with a training service, you can rely on it to upload a first version of the model. The first prediction service deployment will fail, but you can deploy again after the training service has produced a model.
- When you deploy from the development environment (covered later in this document), the model artifacts in the settings file can point to existing local folders. These will then be used for the deployment. The version is then fixed to `model_name/dev/1`. Note that these artifacts are not uploaded to CDF Files.
- Prediction services are deployed with model artifacts (i.e. the artifact is copied to the project file used to create the CDF Function) so that they are available at prediction time. Downloading artifacts at run time would add waiting time, and files written during run time consume RAM.
Calling a deployed model prediction service hosted in CDF
This section describes how you can call deployed models to obtain predictions for inference. There are two options for calling a function in CDF: using the MLOps framework directly, or using the Cognite SDK. Independently of how you call your model, you have to pass the data as a dictionary with a key "data" containing a dictionary with your data, where the keys of the inner dictionary specify the columns and the values are lists of samples for the corresponding columns.
First, load your data and transform it to a dictionary as assumed by the framework.
```python
import pandas as pd

data = pd.read_csv("path_to_data")
input_data = data.drop(columns=[target_variables])  # target_variables: columns to exclude, defined elsewhere
data_dict = {"data": input_data.to_dict(orient="list"), "to_file": True}
```
The "to_file" key of the input data dictionary specifies how the predictions can be extracted downstream. More details are provided below
Calling a deployed model using MLOps:

- Set up a Cognite client with sufficient access rights
- Extract the response directly by specifying the external id of the model and passing your data as a dictionary

```python
from akerbp.mlops.cdf.helpers import set_up_cdf_client, call_function

set_up_cdf_client(context="deploy")  # access CDF data, files and functions with the deploy context
response = call_function(function_name="<model_name>-prediction-<env>", data=data_dict)
```
Calling a deployed model using the Cognite SDK:

- Set up a Cognite client with sufficient access rights
- Retrieve the model from CDF by specifying the external id of the model
- Call the function
- Extract the function call response from the function call

```python
from cognite.experimental import CogniteClient

client = CogniteClient(client_name="model inference")  # pass an arbitrary client_name
function = client.functions.retrieve(external_id="<model_name>-prediction-<env>")
function_call = function.call(data=data_dict)
response = function_call.get_response()
```
Depending on how you specified the input dictionary, the predictions are either available directly from the response or need to be extracted from CDF Files. If the input data dictionary contains a key "to_file" with value True, the predictions are uploaded to CDF Files, and the "prediction_file" field in the response will contain a reference to the file containing the predictions. If "to_file" is set to False, or if the input dictionary does not contain such a key-value pair, the predictions are directly available through the function call response.

If "to_file" = True, we can extract the predictions using the following code snippet:
```python
import json

file_id = response["prediction_file"]
bytes_data = client.files.download_bytes(external_id=file_id)
predictions_df = pd.DataFrame.from_dict(json.loads(bytes_data))
```
Otherwise, the predictions are directly accessible from the response as follows.
```python
predictions = response["predictions"]
```
Extracting metadata from deployed model in CDF
Once a model is deployed, a user can extract potentially valuable metadata as follows.
```python
my_function = client.functions.list(name="My model", external_id_prefix="my_model-prediction-test")
metadata = my_function[0].metadata
```
where the metadata corresponds to whatever you specified in the mlops_settings.yaml file. For this example we get the following metadata:
```python
{'cat_filler': 'UNKNOWN',
 'imputed': 'True',
 'input_types': '[int, float, string]',
 'num_filler': '-999.15',
 'output_curves': '[AC]',
 'output_unit': '[s/ft]',
 'petrel_exposure': 'False',
 'required_input': '[ACS, RDEP, DEN]',
 'training_wells': '[3/1-4]',
 'units': '[s/ft, 1, kg/m3]'}
```
Local Testing and Deployment
It's possible to test the functions locally, which can help you debug errors quickly. This is recommended before a deployment.
Define the following environment variables (e.g. in `.bashrc`):

```bash
export ENV=dev
export COGNITE_API_KEY_PERSONAL=xxx
export COGNITE_API_KEY_FUNCTIONS=$COGNITE_API_KEY_PERSONAL
export COGNITE_API_KEY_DATA=$COGNITE_API_KEY_PERSONAL
export COGNITE_API_KEY_FILES=$COGNITE_API_KEY_PERSONAL
export GOOGLE_PROJECT_ID=xxx # If deploying to Google Cloud Run
```
From your repo's root folder, run:

- `python -m pytest model_code` (replace `model_code` with your model code folder name)
- `deploy_prediction_service.sh`
- `deploy_training_service.sh` (if there's a training service)

The first one runs your model tests. The last two run the model tests, but also the service tests implemented in the framework, and simulate deployment.
If you really want to deploy from your development environment, you can run this:

```bash
LOCAL_DEPLOYMENT=True deploy_prediction_service.sh
```

Note that, in case of emergency, it's possible to deploy to test or production from your local environment, e.g.:

```bash
ENV=test deploy_prediction_service.sh
```
Automated Deployments from Bitbucket
Deployments to the test environment are triggered by commits (you need to push them). Deployments to the production environment are enabled manually from the Bitbucket pipeline dashboard. Branches that match `deploy/*` behave as master. Branches that match `feature/*` run tests only (i.e. they do not deploy).
It is assumed that most projects won't include a training service. A branch that matches `mlops/*` deploys both prediction and training services. If a project includes both services, the pipeline file could instead be edited so that master deploys both services.
It is possible to schedule the training service in CDF, and then it can make sense to schedule the deployment pipeline of the model service as well (as often as new models are trained).
Bitbucket Setup
The following environments need to be defined in `repository settings > deployments`:

- Test deployments: `test-prediction` and `test-training`, each with `ENV=test`
- Production deployments: `production-prediction` and `production-training`, each with `ENV=prod`
The following need to be defined in `repository settings > repository variables`: `COGNITE_API_KEY_DATA`, `COGNITE_API_KEY_FUNCTIONS`, `COGNITE_API_KEY_FILES` (these should be CDF keys with access to data, functions and files). If deployment to GCR is needed, you need in addition: `ENABLE_GC_DEPLOYMENT` (set to `True`), `GOOGLE_SERVICE_ACCOUNT_FILE` (content of the service account id file) and `GOOGLE_PROJECT_ID` (name of the project).
The pipeline needs to be enabled.
Developer/Admin Guide
MLOps Files and Folders
These are the files and folders in the MLOps repo:

- `src` contains the MLOps framework package
- `mlops_settings.yaml` contains the user settings for the dummy model
- `model_code` is a model template included to show the model interface. It is not needed by the framework, but it is recommended to become familiar with it.
- `model_artifact` stores the artifacts for the model shown in `model_code`. This is to help to test the model and learn the framework.
- `bitbucket-pipelines.yml` describes the deployment pipeline in Bitbucket
- `build.sh` is the script to build and upload the package
- `setup.py` is used to build the package
- `LICENSE` is the package's license
CDF Datasets
In order to control access to the artifacts:

- Set up a CDF dataset with `write_protected=True` and an `external_id`, which by default is expected to be `mlops`.
- Create a group of owners (CDF Dashboard), i.e. those that should have write access.
Build and Upload Package
Create an account in PyPI, then create a token and a `$HOME/.pypirc` file. Edit the `setup.py` file and note the following:
- Dependencies need to be registered
- Bash scripts will be installed in a `bin` folder in the `PATH`
The pipeline is set up to build the library from Bitbucket, but it's possible to build and upload the library from the development environment as well:

```bash
bash build.sh
```

In general this is required before `LOCAL_DEPLOYMENT=True bash deploy_xxx_service.sh`. The exception is if local changes affect only the deployment part of the library, and the library has been installed in developer mode with:

```bash
pip install -e .
```

In this mode, the installed package links to the source code, so that it can be modified without the need to reinstall.
Bitbucket Setup
In addition to the user setup, the following environments are needed to build the package:

- `test-pypi`: `ENV=test`, `TWINE_USERNAME=__token__` and `TWINE_PASSWORD` (token generated from PyPI)
- `prod-pypi`: `ENV=prod`, `TWINE_USERNAME=__token__` and `TWINE_PASSWORD` (token generated from PyPI, can be the same as above)
Google Cloud Setup
In order to deploy to Google Cloud Run, you need to create a service account with the following rights:
- Cloud Build Service Account
- Service Account Admin
- Service Account User
- Cloud Run Admin
- Viewer
You also need to create the CDF secret `mlops-cdf-keys`. It's a string that can be evaluated in Python to get a dictionary (the same one used in the CDF helpers file). This is because:

- It needs to be passed to prediction and training services
- The model registry uses CDF Files in its backend
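In other words, the secret's value is a Python-literal dictionary in string form, e.g. as below. The key names here are placeholders, not the actual keys expected by the framework:

```python
import ast

# Value stored in the mlops-cdf-keys secret (placeholder content)
secret_value = "{'data': 'xxx', 'functions': 'xxx', 'files': 'xxx'}"
cdf_keys = ast.literal_eval(secret_value)  # evaluates the string to a dict
assert isinstance(cdf_keys, dict)
```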
Calling FastAPI services
Bash: install httpie, then:

```bash
http -v POST http://127.0.0.1:8000/train data='{"x": [1,-1],"y":[1,0]}'
```
Python: posting nested JSON with requests is challenging. This works:

```python
import requests, json

data = {"x": [1, -1], "y": [1, 0]}
requests.post(model_api, json={'data': json.dumps(data)})
```
Notes on the code
Service testing happens in an independent process (subprocess library) to avoid setup problems:

- When deploying multiple models, the service had to be reloaded before testing it; otherwise it would be the first model's service. Model initialization in the prediction service is designed to load artifacts only once per process.
- If the model and the MLOps framework relied on different versions of the same library, the version would be changed during runtime, but the upgraded/downgraded version would not be available in the current process.
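A minimal illustration of the idea (not the framework's actual code): each service test runs in a fresh Python process, so module state and library versions from a previous model's service cannot leak into the next test.

```python
import subprocess
import sys

# Hypothetical test entry points, one per deployed model
for test_file in ["model_code/model1/test_model1.py", "model_code/model2/test_model2.py"]:
    # A fresh interpreter per test avoids stale module state between models
    subprocess.run([sys.executable, "-m", "pytest", test_file], check=True)
```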