AkerBP MLOps Framework

This is a framework for MLOps that deploys models as functions in Cognite Data Fusion.

User Guide

Reference guide

This assumes you are already familiar with the framework, and acts as a quick reference guide for deploying models using the prediction service, i.e. when model training is performed outside of the MLOps framework.

  1. Train model to generate model artifacts
  2. Manually upload artifacts to your test environment (steps 2 and 4 are sketched after this list)
    • This includes the model artifacts generated during training, the mapping and settings files for the model, the scaler object, etc. In short, everything that is needed to preprocess the data and make predictions using the trained model.
  3. Deploy prediction service to test
    • This is handled by the CI/CD pipeline on GitHub
  4. Manually promote model artifacts from test to production
  5. Manually trigger deployment of model to production
    • Trigger in the CI/CD pipeline
  6. Call deployed model
    • See section "Calling a deployed model prediction service hosted in CDF" below
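
For steps 2 and 4, the Model Manager module (described in detail in the Model Manager section below) is typically used. A minimal sketch, assuming a model named model1 with artifacts in a local model_artifact folder and placeholder metadata:

import akerbp.mlops.model_manager as mm

mm.setup()
# Step 2: upload the local artifacts to the test environment
mm.upload_new_model_version("model1", "test", "model_artifact", {"git_commit": "abc123"})
# Step 4: promote the uploaded artifact version from test to prod
mm.promote_model("model1", "1")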

Getting Started:

Follow these steps (in the context of your virtual environment):

  • Install the package: pip install akerbp-mlops[cdf] (on some systems you may need to quote the brackets: pip install "akerbp-mlops[cdf]")
  • Set up pipeline files .github/workflows/main.yml and config file mlops_settings.yaml by running this command from your repo's root folder:
    python -m akerbp.mlops.deployment.setup
    
  • Fill in user settings and then validate them by running this (from repo root):
    python -c "from akerbp.mlops.core.config import validate_user_settings; validate_user_settings()"
    
    alternatively, run the setup again:
    python -m akerbp.mlops.deployment.setup
    
  • Commit the pipeline and settings files to your repo
  • Become familiar with the model template (see folder model_code) and make sure your model follows the same interface and file structure (see Files and Folders Structure)

At this point, every git push to the master branch will trigger a deployment in the test environment. More information about the deployment pipelines is provided later.

Updating MLOps

Follow these steps:

  • Install a new version using pip, e.g. pip install akerbp-mlops[cdf]==x, or upgrade your existing version to the latest release by running pip install --upgrade akerbp-mlops[cdf]
  • Run this command from your repo's root folder:
    python -m akerbp.mlops.deployment.setup
    
    This will update the GitHub pipeline with the newest release of akerbp.mlops and validate your settings. Once the settings are validated, commit changes and you're ready to go!

General Guidelines

Users should consider the following general guidelines:

  • Model artifacts should not be committed to the repo. Folder model_artifact does store model artifacts for the model defined in model_code, but it is just to help users understand the framework (see this section on how to handle model artifacts)
  • Follow the recommended file and folder structure (see this section)
  • There can be several models in your repo: they need to be registered in the settings, and each needs its own model and test files
  • Follow the import guidelines (see this section)
  • Make sure the prediction service gets access to model artifacts (see this section)

Configuration

MLOps configuration is stored in mlops_settings.yaml. Example for a project with a single model:

model_name: model1
human_friendly_model_name: 'My First Model'
model_file: model_code/model1.py
req_file: model_code/requirements.model
artifact_folder: model_artifact
artifact_version: 1 # Optional
test_file: model_code/test_model1.py
platform: cdf
dataset: mlops
python_version: py39
helper_models:
  - my_helper_model
info:
    prediction: &desc
        description: 'Description prediction service, model1'
        metadata:
          required_input:
            - ACS
            - RDEP
            - DEN
          training_wells:
            - 3/14
            - 2/7-18
          input_types:
            - int
            - float
            - string
          units:
            - s/ft
            - 1
            - kg/m3
          output_curves:
            - AC
          output_units:
            - s/ft
          petrel_exposure: False
          imputed: True
          num_filler: -999.15
          cat_filler: UNKNOWN
        owner: data@science.com
    training:
        << : *desc
        description: 'Description training service, model1'
        metadata:
          required_input:
            - ACS
            - RDEP
            - DEN
          output_curves:
            - AC
          hyperparameters:
            learning_rate: 1e-3
            batch_size: 100
            epochs: 10
Field descriptions:

  • model_name: a suitable name for your model. No spaces or dashes are allowed.
  • human_friendly_model_name: name of the function (in CDF).
  • model_file: model file path relative to the repo's root folder. All required model code should be under the top folder in that path (model_code in the example above).
  • req_file: model requirements file. Do not use the .txt extension!
  • artifact_folder: model artifact folder. It can be the name of an existing local folder (note that it should not be committed to the repo). In that case it will be used in local deployment. It still needs to be uploaded/promoted with the Model Manager so that it can be used in test or prod. If the folder does not exist locally, the framework will try to create it and download the artifacts there. Set to null if there is no model artifact.
  • artifact_version (optional): artifact version number to use during deployment. Defaults to the latest version if not specified.
  • test_file: test file to use. Set to null for no testing before deployment (not recommended).
  • platform: deployment platform, either cdf (Cognite) or local for local testing.
  • python_version: if platform is set to cdf, the Python version required by the model to be deployed needs to be specified. Available versions can be found here.
  • helper_models: list of helper models used for feature engineering during preprocessing. During deployment, the framework iterates through this list and checks that each helper model's requirements match the main model's. For now we only check for akerbp-mlpet.
  • dataset: CDF dataset used to read/write model artifacts (see Model Manager). Set to null if there is no dataset (not recommended).
  • info: description, metadata and owner information for the prediction and training services. The training field can be discarded if there's no such service.

Note: all paths should be unix style, regardless of the platform.

Notes on metadata: We need to specify the metadata under info as a dictionary with strings as keys and values, as CDF only allows strings for now. We are also limited to the following (a sanity-check sketch follows this list):

  • Keys can contain at most 16 characters
  • Values can contain at most 512 characters
  • At most 16 key-value pairs
  • Maximum size of entire metadata field is 512 bytes

If there are multiple models, model configuration should be separated using ---. Example:

model_name: model1
human_friendly_model_name: 'My First Model'
model_file: model_code/model1.py
(...)
--- # <- this separates model1 and model2 :)
model_name: model2
human_friendly_model_name: 'My Second Model'
model_file: model_code/model2.py
(...)

Files and Folders Structure

All the model code and files should be under a single folder, e.g. model_code. Required files in this folder:

  • model.py: implements the standard model interface
  • test_model.py: tests to verify that the model code is correct and to verify correct deployment
  • requirements.model: libraries needed (with specific version numbers), can't be called requirements.txt. Add the MLOps framework like this:
    # requirements.model
    (...) # your other reqs
    akerbp-mlops==MLOPS_VERSION
    
    During deployment, MLOPS_VERSION will be automatically replaced by the specific version that you have installed locally. Make sure you have the latest release on your local machine prior to model deployment.

For the prediction service, we require the model interface to provide the following functions and class:

  • initialization(), with required arguments
    • path to artifact folder
    • secrets
      • these arguments can safely be set to None, and the framework will handle everything under the hood.
      • only set the artifact folder path to None if you are not using any artifacts
  • predict(), with required arguments
    • data
    • init_object (output from initialization() function)
    • secrets
      • You can safely set the secrets argument to None, and the framework will handle the secrets under the hood.
  • ModelException class with inheritance from an Exception base class

For the training service, we require the model interface to provide the following function and class (a combined sketch follows this list):

  • train(), with required arguments
    • folder_path
      • path to store model artifacts to be consumed by the prediction service
  • ModelException class with inheritance from an Exception base class
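
A minimal model.py sketch that satisfies both interfaces; the artifact file name (model.pkl) and the model internals are illustrative assumptions, not part of the framework:

# model_code/model.py (sketch)
import os
import pickle


class ModelException(Exception):
    """Raised when the model code fails."""


def initialization(path, secrets):
    # Both arguments can safely be None; the framework handles them under the hood.
    if path is None:
        return None
    # Load whatever the prediction step needs from the artifact folder.
    with open(os.path.join(path, "model.pkl"), "rb") as f:  # illustrative artifact name
        return pickle.load(f)


def predict(data, init_object, secrets):
    # secrets can safely be None
    try:
        return {"prediction": init_object.predict(data)}
    except Exception as e:
        raise ModelException(str(e)) from e


def train(folder_path):
    # Fit the model and store the artifacts in folder_path so the
    # prediction service can consume them later.
    ...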

The following structure is recommended for projects with multiple models:

  • model_code/model1/
  • model_code/model2/
  • model_code/common_code/

This is because when deploying a model, e.g. model1, the top folder in the path (model_code in the example above) is copied and deployed, i.e. common_code folder (assumed to be needed by model1) is included. Note that model2 folder would also be deployed (this is assumed to be unnecessary but harmless).

Import Guidelines

The repo's root folder is the base folder when importing. For example, assume you have these files in the folder with model code:

  • model_code/model.py
  • model_code/helper.py
  • model_code/data.csv

If model.py needs to import helper.py, use: import model_code.helper. If model.py needs to read data.csv, the right path is os.path.join('model_code', 'data.csv').
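
For example, inside model_code/model.py this looks like:

import os

import model_code.helper  # the repo's root folder is the import base

data_path = os.path.join("model_code", "data.csv")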

It's of course possible to import from the MLOps package, e.g. its logger:

from akerbp.mlops.core import logger
log = logger.get_logger("logger_name")
log.debug("This is a debug log")

Services

We consider two types of services: prediction and training.

Deployed services can be called with

from akerbp.mlops.xx.helpers import call_function
output = call_function(external_id, data)

Where xx is either 'cdf' or 'gc', and external_id follows the structure model-service-model_env:

  • model: model name given by the user (settings file)
  • service: either training or prediction
  • model_env: either dev, test or prod (depending on the deployment environment)

The output has a status field ('ok' or 'error'). If the status is 'ok', the output also has a prediction and prediction_file field, or a training field (depending on the type of service). The former is determined by the predict method of the model, while the latter combines artifact metadata and model metadata produced by the train function. Prediction services also have a model_id field to keep track of which model was used to predict.
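
A short sketch of inspecting the response, based on the fields described above (the external id and the data payload are placeholders):

from akerbp.mlops.cdf.helpers import call_function

output = call_function("model1-prediction-test", data)
if output["status"] == "ok":
    prediction = output["prediction"]  # determined by the model's predict method
    model_id = output["model_id"]      # which model version produced the prediction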

See below for more details on how to call prediction services hosted in CDF.

Deployment Platform

Model services (described below) can be deployed to CDF (Cognite Data Fusion) or Google Cloud Run. The deployment platform is specified in the settings file.

CDF Functions include metadata when they are called. This information can be used to redeploy a function (specifically, the file_id field). Example:

import akerbp.mlops.cdf.helpers as cdf

human_readable_name = "My model"
external_id = "my_model-prediction-test"
file_id = 123456789  # placeholder; obtained from the function's metadata

cdf.set_up_cdf_client('deploy')
cdf.redeploy_function(
  human_readable_name,
  external_id,
  file_id,
  'Description',
  'your@email.com'
)

Note that the external id of a function needs to be unique, as it is used to distinguish functions across services and hosting environments.

It's possible to query available functions (can be filtered by environment and/or tags). Example:

import akerbp.mlops.cdf.helpers as cdf
cdf.set_up_cdf_client('deploy')
all_functions = cdf.list_functions()
test_functions = cdf.list_functions(model_env="test")
tag_functions = cdf.list_functions(tags=["well_interpretation"])

Functions can be deleted. Example:

import akerbp.mlops.cdf.helpers as cdf
cdf.set_up_cdf_client('deploy')
cdf.delete_service("my_model-prediction-test")

Functions can be called in parallel. Example:

from akerbp.mlops.cdf.helpers import call_function_parallel
function_name = 'my_function-prediction-prod'
data = [dict(data='data_call_1'), dict(data='data_call_2')]
response1, response2 = call_function_parallel(function_name, data)

#TODO - Document common use cases for GCR

Model Manager

Model Manager is the module dedicated to managing the model artifacts used by prediction services (and generated by training services). This module uses CDF Files as backend.

Model artifacts are versioned and stored together with user-defined metadata. Uploading a new model increases the version count by 1 for that model and environment. When deploying a prediction service, the latest model version is chosen. It would be possible to extend the framework to allow deploying specific versions or filtering by metadata.

Model artifacts are segregated by environment (e.g. only production artifacts can be deployed to production). Model artifacts have to be uploaded manually to test (or dev) environment before deployment. Code example:

import akerbp.mlops.model_manager as mm

metadata = train(model_dir, secrets) # or define it directly
mm.setup()
folder_info = mm.upload_new_model_version(
  model_name,
  model_env,
  folder_path,
  metadata
)

If there are multiple models, you need to do this one at a time (see the sketch below). Note that model_name corresponds to one of the model names defined in mlops_settings.yaml, model_env is the target environment (where the model should be available), folder_path is the local model artifact folder and metadata is a dictionary with artifact metadata, e.g. performance, git commit, etc.
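
For example, with two models the uploads could look like this sketch (names, folders and metadata are placeholders):

for model_name, folder_path in [
    ("model1", "artifacts/model1"),
    ("model2", "artifacts/model2"),
]:
    mm.upload_new_model_version(model_name, "test", folder_path, metadata)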

Model artifacts need to be promoted to the production environment (i.e. after they have been deployed successfully to the test environment) so that a prediction service can be deployed in production.

# After a model's version has been successfully deployed to test
import akerbp.mlops.model_manager as mm

mm.setup()
mm.promote_model('model', 'version')

Versioning

Each model artifact upload/promotion increments a version number (environment dependent) available in Model Manager. However, this doesn't modify the model artifacts used in existing prediction services (i.e. nothing changes in CDF Functions). To reflect newly uploaded/promoted model artifacts in existing services, the services need to be deployed again. Note that we don't have to specify the artifact version explicitly if we want to deploy using the latest artifacts, as this is done by default.

Recommended process to update a model artifact and prediction service:

  1. New model features implemented in a feature branch
  2. New artifact generated and uploaded to test environment
  3. Feature branch merged with master
  4. Test deployment is triggered automatically: prediction service is deployed to test environment with the latest artifact version (in test)
  5. Prediction service in test is verified
  6. Artifact version is promoted manually from command line whenever suitable
  7. Production deployment is triggered manually from GitHub: prediction service is deployed to production with the latest artifact version (in prod)

It's possible to get an overview of the model artifacts managed by Model Manager. Some examples (see get_model_version_overview documentation for other possible queries):

import akerbp.mlops.model_manager as mm
mm.setup()
# all artifacts
folder_info = mm.get_model_version_overview()
# all artifacts for a given model
folder_info = mm.get_model_version_overview(model_name='xx')

If the overview shows model artifacts that are not needed, it is possible to remove them. For example if artifact "my_model/dev/5" is not needed:

model_to_remove = "my_model/dev/5"
mm.delete_model_version(model_to_remove)

Model Manager will by default show information on the artifact to delete and ask for user confirmation before proceeding. It's possible (but not recommended) to disable this check. There's no identity check, so it's possible to delete any model artifact (including other data scientists'). Be careful!

It's possible to download a model artifact (e.g. to verify its content). For example:

mm.download_model_version('model_name', 'test', 'artifact_folder', version=5)

If no version is specified, the latest one is downloaded by default.

By default, Model Manager assumes artifacts are stored in the mlops dataset. If your project uses a different one, you need to specify it during setup (see the setup function).

Further information:

  • Model Manager requires specific environment variables (see next section) or suitable secrets to be passed to the setup function.
  • In projects with a training service, you can rely on it to upload a first version of the model. The first prediction service deployment will fail, but you can deploy again after the training service has produced a model.
  • When you deploy from the development environment (covered later in this document), the model artifacts in the settings file can point to existing local folders. These will then be used for the deployment. Version is then fixed to model_name/dev/1. Note that these artifacts are not uploaded to CDF Files.
  • Prediction services are deployed with model artifacts (i.e. the artifact is copied to the project file used to create the CDF Function) so that they are available at prediction time. Downloading artifacts at run time would add waiting time, and files written during run time consume RAM.

Model versioning

To allow for model versioning and rolling back to previous model deployments, the external id of the functions (in CDF) includes a version number that reflects the latest artifact version number at deployment time (see above). Every time we upload/promote new model artifacts and deploy our services, the version number in the external id of the functions representing the services is incremented (just as the version number for the artifacts).

To distinguish the latest model from the remaining model versions, we redeploy the latest model version using a predictable external id that does not contain the version number. By doing so we relieve clients of the need to deal with version numbers; they will call the latest model by default. For every new deployment we thus have two model deployments: one with the version number in the external id, and one without. However, the predictable external id is persisted across new model versions, so when deploying a new version the latest one, with the predictable external id, is simply overwritten.

We are thus concerned with two structures for the external id

  • <model_name>-<service>-<model_env>-<version> for rolling back to previous versions, and
  • <model_name>-<service>-<model_env> for the latest deployed model

For the latest model with a predictable external id, we tag the description of the model to specify that the model is in fact the latest version, and add the version number to the function metadata.

We can now list out multiple models with the same model name and external id prefix, and choose to make predictions and do inference with a specific model version. An example is shown below.

# List all prediction services (i.e. models) with name "My Model" hosted in the
# test environment, and pick the first element of the list
from akerbp.mlops.cdf.helpers import get_client
client = get_client(client_id=<client_id>, client_secret=<client_secret>)
my_models = client.functions.list(name="My Model", external_id_prefix="mymodel-prediction-test")
my_model_specific_version = my_models[0]
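
A specific version retrieved this way can then be called like any other CDF Function; data_dict is constructed as described in the next section:

function_call = my_model_specific_version.call(data=data_dict)
response = function_call.get_response()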

Calling a deployed model prediction service hosted in CDF

This section describes how you can call deployed models to obtain predictions for inference. There are two options for calling a function in CDF: using the MLOps framework directly, or using the Cognite SDK. Independent of how you call your model, you have to pass the data as a dictionary with a key "data" containing a dictionary with your data, where the keys of the inner dictionary specify the columns and the values are lists of samples for the corresponding columns.

First, load your data and transform it to a dictionary as assumed by the framework. Note that the data dictionary you pass to the function might vary based on your model interface. Make sure to align with what you specified in your model.py interface.

import pandas as pd

data = pd.read_csv("path_to_data")
input_data = data.drop(columns=[target_variables])
data_dict = {"data": input_data.to_dict(orient="list"), "to_file": True}

The "to_file" key of the input data dictionary specifies how the predictions can be extracted downstream. More details are provided below

Calling deployed model using MLOps:

  1. Set up a cognite client with sufficient access rights
  2. Extract the response directly by specifying the external id of the model and passing your data as a dictionary
    • Note that the external id is of the form
      • "<model_name>-<service>-<model_env>-<version>", and
      • "<model_name>-<service>-<model_env>"

Use the latter external id if you want to call the latest model. The former external id can be used if you want to call a previous version of your model.

from akerbp.mlops.cdf.helpers import set_up_cdf_client, call_function
set_up_cdf_client(context="deploy") #access CDF data, files and functions with deploy context
response = call_function(function_name="<model_name>-prediction-<model_env>", data=data_dict)

Calling deployed model using the Cognite SDK:

  1. Set up a Cognite client with sufficient access rights
  2. Retrieve the function from CDF by specifying its external id
  3. Call the function
  4. Extract the response from the function call

from akerbp.mlops.cdf.helpers import get_client
client = get_client(client_id=<client_id>, client_secret=<client_secret>)
function = client.functions.retrieve(external_id="<model_name>-prediction-<model_env>")
function_call = function.call(data=data_dict)
response = function_call.get_response()

Depending on how you specified the input dictionary, the predictions are either available directly from the response or need to be extracted from CDF Files. If the input data dictionary contains the key "to_file" with value True, the predictions are uploaded to CDF Files, and the 'prediction_file' field in the response will contain a reference to the file containing the predictions. If "to_file" is set to False, or if the input dictionary does not contain such a key-value pair, the predictions are directly available in the function call response.

If "to_file" = True, we can extract the predictions using the following code-snippet

file_id = response["prediction_file"]
bytes_data = client.files.download_bytes(external_id=file_id)
predictions_df = pd.DataFrame.from_dict(json.loads(bytes_data))

Otherwise, the predictions are directly accessible from the response as follows.

predictions = response["predictions"]

Extracting metadata from deployed model in CDF

Once a model is deployed, a user can extract potentially valuable metadata as follows.

my_function = client.functions.retrieve(external_id="my_model-prediction-test")
metadata = my_function.metadata

Where the metadata corresponds to whatever you specified in the mlops_settings.yaml file. For this example we get the following metadata

{'cat_filler': 'UNKNOWN',
 'imputed': 'True',
 'input_types': '[int, float, string]',
 'num_filler': '-999.15',
 'output_curves': '[AC]',
 'output_units': '[s/ft]',
 'petrel_exposure': 'False',
 'required_input': '[ACS, RDEP, DEN]',
 'training_wells': '[3/14, 2/7-18]',
 'units': '[s/ft, 1, kg/m3]'}

Local Testing and Deployment

It's possible to test the functions locally, which can help you debug errors quickly. This is recommended before a deployment.

Define the following environmental variables (e.g. in .bashrc):

export MODEL_ENV=dev
export COGNITE_OIDC_BASE_URL=https://api.cognitedata.com
export COGNITE_TENANT_ID=<tenant id>
export COGNITE_CLIENT_ID_WRITE=<write access client id>
export COGNITE_CLIENT_SECRET_WRITE=<write access client secret>
export COGNITE_CLIENT_ID_READ=<read access client id>
export COGNITE_CLIENT_SECRET_READ=<read access client secret>

From your repo's root folder:

  • python -m pytest model_code (replace model_code by your model code folder name)
  • deploy_prediction_service
  • deploy_training_service (if there's a training service)

The first one runs your model tests. The last two run the model tests as well as the service tests implemented in the framework, and simulate deployment.

If you want to run tests only, set TESTING_ONLY=True before calling the deployment script.

Automated Deployments from GitHub

Deployments to the test environment are triggered by commits (you need to push them). Deployments to the production environment are enabled manually from the GitHub Actions dashboard. Branches that match deploy/* behave as master. Branches that match feature/* run tests only (i.e. they do not deploy).

It is assumed that most projects won't include a training service. A branch that matches mlops/* deploys both prediction and training services. If a project includes both services, the pipeline file could instead be edited so that master deploys both services.

It is possible to schedule the training service in CDF, in which case it can make sense to also schedule the deployment pipeline of the model service (as often as new models are trained).

NOTE: Previous versions of akerbp-mlops assumed that calling LOCAL_DEPLOYMENT=True deploy_prediction_service would run tests without deploying models. The package has now been refactored to only trigger tests when the environment variable TESTING_ONLY is set to True. Make sure to update the pipeline definition for branches with prefix feature/ to call TESTING_ONLY=True deploy_prediction_service instead.

GitHub Setup

The following environments need to be defined in repository settings > deployments:

  • dev: where two environment variables are defined
    • MODEL_ENV=dev
    • SERVICE_NAME=prediction
  • test: where two environment variables are defined
    • MODEL_ENV=test
    • SERVICE_NAME=prediction
  • prod: where two environment variables are defined
    • MODEL_ENV=prod
    • SERVICE_NAME=prediction

The following secrets need to be defined in repository settings > Secrets and variables > Actions > Repository secrets:

  • COGNITE_CLIENT_ID_WRITE
  • COGNITE_CLIENT_SECRET_WRITE
  • COGNITE_CLIENT_ID_READ
  • COGNITE_CLIENT_SECRET_READ
  • COGNITE_OIDC_BASE_URL
  • COGNITE_TENANT_ID

The client IDs and secrets should be CDF credentials with read and write access, respectively.

GitHub Actions need to be enabled on the repo.

Developer/Admin Guide

This package is managed using poetry. Please refer to the poetry documentation for more information on how to install and use poetry.

Installation

To install the package, run the following command from the root folder of the repo

poetry install -E cdf --with=dev,pre-commit,version,test

Poetry uses groups to manage dependencies. The above command installs the package with all the defined groups in the toml file.

Package versioning

The versioning of the package follows SemVer, using the MAJOR.MINOR.PATCH structure. We update the package version using the following convention:

  1. Increment MAJOR when making incompatible API changes
  2. Increment MINOR when adding backwards compatible functionality
  3. Increment PATCH when making backwards compatible bug-fixes

The version is updated based on the latest commit to the repo, and we are currently using the following rules.

  • The MAJOR version is incremented if the commit message includes the word major
  • The MINOR version is incremented if the commit message includes the word minor
  • The PATCH number is incremented if neither major nor minor is found in the commit message
  • If the commit message includes the phrase prerelease, the package version is extended with a, thus taking the form MAJOR.MINOR.PATCHa.

Note that the above keywords are not case sensitive. Moreover, major takes precedence over minor, so if both keywords are found in the commit message, the MAJOR version is incremented and the MINOR version is kept unchanged.
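
An illustrative sketch of these rules (not the actual increment_package_version.py script; resetting MINOR and PATCH to 0 on a MAJOR bump is a SemVer assumption, not stated above):

def bump_version(version: str, commit_msg: str) -> str:
    major, minor, patch = (int(x) for x in version.split("."))
    msg = commit_msg.lower()  # keywords are not case sensitive
    if "major" in msg:  # major takes precedence over minor
        major, minor, patch = major + 1, 0, 0
    elif "minor" in msg:
        minor, patch = minor + 1, 0
    else:
        patch += 1
    new_version = f"{major}.{minor}.{patch}"
    if "prerelease" in msg:
        new_version += "a"  # pre-release versions take the form MAJOR.MINOR.PATCHa
    return new_version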

In dev and test environment, we release the package using the pre-release tag, and the package takes the following version number MAJOR.MINOR.PATCH-alpha.PRERELEASE.

The version number is automatically generated by combining poetry-dynamic-versioning with the increment_package_version.py script, and is based on git tagging and the incremental version numbering system mentioned above.

MLOps Files and Folders

These are the files and folders in the MLOps repo:

  • src contains the MLOps framework package
  • mlops_settings.yaml contains the user settings for the dummy model
  • model_code is a model template included to show the model interface. It is not needed by the framework, but it is recommended to become familiar with it.
  • model_artifact stores the artifacts for the model shown in model_code. This is to help to test the model and learn the framework.
  • .github/* describes all the relevant configurations for the CI/CD pipeline run by GitHub Actions
  • build.sh is the script to build and upload the package
  • pyproject.toml is the project's configuration file
  • LICENSE is the package's license

CDF Datasets

In order to control access to the artifacts:

  1. Set up a CDF Dataset with write_protected=True and an external_id, which by default is expected to be mlops (a sketch follows this list).
  2. Create a group of owners (CDF Dashboard), i.e. those that should have write access
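
A hedged sketch of step 1 using the Cognite SDK (the client is obtained as shown earlier; the dataset name and external id follow the mlops default):

from cognite.client.data_classes import DataSet

from akerbp.mlops.cdf.helpers import get_client

client = get_client(client_id=<client_id>, client_secret=<client_secret>)
client.data_sets.create(
    DataSet(external_id="mlops", name="mlops", write_protected=True)
)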

Local Testing (only implemented for the prediction service)

To perform local testing before pushing to GitHub, you can run the following command:

poetry run python -m pytest

(assuming you have first run poetry install -E cdf --with=dev,pre-commit,version,test in the same environment)

Build and Upload Package

Create an account on PyPI, then create a token and a $HOME/.pypirc file if you want to deploy from your local machine. Edit the pyproject.toml file and note the following:

  • Dependencies need to be registered
  • Bash scripts will be installed in a bin folder in the PATH.

The pipeline is set up to build the library, but it's possible to build and upload the library from the development environment as well (as long as you have the PYPI_TOKEN environment variable set). To do so, run:

bash build.sh

In order to deploy to PyPI from GitHub, you need to set up a PyPI token. Copy its content and add it as the secured repository secret PYPI_TOKEN.

Notes on the code

Service testing happens in an independent process (via the subprocess library) to avoid setup problems (an illustrative sketch follows the list):

  • When deploying multiple models the service had to be reloaded before testing it, otherwise it would be the first model's service. Model initialization in the prediction service is designed to load artifacts only once in the process
  • If the model and the MLOps framework rely on different versions of the same library, the version would be changed during runtime, but the upgraded/downgraded version would not be available for the current process
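
An illustrative sketch of running tests in an independent process (the framework's actual invocation may differ):

import subprocess
import sys

# Run the model tests in a fresh interpreter so imports and library
# versions from a previous model's service don't leak into this run.
result = subprocess.run(
    [sys.executable, "-m", "pytest", "model_code"],
    capture_output=True,
    text=True,
)
print(result.stdout)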
