Client library for Bedrock platform
Bedrock helps data scientists own the end-to-end deployment of machine learning workflows. bdrk is the official client library for interacting with APIs on the Bedrock platform.
Documentation
Full documentation and tutorials on Bedrock can be found here
Usage
To use bdrk, you need to register an account with Basis AI. Please email contact@basis-ai.com to get started. Once an account is created, you will be issued a personal API token that you can use to authenticate with Bedrock.
Installing Bedrock client
You can install the Bedrock client library from PyPI with the following command. We recommend running it in a virtual environment to prevent potential dependency conflicts.

```bash
pip install bdrk
```
Note that the client library is officially supported for Python 3.7 and above.
Installing optional dependencies
The following optional dependencies can be installed to enable additional features.

Command line support:

```bash
pip install bdrk[cli]
```

Model monitoring support:

```bash
pip install bdrk[model-monitoring]
```
Setting up your environment
Once installed, you need to add a well-formed bedrock.hcl configuration file to your project's root directory. The configuration file specifies which scripts to run for training and deployment, as well as their respective base Docker images. You can find an example directory layout here.
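For orientation, a minimal bedrock.hcl might look like the sketch below. Treat the exact stanza and field names (version, train, step, image, install, script, resources) as assumptions drawn from typical Bedrock examples rather than an authoritative schema; the official documentation and example repository are the source of truth.

```hcl
version = "1.0"

train {
    # Field names below are assumptions; check the Bedrock docs for the
    # current schema.
    step train {
        image   = "python:3.7"
        install = ["pip install -r requirements.txt"]
        script  = [{ sh = ["python train.py"] }]

        resources {
            cpu    = "500m"
            memory = "200M"
        }
    }
}
```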
When using the module locally, you may need to define the following environment variables for bedrock_client and lab runs to make API calls to Bedrock. These variables are set automatically on your workload container when running in a cluster.

```bash
export BEDROCK_API_DOMAIN=https://api.bdrk.ai
export BEDROCK_API_TOKEN=<your personal API token>
```
bedrock_client library
The bedrock_client library provides utility functions for your training runs.
Logging training metrics
You can easily export training metrics to Bedrock by adding logging code to train.py. The example below demonstrates logging charts and metrics for visualisation on the Bedrock platform.

```python
import logging

from bedrock_client.bedrock.api import BedrockApi

logger = logging.getLogger(__name__)
bedrock = BedrockApi(logger)
bedrock.log_metric("Accuracy", 0.97)
bedrock.log_chart_data([0, 1, 1], [0.1, 0.7, 0.9])
```
Logging feature and inference distribution
You may use the model monitoring service to save the distribution of input and model output data to a local file. The default path is /artefact/histogram.prom, so that the computed distribution is bundled together with the model artefact it was computed from. When training on Bedrock, the zipped /artefact directory will be uploaded to the user's blob storage bucket in the workload cluster.

```python
import pandas as pd
from bedrock_client.bedrock.metrics.service import ModelMonitoringService
from sklearn.svm import SVC

# User code to load training data
features = pd.DataFrame({"a": [1, 2, 3], "b": [3, 2, 1]})
model = SVC(probability=True)
model.fit(features, [False, True, False])
inference = model.predict_proba(features)[:, 0]

ModelMonitoringService.export_text(
    features=features.iteritems(),
    inference=inference.tolist(),
)
```
Logging explainability and fairness metrics
Bedrock offers facilities to generate and log explainability and fairness (XAFAI) metrics. bdrk provides an easy-to-use API with native integration into the Bedrock platform to visualize XAFAI metrics. All data is stored in your environment's blob storage to ensure nothing leaves your infrastructure. Under the hood, it uses the shap library to provide both global and individual explainability, and the AI Fairness 360 toolkit to compare model behaviour between groups of interest for fairness assessment.

```python
# As part of your train.py
from bedrock_client.bedrock.analyzer import ModelTypes
from bedrock_client.bedrock.analyzer.model_analyzer import ModelAnalyzer

# Background data is used to simulate "missing" features to measure their
# impact. By default, background data is limited to a maximum of 5000 rows
# to speed up the analysis.
background = x_train

# Tree model: xgboost, lightgbm. Other types of models, e.g. Tensorflow
# and Pytorch, are also supported.
analyzer = ModelAnalyzer(
    model,
    "credit_risk_tree_model",
    model_type=ModelTypes.TREE,
).train_features(background).test_features(x_validation)

# Metrics are calculated and uploaded to blob storage
analyzer.analyze()
```
bdrk library
The bdrk library provides APIs for interacting with the Bedrock platform.

```python
from bdrk.v1 import ApiClient, Configuration, PipelineApi
from bdrk.v1.models import (
    PipelineResourcesSchema,
    TrainingPipelineRunSchema,
)

configuration = Configuration()
configuration.api_key["X-Bedrock-Access-Token"] = "MY-TOKEN"
configuration.host = "https://api.bdrk.ai"
api_client = ApiClient(configuration)
pipeline_api = PipelineApi(api_client)

pipeline = pipeline_api.get_training_pipeline_by_id(pipeline_id="MY-PIPELINE")
run_schema = TrainingPipelineRunSchema(
    environment_public_id="MY-ENVIRONMENT",
    resources=PipelineResourcesSchema(cpu="500m", memory="200M"),
    script_parameters={"MYPARAM": "1.23"},
)
run = pipeline_api.run_training_pipeline(
    pipeline_id=pipeline.public_id, training_pipeline_run_schema=run_schema
)
```
Lab run
The labrun command can be used to launch test runs of local training code on the Bedrock platform.

```bash
# Set environment variables with credentials for this session
$ unset HISTFILE  # Don't save history for this session
$ export BEDROCK_API_DOMAIN=https://api.bdrk.ai
$ export BEDROCK_API_TOKEN=<your personal API token>

$ bdrk labrun --help
$ bdrk labrun --verbose --domain $BEDROCK_API_DOMAIN submit \
    $HOME/basis/span-example-colourtest \
    bedrock.hcl \
    canary-dev \
    -p ALPHA=0.9 \
    -p L5_RATIO=0.1 \
    -s DUMMY_SECRET_A=foo \
    -s DUMMY_SECRET_B=bar

$ bdrk labrun logs <run_id> <step_id> <run_token>
$ bdrk labrun artefact <run_id> <run_token>
```
Monitoring models in production
At serving time, users may install the bdrk[model-monitoring] extra to track various model performance metrics. Anomalies in these metrics can help inform users about model rot.
Logging predictions
The model monitoring service may be instantiated in serve.py to log every prediction request for offline analysis. The following example demonstrates how to enable prediction logging in a typical Flask app.

```python
from bedrock_client.bedrock.metrics.service import ModelMonitoringService
from flask import Flask, request
from sklearn.svm import SVC

# User code to load trained model
model = SVC(probability=True)
model.fit([[1, 3], [2, 2], [3, 1]], [False, True, False])

app = Flask(__name__)
monitor = ModelMonitoringService()


@app.route("/", methods=["POST"])
def predict():
    # User code to load features
    features = [2.1, 1.8]
    score = model.predict_proba([features])[:, 0].item()
    monitor.log_prediction(
        request_body=request.json,
        features=features,
        output=score,
    )
    return {"True": score}
```
The logged predictions are persisted in a low-cost blob store in the workload cluster with a maximum TTL of 1 month. The blob store is partitioned by endpoint id and event timestamp according to the following structure: models/predictions/{endpoint_id}/2020-01-22/1415_{logger_id}-{replica_id}.txt.
- Endpoint id is the first portion of your domain name hosted on Bedrock
- Replica id is the name of your model server pod
- Logger id is a Bedrock generated name that's unique to the log collector pod
These properties are injected automatically into your model server container as environment variables.
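To make the partition layout concrete, the sketch below assembles such a blob store key in plain Python. The helper name prediction_log_path and the example ids are hypothetical; only the path template comes from the structure described above.

```python
from datetime import datetime, timezone


def prediction_log_path(endpoint_id, logger_id, replica_id, ts):
    # models/predictions/{endpoint_id}/{date}/{HHMM}_{logger_id}-{replica_id}.txt
    return (
        f"models/predictions/{endpoint_id}/"
        f"{ts:%Y-%m-%d}/{ts:%H%M}_{logger_id}-{replica_id}.txt"
    )


path = prediction_log_path(
    "my-endpoint", "logger-abc", "replica-0",
    datetime(2020, 1, 22, 14, 15, tzinfo=timezone.utc),
)
print(path)
# models/predictions/my-endpoint/2020-01-22/1415_logger-abc-replica-0.txt
```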
To minimize the latency of request handling, all predictions are logged asynchronously in a separate thread pool. We measured the overhead along the critical path to be less than 1 ms per request.
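The general pattern can be sketched with the standard library; this is an illustration of asynchronous logging, not Bedrock's internal implementation. The request thread only pays the cost of enqueuing the record, while the actual write happens on a worker thread.

```python
import time
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=2)


def write_log(record):
    # Stand-in for the real log write (disk or network I/O)
    time.sleep(0.05)
    return record


start = time.perf_counter()
future = executor.submit(write_log, {"features": [2.1, 1.8], "output": 0.97})
enqueue_time = time.perf_counter() - start

# submit() returns immediately; the 50 ms write happens off the critical path
print(f"enqueue took {enqueue_time * 1000:.3f} ms")
assert future.result()["output"] == 0.97
```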
Tracking feature and inference drift
If training distribution metrics are present in the /artefact directory, the model monitoring service will also track the real time distribution of features and inference results. This is done using the same log_prediction call, so users don't need to further instrument their serving code.
In order to export the serving distribution metrics, users may add a new /metrics endpoint to their Flask app. By default, all metrics are exported in the Prometheus exposition format. The example code below shows how to extend the prediction logging example to support this use case.

```python
from flask import Response  # in addition to the earlier Flask imports


@app.route("/metrics", methods=["GET"])
def get_metrics():
    """Returns real time feature values recorded by Prometheus."""
    body, content_type = monitor.export_http(
        params=request.args.to_dict(flat=False),
        headers=request.headers,
    )
    return Response(body, content_type=content_type)
```
When deployed in your workload cluster, the /metrics endpoint is automatically scraped by Prometheus every minute to store the latest metrics as time series data.
bdrk changelog
v0.6.0
bedrock_client.bedrock.analyzer.model_analyzer
- Breaking: model analyzer dependencies are now installed separately using bdrk[xafai]
v0.5.0
bdrk.v1.api
- Breaking: pipeline name and config_file_path have been removed from pipeline response objects
- Breaking: config_file_path is now nested under the source field for model server deployment and pipeline run APIs
- Breaking: model name has been removed from model collection and artefact response objects
v0.4.2
bedrock_client.bedrock.analyzer.model_analyzer
- Doc: Add more detailed docstrings for ModelAnalyzer class
- Fix: Make ModelAnalyzer work offline when no Bedrock key is available
v0.4.1
bdrk.v1.ModelApi
- New: Added get_model_version_details to get model version information
- New: Added get_model_version_download_url to get the model version download URL
v0.4.0
bedrock_client.bedrock.analyzer.model_analyzer
- Improve: Make XAI metric capturing (via SHAP) opt-in. By default, only log_model_info is called.
v0.3.2 (2020-08-25)
bedrock_client.bedrock.metrics
- Fix: inference metric collector exporting duplicate categorical data
- Improve: ignore unsupported value types when tracking metrics
v0.3.1 (2020-08-12)
bedrock_client.bedrock.analyzer.model_analyzer
- New: Added ModelAnalyzer._log_model_info to capture some information about the generated model
v0.3.0 (2020-07-28)
bedrock_client.bedrock.analyzer.model_analyzer
- New: Added ModelAnalyzer to generate and store XAFAI metrics
bedrock_client.bedrock.metrics
- New: ModelMonitoringService.export_http now exports metadata about the baseline metrics (feature name, metric name, and metric type)
v0.2.2 (2020-06-05)
bedrock_client.bedrock.metrics
- New: Return prediction id from the log_prediction call
- Improve: Error handling on missing baseline metrics
v0.2.1 (2020-05-06)
bedrock_client.bedrock.metrics
- Fix: handle nans when classifying features as discrete variable
- New: Allow str type as prediction output for multi-class classification
v0.2.0 (2020-04-09)
bedrock_client.bedrock
- Deprecated the bdrk[prediction-store] component
- Added the bdrk[model-monitoring] component to capture both logging and metrics
bedrock_client.bedrock.metrics
- Added the ModelMonitoringService.export_text method for computing feature metrics on training data and exporting them to a text file
- Added the ModelMonitoringService class for initialising model server metrics based on baseline metrics exported from training
- Added the ModelMonitoringService.export_http method for exposing current metrics in the Prometheus registry to the scraper
v0.1.6 (2020-01-29)
bedrock_client.bedrock
- Added the bdrk[prediction-store] optional component for logging predictions at serving time
bdrk.v1
- Removed TrainingPipelineSchema.pipeline_source.
- Fixed type of ModelArtefactSchema.environment_id from object to string.
- Removed unused schemas.
- Added entity_number to ModelArtefactSchema, TrainingPipelineRunSchema and BatchScoringPipelineRunSchema.
- Added pipeline_name to TrainingPipelineRunSchema and BatchScoringPipelineRunSchema.
v0.1.5 (2019-11-13)
bdrk.v1
- Added kwargs to models to allow backward compatibility.
- Changed the response schema for ModelApi.get_artefact_details from ModelArtefactDetails to ModelArtefactSchema.
- Removed unused schemas.
v0.1.4 (2019-10-09)
bdrk.v1
- Added the get_training_pipeline_runs function to retrieve all runs from a pipeline
v0.1.3 (2019-10-01)
bdrk.v1
- Added bdrk.v1.ModelApi with the function get_artefact_details.

```python
from bdrk.v1 import ModelApi

model_api = ModelApi(api_client)
artefact = model_api.get_artefact_details(
    public_id=pipeline.model_id, artefact_id=run.artefact_id
)
```

- bdrk.v1.models.UserSchema.email_address made required.
bdrk.v1_utils
- Added utility functions for downloading and unzipping artefacts

```python
from bdrk.v1 import ApiClient, Configuration
from bdrk.v1_utils import download_and_unzip_artefact

configuration = Configuration()
configuration.api_key["X-Bedrock-Access-Token"] = "YOUR-TOKEN-HERE"
configuration.host = "https://api.bdrk.ai"
api_client = ApiClient(configuration)

# There are other utility methods as well:
# `get_artefact_stream`, `download_stream`, `unzip_file_to_dir`
download_and_unzip_artefact(
    api_client=api_client,
    model_id="model-repository-id",
    model_artefact_id="model-version-id",
    output_dir="/tmp/artefact",
)
```
bedrock_client.bedrock.labrun
- Changed command from bedrock labrun to bdrk labrun submit.
- Added secrets using the -s DUMMY_SECRET_A=foo flag.
- Added downloading of logs and model artefacts:

```bash
bdrk labrun logs <run_id> <run_token>
bdrk labrun artefact <run_id> <run_token>
```