Client library for Bedrock platform
Bedrock helps data scientists own the end-to-end deployment of machine learning workflows. bdrk is the official client library for interacting with APIs on the Bedrock platform.
Documentation
Full documentation and tutorials on Bedrock can be found here
Usage
In order to use bdrk, you need to register an account with Basis AI. Please email contact@basis-ai.com to get started. Once an account is created, you will be issued a personal API token that you can use to authenticate with Bedrock.
Installing Bedrock client
You can install the Bedrock client library from PyPI with the following command. We recommend running it in a virtual environment to prevent potential dependency conflicts.
pip install bdrk
Note that the client library is officially supported for Python 3.7 and above.
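If you are following the virtual environment recommendation above, a typical setup with Python's built-in venv module looks like this (the environment name .venv is only an illustrative choice):

python -m venv .venv
source .venv/bin/activate
pip install bdrk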
Installing optional dependencies
The following optional dependencies can be installed to enable additional features.
Command line support:
pip install bdrk[cli]
Model monitoring support:
pip install boxkite
Setting up your environment
Once installed, you need to add a well-formed bedrock.hcl configuration file in your project's root directory. The configuration file specifies which script to run for training and deployment as well as their respective base Docker images. You can find an example directory layout here.
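For illustration only, a minimal bedrock.hcl for a single training step might look roughly like the sketch below. The stanza names, base image, and resource values here are assumptions rather than the authoritative schema, so check the example layout linked above for the exact format.

# Illustrative sketch only; not the authoritative schema.
version = "1.0"

train {
    step train {
        image = "python:3.7"
        install = ["pip install -r requirements.txt"]
        script = [{sh = ["python train.py"]}]
        resources {
            cpu = "0.5"
            memory = "1G"
        }
    }
}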
When using the module locally, you may need to define the following environment variables for bdrk to make API calls to Bedrock. These variables will be automatically set on your workload container when running in the cluster.
export BEDROCK_API_DOMAIN=https://api.bdrk.ai
export BEDROCK_API_TOKEN=<your personal API token>
Training on Bedrock
The bdrk library provides utility functions for your training runs.
Logging training metrics
You can easily export training metrics to Bedrock by adding logging code to train.py. The example below demonstrates logging charts and metrics for visualisation on the Bedrock platform.
import bdrk

bdrk.init()

with bdrk.start_run():
    bdrk.log_metric("Accuracy", 0.97)
    bdrk.log_binary_classifier_metrics([0, 1, 1], [0.1, 0.7, 0.9])
Logging feature and inference distribution
You may use the model monitoring service to save the distribution of input and model output data to a local file. The default path is /artefact/histogram.prom so that the computed distribution is bundled together with the model artefact from the same training run. When trained on Bedrock, the zipped /artefact directory will be uploaded to the user's blob storage bucket in the workload cluster.
import pandas as pd
from boxkite.monitoring.service import ModelMonitoringService
from sklearn.svm import SVC

# User code to load training data
features = pd.DataFrame({'a': [1, 2, 3], 'b': [3, 2, 1]})
model = SVC(probability=True)
model.fit(features, [False, True, False])
inference = model.predict_proba(features)[:, 0]

ModelMonitoringService.export_text(
    features=features.iteritems(),
    inference=inference.tolist(),
)
Logging explainability and fairness metrics
Bedrock offers a facility to generate and log explainability and fairness (XAFAI) metrics. bdrk provides an easy-to-use API and native integration with the Bedrock platform to visualize XAFAI metrics. All data is stored in your environment's blob storage to ensure nothing leaves your infrastructure. Under the hood, it uses the shap library to provide both global and individual explainability, and the AI Fairness 360 toolkit to compare model behavior between groups of interest for fairness assessment.
# As part of your train.py
from bdrk.model_analyzer import ModelAnalyzer, ModelTypes

# Background data is used to simulate "missing" features to measure the impact.
# It is limited to a maximum of 5000 rows by default to speed up analysis.
background = x_train

# Tree model: xgboost, lightgbm. Other types of model, including Tensorflow
# and Pytorch, are also supported.
analyzer = ModelAnalyzer(
    model,
    'credit_risk_tree_model',
    model_type=ModelTypes.TREE,
).train_features(background).test_features(x_validation)

# Metrics are calculated and uploaded to blob storage
analyzer.analyze()
Training locally
The bdrk library supports tracking runs outside of the Bedrock platform.
import bdrk

bdrk.init(project_id="test-project")

with bdrk.start_run(pipeline_id="local-pipeline", environment_id="canary-dev"):
    bdrk.download_model(model_id="colour", version=1)
    # ...
    # Training code goes here
    # ...
    bdrk.log_params({"alpha": 0.5, "l5_ratio": 0.5})
    bdrk.log_metrics(metrics={"metric_1": 0.5, "metric_2": 0.8})
    bdrk.log_binary_classifier_metrics(actual=[0, 1, 1], probability=[0, 0.1, 0.5])
    bdrk.log_model("./train.zip")
Under the hood, the bdrk.backend module exposes APIs for interacting with the Bedrock platform.
from bdrk.backend.v1 import ApiClient, Configuration, PipelineApi
from bdrk.backend.v1.models import (
    PipelineResourcesSchema,
    TrainingPipelineRunSchema,
)

configuration = Configuration()
configuration.api_key["X-Bedrock-Access-Token"] = "MY-TOKEN"
configuration.host = "https://api.bdrk.ai"
api_client = ApiClient(configuration)

pipeline_api = PipelineApi(api_client)
pipeline = pipeline_api.get_training_pipeline_by_id(pipeline_id="MY-PIPELINE")
run_schema = TrainingPipelineRunSchema(
    environment_public_id="MY-ENVIRONMENT",
    resources=PipelineResourcesSchema(cpu="500m", memory="200M"),
    script_parameters={"MYPARAM": "1.23"},
)
run = pipeline_api.run_training_pipeline(
    pipeline_id=pipeline.public_id, training_pipeline_run_schema=run_schema
)
Monitoring models in production
At serving time, users may import the boxkite library to track various model performance metrics. Anomalies in these metrics can help inform users about model rot.
Logging predictions
The model monitoring service may be instantiated in serve.py to log every prediction request for offline analysis. The following example demonstrates how to enable prediction logging in a typical Flask app.
from boxkite.monitoring.service import ModelMonitoringService
from flask import Flask, request
from sklearn.svm import SVC

# User code to load trained model
model = SVC(probability=True)
model.fit([[1, 3], [2, 2], [3, 1]], [False, True, False])

app = Flask(__name__)
monitor = ModelMonitoringService()


@app.route("/", methods=["POST"])
def predict():
    # User code to load features
    features = [2.1, 1.8]
    score = model.predict_proba([features])[:, 0].item()
    monitor.log_prediction(
        request_body=request.json,
        features=features,
        output=score,
    )
    return {"True": score}
The logged predictions are persisted in a low-cost blob store in the workload cluster with a maximum TTL of 1 month. The blob store is partitioned by the endpoint id and the event timestamp according to the following structure: models/predictions/{endpoint_id}/2020-01-22/1415_{logger_id}-{replica_id}.txt.
- Endpoint id is the first portion of your domain name hosted on Bedrock
- Replica id is the name of your model server pod
- Logger id is a Bedrock generated name that's unique to the log collector pod
These properties are injected automatically into your model server container as environment variables.
To minimize the latency of request handling, all predictions are logged asynchronously in a separate thread pool. We measured the overhead along the critical path to be less than 1 ms per request.
Tracking feature and inference drift
If training distribution metrics are present in the /artefact directory, the model monitoring service will also track the real time distribution of features and inference results. This is done using the same log_prediction call, so users don't need to further instrument their serving code.
In order to export the serving distribution metrics, users may add a new /metrics endpoint to their Flask app. By default, all metrics are exported in Prometheus exposition format. The example code below shows how to extend the prediction logging example above to support this use case.
@app.route("/metrics", methods=["GET"]) def get_metrics(): """Returns real time feature values recorded by prometheus """ body, content_type = monitor.export_http( params=request.args.to_dict(flat=False), headers=request.headers, ) return Response(body, content_type=content_type)
When deployed in your workload cluster, the /metrics endpoint is automatically scraped by Prometheus every minute to store the latest metrics as timeseries data.
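As a quick sanity check before deploying, you can run the Flask app locally and fetch the endpoint yourself to inspect the exposition format. The sketch below assumes the app from the previous examples is being served on localhost port 5000; the URL and port are illustrative.

import requests

# Assumes the Flask app above is running locally, e.g. via `flask run --port 5000`.
resp = requests.get("http://localhost:5000/metrics")

# Each non-comment line is a single Prometheus sample, such as a histogram bucket count.
for line in resp.text.splitlines():
    if line and not line.startswith("#"):
        print(line)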
Changelog
v0.9.2 (2021-10-18)
- Fix: Resolve type mismatch when matching fairness config and input data. It is now required for both to have the same types.
- New: Add HTTP proxy support to ModelAnalyzer
v0.9.1 (2021-07-21)
We will remove the bdrk.bedrock_client module in the next major release. Please refer to the README above and migrate your code to use bdrk.init instead. The new tracking client supports both local and orchestrated run contexts, so your experimentation code can be ported over seamlessly to Bedrock.
We will also remove the bdrk[model-monitoring] extra requirements in the next major release. All existing functionalities will be provided via boxkite.
- New: Support logging metrics from orchestrated runs
v0.9.0 (2021-07-07)
- Breaking: replace model-monitoring with boxkite
v0.8.3 (2021-06-22)
- Fix: Add model_type (classification or regression) in shap_samples api
v0.8.2 (2021-05-19)
- Fix: log_model produces an empty file when passed a relative directory
v0.8.1 (2021-05-19)
- New: support api request over HTTP proxy
v0.8.0 (2021-04-27)
- New: Support discrete inference histogram
- New: Support regression inference histogram
- New: Support regression fairness metrics logging
- Breaking: model confidence histogram for binary classification is no longer fixed to 50 bins
v0.7.2 (2021-03-24)
- Support fairness metrics by attributes.
v0.7.1 (2021-03-10)
bdrk.model_analyzer
- Support custom fairness metrics logging.
v0.7.0 (2021-03-09)
- Support starting a training run locally.
- Breaking: deprecated labrun
v0.6.0
bedrock_client.bedrock.analyzer.model_analyzer
- Breaking: model analyzer dependencies are now installed separately using bdrk[xafai]
v0.5.0
bdrk.v1.api
- Breaking: pipeline name and config_file_path have been removed from pipeline response objects
- Breaking: config_file_path is now nested under source field for model server deployment and pipeline run APIs
- Breaking: model name has been removed from model collection and artefact response objects
v0.4.2
bedrock_client.bedrock.analyzer.model_analyzer
- Doc: Add more detailed docstrings for ModelAnalyzer class
- Fix: Make ModelAnalyzer work offline when no Bedrock key is available
v0.4.1
bdrk.v1.ModelApi
- New: Added get_model_version_details to get model version information
- New: Added get_model_version_download_url to get model version download url
v0.4.0
bedrock_client.bedrock.analyzer.model_analyzer
- Improve: Make XAI metric capturing (via SHAP) opt-in. By default only log_model_info is called.
v0.3.2 (2020-08-25)
bedrock_client.bedrock.metrics
- Fix: inference metric collector exporting duplicate categorical data
- Improve: ignore unsupported value types when tracking metrics
v0.3.1 (2020-08-12)
bedrock_client.bedrock.analyzer.model_analyzer
- New: Added ModelAnalyzer._log_model_info to capture some information about the generated model
v0.3.0 (2020-07-28)
bedrock_client.bedrock.analyzer.model_analyzer
- New: Added ModelAnalyzer to generate and store XAFAI metrics
bedrock_client.bedrock.metrics
- New: ModelMonitoringService.export_http now exports metadata about the baseline metrics (feature name, metric name, and metric type)
v0.2.2 (2020-06-05)
bedrock_client.bedrock.metrics
- New: return prediction id from log_prediction call
- Improve: error handling on missing baseline metrics
v0.2.1 (2020-05-06)
bedrock_client.bedrock.metrics
- Fix: handle NaNs when classifying features as discrete variables
- New: allow str type as prediction output for multi-class classification
v0.2.0 (2020-04-09)
bedrock_client.bedrock
- Deprecated bdrk[prediction-store] component
- Added bdrk[model-monitoring] component to capture both logging and metrics
bedrock_client.bedrock.metrics
- Added ModelMonitoringService.export_text method for computing feature metrics on training data and exporting to a text file
- Added ModelMonitoringService class for initialising model server metrics based on baseline metrics exported from training
- Added ModelMonitoringService.export_http method for exposing current metrics in the Prometheus registry to the scraper
v0.1.6 (2020-01-29)
bedrock_client.bedrock
- Added bdrk[prediction-store] optional component for logging of predictions at serving time
bdrk.v1
- Removed TrainingPipelineSchema.pipeline_source.
- Fixed type of ModelArtefactSchema.environment_id from object to string.
- Removed unused schemas.
- Added entity_number to ModelArtefactSchema, TrainingPipelineRunSchema and BatchScoringPipelineRunSchema.
- Added pipeline_name to TrainingPipelineRunSchema and BatchScoringPipelineRunSchema.
v0.1.5 (2019-11-13)
bdrk.v1
- Added kwargs to models to allow backward compatibility.
- Changed response schema for ModelApi.get_artefact_details from ModelArtefactDetails to ModelArtefactSchema.
- Removed unused schemas.
v0.1.4 (2019-10-09)
bdrk.v1
- Added get_training_pipeline_runs function to retrieve all runs from a pipeline
v0.1.3 (2019-10-01)
bdrk.v1
- Added bdrk.v1.ModelApi with function get_artefact_details.

  from bdrk.v1 import ModelApi

  model_api = ModelApi(api_client)
  artefact = model_api.get_artefact_details(
      public_id=pipeline.model_id, artefact_id=run.artefact_id
  )

- bdrk.v1.models.UserSchema.email_address made required.
bdrk.v1_utils
- Added utility functions for downloading and unzipping artefacts
from bdrk.v1 import ApiClient, Configuration
from bdrk.v1_util import download_and_unzip_artefact

configuration = Configuration()
configuration.api_key["X-Bedrock-Access-Token"] = "YOUR-TOKEN-HERE"
configuration.host = "https://api.bdrk.ai"
api_client = ApiClient(configuration)

# There are other utility methods as well:
# `get_artefact_stream`, `download_stream`, `unzip_file_to_dir`
download_and_unzip_artefact(
    api_client=api_client,
    model_id="model-repository-id",
    model_artefact_id="model-version-id",
    output_dir="/tmp/artefact",
)