Machine Learning Operations Toolkit
⏳ Tempo: The MLOps Software Development Kit
Vision
Enable data scientists to see a productionised machine learning model within moments, not months. Easy to work with locally and in Kubernetes, whatever your preferred data science tools.
Highlights
Tempo provides a unified interface to multiple MLOps projects that enable data scientists to deploy and productionise machine learning systems.
- Package your trained model artifacts to optimized server runtimes (TensorFlow, PyTorch, Scikit-learn, XGBoost, etc.)
- Package custom business logic to production servers.
- Build an inference pipeline of models and orchestration steps.
- Include any custom Python components as needed. Examples:
  - Outlier detectors with Alibi-Detect.
  - Explainers with Alibi-Explain.
- Test locally, deploy to production:
  - Run with local unit tests.
  - Deploy locally to Docker to test with production runtimes.
  - Deploy to production on Kubernetes.
  - Extract declarative Kubernetes YAML to follow GitOps workflows.
- Support for a wide range of production runtimes:
  - Seldon Core (open source)
  - KFServing (open source)
  - Seldon Deploy (enterprise)
- Create stateful services. Examples:
  - Multi-Armed Bandits.
Workflow
- Develop locally.
- Test locally on Docker with production artifacts.
- Push artifacts to remote bucket store and launch remotely (on Kubernetes).
Motivating Synopsis
Data scientists can easily test their models and orchestrate them with pipelines.
Below we see two `Model`s (sklearn and xgboost) with a function-decorated pipeline to call both.
```python
import numpy as np
from typing import Tuple
# Model, ModelFramework, SeldonProtocol and the pipeline decorator come from
# the tempo package; the *_FOLDER constants point at local artifact folders.

sklearn_model = Model(
    name="test-iris-sklearn",
    platform=ModelFramework.SKLearn,
    protocol=SeldonProtocol(),
    local_folder=SKLEARN_FOLDER,
    uri="s3://tempo/basic/sklearn"
)

xgboost_model = Model(
    name="test-iris-xgboost",
    platform=ModelFramework.XGBoost,
    protocol=SeldonProtocol(),
    local_folder=XGBOOST_FOLDER,
    uri="s3://tempo/basic/xgboost"
)

@pipeline(name="classifier",
          uri="s3://tempo/basic/pipeline",
          local_folder=PIPELINE_ARTIFACTS_FOLDER,
          models=[sklearn_model, xgboost_model])
def classifier(payload: np.ndarray) -> Tuple[np.ndarray, str]:
    res1 = sklearn_model(payload)
    if res1[0][0] > 0.5:
        return res1, "sklearn prediction"
    else:
        return xgboost_model(payload), "xgboost prediction"
```
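The routing logic inside `classifier` can be exercised without any servers. A minimal sketch with stubbed models standing in for the sklearn and xgboost runtimes (the stub outputs below are illustrative placeholders, not real model predictions):

```python
import numpy as np

# Stand-ins for the two deployed models: each returns a probability array.
def sklearn_stub(payload: np.ndarray) -> np.ndarray:
    return np.array([[0.9, 0.1]])  # confident first-class score

def xgboost_stub(payload: np.ndarray) -> np.ndarray:
    return np.array([[0.2, 0.8]])

def route(payload: np.ndarray):
    # Same branching as the pipeline: use the sklearn result when its
    # first score exceeds 0.5, otherwise fall back to xgboost.
    res1 = sklearn_stub(payload)
    if res1[0][0] > 0.5:
        return res1, "sklearn prediction"
    return xgboost_stub(payload), "xgboost prediction"

res, origin = route(np.array([[1, 2, 3, 4]]))
```

This mirrors the "run with local unit tests" step of the workflow: the pipeline function is plain Python, so its branching can be unit-tested before any containers are started.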
Save the pipeline code.
```python
save(classifier, save_env=True)
```
Deploy to docker.
```python
docker_runtime = SeldonDockerRuntime()
docker_runtime.deploy(classifier)
docker_runtime.wait_ready(classifier)
```
Make predictions on containerized servers that would be used in production.
```python
classifier.remote(payload=np.array([[1, 2, 3, 4]]))
```
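With `SeldonProtocol`, the remote call goes over HTTP with a JSON body wrapping the ndarray. A rough sketch of what that request body looks like (the endpoint path shown in the comment is the usual Seldon Core REST path; treat host and port as assumptions for a local Docker deployment):

```python
import json

# Seldon-protocol request body for a 1x4 ndarray payload.
payload = [[1, 2, 3, 4]]
request_body = {"data": {"ndarray": payload}}

# This JSON would be POSTed to the model's predictions endpoint,
# e.g. http://localhost:<port>/api/v1.0/predictions (port assumed here).
body = json.dumps(request_body)
```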
Deploy to Kubernetes for production.
```python
k8s_runtime = SeldonKubernetesRuntime()
k8s_runtime.deploy(classifier)
k8s_runtime.wait_ready(classifier)
```
This is an extract from the two introductory examples for the local and Kubernetes demos.