
ZenML: Write production-ready ML code.


Create an internal MLOps platform for your entire machine learning team.


🤸 Quickstart

Open In Colab

Install ZenML via PyPI. Python 3.8 - 3.11 is required:

pip install "zenml[server]" notebook

Take a tour with the guided quickstart by running:

zenml go

🪄 Simple, integrated, end-to-end MLOps

Create machine learning pipelines with minimal code changes

ZenML is an MLOps framework for data scientists and ML engineers who want to standardize their machine learning practices. Just add the @step and @pipeline decorators to your existing Python functions to get going. Here is a toy example:

from zenml import pipeline, step

@step  # Just add this decorator
def load_data() -> dict:
    training_data = [[1, 2], [3, 4], [5, 6]]
    labels = [0, 1, 0]
    return {'features': training_data, 'labels': labels}

@step
def train_model(data: dict) -> None:
    total_features = sum(map(sum, data['features']))
    total_labels = sum(data['labels'])
    
    print(f"Trained model using {len(data['features'])} data points. "
          f"Feature sum is {total_features}, label sum is {total_labels}")

@pipeline  # This function combines steps together 
def simple_ml_pipeline():
    dataset = load_data()
    train_model(dataset)

if __name__ == "__main__":
    run = simple_ml_pipeline()  # call this to run the pipeline
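Under the decorators, the toy steps above are ordinary Python functions, so their logic can be sanity-checked as plain Python before any orchestration is involved. A stdlib-only sketch that mirrors the step bodies (no ZenML required):

```python
# Plain-Python mirror of the toy steps above, runnable without ZenML.
def load_data() -> dict:
    training_data = [[1, 2], [3, 4], [5, 6]]
    labels = [0, 1, 0]
    return {"features": training_data, "labels": labels}

def train_model(data: dict) -> str:
    total_features = sum(map(sum, data["features"]))  # 3 + 7 + 11 = 21
    total_labels = sum(data["labels"])                # 0 + 1 + 0 = 1
    return (f"Trained model using {len(data['features'])} data points. "
            f"Feature sum is {total_features}, label sum is {total_labels}")

print(train_model(load_data()))
```

Once the logic looks right, wrapping the same functions in @step and composing them in a @pipeline is all ZenML needs.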
   

Running a ZenML pipeline

Deploy workloads easily on your production infrastructure

The framework is a gentle entry point for practitioners to build complex ML pipelines without needing deep knowledge of the underlying infrastructure. ZenML pipelines can run on AWS, GCP, Azure, Airflow, Kubeflow, and even plain Kubernetes without any code changes or knowledge of the underlying internals.

from zenml.config import ResourceSettings, DockerSettings

@step(
    settings={
        "resources": ResourceSettings(memory="16GB", gpu_count=1, cpu_count=8),
        "docker": DockerSettings(parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime"),
    }
)
def training(...):
    ...

zenml stack set k8s  # Set a stack with a Kubernetes orchestrator
python run.py

Workloads with ZenML

Track models, pipelines, and artifacts

Create complete lineage showing who produced which data and models, where, and when.

You’ll be able to find out who produced which model, at what time, with which data, and on which version of the code. This guarantees full reproducibility and auditability.

from zenml import Model

@step(model=Model(name="classification"))
def trainer(training_df: pd.DataFrame) -> Annotated[torch.nn.Module, "model"]:
    ...
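As a conceptual illustration only (hypothetical field names, not ZenML's actual schema), the "who, which code, which data" record that full lineage implies can be sketched with the standard library. ZenML captures this information for you automatically:

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class LineageRecord:
    producer: str      # who produced the model
    code_version: str  # which version of the code
    data_digest: str   # which data, pinned by a stable hash

def fingerprint(rows) -> str:
    # Deterministic digest so "which data" is verifiable later.
    payload = json.dumps(rows, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()[:12]

record = LineageRecord(
    producer="alice",        # hypothetical user
    code_version="abc1234",  # e.g. a git commit SHA
    data_digest=fingerprint([[1, 2], [3, 4], [5, 6]]),
)
print(record)
```

Because the digest is deterministic, re-running the same code on the same data yields the same fingerprint, which is the property reproducibility and auditability rest on.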

Exploring ZenML Models

Purpose-built for machine learning, with integrations for your favorite tools

While ZenML brings a lot of value out of the box, it also integrates with your existing tooling and infrastructure without locking you in.

from bentoml._internal.bento import bento

@step(on_failure=alert_slack, experiment_tracker="mlflow")
def train_and_deploy(training_df: pd.DataFrame) -> bento.Bento:
    mlflow.autolog()
    ...
    return bento

Exploring ZenML Integrations

🖼️ Learning

The best way to learn about ZenML is the docs. We recommend beginning with the Starter Guide to get up and running quickly.

If you are a visual learner, this 11-minute video tutorial is also a great start:

Introductory Youtube Video

And finally, here are some other examples and use cases for inspiration:

  1. E2E Batch Inference: Feature engineering, training, and inference pipelines for tabular machine learning.
  2. Basic NLP with BERT: Feature engineering, training, and inference focused on NLP.
  3. LLM RAG Pipeline with Langchain and OpenAI: Using Langchain to create a simple RAG pipeline.
  4. Huggingface Model to Sagemaker Endpoint: Automated MLOps on Amazon Sagemaker and HuggingFace.
  5. LLMOps: A complete guide to LLMOps with ZenML.

🔋 Deploy ZenML

For full functionality, ZenML should be deployed in the cloud so it can serve as the central, collaborative MLOps interface for your team.

Currently, there are two main ways to deploy ZenML:

  • ZenML Cloud: With ZenML Cloud, you can use a control plane to create ZenML servers, also known as tenants. These tenants are managed and maintained by ZenML’s dedicated team, taking the burden of server management off your hands.
  • Self-hosted deployment: Alternatively, you have the flexibility to deploy ZenML on your own self-hosted environment. This can be achieved through various methods, including using our CLI, Docker, Helm, or HuggingFace Spaces.

Use ZenML with VS Code

ZenML has a VS Code extension that allows you to inspect your stacks and pipeline runs directly from your editor. The extension also allows you to switch your stacks without needing to type any CLI commands.

🖥️ VS Code Extension in Action!
ZenML Extension

🗺 Roadmap

ZenML is being built in public. The roadmap is a regularly updated source of truth for the ZenML community to understand where the product is going in the short, medium, and long term.

ZenML is managed by a core team of developers who are responsible for making key decisions and incorporating feedback from the community. The team gathers feedback through various channels, and you can directly influence the roadmap by sharing yours.

🙌 Contributing and Community

We would love to develop ZenML together with our community! The best way to get started is to select any issue from the [good-first-issue label](https://github.com/issues?q=is%3Aopen+is%3Aissue+archived%3Afalse+user%3Azenml-io+label%3A%22good+first+issue%22) and open up a Pull Request!

If you would like to contribute, please review our Contributing Guide for all relevant details.

🆘 Getting Help

The first port of call should be our Slack group. Ask your questions about bugs or specific use cases, and someone from the core team will respond. Or, if you prefer, open an issue on our GitHub repo.

📜 License

ZenML is distributed under the terms of the Apache License Version 2.0. A complete version of the license is available in the LICENSE file in this repository. Any contribution made to this project will be licensed under the Apache License Version 2.0.

Join our Slack community and be part of the ZenML family.

Features · Roadmap · Report Bug · Sign up for Cloud · Read Blog · Contribute to Open Source · Projects Showcase

🎉 Version 0.58.2 is out. Check out the release notes here.
