
ZenML: Write production-ready ML code.

Project description

Create an internal MLOps platform for your entire machine learning team.


🤸 Quickstart

Open In Colab

Install ZenML via PyPI. Python 3.8 - 3.11 is required:

pip install "zenml[server]" notebook

Take a tour with the guided quickstart by running:

zenml go

🪄 Simple, integrated, end-to-end MLOps

Create machine learning pipelines with minimal code changes

ZenML is an MLOps framework for data scientists and ML engineers who want to standardize their machine learning practices. Just add the @step and @pipeline decorators to your existing Python functions to get going. Here is a toy example:

from zenml import pipeline, step

@step  # Just add this decorator
def load_data() -> dict:
    training_data = [[1, 2], [3, 4], [5, 6]]
    labels = [0, 1, 0]
    return {'features': training_data, 'labels': labels}

@step
def train_model(data: dict) -> None:
    total_features = sum(map(sum, data['features']))
    total_labels = sum(data['labels'])
    
    print(f"Trained model using {len(data['features'])} data points. "
          f"Feature sum is {total_features}, label sum is {total_labels}")

@pipeline  # This function combines steps together 
def simple_ml_pipeline():
    dataset = load_data()
    train_model(dataset)

if __name__ == "__main__":
    run = simple_ml_pipeline()  # call this to run the pipeline
   

Running a ZenML pipeline
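
Once a run completes, you can fetch it and its outputs programmatically. A minimal sketch using ZenML's Client API; the attribute chain follows the documented fetching pattern, but exact names can vary between ZenML versions:

from zenml.client import Client

# Fetch the most recent run of the pipeline defined above.
pipeline_model = Client().get_pipeline("simple_ml_pipeline")
last_run = pipeline_model.last_run

# Load the artifact returned by the load_data step.
dataset = last_run.steps["load_data"].output.load()
print(dataset["labels"])  # [0, 1, 0]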

Deploy workloads easily on your production infrastructure

The framework is a gentle entry point for practitioners to build complex ML pipelines without needing deep knowledge of the underlying infrastructure. ZenML pipelines can run on AWS, GCP, Azure, Airflow, Kubeflow, and even plain Kubernetes without changing any code or touching the underlying internals.

from zenml.config import ResourceSettings, DockerSettings

@step(
    settings={
        "resources": ResourceSettings(memory="16GB", gpu_count=1, cpu_count=8),
        "docker": DockerSettings(parent_image="pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime"),
    }
)
def training(...):
    ...

zenml stack set k8s  # Set a stack with a Kubernetes orchestrator
python run.py
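
Note that `zenml stack set k8s` assumes a stack named k8s has already been registered. A hypothetical registration sketch; the component names and the kubernetes_context value are placeholders for your own setup:

zenml orchestrator register k8s_orchestrator --flavor=kubernetes --kubernetes_context=my-cluster
zenml stack register k8s -o k8s_orchestrator -a default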

Workloads with ZenML

Track models, pipelines, and artifacts

Create a complete lineage of what data and models are produced, where, and by whom.

You’ll be able to find out who produced which model, at what time, with which data, and on which version of the code. This guarantees full reproducibility and auditability.

from typing_extensions import Annotated  # typing.Annotated on Python 3.9+

import pandas as pd
import torch

from zenml import Model, step

@step(model=Model(name="classification"))
def trainer(training_df: pd.DataFrame) -> Annotated[torch.nn.Module, "model"]:
    ...

Exploring ZenML Models
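
Outside of a pipeline, registered models can be fetched back through the Client. A minimal sketch; get_model_version is the documented Client method, but its exact signature can differ across ZenML versions:

from zenml.client import Client

# Fetch the latest version of the "classification" model tracked above.
model_version = Client().get_model_version("classification")
print(model_version.number)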

Purpose-built for machine learning, with integrations for your favorite tools

While ZenML brings a lot of value out of the box, it also integrates with your existing tooling and infrastructure without locking you in.

import mlflow
import pandas as pd
from bentoml._internal.bento import bento

@step(on_failure=alert_slack, experiment_tracker="mlflow")  # alert_slack is a user-defined failure hook
def train_and_deploy(training_df: pd.DataFrame) -> bento.Bento:
    mlflow.autolog()
    ...
    return bento

Exploring ZenML Integrations
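
Integrations like these are typically installed and attached to your stack via the CLI. A sketch under the assumption that a stack named my_stack exists; mlflow_tracker is a placeholder name:

zenml integration install mlflow bentoml -y
zenml experiment-tracker register mlflow_tracker --flavor=mlflow
zenml stack update my_stack -e mlflow_tracker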

🖼️ Learning

The best way to learn about ZenML is the docs. We recommend beginning with the Starter Guide to get up and running quickly.

If you are a visual learner, this 11-minute video tutorial is also a great start:

Introductory YouTube video

And finally, here are some other examples and use cases for inspiration:

  1. E2E Batch Inference: Feature engineering, training, and inference pipelines for tabular machine learning.
  2. Basic NLP with BERT: Feature engineering, training, and inference focused on NLP.
  3. LLM RAG Pipeline with Langchain and OpenAI: Using Langchain to create a simple RAG pipeline.
  4. Huggingface Model to Sagemaker Endpoint: Automated MLOps on Amazon SageMaker and HuggingFace.
  5. LLMOps: A complete guide to working with LLMs in ZenML.

🔋 Deploy ZenML

For full functionality, ZenML should be deployed in the cloud so that it can serve as the central, collaborative MLOps interface for your team.

Currently, there are two main ways to deploy ZenML:

  • ZenML Cloud: With ZenML Cloud, you can use a control plane to create ZenML servers, also known as tenants. These tenants are managed and maintained by ZenML's dedicated team, removing the burden of server management from your end.
  • Self-hosted deployment: Alternatively, you can deploy ZenML in your own self-hosted environment using our CLI, Docker, Helm, or HuggingFace Spaces (see the sketch below).
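
For example, a self-hosted server can be started locally with Docker using the zenmldocker/zenml-server image. A minimal sketch; the container name and port mapping are illustrative defaults, so consult the deployment docs for production settings:

docker run -it -d -p 8080:8080 --name zenml-server zenmldocker/zenml-server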

Use ZenML with VS Code

ZenML has a VS Code extension that allows you to inspect your stacks and pipeline runs directly from your editor. The extension also allows you to switch your stacks without needing to type any CLI commands.

🖥️ VS Code Extension in Action!

🗺 Roadmap

ZenML is being built in public. The roadmap is a regularly updated source of truth for the ZenML community to understand where the product is going in the short, medium, and long term.

ZenML is managed by a core team of developers who are responsible for making key decisions and incorporating feedback from the community. The team oversees feedback via various channels, and you can directly influence the roadmap by sharing your feedback and feature requests through those channels.

🙌 Contributing and Community

We would love to develop ZenML together with our community! The best way to get started is to select any issue from the [good-first-issue label](https://github.com/issues?q=is%3Aopen+is%3Aissue+archived%3Afalse+user%3Azenml-io+label%3A%22good+first+issue%22) and open up a Pull Request!

If you would like to contribute, please review our Contributing Guide for all relevant details.

🆘 Getting Help

The first port of call should be our Slack group. Ask your questions about bugs or specific use cases, and someone from the core team will respond. Or, if you prefer, open an issue on our GitHub repo.

📜 License

ZenML is distributed under the terms of the Apache License Version 2.0. A complete version of the license is available in the LICENSE file in this repository. Any contribution made to this project will be licensed under the Apache License Version 2.0.

Join our Slack Community and be part of the ZenML family.

Features · Roadmap · Report Bug · Sign up for Cloud · Read Blog · Contribute to Open Source · Projects Showcase

🎉 Version 0.58.1 is out. Check out the release notes here.
🖥️ Download our VS Code Extension here.
