ZenML: Write production-ready ML code.


👀 What is ZenML?

ZenML is an extensible, open-source MLOps framework to create production-ready machine learning pipelines. Built for data scientists, it has a simple, flexible syntax, is cloud- and tool-agnostic, and has interfaces/abstractions that are catered towards ML workflows.

At its core, ZenML pipelines execute ML-specific workflows from sourcing data to splitting, preprocessing, training, all the way to the evaluation of results and even serving. There are many built-in batteries to support common ML development tasks. ZenML is not here to replace the great tools that solve these individual problems. Rather, it integrates natively with popular ML tooling and gives standard abstraction to write your workflows.

🎉 Version 0.7.3 out now! Check out the release notes here.


Join our Slack community and become part of the ZenML family
Give us a GitHub star to show your love
NEW: Vote on the next ZenML features

Before and after ZenML

🤖 Why use ZenML?

ZenML pipelines are designed to be written early in the development lifecycle. Data scientists can explore their pipelines as they develop towards production, switching stacks from local to cloud deployments with ease. You can read more about why we started building ZenML on our blog. By using ZenML in the early stages of your project, you get the following benefits:

  • Reproducibility of training and inference workflows
  • A simple and clear way to represent the steps of your pipeline in code
  • Plug-and-play integrations: bring all your favorite tools together
  • Easy switching between local and cloud stacks
  • Painless deployment and configuration of infrastructure
  • Scale up your stack transparently and logically to suit your training and deployment needs

📖 Learn More

ZenML Resources and what they offer:
🧘‍♀️ ZenML 101 New to ZenML? Here's everything you need to know!
⚛️ Core Concepts Some key terms and concepts we use.
🗃 Functional API Guide Build production ML pipelines with simple functions.
🚀 [New in v0.7.3] New features, bug fixes.
🗳 Vote for Features Pick what we work on next!
📓 Docs Full documentation for creating your own ZenML pipelines.
📒 API Reference The detailed reference for ZenML's API.
🗂️️ ZenFiles End-to-end projects using ZenML.
⚽️ Examples Prefer learning through examples of ZenML in action? We've got you covered.
📬 Blog Use cases of ZenML and technical deep dives on how we built it.
🔈 Podcast Conversations with leaders in ML, released every 2 weeks.
📣 Newsletter We build ZenML in public. Subscribe to learn how we work.
💬 Join Slack Need help with your specific use case? Say hi on Slack!
🗺 Roadmap See what features ZenML is building next.
🙋‍♀️ Contribute How to contribute to the ZenML project and code base.

🎮 Features

1. 🗃 Use Caching across (Pipelines As) Experiments

For every pipeline, ZenML makes sure you can trust that:

  • Code is versioned
  • Data is versioned
  • Models are versioned
  • Configurations are versioned

You can utilize caching to iterate quickly through ML experiments. (Read our blog post to learn more!)
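To make the idea concrete, here is a minimal plain-Python sketch of step-output caching: a cache key is derived from the step's identity and parameters, so unchanged steps are skipped on re-runs. This only illustrates the concept; it is not ZenML's actual implementation, which also versions code, data, and models as listed above.

```python
import json

_cache = {}      # maps (step, params) keys to stored outputs
executions = []  # records which steps actually executed

def run_step(name, fn, params):
    """Run a step, reusing a cached output when inputs are unchanged."""
    # The cache key combines the step name with its serialized parameters.
    key = (name, json.dumps(params, sort_keys=True))
    if key in _cache:
        return _cache[key]  # cache hit: reuse the stored output
    executions.append(name)  # cache miss: execute and store the result
    result = fn(**params)
    _cache[key] = result
    return result

def preprocess(scale):
    return [x * scale for x in [1, 2, 3]]

run_step("preprocess", preprocess, {"scale": 2})  # executes
run_step("preprocess", preprocess, {"scale": 2})  # cached, skipped
run_step("preprocess", preprocess, {"scale": 3})  # new params: executes
```

In a real pipeline framework the key would also hash the step's source code and input artifacts, so editing a step invalidates only the steps downstream of it.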

2. ♻️ Leverage Powerful Integrations

Once your code is organized into a ZenML pipeline, you can supercharge your ML development with powerful integrations across multiple MLOps stacks. ML in production involves many moving parts across tooling and infrastructure, and ZenML aims to bring them all together under one roof.

We currently support Airflow and Kubeflow as third-party orchestrators for your ML pipeline code. ZenML steps can be built from any of the other tools you usually use in your ML workflows, from scikit-learn to PyTorch or TensorFlow.

ZenML is the glue

3. ☁️ Scale to the Cloud

Switching from local experiments to cloud-based pipelines doesn't need to be complicated. ZenML supports running pipelines on Kubernetes clusters in the cloud through our Kubeflow integration. Switching from your local stack to a cloud stack is easy to do with our CLI tool.

4. 🧩 Visualize the Steps of your Pipeline

It’s not uncommon for pipelines to be made up of many steps, and those steps can interact and intersect with one another in often complex patterns. We’ve built a way for you to inspect what’s going on with your ZenML pipeline:

Here's what the pipeline lineage tracking visualizer looks like

5. 📊 Visualize Statistics

You can now use awesome third-party libraries to visualize ZenML steps and artifacts. We support Facets visualizations for statistics out of the box, which help you find data drift between your training and test sets.

We use the built-in FacetStatisticsVisualizer using the Facets Overview integration.

Here’s what the statistics visualizer looks like

6. 🧐 Introspect your Pipeline Results

Once you've run your experiment, you need a way of seeing what was produced and how it was produced. We offer a flexible interface to support post-execution workflows. This allows you to access any of the artifacts produced by pipeline steps as well as any associated metadata.

from zenml.repository import Repository  # assumes a repo initialized via `zenml init`

repo = Repository()
pipeline = repo.get_pipeline(pipeline_name=..., stack_key=...)  # access a pipeline by name and/or stack key
runs = pipeline.runs  # all runs of a pipeline, chronologically ordered
run = runs[-1]  # latest run
steps = run.steps  # all steps of that pipeline run
step = steps[0]  # first step of the run
output = step.output  # the step's output artifact
df = output.read(materializer_class=PandasMaterializer)  # read the artifact as a DataFrame
df.head()

7. 🛠 Configure Pipeline Runs with YAML Code

Not everyone wants to keep their configuration of pipeline runs in the same place as the active code defining steps. You can define the particular customization of runs with YAML code if that's your jam!

steps:
  step_name:
    parameters:
      parameter_name: parameter_value
      some_other_parameter_name: 2
  some_other_step_name:
    ...
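A YAML config like the one above simply maps step names to parameter overrides. The following sketch shows how such overrides can be merged into a step's defaults once the YAML is parsed; the parsed file is represented here as a plain dict, and the step and parameter names are illustrative, not taken from ZenML.

```python
# Stand-in for the parsed YAML config (step/parameter names are made up).
config = {
    "steps": {
        "trainer": {"parameters": {"learning_rate": 0.01}},
    }
}

def configure_step(step_name, defaults, config):
    """Merge YAML-style parameter overrides into a step's default parameters."""
    overrides = config.get("steps", {}).get(step_name, {}).get("parameters", {})
    # Overrides win; any parameter not mentioned in the config keeps its default.
    return {**defaults, **overrides}

params = configure_step("trainer", {"learning_rate": 0.1, "epochs": 2}, config)
# learning_rate comes from the config; epochs keeps its default
```

This separation lets you version run configurations independently of the step code that consumes them.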

🤸 Getting Started

💾 Install ZenML

Requirements: ZenML supports Python 3.7 and 3.8.

ZenML is available for easy installation into your environment via PyPI:

pip install zenml

Alternatively, if you're feeling brave, feel free to install the bleeding edge. NOTE: Do so at your own risk; no guarantees given!

pip install git+https://github.com/zenml-io/zenml.git@main --upgrade

ZenML is also available as a Docker image hosted publicly on DockerHub. Use the following command to get started in a bash environment:

docker run -it zenmldocker/zenml /bin/bash

🚅 Quickstart

The quickest way to get started is to create a simple pipeline.

Step 1: Initialize a ZenML repo

zenml init
zenml integration install sklearn # we use scikit-learn for this example

Step 2: Assemble, run, and evaluate your pipeline locally

import numpy as np
from sklearn.base import ClassifierMixin

from zenml.integrations.sklearn.helpers.digits import get_digits, get_digits_model
from zenml.pipelines import pipeline
from zenml.steps import step, Output

@step
def importer() -> Output(
    X_train=np.ndarray, X_test=np.ndarray, y_train=np.ndarray, y_test=np.ndarray
):
    """Loads the digits array as normal numpy arrays."""
    X_train, X_test, y_train, y_test = get_digits()
    return X_train, X_test, y_train, y_test


@step
def trainer(
    X_train: np.ndarray,
    y_train: np.ndarray,
) -> ClassifierMixin:
    """Train a simple sklearn classifier for the digits dataset."""
    model = get_digits_model()
    model.fit(X_train, y_train)
    return model


@step
def evaluator(
    X_test: np.ndarray,
    y_test: np.ndarray,
    model: ClassifierMixin,
) -> float:
    """Calculate the accuracy on the test set"""
    test_acc = model.score(X_test, y_test)
    print(f"Test accuracy: {test_acc}")
    return test_acc


@pipeline
def mnist_pipeline(
    importer,
    trainer,
    evaluator,
):
    """Links all the steps together in a pipeline"""
    X_train, X_test, y_train, y_test = importer()
    model = trainer(X_train=X_train, y_train=y_train)
    evaluator(X_test=X_test, y_test=y_test, model=model)


# renamed so the instance doesn't shadow the imported `pipeline` decorator
first_pipeline = mnist_pipeline(
    importer=importer(),
    trainer=trainer(),
    evaluator=evaluator(),
)
first_pipeline.run()

🗂️ ZenFiles

ZenFiles are production-grade ML use cases powered by ZenML. They are fully fleshed-out, end-to-end projects that showcase ZenML's capabilities. They can also serve as templates from which to start similar projects.

The ZenFiles project is fully maintained and can be viewed as a sister repository of ZenML. Check it out here.

🗺 Roadmap

ZenML is being built in public. The roadmap is a regularly updated source of truth for the ZenML community to understand where the product is going in the short, medium, and long term.

ZenML is managed by a core team of developers that are responsible for making key decisions and incorporating feedback from the community. The team gathers feedback via various channels, and you can directly influence the roadmap by voting on upcoming features and sharing your thoughts on Slack.

🙋‍♀️ Contributing & Community

We would love to develop ZenML together with our community! The best way to get started is to pick any issue with the good-first-issue label. If you would like to contribute, please review our Contributing Guide for all relevant details.


🆘 Where to get help

Your first port of call should be our Slack group. Ask your questions about bugs or specific use cases, and someone from the core team will respond.

📜 License

ZenML is distributed under the terms of the Apache License Version 2.0. A complete version of the license is available in the LICENSE.md in this repository. Any contribution made to this project will be licensed under the Apache License Version 2.0.
