ZenML: MLOps for Reliable AI: from Classical AI to Agents.

Project description



One AI Platform From Pipelines to Agents


Projects · Roadmap · Changelog · Report Bug · Sign up for ZenML Pro · Blog · Docs

🎉 For the latest release, see the changelog.


ZenML is built for ML and AI engineers working on traditional ML use cases, LLM workflows, or agents in a company setting.

At its core, ZenML lets you write workflows (pipelines) that run on any infrastructure backend (stacks). You can embed any Python logic within these pipelines, such as training a model or running an agentic loop. ZenML then operationalizes your application by:

  1. Automatically containerizing and tracking your code.
  2. Tracking individual runs with metrics, logs, and metadata.
  3. Abstracting away infrastructure complexity.
  4. Integrating with your existing tools and infrastructure, e.g. MLflow, LangGraph, Langfuse, Amazon SageMaker, GCP Vertex AI, etc.
  5. Letting you iterate quickly on experiments, with an observability layer in both development and production.

...amongst many other features.

ZenML is used by thousands of companies to run their AI workflows. Here are some featured ones:

Airbus     AXA     JetBrains     Rivian     WiseTech Global     Brevo
Leroy Merlin     Koble     Playtika     NIQ     Enel

(please email support@zenml.io if you want to be featured)

🚀 Get Started (5 minutes)

# Install ZenML with server capabilities
pip install "zenml[server]"  # pip install zenml will install a slimmer client

# Initialize your ZenML repository
zenml init

# Start local server or connect to a remote one
zenml login

You can then explore any of the examples in this repo. We recommend starting with the quickstart, which demonstrates core ZenML concepts: pipelines, steps, artifacts, snapshots, and deployments.

🏗️ Architecture Overview

ZenML uses a client-server architecture with an integrated web dashboard (zenml-io/zenml-dashboard):

  • Local Development: pip install "zenml[local]" - runs both client and server locally
  • Production: Deploy server separately, connect with pip install zenml + zenml login <server-url>

🎮 Demo

Here is a short demo:

Watch the video

🖼️ Resources

The best way to learn about ZenML is through our comprehensive documentation and tutorials.

📚 More examples

  1. Agent Architecture Comparison - Compare AI agents with LangGraph workflows, LiteLLM integration, and automatic visualizations via custom materializers
  2. Deploying ML Models - Deploy classical ML models as production endpoints with monitoring and versioning
  3. Deploying Agents - Document analysis service with pipelines, evaluation, and embedded web UI
  4. E2E Batch Inference - Complete MLOps pipeline with feature engineering
  5. LLM RAG Pipeline - Production RAG with evaluation loops
  6. Agentic Workflow (Deep Research) - Orchestrate your agents with ZenML
  7. Fine-tuning Pipeline - Fine-tune and deploy LLMs

🗣️ Chat With Your Pipelines: ZenML MCP Server

Stop clicking through dashboards to understand your ML workflows. The ZenML MCP Server lets you query your pipelines, analyze runs, and trigger deployments using natural language through Claude Desktop, Cursor, or any MCP-compatible client.

💬 "Which pipeline runs failed this week and why?"
📊 "Show me accuracy metrics for all my customer churn models"  
🚀 "Trigger the latest fraud detection pipeline with production data"

Quick Setup:

  1. Download the .dxt file from zenml-io/mcp-zenml
  2. Drag it into Claude Desktop settings
  3. Add your ZenML server URL and API key
  4. Start chatting with your ML infrastructure
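For MCP clients that take a JSON configuration instead of a .dxt file (e.g. Cursor), the server entry typically looks like the sketch below. The `command` and `args` here are assumptions for illustration; check zenml-io/mcp-zenml for the exact invocation. `ZENML_STORE_URL` and `ZENML_STORE_API_KEY` are the standard ZenML connection variables.

```json
{
  "mcpServers": {
    "zenml": {
      "command": "uvx",
      "args": ["mcp-zenml"],
      "env": {
        "ZENML_STORE_URL": "https://your-zenml-server.example.com",
        "ZENML_STORE_API_KEY": "<your-api-key>"
      }
    }
  }
}
```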

The MCP (Model Context Protocol) integration transforms your ZenML metadata into conversational insights, making pipeline debugging and analysis as easy as asking a question. Perfect for teams who want to democratize access to ML operations without requiring dashboard expertise.

🤖 Kitaru: Durable AI Agents

Building AI agents that need to survive crashes, pause for human approval, or run on cloud infrastructure? Kitaru is our open-source sister project for making Python agents durable.

  • Crash recovery — checkpoint and replay from failure, not from scratch
  • Human-in-the-loop — built-in approval gates and wait points
  • Persistent memory — versioned, durable state across agent runs with full audit trail
  • Framework agnostic — works with PydanticAI, CrewAI, or raw Python
  • Runs anywhere — local, Kubernetes, Vertex AI, SageMaker, AzureML

Built on the same infrastructure that powers ZenML. Two decorators (@flow + @checkpoint) and you're done.
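For Kitaru's actual API, see its docs; purely as a conceptual illustration of checkpoint-and-replay durability, here is the idea in plain Python (this is not Kitaru code — function and file names are invented for the sketch):

```python
import json
import os


def run_with_checkpoints(steps, state_path):
    """Run a list of (name, fn) steps, persisting results after each one.

    On restart after a crash, steps recorded in the state file are
    replayed from the checkpoint instead of being re-executed.
    """
    state = {}
    if os.path.exists(state_path):
        with open(state_path) as f:
            state = json.load(f)

    for name, fn in steps:
        if name in state:
            continue  # completed in a previous run: replay, don't re-execute
        state[name] = fn(state)  # each step may read earlier results
        with open(state_path, "w") as f:
            json.dump(state, f)  # checkpoint after every step

    return state
```

A durable-agent framework layers the same mechanism under decorators, adds versioned state and audit trails, and lets a "wait point" simply be a step that does not complete until a human approves.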

pip install kitaru

👉 kitaru.ai · GitHub · Docs

🎓 Books & Resources

ZenML is featured in these comprehensive guides to production AI systems.

🤝 Join ML Engineers Building the Future of AI

Contribute:

Stay Updated:

  • 🗺 Public Roadmap - See what's coming next
  • 📰 Blog - Best practices and case studies
  • 🎙 Slack - Talk with AI practitioners

❓ FAQs from ML Engineers Like You

Q: "Do I need to rewrite my agents or models to use ZenML?"

A: No. Wrap your existing code in a @step. Keep using scikit-learn, PyTorch, LangGraph, LlamaIndex, or raw API calls. ZenML orchestrates your tools, it doesn't replace them.

Q: "How is this different from LangSmith/Langfuse?"

A: They provide excellent observability for LLM applications. We orchestrate the full MLOps lifecycle for your entire AI stack. With ZenML, you manage both your classical ML models and your AI agents in one unified framework, from development and evaluation all the way to production deployment.

Q: "Can I use my existing MLflow/W&B setup?"

A: Yes! ZenML integrates with both MLflow and Weights & Biases. Your experiments, our pipelines.

Q: "Is this just MLflow with extra steps?"

A: No. MLflow tracks experiments. We orchestrate the entire development process – from training and evaluation to deployment and monitoring – for both models and agents.

Q: "How do I configure ZenML with Kubernetes?"

A: ZenML integrates with Kubernetes through the native Kubernetes orchestrator, Kubeflow, and other K8s-based orchestrators. See our Kubernetes orchestrator guide and Kubeflow guide, plus deployment documentation.

Q: "What about cost? I can't afford another platform."

A: ZenML's open-source version is free forever. You likely already have the required infrastructure (like a Kubernetes cluster and object storage). We just help you make better use of it for MLOps.

🛠 VS Code / Cursor Extension

Manage pipelines directly from your editor:

🖥️ VS Code Extension in Action!

Install from VS Code Marketplace.

📜 License

ZenML is distributed under the terms of the Apache License Version 2.0. See LICENSE for details.


Linux Foundation Silver Member      CNCF Silver Member

Project details


Release history

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

zenml-0.94.2.tar.gz (7.1 MB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

zenml-0.94.2-py3-none-any.whl (8.2 MB)

Uploaded Python 3

File details

Details for the file zenml-0.94.2.tar.gz.

File metadata

  • Download URL: zenml-0.94.2.tar.gz
  • Upload date:
  • Size: 7.1 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for zenml-0.94.2.tar.gz
  • SHA256: e2bc7f245bb910b4cc4bd902cc2b0c7fa2fec4c1c004ea61f24b1aca3aac4efa
  • MD5: f5f00be1d7352c816be19c902ecfba25
  • BLAKE2b-256: 5b381496321bbbacf82aa716afb74108a6aeb72fa958201f14af71d06ecaf1ac

See more details on using hashes here.

Provenance

The following attestation bundles were made for zenml-0.94.2.tar.gz:

Publisher: release.yml on zenml-io/zenml

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file zenml-0.94.2-py3-none-any.whl.

File metadata

  • Download URL: zenml-0.94.2-py3-none-any.whl
  • Upload date:
  • Size: 8.2 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for zenml-0.94.2-py3-none-any.whl
  • SHA256: c88bbca1a708092d3cd6274af46b2ada1e76e72636d6d4fded819a155e760dfe
  • MD5: cb455e646107769b57d36b27fbd856a6
  • BLAKE2b-256: c392f662ca3d9d797b26bdff1dfd5c976f3c83de1616e72d56ad495387e5377e

See more details on using hashes here.

Provenance

The following attestation bundles were made for zenml-0.94.2-py3-none-any.whl:

Publisher: release.yml on zenml-io/zenml

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
