
ZenML: MLOps for Reliable AI: from Classical AI to Agents.

Project description



One AI Platform: From Pipelines to Agents


Projects | Roadmap | Changelog | Report Bug | Sign up for ZenML Pro | Blog | Docs

🎉 For the latest release, see the changelog.


ZenML is built for ML and AI engineers working on traditional ML use cases, LLM workflows, or agents in a company setting.

At its core, ZenML lets you write workflows (pipelines) that run on any infrastructure backend (stacks). You can embed any Python logic within these pipelines, such as training a model or running an agentic loop. ZenML then operationalizes your application by:

  1. Automatically containerizing and tracking your code.
  2. Tracking individual runs with metrics, logs, and metadata.
  3. Abstracting away infrastructure complexity.
  4. Integrating with your existing tools and infrastructure, e.g. MLflow, LangGraph, Langfuse, SageMaker, GCP Vertex AI, etc.
  5. Letting you iterate quickly on experiments with an observability layer, in development and in production.

...amongst many other features.
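To make this concrete, here is a minimal, illustrative sketch of a pipeline with two steps; the names and toy logic are placeholders, but @step and @pipeline are ZenML's core decorators:

from zenml import pipeline, step

@step
def load_data() -> dict:
    # Placeholder: load or generate some training data.
    return {"features": [[1.0], [2.0], [3.0]], "labels": [0, 1, 1]}

@step
def train_model(data: dict) -> float:
    # Any Python logic can live here: scikit-learn, PyTorch, an agent loop, ...
    return sum(data["labels"]) / len(data["labels"])

@pipeline
def training_pipeline():
    train_model(load_data())

if __name__ == "__main__":
    training_pipeline()  # each call becomes a tracked, reproducible run

Running the file executes the pipeline on your active stack, so the same code can move from a local run to a cloud orchestrator without changes.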

ZenML is used by thousands of companies to run their AI workflows. Here are some featured ones:

Airbus     AXA     JetBrains     Rivian     WiseTech Global     Brevo
Leroy Merlin     Koble     Playtika     NIQ     Enel

(please email support@zenml.io if you want to be featured)

🚀 Get Started (5 minutes)

# Install ZenML with server capabilities
pip install "zenml[server]"  # pip install zenml will install a slimmer client

# Initialize your ZenML repository
zenml init

# Start local server or connect to a remote one
zenml login

You can then explore any of the examples in this repo. We recommend starting with the quickstart, which demonstrates core ZenML concepts: pipelines, steps, artifacts, snapshots, and deployments.

🏗️ Architecture Overview

ZenML uses a client-server architecture with an integrated web dashboard (zenml-io/zenml-dashboard):

  • Local Development: pip install "zenml[local]" - runs both client and server locally
  • Production: Deploy server separately, connect with pip install zenml + zenml login <server-url>
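Either way, the same Python client works against whichever server you are connected to. A minimal sketch, assuming you have already run zenml login (the page size of five is arbitrary):

from zenml.client import Client

client = Client()  # picks up the credentials stored by `zenml login`

# List the most recent pipeline runs on the connected server.
for run in client.list_pipeline_runs(size=5):
    print(run.name, run.status)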

🎮 Demo

A short video demo is linked in the repository README.

🖼️ Resources

The best way to learn about ZenML is through our comprehensive documentation and tutorials.

📚 More examples

  1. Agent Architecture Comparison - Compare AI agents with LangGraph workflows, LiteLLM integration, and automatic visualizations via custom materializers
  2. Deploying ML Models - Deploy classical ML models as production endpoints with monitoring and versioning
  3. Deploying Agents - Document analysis service with pipelines, evaluation, and embedded web UI
  4. E2E Batch Inference - Complete MLOps pipeline with feature engineering
  5. LLM RAG Pipeline - Production RAG with evaluation loops
  6. Agentic Workflow (Deep Research) - Orchestrate your agents with ZenML
  7. Fine-tuning Pipeline - Fine-tune and deploy LLMs

🗣️ Chat With Your Pipelines: ZenML MCP Server

Stop clicking through dashboards to understand your ML workflows. The ZenML MCP Server lets you query your pipelines, analyze runs, and trigger deployments using natural language through Claude Desktop, Cursor, or any MCP-compatible client.

💬 "Which pipeline runs failed this week and why?"
📊 "Show me accuracy metrics for all my customer churn models"  
🚀 "Trigger the latest fraud detection pipeline with production data"

Quick Setup:

  1. Download the .dxt file from zenml-io/mcp-zenml
  2. Drag it into Claude Desktop settings
  3. Add your ZenML server URL and API key
  4. Start chatting with your ML infrastructure

The MCP (Model Context Protocol) integration transforms your ZenML metadata into conversational insights, making pipeline debugging and analysis as easy as asking a question. Perfect for teams who want to democratize access to ML operations without requiring dashboard expertise.

🎓 Books & Resources

ZenML is featured in several comprehensive guides to production AI systems.

🤝 Join ML Engineers Building the Future of AI

Stay Updated:

  • 🗺 Public Roadmap - See what's coming next
  • 📰 Blog - Best practices and case studies
  • 🎙 Slack - Talk with AI practitioners

❓ FAQs from ML Engineers Like You

Q: "Do I need to rewrite my agents or models to use ZenML?"

A: No. Wrap your existing code in a @step. Keep using scikit-learn, PyTorch, LangGraph, LlamaIndex, or raw API calls. ZenML orchestrates your tools, it doesn't replace them.
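For instance, a minimal sketch of wrapping an existing scikit-learn training routine, unchanged except for the decorator (the model choice and type hints are illustrative):

from sklearn.linear_model import LogisticRegression
from zenml import step

@step
def train(features: list, labels: list) -> LogisticRegression:
    # Existing training code, unchanged: ZenML only adds tracking and versioning.
    model = LogisticRegression()
    model.fit(features, labels)
    return model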

Q: "How is this different from LangSmith/Langfuse?"

A: They provide excellent observability for LLM applications. We orchestrate the full MLOps lifecycle for your entire AI stack. With ZenML, you manage both your classical ML models and your AI agents in one unified framework, from development and evaluation all the way to production deployment.

Q: "Can I use my existing MLflow/W&B setup?"

A: Yes! ZenML integrates with both MLflow and Weights & Biases. Your experiments, our pipelines.
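As a sketch, assuming an MLflow experiment tracker is registered in your active stack under the name "mlflow", a step opts in via the decorator and logs to MLflow exactly as before:

import mlflow
from zenml import step

@step(experiment_tracker="mlflow")
def train_with_tracking() -> None:
    # ZenML starts the MLflow run for this step; log params and
    # metrics exactly as you would in a plain MLflow script.
    mlflow.log_param("lr", 0.01)
    mlflow.log_metric("accuracy", 0.93)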

Q: "Is this just MLflow with extra steps?"

A: No. MLflow tracks experiments. We orchestrate the entire development process – from training and evaluation to deployment and monitoring – for both models and agents.

Q: "How do I configure ZenML with Kubernetes?"

A: ZenML integrates with Kubernetes through the native Kubernetes orchestrator, Kubeflow, and other K8s-based orchestrators. See our Kubernetes orchestrator guide and Kubeflow guide, plus deployment documentation.
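As a rough sketch, assuming your active stack already uses ZenML's Kubernetes orchestrator, per-pipeline settings follow this pattern (the settings key, import path, and node-selector values here are illustrative and may vary by ZenML version):

from zenml import pipeline
from zenml.integrations.kubernetes.flavors import KubernetesOrchestratorSettings

# Example: pin pipeline pods to a specific node pool.
k8s_settings = KubernetesOrchestratorSettings(
    pod_settings={"node_selectors": {"cloud.google.com/gke-nodepool": "ml-pool"}}
)

@pipeline(settings={"orchestrator.kubernetes": k8s_settings})
def my_k8s_pipeline():
    ...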

Q: "What about cost? I can't afford another platform."

A: ZenML's open-source version is free forever. You likely already have the required infrastructure (like a Kubernetes cluster and object storage). We just help you make better use of it for MLOps.

🛠 VS Code / Cursor Extension

Manage pipelines directly from your editor:


Install from VS Code Marketplace.

📜 License

ZenML is distributed under the terms of the Apache License Version 2.0. See LICENSE for details.


Download files

Download the file for your platform.

Source Distribution

zenml_nightly-0.93.1.dev20260129.tar.gz (7.0 MB, Source)

Built Distribution


zenml_nightly-0.93.1.dev20260129-py3-none-any.whl (8.1 MB, Python 3)

File details

Details for the file zenml_nightly-0.93.1.dev20260129.tar.gz.

File metadata

  • Download URL: zenml_nightly-0.93.1.dev20260129.tar.gz
  • Size: 7.0 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: poetry/2.3.1 CPython/3.11.14 Linux/6.11.0-1018-azure

File hashes

Hashes for zenml_nightly-0.93.1.dev20260129.tar.gz:

  • SHA256: 08a7527d657208924bd6fda41c18696e29ddd8f89e43ba19c272fe6209dc7dd1
  • MD5: 3091e2c393eafc499ca9bdef1f64659f
  • BLAKE2b-256: 0116b713cc3bba6728273080cc3bb0358766f66e3f2b13788febaa44e03773c6

File details

Details for the file zenml_nightly-0.93.1.dev20260129-py3-none-any.whl.


File hashes

Hashes for zenml_nightly-0.93.1.dev20260129-py3-none-any.whl:

  • SHA256: e9c65af76e2ccd50948bf02d9a5f4c195bf51795dee125cbd30b052e1d1011dd
  • MD5: e07bff2a5902e91bb3fb35bcb72f0eb9
  • BLAKE2b-256: 7c09aba0eea32e169dd534bc7a17929b314569d03714f5779a7141cf79f3b154
