Durable execution for AI agents, built on ZenML

Kitaru

You build the agent. Kitaru runs everything around it.

Kitaru (来る, "to arrive") is the open-source platform layer for AI agents in production. Wrap an existing agent or write raw Python — Kitaru gives you checkpointed execution, human-in-the-loop waits, durable memory, and deployment on any cloud. Any framework. Any model.

Docs · Quick Start · Examples · Getting Started Guide · Roadmap · Community


Kitaru Dashboard

Your agent crashed at step 7. Kitaru replays from step 7 — not from scratch.

Add two decorators to your existing Python code and get crash recovery, human approval gates, durable memory, and a full dashboard. No rewrite. No graph DSL. No framework lock-in.
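To make the replay claim concrete, here is a framework-free sketch of the idea behind checkpointed execution: each step's output is persisted before the flow moves on, so a re-run skips completed work. This illustrates the concept only; it is not Kitaru's actual implementation.

```python
import json
import tempfile
from pathlib import Path

class CheckpointStore:
    """Persist each step's output so a re-run can skip completed work."""
    def __init__(self, path):
        self.path = Path(path)
        self.cache = json.loads(self.path.read_text()) if self.path.exists() else {}

    def run(self, name, fn, *args):
        if name in self.cache:            # step already completed on a previous run
            return self.cache[name]
        result = fn(*args)
        self.cache[name] = result
        self.path.write_text(json.dumps(self.cache))  # durable before moving on
        return result

calls = []

def fetch(url):
    calls.append("fetch")
    return "data from " + url

def process(data):
    calls.append("process")
    return data.upper()

store_path = Path(tempfile.mkdtemp()) / "checkpoints.json"

# First run: both steps execute and their outputs are persisted.
store = CheckpointStore(store_path)
out1 = store.run("process", process, store.run("fetch", fetch, "example.com"))

# Simulated crash and re-run: completed steps replay from disk, not recompute.
store = CheckpointStore(store_path)
out2 = store.run("process", process, store.run("fetch", fetch, "example.com"))

assert out1 == out2 == "DATA FROM EXAMPLE.COM"
assert calls == ["fetch", "process"]      # the second run did no new work
```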

Why Kitaru?

Works with your agent SDK

Wrap an existing PydanticAI agent with KitaruAgent — no rewrite. For agents built on the OpenAI Agents SDK, Anthropic Agent SDK, or raw Python, use @flow and @checkpoint around your calls. Your model, your tools, your framework — Kitaru wraps them, not the other way around.

from kitaru import flow
from kitaru.adapters.pydantic_ai import KitaruAgent
from pydantic_ai import Agent

researcher = KitaruAgent(
    Agent("openai:gpt-5.4", system_prompt="You summarize research topics.")
)

@flow
def research_flow(topic: str) -> str:
    return researcher.run_sync(topic).output

Python-first, no graph DSL

Write normal Python. Use if, for, try/except — whatever your agent needs. Kitaru gives you two decorators (@flow and @checkpoint) and a handful of utility functions. That's it.

from kitaru import checkpoint, flow

@checkpoint
def research(topic: str) -> str:
    return do_research(topic)

@checkpoint
def write_draft(research: str) -> str:
    return generate_draft(research)

@flow
def writing_agent(topic: str) -> str:
    data = research(topic)
    return write_draft(data)

result = writing_agent.run("quantum computing").wait()

Durable execution and memory

Kitaru persists agent state to disk (or your server's database), not just in process memory. Checkpoints persist intermediate outputs so you can replay from failure, resume waiting runs, and inspect what happened. Durable memory adds scoped, versioned state for long-running agents, accessible from Python, the CLI, the client, and MCP.
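The following toy class sketches what "scoped, versioned" memory means in practice: values are namespaced by scope, every write appends a version, and older versions stay readable. It is an illustration of the concept, not Kitaru's memory API.

```python
class DurableMemory:
    """Toy scoped, versioned key-value memory (concept sketch, not Kitaru's API)."""
    def __init__(self):
        self._scopes = {}  # scope -> key -> list of versions, oldest first

    def put(self, scope, key, value):
        self._scopes.setdefault(scope, {}).setdefault(key, []).append(value)

    def get(self, scope, key, version=-1):
        # Default: latest version; pass an index to read an older one.
        return self._scopes[scope][key][version]

    def history(self, scope, key):
        return list(self._scopes[scope][key])

mem = DurableMemory()
mem.put("agent:researcher", "summary", "v1 draft")
mem.put("agent:researcher", "summary", "v2 draft")

assert mem.get("agent:researcher", "summary") == "v2 draft"
assert mem.get("agent:researcher", "summary", version=0) == "v1 draft"
assert mem.history("agent:researcher", "summary") == ["v1 draft", "v2 draft"]
```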

Deploy on your cloud

No workers, no message queues, no distributed systems PhD required. Kitaru runs locally with zero config, and scales to production with a single server backed by a SQL database. Deploy your agents to Kubernetes, Vertex AI, SageMaker, or AzureML using Kitaru's stack abstraction. Your registry, your deployer, your infrastructure.

Built-in UI

Every execution is observable from day one. See your agent runs, inspect checkpoint outputs, and approve human-in-the-loop wait steps, all from a visual dashboard that ships with the Kitaru server.

To start the server locally, run kitaru login after installing kitaru[local]. To connect to an existing remote server, run kitaru login <server>.
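A human-in-the-loop wait step is, at its core, a small state machine: the run parks in a waiting state until someone approves it from the dashboard, then resumes. This plain-Python sketch shows that lifecycle under assumed state names; it is not Kitaru's internal model.

```python
from enum import Enum

class RunState(Enum):
    RUNNING = "running"
    WAITING = "waiting"   # parked until a human approves
    DONE = "done"

class Run:
    def __init__(self):
        self.state = RunState.RUNNING
        self.result = None

    def request_approval(self):
        # In a durable system this transition is persisted and survives restarts.
        self.state = RunState.WAITING

    def approve(self):
        if self.state is not RunState.WAITING:
            raise RuntimeError("nothing to approve")
        self.state = RunState.RUNNING

    def finish(self, result):
        self.state = RunState.DONE
        self.result = result

run = Run()
run.request_approval()
assert run.state is RunState.WAITING   # what the dashboard shows as "awaiting approval"
run.approve()                          # the human clicks approve
run.finish("published")
assert run.state is RunState.DONE and run.result == "published"
```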

Quick Start

Install

pip install kitaru

Or with uv (recommended):

uv pip install kitaru

To wrap a PydanticAI agent, install the adapter extra:

uv pip install "kitaru[pydantic-ai]"

Optional: start a local Kitaru server

Flows run locally by default with the base install. If you also want the local dashboard and REST API, install the local extra and then run bare kitaru login:

uv pip install "kitaru[local]"
kitaru login
kitaru status

Optional: connect to an existing remote Kitaru server

If you already have a deployed Kitaru server, connect to it explicitly:

kitaru login https://my-server.example.com
# add --project <PROJECT> or other remote-login flags if your setup requires them
kitaru status

Initialize your project

kitaru init

Write your first flow

# agent.py
from kitaru import checkpoint, flow

@checkpoint
def fetch_data(url: str) -> str:
    return "some data"

@checkpoint
def process_data(data: str) -> str:
    return data.upper()

@flow
def my_agent(url: str) -> str:
    data = fetch_data(url)
    return process_data(data)

result = my_agent.run("https://example.com").wait()
print(result)  # SOME DATA

Run it

python agent.py

Every checkpoint's output is persisted automatically. You can inspect what happened, replay from any checkpoint, or resume a waiting flow:

kitaru executions list
kitaru executions get <EXECUTION_ID>
kitaru executions logs <EXECUTION_ID>
kitaru executions replay <EXECUTION_ID> --from process_data
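Conceptually, replaying "from" a checkpoint means loading the persisted outputs of every earlier step and restarting execution at the named one. This standalone sketch shows that semantics for a linear flow, under the assumption of simple name-keyed stored outputs:

```python
def replay_from(steps, stored_outputs, start):
    """Re-run a linear flow from `start`, reusing stored outputs for earlier steps."""
    outputs = dict(stored_outputs)
    names = [name for name, _ in steps]
    i = names.index(start)
    prev = outputs[names[i - 1]] if i > 0 else None  # load the upstream output
    for name, fn in steps[i:]:                       # execute only from `start` onward
        prev = fn(prev)
        outputs[name] = prev
    return outputs

steps = [
    ("fetch_data", lambda _: "some data"),
    ("process_data", lambda data: data.upper()),
]
# Outputs persisted by the original run:
stored = {"fetch_data": "some data"}

out = replay_from(steps, stored, "process_data")
assert out["process_data"] == "SOME DATA"   # fetch_data was reused, not re-executed
```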

Learn more

Resource Description
Getting Started Guide Full setup walkthrough with all examples
Documentation Complete reference and guides
PydanticAI adapter Wrap a PydanticAI agent with KitaruAgent
Memory guide Durable memory concepts, scopes, history, and compaction
Examples Runnable workflows for every feature
Stacks Deploy to Kubernetes, AWS, GCP, or Azure

Origins

Kitaru is built by the team behind ZenML, drawing on five years of production orchestration experience (JetBrains, Adeo, Brevo). The same orchestration primitives (stacks, artifacts, lineage) have been rebuilt here specifically for autonomous agents.

Contributing

We welcome contributions! See CONTRIBUTING.md for development setup, code style, and how to submit changes. The default branch is develop — all PRs should target it.

Community and support

  • Discussions — ask questions, share ideas
  • Issues — report bugs, request features
  • Roadmap — see what's coming next
  • Docs — guides and reference

License

Apache 2.0
