# ai-workflows

Composable AI workflow framework built on LangGraph + MCP.

A framework for building multi-step AI workflows that plan, execute, validate, and recover from failures, with support for multiple models (Claude, Gemini, Ollama), human approval steps, and resumable runs backed by persistent state. It is LangGraph-native and built for solo developers, orchestrating workflows with durable state, multi-provider routing, and deterministic cost accounting across Gemini (via LiteLLM), Qwen (via Ollama), and Claude Code (via an OAuth CLI subprocess).
## Status
| Milestone | State |
|---|---|
| M1 — Reconciliation & cleanup | Complete (2026-04-19) |
| M2 — Graph-layer adapters + provider drivers | Complete (2026-04-19) |
| M3 — First workflow (planner, single tier) | Complete (2026-04-20) |
| M4 — MCP server (FastMCP) | Complete (2026-04-20) |
| M5 — Multi-tier planner | Complete (2026-04-20) |
| M6 — slice_refactor DAG | Complete (2026-04-20) |
| M7 — Eval harness | Complete (2026-04-21) |
| M8 — Ollama infrastructure | Complete (2026-04-21) |
| M9 — Claude Code skill packaging | Complete (2026-04-21) |
| M10 — Ollama fault-tolerance hardening | Planned |
| M11 — MCP gate-review surface | Complete (2026-04-22) |
| M12 — Tiered audit cascade | Complete (2026-04-29) |
| M13 — v0.1.0 release + PyPI packaging | Complete (2026-04-22) |
| M14 — MCP HTTP transport | Complete (2026-04-22) |
| M15 — Tier overlay + fallback chains | Complete (2026-04-30) |
| M16 — External workflows + primitives load path | Complete (2026-04-24) |
| M17 — scaffold_workflow meta-workflow | Complete (2026-04-30) |
| M19 — Declarative authoring surface | Complete (2026-04-26) |
| M20 — Autonomy loop optimization | Complete (2026-04-28) |
| M21 — Autonomy loop continuation | Complete (2026-04-29) |
## What it is
ai-workflows exposes two surfaces over the same workflow registry: an aiw CLI for interactive and scripted use, and an aiw-mcp MCP server for Claude Code, Cursor, Zed, and browser-origin consumers (via streamable-HTTP). A workflow is a Python module that builds a LangGraph StateGraph composed of graph primitives (TieredNode, ValidatorNode, HumanGate, RetryingEdge) and registered by name. There is no hosted control plane and no Anthropic API dependency — Claude access is OAuth-only through the claude CLI subprocess.
## Architecture at a glance
Four layers with a one-way dependency direction enforced by import-linter:
```text
surfaces   (ai_workflows.cli, ai_workflows.mcp)
    ↓
workflows  (ai_workflows.workflows.*)  — concrete LangGraph StateGraphs
    ↓
graph      (ai_workflows.graph.*)      — LangGraph adapters over primitives
    ↓
primitives (ai_workflows.primitives.*) — storage, cost, tiers, providers, retry, logging
```
Full overview in docs/architecture.md. Tutorials for authoring a new workflow or extending the graph layer live at docs/writing-a-workflow.md and docs/writing-a-graph-primitive.md.
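A one-way layer dependency like this is typically expressed as a `layers` contract in import-linter. The fragment below is an illustrative sketch of what such a contract could look like in `pyproject.toml`; the repo's actual contract may name its layers differently.

```toml
# Hypothetical import-linter contract for the four layers above.
[tool.importlinter]
root_package = "ai_workflows"

[[tool.importlinter.contracts]]
name = "Four layers, one-way dependencies"
type = "layers"
layers = [
    "ai_workflows.cli | ai_workflows.mcp",
    "ai_workflows.workflows",
    "ai_workflows.graph",
    "ai_workflows.primitives",
]
```

With a `layers` contract, a higher layer may import from any layer below it, but any upward import fails `lint-imports`.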
## Install
Requires Python ≥ 3.12 and uv.
One-shot via `uvx` — no persistent install; every invocation fetches the wheel into a cache:

```shell
uvx --from jmdl-ai-workflows aiw run planner --goal 'Write a release checklist' --run-id demo
```
Persistent tool install — puts `aiw` + `aiw-mcp` on PATH:

```shell
uv tool install jmdl-ai-workflows
aiw run planner --goal 'Write a release checklist' --run-id demo
```
## Getting started
After installing (either path above), set your Gemini API key and drive a planner run end-to-end:
```shell
export GEMINI_API_KEY=...
aiw run planner --goal 'Write a release checklist' --run-id demo
aiw resume demo --approve
aiw list-runs
```
The planner workflow composes two LLM tiers (a Qwen explorer via Ollama + a Claude Code Opus synth). If you only want the Gemini path for a smoke test, pass `--tier-override planner-synth=planner-explorer`, or omit the Ollama + Claude Code prerequisites and stub the gemini_flash tier.
## Setup
Both aiw and aiw-mcp auto-load a .env from the current working directory at startup (shell-exported values win over .env).
Key env vars:
- `GEMINI_API_KEY` — required for any workflow using a Gemini tier (most defaults).
- `OLLAMA_BASE_URL` — default `http://localhost:11434`; override if your Ollama daemon listens elsewhere.
- `AIW_STORAGE_DB` / `AIW_CHECKPOINT_DB` — path overrides for the run registry and checkpoint databases (defaults: `~/.ai-workflows/storage.sqlite3` / `~/.ai-workflows/checkpoint.sqlite3`).
Claude Code tier: some workflows route to the claude CLI via OAuth. Install and authenticate it separately per Anthropic's setup docs. aiw never reads ANTHROPIC_API_KEY and never imports the anthropic SDK — Claude access is OAuth-only through the CLI subprocess.
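The stated precedence rule (shell-exported values win over `.env`) can be sketched in a few lines. This is an assumed model of the behavior, not ai_workflows source: the helper below fills in `.env` entries only where the environment has no value yet.

```python
# Sketch (assumed behavior, not ai_workflows source) of .env precedence:
# values already exported in the shell win over entries in the .env file.
def apply_dotenv(dotenv_lines, environ):
    """Fill in .env values only where the environment has no value yet."""
    for raw in dotenv_lines:
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, _, value = line.partition("=")
        environ.setdefault(key.strip(), value.strip())
    return environ

env = {"GEMINI_API_KEY": "from-shell"}  # pretend this was exported in the shell
apply_dotenv(
    ["# .env", "GEMINI_API_KEY=from-file", "OLLAMA_BASE_URL=http://localhost:11434"],
    env,
)
print(env["GEMINI_API_KEY"])   # from-shell (shell export wins)
print(env["OLLAMA_BASE_URL"])  # filled from the .env file
```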
## Extending ai-workflows
ai-workflows is a declarative orchestration layer; extension is a first-class capability. Authors engage at four progressively-deeper tiers, each with a dedicated guide:
| Tier | When | Guide |
|---|---|---|
| 1 — Compose | You're combining built-in step types (LLMStep, ValidateStep, GateStep, TransformStep, FanOutStep) into a workflow. The happy path. | docs/writing-a-workflow.md |
| 2 — Parameterise | You're configuring built-in steps (retry policy, response format, gate behaviour, tier choice). | docs/writing-a-workflow.md (same doc) |
| 3 — Author a custom step type | No built-in covers your need. Subclass Step; the framework wires your custom step into the graph like a built-in. | docs/writing-a-custom-step.md |
| 4 — Escape to LangGraph directly | Your topology is genuinely non-standard (dynamic edge conditions, novel control flow). Use the legacy register(name, build_fn) API. | docs/writing-a-graph-primitive.md |
The framework's promise: descending a tier never forces you to reverse-engineer framework source. If you're at the wrong tier, you'll find pointers to the right one in any guide.
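Tier 3 can be sketched without the framework at hand. The snippet below is a self-contained, illustrative stand-in: the real `Step` base class and its hook names live in ai_workflows, so everything here is an assumption about shape, not the framework's actual signatures.

```python
# Tier-3 sketch: a custom step type. `Step` and `run` are illustrative
# stand-ins for the framework's base class, not its actual API.
class Step:
    """Stand-in base: a step transforms workflow state into new state."""
    def run(self, state: dict) -> dict:
        raise NotImplementedError

class WordCountStep(Step):
    """Custom step: annotate the state with a word count for a given key."""
    def __init__(self, source_key: str, target_key: str) -> None:
        self.source_key = source_key
        self.target_key = target_key

    def run(self, state: dict) -> dict:
        count = len(state.get(self.source_key, "").split())
        return {**state, self.target_key: count}

step = WordCountStep(source_key="plan", target_key="plan_words")
new_state = step.run({"plan": "draft test ship"})
print(new_state["plan_words"])  # 3
```

The point of the tier is the shape, not the arithmetic: a custom step is a state-in, state-out unit that the framework can wire into the graph like any built-in.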
## MCP server
Register `aiw-mcp` with any MCP host — Claude Code, Cursor, Zed, or an HTTP client via the streamable-HTTP transport — to drive the same workflows from inside the host:

```shell
claude mcp add ai-workflows --scope user -- uvx --from jmdl-ai-workflows aiw-mcp
```
The HTTP transport is opt-in for browser-origin consumers: `aiw-mcp --transport http --port 8080 --cors-origin http://localhost:3000`.
Registering your own workflow modules from a downstream package? AIW_EXTRA_WORKFLOW_MODULES=pkg.workflows.your_workflow (or --workflow-module pkg.workflows.your_workflow, repeatable) imports them at startup. See docs/writing-a-workflow.md §External workflows from a downstream consumer.
## Security notes
- Loopback default — `aiw-mcp --transport http` binds to `127.0.0.1`; unreachable from other machines. `--host 0.0.0.0` exposes the server to every process on the host and to the LAN. `aiw-mcp` has no built-in auth; the bind address is the only access boundary. Only pass `--host 0.0.0.0` on a machine you own every process on, and put a reverse proxy in front if you need TLS.
- CORS is opt-in, exact-match — `--cors-origin <url>` adds one origin; without any flags the server emits no `Access-Control-Allow-Origin` header (same-origin only). Not required for stdio or loopback HTTP.
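The exact-match semantics can be illustrated with a small sketch. This is an assumed model of the rule described above, not the server's implementation: a header is emitted only when the request's `Origin` exactly equals a configured value.

```python
# Illustrative sketch of exact-match CORS (not the server's actual code):
# emit the header only on an exact Origin hit, otherwise emit nothing,
# which leaves browsers enforcing same-origin.
def cors_headers(request_origin, allowed_origins):
    if request_origin in allowed_origins:
        return {"Access-Control-Allow-Origin": request_origin}
    return {}

allowed = {"http://localhost:3000"}
print(cors_headers("http://localhost:3000", allowed))   # header emitted
print(cors_headers("http://localhost:3000/", allowed))  # trailing slash: no match
```

Exact matching means scheme, host, port, and even a trailing slash must all agree; there is no wildcard or prefix matching under this model.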
## Contributing / from source
Clone the repo for development or to modify the framework itself:
```shell
git clone https://github.com/yeevon/ai-workflows.git
cd ai-workflows
uv sync              # install runtime + dev dependencies
uv run aiw version   # prints the current __version__ (0.4.0 at M17 close)
```
For the full builder/auditor workflow — task specs, audit issue files, Builder / Auditor mode conventions — switch to the design_branch.
## Development
Three gates guard every change:
```shell
uv run pytest          # unit + scaffolding tests (hermetic; skips e2e unless AIW_E2E=1)
uv run lint-imports    # four-layer import contract
uv run ruff check      # style + basic correctness
```
## Next
M21 is complete. The next planned milestone is M22, which will address any operator-resume items from M20/M21 (including T06/T07 dynamic model dispatch if the GO/NO-GO verdict fires) and further autonomy-loop improvements identified from M21's empirical baseline.
Roadmap + per-milestone task files live at design_docs/roadmap.md (builder-only, on design branch).