Composable AI workflow framework built on LangGraph + MCP.
ai-workflows
A LangGraph-native framework, aimed at solo developers, for building multi-step AI workflows that plan, execute, validate, and recover from failures. It supports human approval steps and resumable runs with durable state, and routes across multiple providers with deterministic cost accounting: Gemini (via LiteLLM), Qwen (via Ollama), and Claude Code (via OAuth CLI subprocess).
Status
| Milestone | State |
|---|---|
| M1 — Reconciliation & cleanup | Complete (2026-04-19) |
| M2 — Graph-layer adapters + provider drivers | Complete (2026-04-19) |
| M3 — First workflow (planner, single tier) | Complete (2026-04-20) |
| M4 — MCP server (FastMCP) | Complete (2026-04-20) |
| M5 — Multi-tier planner | Complete (2026-04-20) |
| M6 — slice_refactor DAG | Complete (2026-04-20) |
| M7 — Eval harness | Complete (2026-04-21) |
| M8 — Ollama infrastructure | Complete (2026-04-21) |
| M9 — Claude Code skill packaging | Complete (2026-04-21) |
| M10 — Ollama fault-tolerance hardening | Planned |
| M11 — MCP gate-review surface | Complete (2026-04-22) |
| M12 — Tiered audit cascade | Planned |
| M13 — v0.1.0 release + PyPI packaging | Complete (2026-04-22) |
| M14 — MCP HTTP transport | Complete (2026-04-22) |
What it is
ai-workflows exposes two surfaces over the same workflow registry: an aiw CLI for interactive and scripted use, and an aiw-mcp MCP server for Claude Code, Cursor, Zed, and browser-origin consumers (via streamable-HTTP). A workflow is a Python module that builds a LangGraph StateGraph composed of graph primitives (TieredNode, ValidatorNode, HumanGate, RetryingEdge) and registered by name. There is no hosted control plane and no Anthropic API dependency — Claude access is OAuth-only through the claude CLI subprocess.
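To make that concrete, here is a minimal sketch of what a workflow module could look like. The StateGraph calls are the real LangGraph API; the state schema, the node bodies, and the register_workflow hook are hypothetical placeholders standing in for the framework's actual primitives (TieredNode, ValidatorNode, HumanGate, RetryingEdge), not its real code.

```python
# Illustrative sketch only: real LangGraph API, hypothetical framework hooks.
from typing import TypedDict

from langgraph.graph import StateGraph, START, END


class PlannerState(TypedDict):
    goal: str
    plan: str


def explore(state: PlannerState) -> dict:
    # In the real framework this step would be a TieredNode that routes the
    # call to a provider tier (Gemini, Qwen, or Claude Code).
    return {"plan": f"draft plan for: {state['goal']}"}


def validate(state: PlannerState) -> dict:
    # A ValidatorNode would inspect the draft and drive a RetryingEdge on failure.
    return {}


def build() -> StateGraph:
    graph = StateGraph(PlannerState)
    graph.add_node("explore", explore)
    graph.add_node("validate", validate)
    graph.add_edge(START, "explore")
    graph.add_edge("explore", "validate")
    graph.add_edge("validate", END)
    return graph  # the framework would compile this with its checkpointer


# Hypothetical registration hook, shown only to convey lookup-by-name:
# register_workflow("planner", build)
```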
Architecture at a glance
Four layers with a one-way dependency direction enforced by import-linter:
surfaces (ai_workflows.cli, ai_workflows.mcp)
↓
workflows (ai_workflows.workflows.*) — concrete LangGraph StateGraphs
↓
graph (ai_workflows.graph.*) — LangGraph adapters over primitives
↓
primitives (ai_workflows.primitives.*) — storage, cost, tiers, providers, retry, logging
Full overview in docs/architecture.md. Tutorials for authoring a new workflow or extending the graph layer live at docs/writing-a-workflow.md and docs/writing-a-graph-primitive.md.
Install
Requires Python ≥ 3.12 and uv.
One-shot via uvx — nothing is added to PATH; the package is resolved into uv's cache and run from there:
uvx --from jmdl-ai-workflows aiw run planner --goal 'Write a release checklist' --run-id demo
Persistent tool install — puts aiw + aiw-mcp on PATH:
uv tool install jmdl-ai-workflows
aiw run planner --goal 'Write a release checklist' --run-id demo
Setup
aiw and aiw-mcp read configuration from environment variables. As of 0.1.1 both binaries auto-load a .env file from your current working directory at startup (shell-exported values always win over .env values).
Required for most workflows
- GEMINI_API_KEY — Google Gemini key, used by the LiteLLM adapter. Get one at https://aistudio.google.com/apikey. Required for any workflow that routes through a Gemini tier (the default planner-explorer / planner-synth paths cover this).
Optional
- OLLAMA_BASE_URL — default http://localhost:11434. Set this if your Ollama daemon listens elsewhere. Needed only when a workflow routes to the local Qwen tier.
- AIW_STORAGE_DB — path override for the run registry database. Defaults to ~/.ai-workflows/storage.sqlite3.
- AIW_CHECKPOINT_DB — path override for the LangGraph checkpoint database. Defaults to ~/.ai-workflows/checkpoint.sqlite3.
Claude Code tier (no API key needed)
Some workflows route to the claude CLI via OAuth — install and authenticate it separately per Anthropic's setup docs. aiw never reads ANTHROPIC_API_KEY and never imports the anthropic SDK; Claude access is OAuth-only through the CLI subprocess.
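The mechanics of a CLI-backed tier reduce to spawning the authenticated claude binary as a subprocess and capturing its output. The sketch below is an illustration under that assumption, not the framework's actual driver; only the -p (print / non-interactive) flag is taken from Anthropic's CLI.

```python
# Illustrative only: a minimal way a CLI-subprocess tier could call the
# authenticated `claude` binary. The real driver in ai_workflows differs.
import subprocess


def claude_cli_complete(prompt: str, timeout: float = 300.0) -> str:
    """Run a single non-interactive Claude Code turn and return its output."""
    result = subprocess.run(
        ["claude", "-p", prompt],   # -p / --print: non-interactive mode
        capture_output=True,
        text=True,
        timeout=timeout,
        check=True,                 # raise if the CLI exits non-zero
    )
    return result.stdout.strip()


if __name__ == "__main__":
    print(claude_cli_complete("Summarize the four-layer architecture in one line."))
```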
.env auto-load
Create a .env in the directory you run aiw / aiw-mcp from:
# .env
GEMINI_API_KEY=your-key-here
# OLLAMA_BASE_URL=http://localhost:11434 # uncomment if non-default
A shell-exported value wins over the .env value, which is the precedence you want for CI pipelines, direnv setups, and one-off inline overrides (GEMINI_API_KEY=x aiw run ...).
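That precedence matches python-dotenv's default load_dotenv(override=False) behavior, where values already present in the environment are never overwritten by the file. Whether aiw uses python-dotenv under the hood is an assumption; the snippet below only demonstrates the semantics described above.

```python
# Demonstrates the described precedence: a shell-exported variable survives
# .env auto-loading. Assumes python-dotenv; aiw's actual loader may differ.
import os

from dotenv import load_dotenv

os.environ["GEMINI_API_KEY"] = "from-shell"   # simulate `export GEMINI_API_KEY=...`
load_dotenv(override=False)                   # .env values only fill gaps
print(os.environ["GEMINI_API_KEY"])           # -> "from-shell", not the .env value
```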
Troubleshooting
Seeing a 401 Unauthorized from LiteLLM? Your GEMINI_API_KEY is missing, invalid, or out of quota. Confirm it with echo $GEMINI_API_KEY in the shell you're running aiw from, or verify the value in your .env.
Getting started
After Setup, drive a planner run end-to-end:
aiw run planner --goal 'Write a release checklist' --run-id demo
aiw resume demo --approve
aiw list-runs
The planner workflow composes two LLM tiers (a Qwen explorer via Ollama and a Claude Code Opus synth). If you only want the Gemini path for a smoke test, pass --tier-override planner-synth=planner-explorer, or skip the Ollama and Claude Code prerequisites and stub the gemini_flash tier.
MCP server
Register aiw-mcp with any MCP host — Claude Code, Cursor, Zed, or an HTTP client via the streamable-HTTP transport — to drive the same workflows from inside those hosts:
claude mcp add ai-workflows --scope user -- uvx --from jmdl-ai-workflows aiw-mcp
The HTTP transport is opt-in for browser-origin consumers: aiw-mcp --transport http --port 8080 --cors-origin http://localhost:3000. A full skill-install walkthrough lives on the design branch (builder-only).
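To smoke-test the HTTP transport without a full MCP host, the official MCP Python SDK's streamable-HTTP client can list the exposed tools. The sketch below makes two assumptions: the mcp SDK is installed, and the server is mounted at http://localhost:8080/mcp (the exact path aiw-mcp serves is a guess).

```python
# Sketch: connect to a running `aiw-mcp --transport http --port 8080` instance.
# Assumes the standard MCP Python SDK and a /mcp mount path.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client


async def main() -> None:
    async with streamablehttp_client("http://localhost:8080/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])


asyncio.run(main())
```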
Contributing / from source
Clone the repo for development or to modify the framework itself:
git clone https://github.com/yeevon/ai-workflows.git
cd ai-workflows
uv sync # install runtime + dev dependencies
uv run aiw version # prints the installed framework version
For the full builder/auditor workflow — task specs, audit issue files, Builder / Auditor mode conventions — switch to the design branch (builder-only).
Development
Three gates guard every change:
uv run pytest # unit + scaffolding tests (hermetic; skips e2e unless AIW_E2E=1)
uv run lint-imports # four-layer import contract
uv run ruff check # style + basic correctness
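The AIW_E2E=1 opt-in noted in the pytest gate is presumably an ordinary skip guard; the sketch below shows that pattern and is not copied from the project's conftest.

```python
# Sketch of an opt-in e2e marker, assuming the AIW_E2E=1 convention above.
import os

import pytest

requires_e2e = pytest.mark.skipif(
    os.environ.get("AIW_E2E") != "1",
    reason="end-to-end test; set AIW_E2E=1 to run",
)


@requires_e2e
def test_planner_run_end_to_end():
    ...  # would exercise a real `aiw run planner ...` invocation
```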
Next
Roadmap + per-milestone task files live at design_docs/roadmap.md (builder-only, on design branch).
File details
Details for the file jmdl_ai_workflows-0.1.1.tar.gz.
File metadata
- Download URL: jmdl_ai_workflows-0.1.1.tar.gz
- Upload date:
- Size: 615.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.10.8 {"installer":{"name":"uv","version":"0.10.8","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null}
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 5679dc3a5047472489b6b73035902168398a727927159d73c0c02e9fdf93655c |
| MD5 | 5eb93e9e96e50f254c20c816aa32db91 |
| BLAKE2b-256 | a94a59f1953d0cd71b8a7bfcf7f99be247009db733c90786440cce62d755011b |
File details
Details for the file jmdl_ai_workflows-0.1.1-py3-none-any.whl.
File metadata
- Download URL: jmdl_ai_workflows-0.1.1-py3-none-any.whl
- Upload date:
- Size: 158.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.10.8 {"installer":{"name":"uv","version":"0.10.8","subcommand":["publish"]},"python":null,"implementation":{"name":null,"version":null},"distro":{"name":"Ubuntu","version":"24.04","id":"noble","libc":null},"system":{"name":null,"release":null},"cpu":null,"openssl_version":null,"setuptools_version":null,"rustc_version":null,"ci":null}
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 731d58f597af2d1109f3a40819f2d0b5f829ae659d5906fd692a2e0e6fe40b1e |
| MD5 | 52deca6245b7e40c385e41ca6a3c88ab |
| BLAKE2b-256 | 7392f3f93c0dddd591bfb75f53299733041d5bb64054c7cc1c10748525f3bb53 |