
Lightweight Orchestrated Operational Mesh — Actor-based multi-LLM agent framework


Heddle

CI Docs codecov Ruff License: MPL 2.0 Python 3.11+

Heddle v0.9.0 Status: Active Development

Turn what you know into testable AI steps. Chain them into workflows. Measure whether they work. Scale when ready.


Try It in 60 Seconds

pip install heddle-ai[workshop]                         # install from PyPI
heddle setup                                            # configure (auto-detects Ollama)
heddle workshop                                         # open web UI at localhost:8080

Open your browser → pick a worker (summarizer, classifier, extractor) → paste any text → click Run. No data files needed.

Have Telegram exports? Install with pip install heddle-ai[rag] instead, then run heddle rag ingest, heddle rag search, and heddle rag serve for full social media stream analysis.

Or from source:

git clone https://github.com/getheddle/heddle.git && cd heddle
uv sync --extra workshop
uv run heddle setup

No servers to run. No configuration files to write. The setup wizard handles everything.


What Heddle Does

Most AI tools give you one big prompt and one model. That works until it doesn't — the prompt gets unwieldy, you can't test parts independently, and asking the same model to review its own work doesn't catch real problems.

Heddle splits AI work into focused steps. Each step has a clear job, a typed contract (so you know what goes in and what comes out), and can use a different model. You test steps individually, chain them into pipelines, and measure whether changes help or hurt.

  Document ──► Extract ──► Classify ──► Summarize ──► Report
                 │            │            │
                 │            │            └─ Claude Opus (complex reasoning)
                 │            └─ Ollama local (fast, free)
                 └─ Ollama local (fast, free)
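
A pipeline like the one above maps naturally onto config files. As a sketch of what a single step's definition could look like (the field names here are illustrative, not Heddle's actual schema; heddle new worker generates the real one):

```yaml
# Illustrative worker config. Real field names come from the scaffolder.
name: classifier
model_tier: local            # Ollama: fast, free
input:                       # typed contract: what goes in
  text: string
output:                      # what comes out
  label: string
  confidence: float
prompt: |
  Classify the following text into one of: complaint, question, praise.
  Text: {text}
```

Because the contract is explicit, the step can be tested on its own and swapped to a different model tier without touching the rest of the pipeline.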

Steps run in parallel when they can, and are tested with the built-in Workshop web UI — all without deploying any infrastructure. When you're ready to scale, Heddle adds a message bus (NATS) that connects everything for production use.

The key idea: the bottleneck with AI is never the model's knowledge — it's your ability to give it clear, precise instructions. Heddle makes those instructions testable, version-tracked, and composable. The deeper argument for this approach is in Why Heddle?.


Who This Is For

Anyone hitting the limits of single-prompt AI. Whether you're a student comparing how different models answer questions, a teacher grading essays and checking for bias, or a city clerk categorizing public comments — if you need more than one AI step working together, Heddle gives you a structured way to build that. Start with the six shipped workers in Workshop. No coding needed.

Researchers and analysts — process documents, extract data, build analytical pipelines. Define your own workers in YAML, test them in Workshop, iterate until the output matches your judgment. Heddle's knowledge silos and blind audit pattern let you get genuine adversarial review of AI-generated analysis — not the pseudo-review you get when the same model checks its own work.
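
As a toy illustration of the silo idea (not Heddle's actual API), per-worker access control means an audit worker simply cannot read the silo that produced the draft it reviews:

```python
# Toy illustration of knowledge silos (not Heddle's actual API).
# Each worker holds an allow-list of named silos; reads outside the
# allow-list fail, so a "blind" audit worker cannot consult the raw
# sources behind the analysis it is asked to review.

SILOS = {
    "sources": "raw interview transcripts",
    "analysis": "draft findings derived from the transcripts",
}

class Worker:
    def __init__(self, name: str, allowed: list[str]):
        self.name = name
        self.allowed = set(allowed)

    def read(self, silo: str) -> str:
        # Access control happens here, per worker, per silo.
        if silo not in self.allowed:
            raise PermissionError(f"{self.name} cannot read {silo!r}")
        return SILOS[silo]

analyst = Worker("analyst", ["sources", "analysis"])
auditor = Worker("blind_auditor", ["analysis"])  # no view of the sources

assert analyst.read("sources") == "raw interview transcripts"
assert auditor.read("analysis").startswith("draft findings")
```

The point of the pattern is that the auditor's criticism cannot be anchored by the material the analyst saw, which is what makes the review adversarial rather than circular.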

AI engineers — build multi-step LLM workflows with typed contracts, tool-use, knowledge injection, and pipeline orchestration. Test everything locally before deploying.

Platform teams — deploy to Kubernetes with rate limiting, model tier management, dead-letter handling, and OpenTelemetry tracing. Scale any component independently.


Three Ways to Use Heddle

1. Workshop (no setup beyond install)

Test shipped workers in the browser — paste text, get results:

heddle workshop                            # open web UI
# → Workers → summarizer → Test → paste text → Run

Six ready-made workers ship with Heddle: summarizer, classifier, extractor, translator, qa (question answering with source citations), and reviewer (quality review against configurable criteria).

2. Build your own steps (guided)

Scaffold workers and pipelines interactively — YAML is generated for you:

heddle new worker                   # create a step from prompts
heddle new pipeline                 # chain steps into a workflow
heddle validate configs/workers/*.yaml  # check your configs
heddle workshop                     # test and evaluate in the web UI
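
For a sense of what the wizard produces, a generated pipeline file might look roughly like this (field names are illustrative, not Heddle's actual schema; heddle new pipeline emits the real format):

```yaml
# Illustrative pipeline config; the actual schema comes from the wizard.
name: document_report
steps:
  - worker: extractor
  - worker: classifier        # consumes the extractor's output
  - worker: summarizer        # independent of classifier, can run in parallel
  - worker: reviewer
    model_tier: frontier      # escalate the final check to a stronger model
```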

3. Distributed infrastructure (production)

For teams, continuous processing, or high-throughput scenarios:

heddle router --nats-url nats://localhost:4222
heddle worker --config configs/workers/summarizer.yaml --tier local
heddle pipeline --config configs/orchestrators/my_pipeline.yaml
heddle submit "Analyze the quarterly reports"

Scale any component by running more copies — NATS load-balances automatically.
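
The underlying NATS mechanism is queue groups: subscribers that join the same group share a subject's traffic, and each message is handed to exactly one member. A toy in-process sketch of that delivery rule (illustrative only; real Heddle workers talk to an actual NATS server):

```python
# Toy model of NATS queue-group delivery. Members of the same queue
# group split a subject's messages between them, so running more worker
# copies adds throughput without duplicating work.
import itertools

class ToyBus:
    def __init__(self):
        self.members = {}   # (subject, group) -> list of callbacks
        self.rr = {}        # (subject, group) -> round-robin iterator

    def subscribe(self, subject, group, callback):
        key = (subject, group)
        self.members.setdefault(key, []).append(callback)
        self.rr[key] = itertools.cycle(self.members[key])

    def publish(self, subject, msg):
        for (subj, group), cbs in self.members.items():
            if subj == subject and cbs:
                next(self.rr[(subj, group)])(msg)  # one member per group

bus = ToyBus()
seen = {"worker1": [], "worker2": []}
bus.subscribe("tasks.summarize", "summarizers", seen["worker1"].append)
bus.subscribe("tasks.summarize", "summarizers", seen["worker2"].append)

for i in range(4):
    bus.publish("tasks.summarize", f"job-{i}")

# The four jobs were split between the two workers, none duplicated.
assert seen["worker1"] == ["job-0", "job-2"]
assert seen["worker2"] == ["job-1", "job-3"]
```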


Key Features

| Feature | What It Does |
| --- | --- |
| 6 Ready-Made Workers | Summarizer, classifier, extractor, translator, QA, reviewer — chain them immediately |
| Workshop | Web UI for testing, evaluating, and comparing step outputs |
| Built-in Evaluation | Test suites, scoring, golden dataset baselines, regression detection |
| Config-Driven | Define workers in YAML — no Python code needed for LLM steps |
| Knowledge Silos | Per-worker access control; blind audit workers can't see what they're reviewing |
| Pipeline Orchestration | Chain steps with automatic dependency detection and parallelism |
| Three Model Tiers | Local (Ollama), Standard (Claude Sonnet), Frontier (Claude Opus) |
| Document Processing | PDF/DOCX extraction via MarkItDown (fast) or Docling (deep OCR) |
| RAG Pipeline | Telegram channel ingestion, chunking, vector search (DuckDB or LanceDB) |
| Multi-Agent Councils | Multi-round deliberation with protocols (debate, Delphi), convergence detection, transcript management |
| ChatBridge Adapters | Use Claude, GPT-4, Ollama, or humans as council participants with session history |
| MCP Gateway | Expose any workflow as an MCP server with a single YAML config |
| Config Wizard | heddle setup auto-detects backends; heddle new scaffolds workers/pipelines |
| Live Monitoring | TUI dashboard, OpenTelemetry tracing, dead-letter inspection |
| Deployment | Docker Compose, Kubernetes manifests, mDNS discovery |
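
As an example of how the evaluation feature fits together, a test suite for a shipped worker could be laid out along these lines (a hypothetical layout; Workshop defines the real format):

```yaml
# Hypothetical eval-suite layout for the built-in evaluation feature.
worker: summarizer
golden_dataset: evals/summaries_baseline.jsonl
cases:
  - input: "Quarterly revenue rose 12% on strong cloud demand..."
    expect:
      max_words: 60
      must_mention: ["revenue", "cloud"]
score:
  method: rubric              # e.g. rubric-based scoring by a reviewer worker
  regression_threshold: 0.05  # flag runs that fall below the baseline
```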

Documentation

Start here:

| Guide | Description |
| --- | --- |
| Concepts | How Heddle works — the mental model in plain language |
| Getting Started | Install and get your first result |
| Why Heddle? | How Heddle compares to other frameworks — and when not to use it |
| Workshop Tour | What each Workshop screen does and when to use it |
| Configuration | ~/.heddle/config.yaml reference and priority chain |
| CLI Reference | All 19 commands with every flag and default |
| Workers Reference | 6 shipped workers with I/O schemas and examples |

Go deeper:

| Guide | Description |
| --- | --- |
| RAG Pipeline | Social media stream analysis end-to-end |
| Multi-Agent Councils | Structured deliberation with multiple LLM agents |
| Building Workflows | Custom steps, pipelines, tools, knowledge |
| Workshop | Web UI architecture and enhancement guide |
| Architecture | System design, message flow, NATS subjects |
| Design Invariants | Non-obvious design decisions (read before structural changes) |
| Troubleshooting | Common issues and solutions |
| Deployment | Local, Docker, and Kubernetes |

Current State

| Area | Status | Details |
| --- | --- | --- |
| Core framework | Complete | Messages, contracts, config, workspace |
| LLM backends | Complete | Anthropic, Ollama, OpenAI-compatible |
| Workers & processors | Complete | Tool-use, knowledge silos, embeddings |
| Orchestration | Complete | Goal decomposition, pipelines, scheduling |
| RAG pipeline | Complete | Ingest, chunk, embed, search (DuckDB + LanceDB) |
| Workshop web UI | Complete | Test bench, eval runner, pipeline editor |
| MCP gateway | Complete | FastMCP 3.x, session tools, workshop tools |
| Multi-agent deliberation | Complete | Council framework, ChatBridge adapters, 3 protocols |
| Tests | 1807 passing | 90%+ coverage, no infrastructure needed |

Get Involved

Use it. Start with heddle setup and go from there.

Contribute. New step types, contrib packages, test coverage, and docs are welcome. See Contributing.

Report issues. Bug reports with reproducible steps help the most.


AI-Assisted Development

This project uses Claude (Anthropic) as a development tool. CLAUDE.md documents the architecture and design rules for AI-assisted sessions. AI-generated code meets the same standards as human contributions: typed messages, stateless workers, validated contracts, tests.


License

MPL 2.0 — Modified source files must remain open; unmodified files can be combined with proprietary code. Alternative licensing available for organizations with copyleft constraints. Contact: admin@irantransitionproject.org

For governance, succession, and contributor rights, see GOVERNANCE.md.
