# Loom

Lightweight Orchestrated Operational Mesh — an actor-based, multi-LLM agent framework.
Split complex AI work into focused steps. Test them individually. Chain them into workflows. Scale when you need to.
## Try It in 60 Seconds

```shell
pip install "loom-ai[rag]"   # install from PyPI (quotes keep the shell from globbing the extra)
loom setup                   # configure (auto-detects Ollama)
loom rag ingest /path/to/telegram/exports/*.json
loom rag search "earthquake damage reports"
loom rag serve               # open dashboard at localhost:8080
```

Or from source:

```shell
git clone https://github.com/IranTransitionProject/loom.git && cd loom
uv sync --extra rag
uv run loom setup
```
No servers to run. No configuration files to write. The setup wizard handles everything.
## What Loom Does
Instead of one giant AI prompt that tries to do everything, Loom lets you break work into small, focused steps — each with a clear job, testable independently, and using the right model for the task.
```
Document ──► Extract ──► Classify ──► Summarize ──► Report
                │            │            │
                │            │            └─ Claude Opus (complex reasoning)
                │            └─ Ollama local (fast, free)
                └─ Ollama local (fast, free)
```
Steps can run in parallel, use different AI models, and be tested with the built-in Workshop web UI — all without deploying any infrastructure.
When you're ready to scale, Loom adds a message bus (NATS) that connects everything for production use.
## Who This Is For

- **Researchers and analysts** — analyze social media streams, extract data from documents, build knowledge graphs. Start with `loom rag` and the Workshop dashboard. No infrastructure knowledge needed.
- **AI engineers** — build multi-step LLM workflows with typed contracts, tool-use, knowledge injection, and pipeline orchestration. Test everything locally before deploying.
- **Platform teams** — deploy to Kubernetes with rate limiting, model tier management, dead-letter handling, and OpenTelemetry tracing. Scale any component independently.
## Three Ways to Use Loom

### 1. Command line (no setup)
Ingest data, search, and analyze — all from the terminal:
```shell
uv run loom rag ingest exports/*.json      # ingest Telegram channels
uv run loom rag search "protest reports"   # semantic search
uv run loom rag stats                      # store statistics
```
### 2. Build your own steps (guided)
Scaffold workers and pipelines interactively — YAML is generated for you:
```shell
uv run loom new worker                        # create a step from prompts
uv run loom new pipeline                      # chain steps into a workflow
uv run loom validate configs/workers/*.yaml   # check your configs
uv run loom workshop --port 8080              # test and evaluate in the web UI
```
Six ready-made workers ship with Loom: `summarizer`, `classifier`, `extractor`, `translator`, `qa` (question answering with source citations), and `reviewer` (quality review against configurable criteria).
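Worker definitions are plain YAML. The sketch below only illustrates the shape such a file might take — the field names (`name`, `tier`, `system_prompt`, `output_schema`) are assumptions, not Loom's verified schema; see the Workers Reference for the real format:

```yaml
# Hypothetical worker definition -- field names are illustrative,
# not Loom's documented schema.
name: incident_classifier
tier: local                       # run on Ollama (fast, free)
system_prompt: |
  Classify each report as damage, casualty, or infrastructure.
output_schema:                    # JSON Schema contract for the output
  type: object
  properties:
    label: {type: string}
  required: [label]
```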
### 3. Distributed infrastructure (production)
For teams, continuous processing, or high-throughput scenarios:
```shell
uv run loom router --nats-url nats://localhost:4222
uv run loom worker --config configs/workers/summarizer.yaml --tier local
uv run loom pipeline --config configs/orchestrators/my_pipeline.yaml
uv run loom submit "Analyze the quarterly reports"
```
Scale any component by running more copies — NATS load-balances automatically.
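A pipeline config chains steps with declared dependencies; independent steps can then run in parallel. Again a hedged sketch — the keys (`pipeline`, `steps`, `worker`, `depends_on`) are assumptions standing in for the real orchestrator schema:

```yaml
# Hypothetical pipeline config -- the real schema may differ;
# the dependency keys here are assumptions for illustration.
pipeline: report_analysis
steps:
  - worker: extractor            # no dependencies: runs first
  - worker: classifier
    depends_on: [extractor]
  - worker: summarizer
    depends_on: [classifier]
```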
## Key Features
| Feature | What It Does |
|---|---|
| 6 Ready-Made Workers | Summarizer, classifier, extractor, translator, QA, reviewer — chain them immediately |
| LLM Steps | YAML-defined AI tasks with system prompts, JSON Schema contracts, tool-use |
| Processor Steps | Non-LLM tasks (PDF extraction, chunking, embedding) in the same pipeline |
| Document Processing | PDF/DOCX extraction via MarkItDown (fast) or Docling (deep OCR). Smart fallback. |
| Pipeline Orchestration | Chain steps with automatic dependency detection and parallelism |
| Three Model Tiers | Local (Ollama), Standard (Claude Sonnet), Frontier (Claude Opus) |
| Workshop | Web UI for testing, evaluating, and comparing step outputs |
| RAG Pipeline | Telegram channel ingestion, chunking, vector search (DuckDB or LanceDB) |
| MCP Gateway | Expose any workflow as an MCP server with a single YAML config |
| Config Wizard | `loom setup` auto-detects backends; `loom new` scaffolds workers/pipelines |
| Config Validation | `loom validate` checks configs without starting infrastructure |
| Live Monitoring | TUI dashboard, OpenTelemetry tracing, dead-letter inspection |
| Deployment | Docker Compose, Kubernetes manifests, mDNS discovery |
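For the distributed mode, a local stack can be brought up with Docker Compose. The repo ships its own Compose file; the sketch below only illustrates the shape of such a stack — the service layout and the worker `command` wiring are assumptions (only the official `nats` image and its port 4222 are standard):

```yaml
# Hypothetical docker-compose sketch -- not the repo's shipped file.
services:
  nats:
    image: nats:latest
    ports:
      - "4222:4222"              # NATS client connections
  worker:
    build: .
    command: loom worker --config configs/workers/summarizer.yaml --tier local
    depends_on:
      - nats
```

Scaling is then `docker compose up --scale worker=4` — more copies of the same stateless worker, load-balanced over NATS.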
## Documentation
Start here:
| Guide | Description |
|---|---|
| Concepts | How Loom works — the mental model in plain language |
| Getting Started | Install and run your first pipeline |
| Configuration | `~/.loom/config.yaml` reference and priority chain |
| CLI Reference | All 19 commands with every flag and default |
| Workers Reference | 6 shipped workers with I/O schemas and examples |
Go deeper:
| Guide | Description |
|---|---|
| RAG Pipeline | Social media stream analysis end-to-end |
| Building Workflows | Custom steps, pipelines, tools, knowledge |
| Workshop | Web UI architecture and enhancement guide |
| Architecture | System design, message flow, NATS subjects |
| Design Invariants | Non-obvious design decisions (read before structural changes) |
| Troubleshooting | Common issues and solutions |
| Deployment | Local, Docker, and Kubernetes |
## Current State
| Area | Status | Details |
|---|---|---|
| Core framework | Complete | Messages, contracts, config, workspace |
| LLM backends | Complete | Anthropic, Ollama, OpenAI-compatible |
| Workers & processors | Complete | Tool-use, knowledge silos, embeddings |
| Orchestration | Complete | Goal decomposition, pipelines, scheduling |
| RAG pipeline | Complete | Ingest, chunk, embed, search (DuckDB + LanceDB) |
| Workshop web UI | Complete | Test bench, eval runner, pipeline editor |
| MCP gateway | Complete | FastMCP 3.x, session tools, workshop tools |
| Tests | 1643 passing | 90% coverage, no infrastructure needed |
## Get Involved

**Use it.** Start with `uv run loom setup` and go from there.

**Contribute.** New step types, contrib packages, test coverage, and docs are welcome. See Contributing.

**Report issues.** Bug reports with reproducible steps help the most.
## AI-Assisted Development

This project uses Claude (Anthropic) as a development tool. `CLAUDE.md` documents the architecture and design rules for AI-assisted sessions. AI-generated code meets the same standards as human contributions: typed messages, stateless workers, validated contracts, tests.
## License
MPL 2.0 — a file-level copyleft: modified source files must remain open, while unmodified files can be combined with proprietary code. Alternative licensing is available for organizations with copyleft constraints. Contact: admin@irantransitionproject.org
For governance, succession, and contributor rights, see GOVERNANCE.md.
## File details: loom_ai-0.9.0.tar.gz

- Size: 801.6 kB
- Tags: Source
- Uploaded using Trusted Publishing: yes
- Uploaded via: twine/6.1.0 on CPython/3.13.7
### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `fe3ddeb0669688ca59f2398ab6ff86de83915a827e72b851cb1a5d34f2c78686` |
| MD5 | `2c37904c689cd8c8876cf6702badda60` |
| BLAKE2b-256 | `139a25857b96b1373efde49ffeafe66ea23c8f7b48d4be84e1f598dd20dafed1` |
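The published digests can be checked locally before installing. A minimal sketch, assuming a POSIX shell with `pip` and coreutils `sha256sum` on PATH (on macOS, substitute `shasum -a 256`); `dist-check` is just an arbitrary download directory:

```shell
# Fetch the sdist from PyPI without installing it, then verify its
# SHA256 digest against the value published in the table above.
pip download loom-ai==0.9.0 --no-deps --no-binary :all: -d dist-check
echo "fe3ddeb0669688ca59f2398ab6ff86de83915a827e72b851cb1a5d34f2c78686  dist-check/loom_ai-0.9.0.tar.gz" \
  | sha256sum -c -
```

`sha256sum -c` exits non-zero on a mismatch, so the check is safe to use in CI.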
### Provenance

The following attestation bundles were made for `loom_ai-0.9.0.tar.gz`:

Publisher: `publish.yml` on IranTransitionProject/loom

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: `loom_ai-0.9.0.tar.gz`
- Subject digest: `fe3ddeb0669688ca59f2398ab6ff86de83915a827e72b851cb1a5d34f2c78686`
- Sigstore transparency entry: 1182557340
- Permalink: IranTransitionProject/loom@b3b7446c71488fe9d78f73c696e42b2986b0b5c1
- Branch / Tag: refs/tags/v0.9.0
- Owner: https://github.com/IranTransitionProject
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@b3b7446c71488fe9d78f73c696e42b2986b0b5c1
- Trigger Event: release
## File details: loom_ai-0.9.0-py3-none-any.whl

- Size: 284.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing: yes
- Uploaded via: twine/6.1.0 on CPython/3.13.7
### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `a56cfb3d4c2973fb53833f88ae332494cd5c52b8c674cc9bf950fd2326955f5f` |
| MD5 | `f1a3166368dcff16e8818ae9eaf11776` |
| BLAKE2b-256 | `8ad9ef4367a3de69e374b95968eee37206fe8881e4cd44d71df2584a319e942b` |
### Provenance

The following attestation bundles were made for `loom_ai-0.9.0-py3-none-any.whl`:

Publisher: `publish.yml` on IranTransitionProject/loom

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: `loom_ai-0.9.0-py3-none-any.whl`
- Subject digest: `a56cfb3d4c2973fb53833f88ae332494cd5c52b8c674cc9bf950fd2326955f5f`
- Sigstore transparency entry: 1182557378
- Permalink: IranTransitionProject/loom@b3b7446c71488fe9d78f73c696e42b2986b0b5c1
- Branch / Tag: refs/tags/v0.9.0
- Owner: https://github.com/IranTransitionProject
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@b3b7446c71488fe9d78f73c696e42b2986b0b5c1
- Trigger Event: release