Lightweight Orchestrated Operational Mesh — Actor-based multi-LLM agent framework


Loom

CI Docs codecov Ruff License: MPL 2.0 Python 3.11+

Loom v0.9.0 Status: Active Development

Split complex AI work into focused steps. Test them individually. Chain them into workflows. Scale when you need to.


Try It in 60 Seconds

```bash
pip install "loom-ai[rag]"                            # install from PyPI (quotes keep zsh happy)
loom setup                                            # configure (auto-detects Ollama)
loom rag ingest /path/to/telegram/exports/*.json
loom rag search "earthquake damage reports"
loom rag serve                                        # open dashboard at localhost:8080
```

Or from source:

```bash
git clone https://github.com/IranTransitionProject/loom.git && cd loom
uv sync --extra rag
uv run loom setup
```

No servers to run. No configuration files to write. The setup wizard handles everything.


What Loom Does

Instead of one giant AI prompt that tries to do everything, Loom lets you break work into small, focused steps — each with a clear job, testable independently, and using the right model for the task.

```text
Document ──► Extract ──► Classify ──► Summarize ──► Report
               │            │            │
               │            │            └─ Claude Opus (complex reasoning)
               │            └─ Ollama local (fast, free)
               └─ Ollama local (fast, free)
```

Steps can run in parallel, use different AI models, and be tested with the built-in Workshop web UI — all without deploying any infrastructure.

When you're ready to scale, Loom adds a message bus (NATS) that connects everything for production use.
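The same idea can be sketched in plain Python. This is only the mental model, not Loom's API — in Loom each step is a YAML-configured worker — but the shape is the same: small typed functions, each testable on its own, composed into a pipeline.

```python
# Plain-Python illustration of step-per-job decomposition (not Loom's API).
from dataclasses import dataclass

@dataclass
class Extracted:
    entities: list[str]

@dataclass
class Classified:
    entities: list[str]
    label: str

def extract(document: str) -> Extracted:
    # Stand-in for a local-model extraction step (Ollama tier).
    return Extracted(entities=[w for w in document.split() if w.istitle()])

def classify(data: Extracted) -> Classified:
    # Stand-in for a local-model classification step.
    label = "report" if "Report" in data.entities else "other"
    return Classified(entities=data.entities, label=label)

def summarize(data: Classified) -> str:
    # Stand-in for a frontier-model summarization step.
    return f"{data.label}: {', '.join(data.entities)}"

def pipeline(document: str) -> str:
    # The pipeline is just composition; each stage can be unit-tested alone.
    return summarize(classify(extract(document)))
```

Because each function has a typed input and output, you can swap the model behind any one step without touching the others — the same property Loom's contracts enforce.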


Who This Is For

Researchers and analysts — analyze social media streams, extract data from documents, build knowledge graphs. Start with loom rag and the Workshop dashboard. No infrastructure knowledge needed.

AI engineers — build multi-step LLM workflows with typed contracts, tool-use, knowledge injection, and pipeline orchestration. Test everything locally before deploying.

Platform teams — deploy to Kubernetes with rate limiting, model tier management, dead-letter handling, and OpenTelemetry tracing. Scale any component independently.


Three Ways to Use Loom

1. Command line (no setup)

Ingest data, search, and analyze — all from the terminal:

```bash
uv run loom rag ingest exports/*.json     # ingest Telegram channels
uv run loom rag search "protest reports"  # semantic search
uv run loom rag stats                     # store statistics
```

2. Build your own steps (guided)

Scaffold workers and pipelines interactively — YAML is generated for you:

```bash
uv run loom new worker                       # create a step from prompts
uv run loom new pipeline                     # chain steps into a workflow
uv run loom validate configs/workers/*.yaml  # check your configs
uv run loom workshop --port 8080             # test and evaluate in the web UI
```

Six ready-made workers ship with Loom: summarizer, classifier, extractor, translator, qa (question answering with source citations), and reviewer (quality review against configurable criteria).
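A worker config looks roughly like the sketch below. The field names here are illustrative assumptions, not the shipped schema — run `loom new worker` to generate the real thing:

```yaml
# Illustrative sketch — field names are assumptions, not Loom's actual schema.
name: incident_classifier
tier: local                 # local (Ollama), standard (Sonnet), or frontier (Opus)
system_prompt: |
  Classify the incident report into exactly one category.
output_contract:            # JSON Schema the step's output must satisfy
  type: object
  properties:
    category:
      type: string
      enum: [damage, casualty, infrastructure, other]
  required: [category]
```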

3. Distributed infrastructure (production)

For teams, continuous processing, or high-throughput scenarios:

```bash
uv run loom router --nats-url nats://localhost:4222
uv run loom worker --config configs/workers/summarizer.yaml --tier local
uv run loom pipeline --config configs/orchestrators/my_pipeline.yaml
uv run loom submit "Analyze the quarterly reports"
```

Scale any component by running more copies — NATS load-balances automatically.
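A pipeline config for a setup like this might look roughly as follows — again, the field names are illustrative assumptions; `loom new pipeline` generates the actual schema:

```yaml
# Illustrative sketch — field names are assumptions, not Loom's actual schema.
name: quarterly_report_analysis
steps:
  - worker: extractor              # local tier: fast, free
  - worker: classifier
    depends_on: [extractor]
  - worker: summarizer
    depends_on: [classifier]
    tier: frontier                 # complex reasoning on Claude Opus
```

Steps with no `depends_on` edge between them are free to run in parallel.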


Key Features

| Feature | What It Does |
| --- | --- |
| 6 Ready-Made Workers | Summarizer, classifier, extractor, translator, QA, reviewer — chain them immediately |
| LLM Steps | YAML-defined AI tasks with system prompts, JSON Schema contracts, tool-use |
| Processor Steps | Non-LLM tasks (PDF extraction, chunking, embedding) in the same pipeline |
| Document Processing | PDF/DOCX extraction via MarkItDown (fast) or Docling (deep OCR). Smart fallback. |
| Pipeline Orchestration | Chain steps with automatic dependency detection and parallelism |
| Three Model Tiers | Local (Ollama), Standard (Claude Sonnet), Frontier (Claude Opus) |
| Workshop | Web UI for testing, evaluating, and comparing step outputs |
| RAG Pipeline | Telegram channel ingestion, chunking, vector search (DuckDB or LanceDB) |
| MCP Gateway | Expose any workflow as an MCP server with a single YAML config |
| Config Wizard | `loom setup` auto-detects backends; `loom new` scaffolds workers/pipelines |
| Config Validation | `loom validate` checks configs without starting infrastructure |
| Live Monitoring | TUI dashboard, OpenTelemetry tracing, dead-letter inspection |
| Deployment | Docker Compose, Kubernetes manifests, mDNS discovery |
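As one concrete case, the MCP Gateway feature means a workflow can be published to any MCP client from a single config file, along these lines (field names are illustrative assumptions, not the shipped schema):

```yaml
# Illustrative sketch — field names are assumptions, not Loom's actual schema.
server:
  name: loom-workflows
  transport: stdio
expose:
  - pipeline: configs/orchestrators/my_pipeline.yaml
    tool_name: analyze_reports
```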

Documentation

Start here:

| Guide | Description |
| --- | --- |
| Concepts | How Loom works — the mental model in plain language |
| Getting Started | Install and run your first pipeline |
| Configuration | `~/.loom/config.yaml` reference and priority chain |
| CLI Reference | All 19 commands with every flag and default |
| Workers Reference | 6 shipped workers with I/O schemas and examples |

Go deeper:

| Guide | Description |
| --- | --- |
| RAG Pipeline | Social media stream analysis end-to-end |
| Building Workflows | Custom steps, pipelines, tools, knowledge |
| Workshop | Web UI architecture and enhancement guide |
| Architecture | System design, message flow, NATS subjects |
| Design Invariants | Non-obvious design decisions (read before structural changes) |
| Troubleshooting | Common issues and solutions |
| Deployment | Local, Docker, and Kubernetes |

Current State

| Area | Status | Details |
| --- | --- | --- |
| Core framework | Complete | Messages, contracts, config, workspace |
| LLM backends | Complete | Anthropic, Ollama, OpenAI-compatible |
| Workers & processors | Complete | Tool-use, knowledge silos, embeddings |
| Orchestration | Complete | Goal decomposition, pipelines, scheduling |
| RAG pipeline | Complete | Ingest, chunk, embed, search (DuckDB + LanceDB) |
| Workshop web UI | Complete | Test bench, eval runner, pipeline editor |
| MCP gateway | Complete | FastMCP 3.x, session tools, workshop tools |
| Tests | 1643 passing | 90% coverage, no infrastructure needed |

Get Involved

Use it. Start with uv run loom setup and go from there.

Contribute. New step types, contrib packages, test coverage, and docs are welcome. See Contributing.

Report issues. Bug reports with reproducible steps help the most.


AI-Assisted Development

This project uses Claude (Anthropic) as a development tool. CLAUDE.md documents the architecture and design rules for AI-assisted sessions. AI-generated code meets the same standards as human contributions: typed messages, stateless workers, validated contracts, tests.


License

MPL 2.0 — Modified source files must remain open; unmodified files can be combined with proprietary code. Alternative licensing available for organizations with copyleft constraints. Contact: admin@irantransitionproject.org

For governance, succession, and contributor rights, see GOVERNANCE.md.
