loom-agent

Minimal self-organizing multi-agent framework.

A long-horizon agent framework: agents that don't collapse when problems get long.
A short story
We built many agents.
They could write code. They could plan tasks. They could use tools.
And they all failed the same way.
Not because they were stupid. Not because they lacked tools.
They failed because they were rigid.
When a task got harder, they couldn't split it. When a subtask failed, they couldn't adapt. When the environment changed, they couldn't sense it.
We looked at biology for answers.
An amoeba is one of the simplest organisms on Earth. Yet it can sense, move, split, and adapt — without a brain.
It doesn't plan. It responds. It doesn't command. It self-organizes.
That was the moment we realized:
The problem wasn't intelligence. It was the lack of a living mechanism.
The Amoeba Mechanism
Real-world tasks are not prompts.
They shift, branch, fail, and evolve. A coding task spawns debugging. A research task splits into sub-questions. A failed attempt demands a different approach.
Most agent frameworks are static pipelines. Fixed plans, fixed agents, fixed flows. When reality deviates, they break.
Biology solved this billions of years ago.
An amoeba senses its environment, matches the best response, scales by splitting when needed, executes, evaluates the outcome, and adapts for next time.
loom-agent's AmoebaLoop works the same way:
SENSE → MATCH → SCALE → EXECUTE → EVALUATE → ADAPT
- SENSE — Analyze task complexity and detect domains
- MATCH — Auction across capable agents, evolve new skills if needed
- SCALE — Split complex tasks via mitosis, create child agents
- EXECUTE — Run with enriched context and token budgets
- EVALUATE — Score results, update capability via EMA rewards
- ADAPT — Recycle unhealthy agents (apoptosis), calibrate complexity estimates, evolve skills
Agents that perform well get stronger. Agents that fail get recycled. New specialists emerge on demand. The system lives.
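The six-phase cycle above can be sketched as a simple loop that threads state from one phase to the next. This is an illustrative skeleton only; the phase names follow the list, but the handler wiring is hypothetical, not loom-agent's actual implementation.

```python
# Illustrative sketch of a 6-phase self-organizing cycle.
# The handlers here are stubs; the real AmoebaLoop wires these
# phases to complexity analysis, auctions, mitosis, EMA rewards, etc.

PHASES = ["SENSE", "MATCH", "SCALE", "EXECUTE", "EVALUATE", "ADAPT"]

def run_cycle(task, handlers):
    """Run one pass through the cycle, threading state between phases."""
    state = {"task": task}
    trace = []
    for phase in PHASES:
        state = handlers[phase](state)  # each phase transforms the state
        trace.append(phase)
    return state, trace

# Minimal no-op handlers for demonstration.
handlers = {phase: (lambda s: s) for phase in PHASES}
final_state, trace = run_cycle("summarize logs", handlers)
```

In the real loop, EVALUATE and ADAPT feed back into the next cycle's SENSE and MATCH, which is what makes the system self-correcting over long horizons.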
loom-agent: Structure + Life
A loom creates fabric through structure — threads interweave, patterns repeat, tension stays balanced.
An amoeba creates life through adaptation — sensing, splitting, evolving, recycling.
loom-agent combines both.
The framework is the loom — composable modules that weave agents together. The AmoebaLoop is the life — a self-organizing cycle that makes agents breathe.
Structure (Loom) → Agent · Memory · Tools · Events · Interceptors · Context · Skills
Life (Amoeba) → Sense · Match · Scale · Execute · Evaluate · Adapt
Complexity grows, structure doesn't. Agents adapt, the framework holds.
Core Principles
Self-organizing over orchestrating — No central controller. Agents sense, bid, and adapt autonomously through the AmoebaLoop.
Composition over inheritance — Agent = provider + memory + tools + context + events + interceptors. Add only what you need.
Mitosis over monoliths — Complex tasks split into subtasks, spawning child agents. Simple tasks run directly.
Reward over rules — Capability scores update via EMA after every execution. Good agents get stronger; bad agents get recycled.
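The "reward over rules" principle can be illustrated with a standard exponential moving average. The smoothing factor 0.3 below is an arbitrary choice for the sketch, not a documented default of the framework.

```python
def ema_update(score: float, reward: float, alpha: float = 0.3) -> float:
    """Blend the latest reward into the running capability score.
    Higher alpha reacts faster to recent outcomes; lower alpha is more stable."""
    return alpha * reward + (1 - alpha) * score

score = 0.5
for reward in (1.0, 1.0, 0.0):  # two successes, then a failure
    score = ema_update(score, reward)
# Two successes lift the score; one failure pulls it partway back.
```

Because old outcomes decay geometrically, a formerly strong agent that starts failing loses its edge quickly, which is what triggers recycling.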
Use Cases
loom-agent is not a prompt collection, not a tool orchestration wrapper, not a workflow engine.
It's designed for systems that need to remain stable over time:
Long-running autonomous workflows · Research agents · Multi-day task execution · Complex RAG systems · Agent-based SaaS backends · AI operators and copilots
Installation
```bash
pip install loom-agent
```
Quick Start
```python
import asyncio

from loom import Agent, AgentConfig
from loom.providers.openai import OpenAIProvider

provider = OpenAIProvider(AgentConfig(
    api_key="sk-...",
    model="gpt-4o-mini",
))

agent = Agent(
    provider=provider,
    config=AgentConfig(system_prompt="You are a helpful assistant.", max_steps=3),
)

async def main():
    result = await agent.run("Introduce Python in one sentence.")
    print(result.content)

asyncio.run(main())
```
Streaming
```python
from loom import TextDeltaEvent, DoneEvent

async for event in agent.stream("Introduce Rust in one sentence."):
    if isinstance(event, TextDeltaEvent):
        print(event.text, end="", flush=True)
    elif isinstance(event, DoneEvent):
        print(f"\nDone, steps={event.steps}")
```
Tools
```python
from pydantic import BaseModel

from loom import ToolRegistry, define_tool, ToolContext

class CalcParams(BaseModel):
    expression: str

async def calc_fn(params: CalcParams, ctx: ToolContext) -> str:
    # Note: eval() is fine for a demo but unsafe on untrusted input.
    return str(eval(params.expression))

tools = ToolRegistry()
tools.register(define_tool("calc", "Evaluate math expression", CalcParams, calc_fn))

agent = Agent(provider=provider, config=AgentConfig(max_steps=5), tools=tools)
result = await agent.run("What is 2**20?")
```
Multi-Agent Delegation
```python
from loom import EventBus

bus = EventBus(node_id="root")

researcher = Agent(
    provider=provider, name="researcher",
    config=AgentConfig(system_prompt="You are a researcher.", max_steps=2),
    event_bus=bus.create_child("researcher"),
)
writer = Agent(
    provider=provider, name="writer",
    config=AgentConfig(system_prompt="You are a writer.", max_steps=2),
    event_bus=bus.create_child("writer"),
)

r1 = await researcher.run("Research AI memory systems")
r2 = await writer.run("Write a technical article")
```
See all 15 demos in examples/demo/.
What's New in v0.6.6
Harness Engineering Optimizations
Production-grade reliability and efficiency improvements based on Harness Engineering principles:
Constraint System (P0)
- Pre-execution constraint validation with tool whitelisting
- Resource quota guards (token/time limits)
- Violation tracking for audit trails
Feedback Loop (P1)
- Step-level instant rewards (no waiting for task completion)
- Adaptive skill crystallization with dynamic thresholds
- Online learning mode (EVALUATE-ADAPT fusion)
Energy Efficiency (P2)
- LRU-cached token counting (10x faster)
- Incremental history building (90% reduction)
- Batch embedding calls (90% cost reduction)
See HARNESS_OPTIMIZATION.md for details.
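The LRU-cached token counting mentioned above can be approximated with `functools.lru_cache`: repeated history segments (system prompts, earlier turns) are counted once instead of on every call. The whitespace tokenizer below is a stand-in for a real one, and the cache size is illustrative.

```python
from functools import lru_cache

@lru_cache(maxsize=4096)
def count_tokens(text: str) -> int:
    """Cache token counts per unique string; identical history
    segments are tokenized once instead of on every turn."""
    return len(text.split())  # stand-in tokenizer, not the real one

count_tokens("the same system prompt")
count_tokens("the same system prompt")  # served from cache
info = count_tokens.cache_info()
```

Since conversation history is mostly append-only, the hit rate climbs as a session grows, which is where the claimed speedup comes from.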
What's New in v0.6.4
Blueprint Forge — Autonomous Agent Creation
When no existing agent can handle a task, the cluster now auto-designs a specialized agent via LLM. Blueprints carry tailored system_prompt, filtered tools, and domain scores. They evolve through reward signals and get pruned when underperforming.
```python
from pathlib import Path

from loom.cluster.blueprint_forge import BlueprintForge
from loom.cluster.blueprint_store import BlueprintStore

store = BlueprintStore(persist_path=Path("blueprints.json"))
forge = BlueprintForge(llm=provider, store=store)

# LLM designs a specialist blueprint → spawns an agent
blueprint = await forge.forge(task)
node = forge.spawn(blueprint, parent_node)
result = await node.agent.run("Analyze this dataset")
```
See Blueprint Forge for the full lifecycle.
ToolContext Extension — Dynamic Metadata Access
Tools can now receive arbitrary context via AgentConfig.tool_context. Access custom fields directly as attributes on ToolContext:
```python
agent = Agent(
    provider=provider,
    config=AgentConfig(
        tool_context={"documentContext": ["block-A", "block-B"]},
    ),
    tools=registry,
)

# Inside your tool function:
async def my_tool(params, ctx: ToolContext) -> str:
    docs = ctx.documentContext  # attribute-style access via metadata
    return str(docs)
```
Thinking Model Support
Full support for reasoning/thinking models (DeepSeek, QwQ, etc.) across all providers. The reasoning_content field is captured in both streaming and non-streaming modes, exposed via CompletionResult.reasoning and ReasoningDeltaEvent.
Core Features
Composition-Based Architecture
Agent is assembled from orthogonal modules — provider, memory, tools, context, event bus, interceptors, skills. Add only what you need; every module is optional.
Three-Layer Memory
L1 SlidingWindow (recent turns) → L2 WorkingMemory (key facts) → L3 PersistentStore (long-term). Token-budget driven, automatic compaction.
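The token-budget-driven L1 window can be sketched as follows. This is a simplified assumption of how compaction might work: the newest turns are kept within budget, and older turns fall out (in the real framework, presumably to be distilled into L2/L3).

```python
def compact(turns, budget, count=lambda t: len(t.split())):
    """Keep the most recent turns that fit within the token budget;
    older turns fall out of the window. `count` is a stand-in tokenizer."""
    kept, used = [], 0
    for turn in reversed(turns):  # walk from newest to oldest
        cost = count(turn)
        if used + cost > budget:
            break
        kept.append(turn)
        used += cost
    return list(reversed(kept))  # restore chronological order

window = compact(["a b c d", "e f", "g h i"], budget=5)
```

Driving eviction by token budget rather than turn count keeps prompts predictable regardless of how verbose individual turns are.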
Tool System
define_tool + ToolRegistry with Pydantic schema validation. LLM autonomously decides when to call tools via ReAct loop.
EventBus
Parent-child event propagation, pattern matching, typed events. All agent lifecycle events flow through the bus.
Interceptor Chain
Middleware pipeline that transforms messages before/after LLM calls. Audit logging, content filtering, prompt injection — all as interceptors.
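An interceptor chain reduces to function composition over the message list. This sketch assumes interceptors are plain callables applied in registration order; the framework's actual interface may differ, and `redact`/`audit` are hypothetical examples.

```python
def apply_chain(interceptors, messages):
    """Pass messages through each interceptor in registration order."""
    for intercept in interceptors:
        messages = intercept(messages)
    return messages

def redact(messages):
    # Content filtering: scrub a known secret before it reaches the LLM.
    return [m.replace("sk-secret", "[REDACTED]") for m in messages]

audit_log = []
def audit(messages):
    # Audit logging: record how many messages passed through.
    audit_log.append(len(messages))
    return messages

out = apply_chain([redact, audit], ["my key is sk-secret"])
```

Because each interceptor sees the output of the previous one, ordering matters: here `audit` logs the already-redacted messages.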
Knowledge Base (RAG)
Document ingestion, chunking, embedding, hybrid retrieval (keyword + vector RRF fusion). Bridges to Agent via KnowledgeProvider → ContextOrchestrator.
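Reciprocal-rank fusion (RRF) merges the keyword and vector rankings by summing 1/(k + rank) across lists. The constant k = 60 is the conventional choice in the RRF literature, assumed here rather than taken from loom-agent's source.

```python
def rrf_fuse(rankings, k=60):
    """Merge several ranked lists of doc ids via reciprocal-rank fusion."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword = ["doc_a", "doc_b", "doc_c"]  # BM25-style ranking
vector = ["doc_b", "doc_c", "doc_a"]   # embedding-similarity ranking
fused = rrf_fuse([keyword, vector])
```

RRF needs only ranks, not comparable scores, which is why it is a common way to fuse keyword and vector retrieval without score normalization.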
Context Orchestration
Multi-source context collection with adaptive budget allocation. Memory, knowledge, and custom providers unified under ContextOrchestrator.
Skill System
Keyword / pattern / semantic triggers auto-activate domain-specific skills, dynamically injecting expert instructions into the agent.
Cluster Auction
Capability-scored agent nodes bid on tasks. RewardBus updates scores via EMA after each execution. LifecycleManager monitors health.
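The auction can be sketched as follows: each node bids with its capability score for the task's domain, and the highest bid wins. The node names, domains, and flat-dict representation are illustrative assumptions, not the framework's actual data model.

```python
def run_auction(task_domain, nodes):
    """Award the task to the node with the highest capability
    score in the task's domain (0.0 if the domain is unknown)."""
    bids = {name: caps.get(task_domain, 0.0) for name, caps in nodes.items()}
    return max(bids, key=bids.get)

nodes = {
    "coder":  {"python": 0.9, "writing": 0.2},
    "writer": {"python": 0.1, "writing": 0.8},
}
winner = run_auction("python", nodes)
```

Combined with the EMA reward updates, this means routing improves over time without any central controller assigning work.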
Resilient Providers
BaseLLMProvider with exponential-backoff retry + circuit breaker. Any OpenAI-compatible API works via OpenAIProvider.
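The retry-plus-circuit-breaker pattern can be sketched like this. The attempt count, trip threshold, and delays are illustrative, not the provider's actual defaults.

```python
import time

def call_with_retry(fn, max_attempts=4, base_delay=0.0, failures_to_trip=3):
    """Retry fn with exponentially growing delays; after
    failures_to_trip consecutive failures, open the circuit
    and stop issuing calls instead of hammering a dead endpoint."""
    consecutive = 0
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            consecutive += 1
            if consecutive >= failures_to_trip:
                raise RuntimeError("circuit open")
            time.sleep(base_delay * (2 ** attempt))  # zero delay in this demo
    raise RuntimeError("retries exhausted")

calls = []
def flaky():
    # Fails twice with a transient error, then succeeds.
    calls.append(1)
    if len(calls) < 3:
        raise ConnectionError("transient")
    return "ok"

result = call_with_retry(flaky)
```

The breaker matters for long-horizon agents: without it, a dead provider turns every loop iteration into a slow, pointless retry storm.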
Runtime & AmoebaLoop
Runtime orchestrates cluster-level task submission. AmoebaLoop implements a 6-phase self-organizing cycle: SENSE → MATCH → SCALE → EXECUTE → EVALUATE → ADAPT.
Harness Engineering Features
Production-grade reliability and efficiency built-in:
- Constraint Validation — Pre-execution checks with tool whitelisting and resource quotas
- Instant Feedback — Step-level rewards and online learning for 4x faster adaptation
- Smart Caching — LRU token counting and incremental history building for 10x performance
- Batch Operations — Batch embedding calls reduce API costs by 90%
Documentation
See the Wiki for detailed documentation:
| Document | Description | Demo |
|---|---|---|
| Agent | Agent core, AgentConfig, run/stream | 01 |
| Tools | define_tool, ToolRegistry, ToolContext | 02 |
| Events | EventBus, parent-child propagation | 03 |
| Interceptors | InterceptorChain, middleware pipeline | 04 |
| Memory | L1/L2/L3 three-layer memory | 05 |
| Knowledge | KnowledgeBase, RAG hybrid retrieval | 06 |
| Context | ContextOrchestrator, multi-source | 07 |
| Skills | SkillRegistry, trigger-based activation | 08 |
| Cluster | ClusterManager, auction, RewardBus | 09-10 |
| Blueprint | BlueprintForge, autonomous agent creation | — |
| Providers | BaseLLMProvider, retry, circuit breaker, thinking models | 11 |
| Runtime | Runtime, AmoebaLoop 6-phase cycle | 12-13 |
| Architecture | Full-stack pipeline, delegation, architecture diagram | 14-15 |
Project Status
Current version: v0.6.4.
APIs may evolve rapidly.
Structure will not.
Philosophy
Structure holds the shape. Life fills the shape with motion.
Community & Contact
Join the Loom developer community to discuss the next generation of Agent architecture.
License
Apache License 2.0 with Commons Clause.
Free for academic research, personal use, and internal commercial use. Commercial sale is strictly prohibited (including but not limited to paid packaging, hosting services, etc.) without authorization. See LICENSE for details.
Welcome to living agents.
File details

Details for the file loom_agent-0.7.0.tar.gz.

File metadata

- Download URL: loom_agent-0.7.0.tar.gz
- Upload date:
- Size: 82.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/2.3.2 CPython/3.11.15 Linux/6.14.0-1017-azure

File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `6ed739d25e4553e03ecf58cd0a63a01b023e46877740098d9a4e9fcb419c0019` |
| MD5 | `db4512c429c310350309b5ab5dbefc9f` |
| BLAKE2b-256 | `62ebc7edebe7da786888595bdffb910740da351b2c4f0543edce2cd412666e69` |

Details for the file loom_agent-0.7.0-py3-none-any.whl.

File metadata

- Download URL: loom_agent-0.7.0-py3-none-any.whl
- Upload date:
- Size: 116.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/2.3.2 CPython/3.11.15 Linux/6.14.0-1017-azure

File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `41f82834e454ed13a58082be90540acbe368a0deda95eaa7b2c874b9ec6dd803` |
| MD5 | `c2d7d4ac945a32b881181684cd1c8e3f` |
| BLAKE2b-256 | `3aa687844adbe0aa3edac2b1001907714392be2faaf00d4673bd675f61a4073a` |