
agloom

The intelligent fabric for AI agents.

Nine execution patterns. Auto-classified. Self-learning. One API.
Drop-in replacement for LangChain's create_agent — with superpowers.



Documentation · PyPI · Examples · Issues


You write this:

agent = create_agent(model=llm, tools=[search, calculate], name="analyst")
result = await agent.ainvoke("Analyze Q3 sales across 3 regions and recommend strategy")

agloom does this:

1. Classifies query          → SUPERVISOR (multi-faceted, parallelizable)
2. Decomposes into 3 tasks   → [Region A, Region B, Region C]
3. Spawns parallel workers   → 3 LLM calls running concurrently
4. Synthesizes results       → Unified strategy recommendation
5. Learns the pattern        → Saved as reusable skill for next time
6. Auto-evaluates quality    → Scored, tracked, trend-detected

Total code you wrote: 2 lines. Everything else — classification, routing, parallelism, synthesis, learning, evaluation — is handled automatically.
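
You can verify that trace yourself on the returned result. The fields below (pattern_used, steps, token_usage) are the same ones shown in the Quick Start later on this page:

print(result.pattern_used.value)   # → SUPERVISOR for the query above
print(len(result.steps))           # every classify/worker/synthesis step, timed and logged
print(result.token_usage)          # token counts aggregated across all LLM calls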




Why Teams Choose agloom

Without agloom

# Decide pattern manually per query type
# Build custom routing logic
# Wire up memory yourself
# Implement retry/timeout logic
# Build feedback pipeline
# Add streaming support
# Handle concurrent workers
# Track token costs
# Set up circuit breakers
# ...weeks of infrastructure work

With agloom

from agloom import create_agent

agent = create_agent(
    model=llm,
    tools=[search, calculate],
    name="analyst",
)

result = await agent.ainvoke("Your query here")
# That's it. Everything else is automatic.

The Real Cost of Multi-Agent Systems

Building a single agent is manageable. Building a multi-agent system — where agents coordinate, delegate, run in parallel, share state, handle failures independently, and synthesize results — is where projects stall for weeks.

Here's what a production multi-agent pipeline actually requires:

# ❌ What you'd build yourself for a multi-agent research pipeline

import asyncio
from typing import TypedDict

from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langgraph.graph import StateGraph
from langgraph.prebuilt import create_react_agent

# 1. Define a supervisor agent that decomposes queries
supervisor_prompt = ChatPromptTemplate.from_template("""You are a research supervisor. Break the query
into subtasks and assign each to a specialist worker...""")
supervisor_chain = supervisor_prompt | llm | JsonOutputParser()

# 2. Define individual worker agents (each with their own tools, prompts, memory)
researcher = create_react_agent(llm, [search_tool], prompt=researcher_prompt)
analyst = create_react_agent(llm, [calc_tool], prompt=analyst_prompt)
writer = create_react_agent(llm, [format_tool], prompt=writer_prompt)

# 3. Build a state graph for orchestration
class SupervisorState(TypedDict):
    messages: list
    subtasks: list
    worker_results: dict
    final_output: str

graph = StateGraph(SupervisorState)
graph.add_node("supervisor", supervisor_node)
graph.add_node("researcher", researcher_node)
graph.add_node("analyst", analyst_node)
graph.add_node("writer", writer_node)
graph.add_node("synthesizer", synthesizer_node)

# 4. Define routing logic
graph.add_conditional_edges("supervisor", route_to_workers)
graph.add_edge("researcher", "synthesizer")
graph.add_edge("analyst", "synthesizer")
graph.add_edge("writer", "synthesizer")

# 5. Handle parallel execution
async def run_workers(state):
    tasks = [run_worker(w, state) for w in state["subtasks"]]
    results = await asyncio.gather(*tasks, return_exceptions=True)
    # Handle partial failures...
    # Retry failed workers...
    # Respect rate limits...
    # Track token usage per worker...
    return {"worker_results": dict(zip(state["subtasks"], results))}

# 6. Add error handling, timeouts, retries per worker
# 7. Wire up memory sharing between workers
# 8. Add streaming from each worker
# 9. Build synthesis logic to merge parallel results
# 10. Track which pattern works best for which query type
# ...easily 300+ lines before it's production-ready

Now here's the same thing with agloom:

# ✅ With agloom — same result, zero orchestration code

agent = create_agent(
    model=llm,
    tools=[search_tool, calc_tool, format_tool],
    name="research-team",
)

result = await agent.ainvoke("Research renewable energy trends, analyze the economics, and write a summary")
# agloom auto-selects SUPERVISOR, spawns parallel workers,
# synthesizes results, tracks tokens, and learns the pattern.

300+ lines of orchestration code → 3 lines. The supervisor logic, worker management, parallel execution, failure handling, result synthesis, and pattern learning are all handled internally. You focus on what your agent should do. agloom figures out how.


What You Get Out of the Box

Capability                  What it means for you
9 Execution Patterns        DIRECT, REACT, SUPERVISOR, PIPELINE, PLANNER_EXECUTOR, REFLECTION, SWARM, BLACKBOARD, HYBRID_DAG — auto-selected per query
Zero-Config Classification  Your agent picks the right strategy for every query. No if-else routing. No manual pattern selection
Skill Learning              Agents remember what worked. Next time a similar query arrives, they already know the approach
Auto-Evaluation             Every response is scored. Quality degrades? agloom detects the trend and adjusts
Memory                      Session memory (always on) + long-term memory + passive injection. Pass thread_id for sessions, store= for persistence
Streaming                   Real-time token streaming + structured events in a single API. Build ChatGPT-style "thinking" UIs with tool call tracking
Step Tracing                Full audit trail: classify → tool call → worker → synthesis. Every step timed and logged
Token Tracking              Know exactly what each query costs in tokens, aggregated across all LLM calls
Human-in-the-Loop           4 levels of control: pause before patterns, tools, or workers, or send runtime signals
Task Delegation             4 patterns: as_tool(), transparent hand-off, hierarchical delegates=[], background adelegate_background(). Agents delegate to agents (see the sketch after this table)
Frozen Agents               Batch mode: classify once, execute thousands. Save ~300ms per call
Production Guards           Circuit breaker, rate limiter, configurable timeouts, retries, concurrency limits — built in
LangSmith                   Auto-detected. Set the env var, see every trace. No code changes
MCP Support                 Connect to Model Context Protocol servers for external tool discovery
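
For instance, here is a minimal sketch of the as_tool() delegation pattern named in the table above. The exact signature lives in the Task Delegation guide; llm and search are the objects from the earlier examples:

from agloom import create_agent

# A specialist agent exposed as a tool of a coordinator agent.
# Sketch only: as_tool() is the wrapper named in the table above.
researcher = create_agent(model=llm, tools=[search], name="researcher")

coordinator = create_agent(
    model=llm,
    tools=[researcher.as_tool()],   # research queries get delegated to the sub-agent
    name="coordinator",
)

result = await coordinator.ainvoke("Summarize this week's fusion energy news")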

Get Started in 60 Seconds

Install

pip install agloom          # or: uv add agloom
pip install agloom[groq]    # with Groq provider
pip install agloom[all]     # all providers

Run

import asyncio
from langchain_groq import ChatGroq
from agloom import create_agent

async def main():
    llm = ChatGroq(model="meta-llama/llama-4-scout-17b-16e-instruct")
    agent = create_agent(model=llm, name="my-first-agent")

    result = await agent.ainvoke("What causes auroras?")
    print(result.output)
    print(f"Pattern: {result.pattern_used.value}")   # → DIRECT
    print(f"Steps:   {len(result.steps)}")            # → 2
    print(f"Tokens:  {result.token_usage}")           # → {input: 48, output: 256}

asyncio.run(main())

That's 7 lines to a production-grade agent with auto-classification, step tracing, and token tracking.

Conversation Memory

Session memory is always active. Pass thread_id to maintain context across calls:

# Same thread_id = agent remembers previous turns
result = await agent.ainvoke("My name is Alice", thread_id="session-1")
result = await agent.ainvoke("What's my name?", thread_id="session-1")
# → "Your name is Alice"

Streaming — Because No One Likes Loading Spinners

# Token streaming — users see the response being typed
async for token in agent.astream("Explain quantum computing"):
    print(token, end="", flush=True)

# Event streaming — build ChatGPT-style "thinking" UIs
# (show_spinner / show_step / show_result are stand-ins for your UI code)
async for event in agent.astream_events("Research renewable energy"):
    if event.type == "thinking":
        show_spinner("Analyzing query...")
    elif event.type == "token":
        print(event.data["content"], end="", flush=True)  # real-time tokens
    elif event.type == "tool_call":
        show_step(f"Calling {event.data['name']} [{event.data.get('id', '')}]...")
    elif event.type == "tool_result":
        show_step(f"Result [{event.data.get('id', '')}]: {event.data['output'][:50]}")
    elif event.type == "worker_end":
        show_step(f"Worker finished: {event.data['name']}")
    elif event.type == "done":
        show_result(event.data["result"]["output"])
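
Because astream_events is a plain async generator, it drops straight into a web framework. A sketch of a server-sent-events endpoint, assuming FastAPI (covered in the Production Guide) and the agent from above:

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

@app.get("/chat")
async def chat(q: str):
    async def gen():
        # Forward only token events as SSE; filter other event types as needed
        async for event in agent.astream_events(q):
            if event.type == "token":
                yield f"data: {event.data['content']}\n\n"
    return StreamingResponse(gen(), media_type="text/event-stream")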

Battle-Tested Reliability

agent = create_agent(
    model=llm,
    tools=[...],
    name="production-agent",

    # Concurrency
    max_concurrent=8,           # 8 parallel workers
    rate_limit=10.0,            # max 10 LLM calls/sec

    # Resilience
    max_retries=3,              # retry failed workers
    llm_timeout=60.0,           # 60s timeout per LLM call
    # + built-in circuit breaker (automatic)

    # Memory (session memory is auto-created; store enables long-term features)
    store=InMemoryStore(),      # long-term memory + skills + feedback

    # Quality
    feedback_handler=LTSFeedbackHandler(),  # auto-eval + user feedback
)

Every parameter has a sensible default. Start with create_agent(model=llm) and add what you need.


Who Is This For?

Role              Why you'll care
Developers        Stop writing agent infrastructure. create_agent gives you 9 patterns, memory, streaming, and production guards in one function call
Tech Leads        Standardize your team's agent architecture. One API, consistent behavior, built-in observability
Product Managers  Ship agent features faster. What took weeks of custom plumbing now takes one parameter
AI Engineers      Focus on prompts and tools, not routing logic. agloom handles the orchestration

Documentation

Everything you need at agloom.readthedocs.io:

Guide                  What you'll learn
Why agloom?            The 6 problems every agent builder faces and how we solve them
Quick Start            First agent in 5 lines of code
Execution Patterns     All 9 patterns with diagrams and examples
All Parameters         Every create_agent parameter explained
Streaming & Events     Build responsive UIs with streaming APIs
Middleware             Transform queries and results with hooks
MCP Servers            Connect to external tool servers
Task Delegation        4 patterns for agent-to-agent delegation
Production Guide       FastAPI, Docker, testing, multi-tenancy, structured output
Errors & Warnings      Every error message, what causes it, how to fix it
LangSmith Integration  Zero-config tracing and observability

Requirements

  • Python 3.11+
  • LLM API key — Groq, OpenAI, NVIDIA, HuggingFace, or any LangChain-compatible provider

Contributing

We welcome contributions. See CONTRIBUTING.md for setup and guidelines.


License

Apache 2.0 — use it freely in personal and commercial projects.



Built with care by MEDHIRA

hello.medhira@gmail.com · GitHub · PyPI

Founded by S Muni Harish
