
langchain-middleware-stack


Declarative middleware ordering for LangChain Deep Agents using stable slug-based DAG resolution.


Docs: Deep Agents middleware · Community issue draft

After you enable GitHub Pages from the /docs folder, add your site URL to docs/index.html (repository link) and to pyproject.toml under [project.urls].

Why

In LangChain Deep Agents, middleware is a first-class control layer over model calls, tools, and state. The framework still composes it as a positional middleware=[...] list on create_agent. In that model, ordering is semantics: the first entry is the outermost wrapper (for example around wrap_model_call), so reordering changes retries, timeouts, logging, and policy in non-obvious ways.
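The positional semantics can be made concrete with a minimal, framework-free sketch (`make_middleware`, `model_call`, and `trace` are illustrative names, not LangChain APIs):

```python
# Toy illustration of positional middleware composition: the first entry
# in the list becomes the outermost wrapper around the model call.
trace = []

def make_middleware(name):
    """Return a wrapper factory that records enter/exit order."""
    def wrap(handler):
        def wrapped(request):
            trace.append(f"{name}:enter")
            result = handler(request)
            trace.append(f"{name}:exit")
            return result
        return wrapped
    return wrap

def model_call(request):
    trace.append("model")
    return f"response to {request}"

# middleware=[logging, retry]: index 0 ends up outermost.
middleware = [make_middleware("logging"), make_middleware("retry")]
handler = model_call
for m in reversed(middleware):  # apply innermost first, so index 0 wraps last
    handler = m(handler)

handler("hi")
# trace == ["logging:enter", "retry:enter", "model", "retry:exit", "logging:exit"]
```

Swapping the two list entries reverses which wrapper sees the call first, which is exactly why positional edits silently change retry and logging behavior.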

The underlying issue is not middleware itself — it is that composition is positional instead of declarative. That breaks down in production because:

  • Fragility — Inserting or reordering entries changes behavior; constraints like “after logging, before retry” are not declared on the middleware type.
  • Poor composability — Separate teams or packages cannot merge contributions without one owner of the final list and implicit coordination.
  • Hidden coupling — Dependencies are expressed as indices, not as explicit, reviewable constraints.
  • No validated guarantees — Ordering invariants and dependency relationships are not enforced before runtime.

**langchain-middleware-stack** addresses this with constraint-based composition (DAG + topological sort, stable Kahn tie-break). You declare intent with four primitives:

| Primitive | Role |
| --- | --- |
| `slug` | Stable, unique identity for each middleware |
| `after` / `before` | Self-declared ordering constraints |
| `MiddlewareStack` | Topological resolver (Kahn's algorithm, stable tie-break) |
| `wires` | Cross-middleware attribute injection at resolve time |
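The resolver in the table above can be illustrated with a short, self-contained sketch of Kahn's algorithm with an insertion-order tie-break (plain dicts stand in for middleware; this is a sketch of the technique, not the package's actual implementation):

```python
def resolve(middlewares):
    """Topological sort by slug/after/before with a stable, insertion-order
    tie-break (a sketch of Kahn's algorithm, not the package's code)."""
    slugs = [m["slug"] for m in middlewares]
    index = {s: i for i, s in enumerate(slugs)}
    indegree = {s: 0 for s in slugs}
    successors = {s: [] for s in slugs}  # edge u -> v: u must precede v
    for m in middlewares:
        for dep in m.get("after", ()):
            successors[dep].append(m["slug"])
            indegree[m["slug"]] += 1
        for succ in m.get("before", ()):
            successors[m["slug"]].append(succ)
            indegree[succ] += 1
    ready = sorted((s for s in slugs if indegree[s] == 0), key=index.get)
    ordered = []
    while ready:
        u = ready.pop(0)  # lowest insertion index first: the stable tie-break
        ordered.append(u)
        for v in successors[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                ready.append(v)
        ready.sort(key=index.get)
    if len(ordered) != len(slugs):
        unresolved = [s for s in slugs if s not in ordered]
        raise ValueError(f"cycle detected among: {unresolved}")
    return ordered


print(resolve([
    {"slug": "retry", "after": ("logging",)},
    {"slug": "logging"},
    {"slug": "tracing", "after": ("logging",)},
]))
# -> ['logging', 'retry', 'tracing']
```

The stable tie-break matters for reproducibility: among nodes whose constraints are all satisfied, insertion order decides, so re-running the resolver on the same inputs always yields the same list.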

Installation

pip install langchain-middleware-stack

Zero runtime dependencies. Python ≥ 3.9.

Demo notebook

[notebooks/deep-agents-middleware.ipynb](notebooks/deep-agents-middleware.ipynb) walks through baseline vs improved, both using a real [ChatOpenAI](https://python.langchain.com/docs/integrations/chat/openai/) model (OPENAI_API_KEY required for those cells):

- **Baseline**: [create_agent](https://reference.langchain.com/python/langchain/agents/create_agent) with a manually ordered middleware=[...] list.
- **Improved**: the same middleware types added to a MiddlewareStack in scrambled order; resolve() produces the LangChain list (outermost first), which is then passed to create_agent.

Appendices at the end: an offline wrap(handler) toy stack and optional LangChain FakeListChatModel — not the main teaching path.

make notebook   # from a dev setup with `make setup`

Quick start

from langchain_middleware_stack import MiddlewareStack
from langchain_middleware_stack.middleware import LoggingMiddleware, RetryMiddleware

stack = MiddlewareStack()
stack.add([RetryMiddleware(max_retries=3), LoggingMiddleware()])
# or: stack.add(RetryMiddleware(...)).add(LoggingMiddleware())
ordered = stack.resolve()
# -> [LoggingMiddleware, RetryMiddleware]
# RetryMiddleware declares after=("logging",), so resolve() places it
# inside LoggingMiddleware: every retry is logged

Writing your own middleware

from typing import ClassVar
from langchain_middleware_stack import BaseMiddleware

class TracingMiddleware(BaseMiddleware):
    slug: ClassVar[str] = "tracing"
    after: ClassVar[tuple[str, ...]] = ("logging",)  # run after logging

    def wrap(self, handler, *args, **kwargs):
        # `tracer` is your tracing client (e.g. an OpenTelemetry tracer)
        with tracer.start_span(handler.__name__):
            return handler(*args, **kwargs)

    async def awrap(self, handler, *args, **kwargs):
        with tracer.start_span(handler.__name__):
            return await handler(*args, **kwargs)
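The wrap(handler, ...) contract composes by folding: each resolved middleware wraps the handler produced by the one after it. This stand-in exercises the pattern offline (StubMiddleware and compose are illustrative names, not package APIs):

```python
class StubMiddleware:
    """Minimal stand-in for a middleware exposing the wrap(handler, ...) hook."""
    def __init__(self, slug, log):
        self.slug = slug
        self.log = log

    def wrap(self, handler, *args, **kwargs):
        self.log.append(self.slug)  # record the order middleware runs in
        return handler(*args, **kwargs)


def compose(ordered, final):
    """Fold an outermost-first middleware list into a single callable."""
    handler = final
    for mw in reversed(ordered):
        # bind the current handler and middleware into a fresh closure
        handler = (lambda h, m: lambda *a, **kw: m.wrap(h, *a, **kw))(handler, mw)
    return handler


log = []
chain = compose(
    [StubMiddleware("logging", log), StubMiddleware("tracing", log)],
    lambda prompt: prompt.upper(),
)
print(chain("hello"))  # -> HELLO; log is now ["logging", "tracing"]
```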

BaseMiddleware is a mixin: combine it with LangChain's AgentMiddleware when you pass middleware into create_agent. Subclasses must implement the agent hooks they need (typically wrap_model_call) and declare tools (often ()) on the class.

from typing import ClassVar

from langchain.agents.middleware import AgentMiddleware
from langchain_middleware_stack import BaseMiddleware

class MyMiddleware(AgentMiddleware, BaseMiddleware):
    slug: ClassVar[str] = "my-middleware"
    tools: ClassVar[tuple] = ()

    def wrap_model_call(self, request, handler):
        # intercept model calls; delegate with handler(request)
        return handler(request)

The notebook uses this pattern end-to-end for the baseline and improved scenarios.

Cross-middleware wiring

wires injects an attribute from a resolved upstream middleware into your middleware at resolve time:

from typing import ClassVar

from langchain_middleware_stack import BaseMiddleware

class ConsumerMiddleware(BaseMiddleware):
    slug: ClassVar[str] = "consumer"
    after: ClassVar[tuple[str, ...]] = ("provider",)
    wires: ClassVar[dict[str, tuple[str, str]]] = {
        "_shared_fn": ("provider", "exported_fn")
    }
    # _shared_fn is injected from provider.exported_fn after resolve()
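Conceptually, wiring is an attribute copy performed once ordering is known. A toy version of the idea (apply_wires, Provider, and Consumer are illustrative, not the package's implementation):

```python
def apply_wires(middlewares):
    """Copy each wired attribute from its provider instance onto the consumer.
    Toy version of resolve-time wiring; where this raises LookupError,
    the real package raises MiddlewareWiringError."""
    by_slug = {m.slug: m for m in middlewares}
    for m in middlewares:
        for attr, (source_slug, source_attr) in getattr(m, "wires", {}).items():
            if source_slug not in by_slug:
                raise LookupError(f"{m.slug!r} wires from unknown slug {source_slug!r}")
            setattr(m, attr, getattr(by_slug[source_slug], source_attr))


class Provider:
    slug = "provider"

    def exported_fn(self, x):
        return x * 2


class Consumer:
    slug = "consumer"
    wires = {"_shared_fn": ("provider", "exported_fn")}


provider, consumer = Provider(), Consumer()
apply_wires([provider, consumer])
print(consumer._shared_fn(21))  # -> 42
```

The after=("provider",) constraint in the snippet above guarantees the provider is resolved before the consumer needs its attribute.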

Error reference

| Exception | Raised when |
| --- | --- |
| `MiddlewareResolutionError` | Base class for all stack build errors |
| `MiddlewareCycleError` | The dependency graph contains a cycle |
| `MiddlewareDuplicateSlugError` | Two middleware share the same slug |
| `MiddlewareWiringError` | Cross-middleware wiring fails |
| `RetryExhaustedError` | `RetryMiddleware` runs out of attempts |
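For instance, mutually dependent after constraints form a cycle. A toy DFS check sketches what a resolver must detect before it can report a cycle error (find_cycle is illustrative, not a package API):

```python
def find_cycle(constraints):
    """Return True if the slug -> after-tuple mapping contains a cycle
    (a toy three-color DFS; illustrative only)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {slug: WHITE for slug in constraints}

    def visit(slug):
        color[slug] = GRAY
        for dep in constraints.get(slug, ()):
            if color.get(dep) == GRAY:
                return True  # back edge: dep's own traversal is still open
            if color.get(dep) == WHITE and visit(dep):
                return True
        color[slug] = BLACK
        return False

    return any(color[slug] == WHITE and visit(slug) for slug in constraints)


print(find_cycle({"a": ("b",), "b": ("a",)}))              # -> True
print(find_cycle({"retry": ("logging",), "logging": ()}))  # -> False
```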

LangChain upstream and related initiatives

Pointers to upstream work on declarative ordering and composition for agent middleware, the same problem this package targets (not exhaustive):

| Link | Kind | Topic |
| --- | --- | --- |
| #33885 | issue (langchain-ai/langchain) | Middleware declaring dependencies; tracked as a possible 1.x evolution |
| #2126 | issue (langchain-ai/deepagents) | Less rigid middleware composition at the Deep Agents entry point |
| #34514 | PR (langchain-ai/langchain, open) | `depends_on` on middleware classes and topological ordering inside `create_agent` |

This package is a standalone resolver that works with the current harness; how much of it overlaps with #34514, should that PR merge, is an integration detail for later. A maintainer-facing draft lives in docs/github-issue-langchain-community.md (fill it in before opening a tracking issue).

License

Apache-2.0

Author

João Gabriel Lima · LinkedIn · joaogabriellima.eng@gmail.com · jambu.ai
