
eager-tools-langgraph

LangGraph adapter for eager-tools — a one-line AgentMiddleware that overlaps tool execution with model streaming inside langchain.agents.create_agent.

The middleware owns the model's astream(...) loop, watches tool_call_chunks fly past, and dispatches each idempotent tool the moment its JSON block seals, not after message_stop. It returns a ModelResponse(result=[AIMessage, ToolMessage₁, …]), so the agent commits the assistant message and the eagerly-resolved tool results in a single graph step.
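The seal-then-dispatch idea can be sketched in plain Python (a simplified illustration, not the middleware's actual code; the chunk tuples and tool map are assumptions): buffer each call's streamed argument fragments by index, attempt a strict JSON parse after every delta, and fire the tool the first time the parse succeeds. For object-shaped arguments a strict parse only succeeds once the closing brace has arrived, so the parse itself is the seal.

```python
import asyncio
import json


async def eager_dispatch(chunks, tools):
    """Dispatch each tool as soon as its streamed JSON args parse cleanly.

    `chunks` is an async iterator of (index, name, args_fragment) tuples,
    loosely mimicking LangChain's tool_call_chunks; `tools` maps a tool
    name to an async callable.
    """
    buffers: dict[int, tuple[str, str]] = {}  # index -> (name, partial args)
    tasks: dict[int, asyncio.Task] = {}
    async for index, name, fragment in chunks:
        prev_name, prev_args = buffers.get(index, ("", ""))
        buffers[index] = (name or prev_name, prev_args + (fragment or ""))
        tool_name, args_json = buffers[index]
        if index in tasks:
            continue  # already sealed and dispatched
        try:
            args = json.loads(args_json)  # the "seal": the JSON block is complete
        except json.JSONDecodeError:
            continue  # still streaming, keep buffering
        # Fire immediately; the stream keeps flowing while the tool runs.
        tasks[index] = asyncio.create_task(tools[tool_name](args))
    return [await t for _, t in sorted(tasks.items())]


async def demo():
    async def stream():
        # Args for one call arrive split across two deltas.
        for chunk in [(0, "add", '{"a": 1,'), (0, None, ' "b": 2}')]:
            yield chunk

    async def add(args):
        return args["a"] + args["b"]

    return await eager_dispatch(stream(), {"add": add})


print(asyncio.run(demo()))  # prints [3]
```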

See the parent repo README.md for the eager-dispatch benchmark headline (1.20× – 1.50× over classic parallel dispatch across 16 workloads).


Install

pip install eager-tools-core eager-tools-langgraph langchain-anthropic
# (or langchain-openai — the middleware is provider-agnostic)

60-second quickstart

import asyncio
from eager_tools_langgraph import eager_middleware
from langchain.agents import create_agent
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool


@tool
def read_file(path: str) -> str:
    """Read a file."""
    with open(path) as f:
        return f.read()


# eager-tools needs a Tool-protocol object: name, idempotent flag, async __call__.
class ReadFileEager:
    name = "read_file"
    idempotent = True  # safe to fire mid-stream
    async def __call__(self, args: dict) -> str:
        with open(args["path"]) as f:
            return f.read()


async def main() -> None:
    agent = create_agent(
        model=ChatAnthropic(model_name="claude-sonnet-4-5", timeout=60.0, stop=None),
        tools=[read_file],
        middleware=[eager_middleware({"read_file": ReadFileEager()})],
    )
    result = await agent.ainvoke({"messages": [HumanMessage("read README.md and summarize")]})
    print(result["messages"][-1].content)


asyncio.run(main())
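Writing one adapter class per tool gets repetitive. A small generic wrapper can lift any sync function into the Tool-protocol shape the quickstart describes (EagerTool here is a hypothetical helper, not part of the package):

```python
import asyncio


class EagerTool:
    """Wrap a plain sync function as an eager-tools Tool-protocol object.

    Assumes the protocol from the quickstart: a `name`, an `idempotent`
    flag, and an async `__call__(args: dict)`.
    """

    def __init__(self, fn, *, name: str, idempotent: bool):
        self._fn = fn
        self.name = name
        self.idempotent = idempotent

    async def __call__(self, args: dict):
        # Run the sync function off the event loop so it can't block streaming.
        return await asyncio.to_thread(self._fn, **args)


async def demo():
    tool = EagerTool(
        lambda path: f"contents of {path}", name="read_file", idempotent=True
    )
    return await tool({"path": "README.md"})


print(asyncio.run(demo()))  # prints: contents of README.md
```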

Runnable variants:

OpenAI-compatible gateways (OpenRouter, vLLM, etc.): with ChatOpenAI(base_url=…), pre-bind tools via model.bind_tools([…]) before passing the model to create_agent, because the agent's implicit binding doesn't always reach the underlying request. Examples 07 and 08 do this; example 06 doesn't need to, since langchain-anthropic binds correctly.
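Reusing read_file and ReadFileEager from the quickstart, the pre-binding looks roughly like this (a sketch, not runnable without credentials; the base URL and model id are placeholders for your gateway's values):

```python
from eager_tools_langgraph import eager_middleware
from langchain.agents import create_agent
from langchain_openai import ChatOpenAI

# Placeholder gateway endpoint and model id; substitute your own.
model = ChatOpenAI(base_url="https://openrouter.ai/api/v1", model="openai/gpt-4o-mini")

# Bind the tool schemas explicitly so they reach the underlying request,
# then pass the bound model to create_agent as usual.
agent = create_agent(
    model=model.bind_tools([read_file]),
    tools=[read_file],
    middleware=[eager_middleware({"read_file": ReadFileEager()})],
)
```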


Version matrix

| Package | Tested floor | Why |
| --- | --- | --- |
| langgraph | >=1.1.6 | Middleware seam stable; subgraph stream_mode bug fix landed. |
| langchain | >=1.0 | langchain.agents.create_agent is the entry point. |
| langchain-core | >=1.2.14 | PR #35281 fixed parallel tool_call_chunks merge. |
| python | >=3.11 | Matches eager-tools-core. |

langgraph 1.1.6's own pyproject only floors langchain-core>=1.0.0, but a real bug in tool_call_chunks merging was fixed at 1.2.14. We pin tighter ourselves so the middleware doesn't silently lose parallel calls.
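In practice that means carrying the tighter floor in your own dependency spec, e.g. as a requirements fragment like this (a sketch; mirror it in pyproject.toml or whatever your tooling uses):

```
langgraph>=1.1.6
langchain>=1.0
langchain-core>=1.2.14   # tighter than langgraph's own floor, on purpose
```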

Provider coverage

The middleware is provider-agnostic: any BaseChatModel whose astream(...) yields LangChain AIMessageChunk with tool_call_chunks works. That covers langchain-anthropic, langchain-openai, and most community providers.

No per-provider chunk normalizer needed — LangChain has already done that work.
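As a concrete illustration of that normalized shape, here is a simplified merge over the ToolCallChunk fields LangChain exposes (name, args, id, index). This mimics, in miniature, the index-keyed folding langchain-core performs; it is not the library's actual code:

```python
def merge_tool_call_chunks(chunks: list[dict]) -> dict[int, dict]:
    """Merge streamed tool_call_chunks by their `index` field.

    `name` and `id` arrive once per call; `args` fragments concatenate
    in arrival order. The index keeps parallel calls separate even when
    their deltas interleave on the wire.
    """
    calls: dict[int, dict] = {}
    for c in chunks:
        slot = calls.setdefault(c["index"], {"name": None, "id": None, "args": ""})
        slot["name"] = slot["name"] or c.get("name")
        slot["id"] = slot["id"] or c.get("id")
        slot["args"] += c.get("args") or ""
    return calls


# Two parallel calls interleaved on the wire; the index keeps them separate.
stream = [
    {"index": 0, "name": "read_file", "id": "call_a", "args": '{"path":'},
    {"index": 1, "name": "grep", "id": "call_b", "args": '{"pattern":'},
    {"index": 0, "name": None, "id": None, "args": ' "a.txt"}'},
    {"index": 1, "name": None, "id": None, "args": ' "TODO"}'},
]
merged = merge_tool_call_chunks(stream)
print(merged[0]["args"])  # prints {"path": "a.txt"}
```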


Honest limits

  1. create_agent only. Raw StateGraph with a custom model node is out of scope for this version; the awrap_model_call middleware seam is bound to create_agent. If there's demand, a StateGraph-friendly helper lands in v0.2.1.

  2. Per-agent middleware. Subgraphs need the middleware re-registered on every create_agent instance. The framework does not auto-propagate it.

  3. add_messages ordering. The middleware commits [AIMessage, ToolMessage…] together. If you have a custom message reducer that re-sorts by timestamp, ordering is undefined.

  4. Streamed token observability. Because the middleware owns the astream(...) loop, downstream callers using stream_mode="messages" on the agent won't see token deltas from the wrapped node. Token-by-token UIs need to plumb chunks through get_stream_writer() as a custom event — open an issue if you need this baked in.
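A minimal sketch of that workaround (the surrounding hook and the "token" event key are illustrative assumptions; get_stream_writer and stream_mode="custom" are real langgraph APIs):

```python
from langgraph.config import get_stream_writer

# Inside whatever hook owns the astream(...) loop (illustrative shape):
async def stream_with_token_events(model, messages):
    writer = get_stream_writer()  # emits events on stream_mode="custom"
    async for chunk in model.astream(messages):
        if chunk.content:
            writer({"token": chunk.content})  # surface deltas to the caller
        ...  # normal chunk / tool_call_chunk handling continues here
```

Consumers would then read the deltas via agent.astream(inputs, stream_mode="custom").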

  5. Provider chunk shape variance. The adapter trusts LangChain's normalized tool_call_chunks shape. Spec-compliant providers Just Work; for known upstream issues — GPT-5 content+tool_calls interleaving (langchain-ai/langchain#6510), Gemini parallel call drops (#10196) — the fix has to land upstream first.


Running tests

make test-langgraph                  # 7 replay + 1 create_agent smoke test

Design rationale

See ~/.claude-duc/plans/plan-langgraph-adapter.md for the full design walkthrough — why awrap_model_call over ToolNode subclassing, how the seal/dispatch loop maps to ModelResponse.result, and the KeyError: 'model' gotcha that requires the no-op after_model override.

For the underlying eager-dispatch mechanism (provider-agnostic), see the top-level METHOD.md.
