LangGraph adapter for eager tool calling — middleware for langchain.agents.create_agent.
# eager-tools-langgraph

LangGraph adapter for `eager-tools` — a one-line `AgentMiddleware` that overlaps tool execution with model streaming inside `langchain.agents.create_agent`.
The middleware owns the model's `astream(...)` loop, watches `tool_call_chunks`
fly past, and dispatches each idempotent tool the moment its JSON block seals —
not after `message_stop`. It returns a `ModelResponse(result=[AIMessage, ToolMessage₁, …])`
so the agent commits the assistant message and the eagerly resolved tool results in
a single graph step.
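The "seals" moment can be illustrated with a minimal, self-contained sketch (this is an illustration of the idea, not the adapter's actual code): accumulate each tool call's argument fragments and dispatch as soon as the buffer parses as complete JSON. A real implementation would also key off the chunk stream closing the block, since a prefix can in principle parse prematurely.

```python
import json

def try_seal(buffer: str):
    """Return parsed args once the accumulated fragment is complete JSON, else None."""
    try:
        return json.loads(buffer)
    except json.JSONDecodeError:
        return None

# Simulated tool_call_chunk argument fragments arriving over a stream.
fragments = ['{"pa', 'th": "REA', 'DME.md"}']
buf = ""
sealed = None
for frag in fragments:
    buf += frag
    sealed = try_seal(buf)
    if sealed is not None:
        break  # the idempotent tool can be dispatched here, before message_stop

assert sealed == {"path": "README.md"}
```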
See the parent repo README.md for the eager-dispatch
benchmark headline (1.20× – 1.50× over classic parallel dispatch across 16
workloads).
## Install

```shell
pip install eager-tools-core eager-tools-langgraph langchain-anthropic
# (or langchain-openai — the middleware is provider-agnostic)
```
## 60-second quickstart

```python
import asyncio

from eager_tools_langgraph import eager_middleware
from langchain.agents import create_agent
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool


@tool
def read_file(path: str) -> str:
    """Read a file."""
    return open(path).read()


# eager-tools needs a Tool-protocol object: name, idempotent flag, async __call__.
class ReadFileEager:
    name = "read_file"
    idempotent = True  # safe to fire mid-stream

    async def __call__(self, args: dict) -> str:
        return open(args["path"]).read()


async def main() -> None:
    agent = create_agent(
        model=ChatAnthropic(model_name="claude-sonnet-4-5", timeout=60.0, stop=None),
        tools=[read_file],
        middleware=[eager_middleware({"read_file": ReadFileEager()})],
    )
    result = await agent.ainvoke({"messages": [HumanMessage("read README.md and summarize")]})
    print(result["messages"][-1].content)


asyncio.run(main())
```
Runnable variants:

- `examples/06_langgraph_live.py` — `make example-6`; needs `ANTHROPIC_API_KEY` + `langchain-anthropic`.
- `examples/07_langgraph_openrouter.py` — `make example-7`; any tool-capable OpenRouter model via `langchain-openai`.
- `examples/08_langgraph_compare.py` — `make example-8`; runs the same workload sequential vs parallel vs eager and prints a timing summary.
OpenAI-compatible gateways (OpenRouter, vLLM, etc.): with
`ChatOpenAI(base_url=…)`, pre-bind tools via `model.bind_tools([…])` before passing to `create_agent` — the agent's implicit binding doesn't always reach the underlying request. Examples 07 and 08 do this; example 06 doesn't need to (`langchain-anthropic` binds correctly).
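A hedged sketch of the pre-binding pattern. The `base_url`, model slug, and API key below are placeholders, not defaults; the guard lets the snippet degrade gracefully when `langchain-openai` isn't installed.

```python
bound = None
try:
    from langchain_core.tools import tool
    from langchain_openai import ChatOpenAI

    @tool
    def read_file(path: str) -> str:
        """Read a file."""
        return open(path).read()

    model = ChatOpenAI(
        model="openai/gpt-4o-mini",              # placeholder OpenRouter slug
        base_url="https://openrouter.ai/api/v1",
        api_key="sk-placeholder",                # real key comes from your env
    )
    # Explicit bind, done BEFORE create_agent, so the tool schema
    # reliably reaches the gateway's underlying request.
    bound = model.bind_tools([read_file])
    # agent = create_agent(model=bound, tools=[read_file], middleware=[...])
except ImportError:
    pass  # langchain-openai not installed; the pattern above still applies
```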
## Version matrix

| Package | Tested floor | Why |
|---|---|---|
| `langgraph` | `>=1.1.6` | Middleware seam stable; subgraph `stream_mode` bug fix landed. |
| `langchain` | `>=1.0` | `langchain.agents.create_agent` is the entry point. |
| `langchain-core` | `>=1.2.14` | PR #35281 fixed the parallel `tool_call_chunks` merge. |
| `python` | `>=3.11` | Matches `eager-tools-core`. |
langgraph 1.1.6's own pyproject only floors `langchain-core>=1.0.0`, but a
real bug in `tool_call_chunks` merging was fixed at 1.2.14. We pin tighter
ourselves so the middleware doesn't silently lose parallel calls.
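If you want to check the tighter pin at runtime, a lightweight guard might look like the sketch below. This is illustrative only — production code should compare with `packaging.version` rather than naive tuple comparison, which chokes on pre-release suffixes.

```python
from importlib.metadata import PackageNotFoundError, version

def meets_floor(installed: str, floor: str) -> bool:
    """Naive dotted-version comparison (use packaging.version for real code)."""
    parse = lambda v: tuple(int(p) for p in v.split("."))
    return parse(installed) >= parse(floor)

try:
    core = version("langchain-core")
    if not meets_floor(core, "1.2.14"):
        raise RuntimeError(
            f"langchain-core {core} predates the tool_call_chunks merge fix"
        )
except (PackageNotFoundError, ValueError):
    pass  # not installed here, or a pre-release version string; skip the check
```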
## Provider coverage

The middleware is provider-agnostic: any `BaseChatModel` whose `astream(...)`
yields LangChain `AIMessageChunk`s with `tool_call_chunks` works. That covers
`langchain-anthropic`, `langchain-openai`, and most community providers.
No per-provider chunk normalizer is needed — LangChain has already done that work.
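As a rough illustration of why the normalized shape matters (this is a sketch, not langchain-core's actual implementation): once every chunk carries an `index`, interleaved fragments from parallel calls merge with a trivial accumulator.

```python
# Merge streamed tool_call_chunks into complete calls, keyed by index.
def merge_tool_call_chunks(chunks):
    calls = {}
    for c in chunks:
        slot = calls.setdefault(c["index"], {"name": "", "args": ""})
        if c.get("name"):
            slot["name"] = c["name"]          # name arrives once, usually first
        slot["args"] += c.get("args") or ""   # args arrive as string fragments
    return [calls[i] for i in sorted(calls)]

# Two parallel calls whose fragments interleave on the wire.
chunks = [
    {"index": 0, "name": "read_file", "args": ""},
    {"index": 0, "name": None, "args": '{"path":'},
    {"index": 1, "name": "list_dir", "args": '{"path": "."}'},
    {"index": 0, "name": None, "args": ' "README.md"}'},
]
merged = merge_tool_call_chunks(chunks)
assert merged[0] == {"name": "read_file", "args": '{"path": "README.md"}'}
assert merged[1]["name"] == "list_dir"
```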
## Honest limits

- **`create_agent` only.** Raw `StateGraph` with a custom model node is out of scope this version — the `awrap_model_call` middleware seam is bound to `create_agent`. If there's demand, a `StateGraph`-friendly helper lands in v0.2.1.
- **Per-agent middleware.** Subgraphs need the middleware re-registered on every `create_agent` instance. The framework does not auto-propagate it.
- **`add_messages` ordering.** The middleware commits `[AIMessage, ToolMessage…]` together. If you have a custom message reducer that re-sorts by timestamp, ordering is undefined.
- **Streamed token observability.** Because the middleware owns the `astream(...)` loop, downstream callers using `stream_mode="messages"` on the agent won't see token deltas from the wrapped node. Token-by-token UIs need to plumb chunks through `get_stream_writer()` as a `custom` event — open an issue if you need this baked in.
- **Provider chunk shape variance.** The adapter trusts LangChain's normalized `tool_call_chunks` shape. Spec-compliant providers Just Work; for known upstream issues — GPT-5 content+tool_calls interleaving (langchain-ai/langchain#6510), Gemini parallel call drops (#10196) — the fix has to land upstream first.
## Running tests

```shell
make test-langgraph   # 7 replay + 1 create_agent smoke test
```
## Design rationale

See ~/.claude-duc/plans/plan-langgraph-adapter.md for the full design
walkthrough — why `awrap_model_call` over `ToolNode` subclassing, how the
seal/dispatch loop maps to `ModelResponse.result`, and the `KeyError: 'model'`
gotcha that requires the no-op `after_model` override.

For the underlying eager-dispatch mechanism (provider-agnostic), see the
top-level METHOD.md.