a2a-langgraph
A2A protocol implementation for LangGraph agents. The opinionated way to expose a LangGraph agent as an A2A server.
Why this exists
The A2A protocol is quickly becoming the standard for agent interoperability. If you build with LangGraph, connecting your graph to that ecosystem requires a fair amount of boilerplate: wiring up an AgentExecutor, a DefaultRequestHandler, a TaskStore, an agent card, and mounting everything onto an HTTP server.
Two alternatives exist, but neither works well today.
The first is 5enxia/langgraph-a2a-server. It targets an outdated version of the A2A SDK and is no longer maintained.
The second is LangChain's own AgentServer. It handles the wiring for you but trades away control of your API. You lose the ability to define and manage your own endpoints alongside the agent, which matters in real production services. This gap is actively discussed in the LangChain forum.
a2a-langgraph takes a third path: a small, opinionated adapter that mounts A2A endpoints directly onto your existing FastAPI app, leaving everything else under your control.
Two stabilization events make this the right moment to build something lasting:
- LangGraph >= 1.0 shipped in November 2025, stabilizing the graph and state APIs.
- a2a-sdk 1.0 released in April 2026, making the protocol ready for production use.
What it does
a2a-langgraph wraps a compiled LangGraph in an A2A AgentExecutor and mounts two endpoints onto your FastAPI app:
- GET {mount_path}/.well-known/agent-card.json: agent metadata
- POST {mount_path}/: the A2A JSON-RPC endpoint (message/send, message/stream, task operations)
You bring the graph. The library handles the rest.
Installation
pip install "a2a-langgraph[fastapi]"
Or with uv:
uv add "a2a-langgraph[fastapi]"
The fastapi extra pulls in FastAPI, Uvicorn, and the A2A HTTP server components.
Quickstart
```python
from fastapi import FastAPI
from langchain_openai import ChatOpenAI
from langgraph.graph import END, START, MessagesState, StateGraph

from a2a_langgraph.fastapi import add_langgraph_fastapi_endpoint

# Build your LangGraph
llm = ChatOpenAI(model="gpt-4o-mini")

def chatbot(state: MessagesState):
    return {"messages": [llm.invoke(state["messages"])]}

builder = StateGraph(MessagesState)
builder.add_node("chatbot", chatbot)
builder.add_edge(START, "chatbot")
builder.add_edge("chatbot", END)
graph = builder.compile()

# Create your FastAPI app
app = FastAPI()

# Mount the A2A endpoint
add_langgraph_fastapi_endpoint(
    app,
    graph=graph,
    mount_path="/agent",
    agent_name="My LangGraph Agent",
    agent_description="A simple chatbot exposed over A2A",
    base_url="http://localhost:8000",
)
```
Run with:
uvicorn main:app --reload
Your agent is now available at http://localhost:8000/agent and discoverable via its agent card.
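To sanity-check the running server, you can POST a message/send request to the JSON-RPC endpoint. The payload below is a hedged sketch of the A2A request shape as I understand the protocol; field names and required members should be verified against the A2A specification or built with the a2a-sdk client types rather than by hand:

```python
import json
import uuid

# Sketch of an A2A message/send JSON-RPC request body. The exact
# message schema is defined by the A2A spec; this illustrates the
# general shape only.
payload = {
    "jsonrpc": "2.0",
    "id": str(uuid.uuid4()),
    "method": "message/send",
    "params": {
        "message": {
            "role": "user",
            "messageId": str(uuid.uuid4()),
            "parts": [{"kind": "text", "text": "Hello, agent!"}],
        }
    },
}

# POST this as JSON to http://localhost:8000/agent
print(json.dumps(payload, indent=2))
```

The agent card itself is a plain GET to http://localhost:8000/agent/.well-known/agent-card.json, so any HTTP client works for discovery.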
Configuration
add_langgraph_fastapi_endpoint accepts the most common options as keyword arguments: agent_name, agent_description, version, base_url, and skills. You can also pass a custom task_store or an executor_cls to override the default executor.
For the full list of options, see A2ALangGraphConfig.
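As an illustration, a fuller configuration might be collected like this. The keyword names come from the options listed above; the skill entry is shown as a plain dict for brevity, whereas the real API presumably expects the a2a-sdk's agent-card skill type, so treat the values and shapes as assumptions:

```python
# Hypothetical configuration values; only the keyword names
# (agent_name, agent_description, version, base_url, skills) are
# taken from the documented options.
config = {
    "agent_name": "Research Assistant",
    "agent_description": "Searches and summarizes internal documents",
    "version": "1.2.0",
    "base_url": "https://agents.example.com",
    # Skill entries sketched as plain dicts; the authoritative types
    # come from the a2a-sdk agent-card model.
    "skills": [
        {
            "id": "summarize",
            "name": "Summarize",
            "description": "Summarize a document in a few sentences",
            "tags": ["text", "summarization"],
        }
    ],
}
```

These would then be splatted into the mount call, e.g. add_langgraph_fastapi_endpoint(app, graph=graph, mount_path="/agent", **config).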
Custom output mapping
By default, the library reads the last AIMessage from the messages key of your graph's state. If your graph uses a different output structure, pass a custom output_mapper:
```python
from a2a_langgraph.fastapi import add_langgraph_fastapi_endpoint

def my_output_mapper(result: dict) -> str:
    return result["my_custom_key"]

add_langgraph_fastapi_endpoint(
    app,
    graph=graph,
    mount_path="/agent",
    output_mapper=my_output_mapper,
)
```
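For instance, a graph whose state keeps its final answer under a custom report key (rather than messages) could be mapped as follows. ReportState, the key name, and the sample state are made up for illustration:

```python
from typing import TypedDict

# Hypothetical state schema whose final output lives under "report"
# instead of the default "messages" key.
class ReportState(TypedDict):
    question: str
    report: str

def report_output_mapper(result: dict) -> str:
    # The mapper receives the graph's final state dict and returns
    # the text to send back as the A2A response.
    return result["report"]

# Simulated final state, as the executor would hand it to the mapper:
final_state = {
    "question": "What is A2A?",
    "report": "A2A is a protocol for agent interoperability.",
}
print(report_output_mapper(final_state))
```

You would then pass output_mapper=report_output_mapper (alongside a graph compiled over ReportState) to add_langgraph_fastapi_endpoint.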
Current limitations
- Output is read from messages by default. Graphs that store their final output in a custom state key need a custom output_mapper (see above).
- No token-level streaming yet. The agent sends a single A2A message after the graph finishes. Streaming intermediate tokens is on the roadmap.
Roadmap
- Token-level streaming from LangGraph to A2A events
- Declarative custom state key support, aligned with both LangGraph and A2A best practices
- Starlette adapter for non-FastAPI setups
- LangGraph persistence and checkpointer integration
Contributing
Fork the repo, open an issue, or send a pull request. All contributions are welcome.
License
MIT