A2A Protocol Adapter SDK for integrating various agent frameworks

A2A Adapter

PyPI version License: Apache-2.0 Python 3.11+ Code style: black

🚀 Open Source A2A Protocol Adapter SDK - Make Any Agent Framework A2A-Compatible in 3 Lines

A Python SDK that enables seamless integration of various agent frameworks (n8n, CrewAI, LangChain, etc.) with the A2A (Agent-to-Agent) Protocol. Build interoperable AI agent systems that can communicate across different platforms and frameworks.

✨ Key Benefits:

  • 🔌 3-line setup - Expose any agent as A2A-compliant
  • 🌐 Framework agnostic - Works with n8n, CrewAI, LangChain, and more
  • 🌊 Streaming support - Built-in streaming for real-time responses
  • 🎯 Production ready - Type-safe, well-tested, and actively maintained

Features

  • ✨ Framework Agnostic: Integrate n8n workflows, CrewAI crews, LangChain chains, and more
  • 🔌 Simple API: 3-line setup to expose any agent as A2A-compliant
  • 🌊 Streaming Support: Built-in streaming for LangChain and custom adapters
  • 🎯 Type Safe: Leverages official A2A SDK types
  • 🔧 Extensible: Easy to add custom adapters for new frameworks
  • 📦 Minimal Dependencies: Optional dependencies per framework

Architecture

┌─────────────────┐
│   A2A Caller    │  (Other A2A Agents)
└────────┬────────┘
         │ A2A Protocol (HTTP + JSON-RPC 2.0)
         ▼
┌─────────────────┐
│  A2A Adapter    │  (This SDK)
│   - N8n         │
│   - CrewAI      │
│   - LangChain   │
│   - Custom      │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│  Your Agent     │  (n8n workflow / CrewAI crew / Chain)
└─────────────────┘

Single-Agent Design: Each server hosts exactly one agent. Multi-agent orchestration is handled externally via A2A protocol or orchestration frameworks like LangGraph.

See ARCHITECTURE.md for detailed design documentation.

Documentation

Installation

Basic Installation

pip install a2a-adapter

With Framework Support

# For n8n (HTTP webhooks; no extra dependencies needed)
pip install a2a-adapter

# For CrewAI
pip install a2a-adapter[crewai]

# For LangChain
pip install a2a-adapter[langchain]

# For LangGraph
pip install a2a-adapter[langgraph]

# Install all frameworks
pip install a2a-adapter[all]

# For development
pip install a2a-adapter[dev]

🚀 Quick Start

Get started in 5 minutes! See QUICKSTART.md for a detailed guide.

Install

pip install a2a-adapter

Your First Agent (3 Lines!)

import asyncio
from a2a_adapter import load_a2a_agent, serve_agent
from a2a.types import AgentCard

async def main():
    adapter = await load_a2a_agent({
        "adapter": "n8n",
        "webhook_url": "https://your-n8n.com/webhook/workflow"
    })
    serve_agent(
        agent_card=AgentCard(name="My Agent", description="..."),
        adapter=adapter
    )

asyncio.run(main())

That's it! Your agent is now A2A-compatible and ready to communicate with other A2A agents.

👉 Read the full Quick Start Guide →

📖 Usage Examples

n8n Workflow → A2A Agent

adapter = await load_a2a_agent({
    "adapter": "n8n",
    "webhook_url": "https://n8n.example.com/webhook/math"
})

CrewAI Crew → A2A Agent

adapter = await load_a2a_agent({
    "adapter": "crewai",
    "crew": your_crew_instance
})

LangChain Chain → A2A Agent (with Streaming)

adapter = await load_a2a_agent({
    "adapter": "langchain",
    "runnable": your_chain,
    "input_key": "input"
})

Custom Function → A2A Agent

async def my_agent(inputs: dict) -> str:
    return f"Processed: {inputs['message']}"

adapter = await load_a2a_agent({
    "adapter": "callable",
    "callable": my_agent
})

📚 View all examples →

Advanced Usage

Custom Adapter Class

For full control, subclass BaseAgentAdapter:

from a2a_adapter import BaseAgentAdapter
from a2a.types import Message, MessageSendParams, TextPart

class SentimentAnalyzer(BaseAgentAdapter):
    async def to_framework(self, params: MessageSendParams):
        # Extract user message
        text = params.messages[-1].content[0].text
        return {"text": text}

    async def call_framework(self, framework_input, params):
        # Your analysis logic
        sentiment = analyze_sentiment(framework_input["text"])
        return {"sentiment": sentiment}

    async def from_framework(self, framework_output, params):
        # Convert to A2A Message
        return Message(
            role="assistant",
            content=[TextPart(
                type="text",
                text=f"Sentiment: {framework_output['sentiment']}"
            )]
        )

# Use your custom adapter
adapter = SentimentAnalyzer()
serve_agent(agent_card=card, adapter=adapter, port=8004)
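The three hooks above form a fixed pipeline: handle() runs to_framework, then call_framework, then from_framework. A minimal self-contained sketch of that flow, using plain dicts in place of the A2A types (MiniAdapter and its message shapes are purely illustrative, not SDK classes):

```python
import asyncio

class MiniAdapter:
    """Illustrative stand-in for BaseAgentAdapter: three hooks chained by handle()."""

    async def to_framework(self, params: dict) -> dict:
        # Extract the last user message into framework-specific input
        return {"text": params["messages"][-1]["text"]}

    async def call_framework(self, framework_input: dict, params: dict) -> dict:
        # Run the underlying "agent" (here: a trivial word count)
        return {"words": len(framework_input["text"].split())}

    async def from_framework(self, framework_output: dict, params: dict) -> dict:
        # Convert framework output back into a message-shaped dict
        return {"role": "assistant", "text": f"Word count: {framework_output['words']}"}

    async def handle(self, params: dict) -> dict:
        # The pipeline the real base class runs for non-streaming requests
        framework_input = await self.to_framework(params)
        framework_output = await self.call_framework(framework_input, params)
        return await self.from_framework(framework_output, params)

result = asyncio.run(
    MiniAdapter().handle({"messages": [{"role": "user", "text": "hello brave new world"}]})
)
print(result)  # {'role': 'assistant', 'text': 'Word count: 4'}
```

Writing a real adapter is the same exercise with MessageSendParams in and Message (or Task) out.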

Streaming Custom Adapter

Implement handle_stream() for streaming responses:

import json

class StreamingAdapter(BaseAgentAdapter):
    async def handle_stream(self, params: MessageSendParams):
        """Yield SSE-compatible events."""
        for chunk in generate_response_chunks():  # your own chunk source
            yield {
                "event": "message",
                "data": json.dumps({"type": "content", "content": chunk})
            }

        yield {
            "event": "done",
            "data": json.dumps({"status": "completed"})
        }

    def supports_streaming(self):
        return True
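On the consuming side, callers iterate the async generator and parse each event's data payload. A self-contained sketch of that pattern (stream_events below is a stand-in for an adapter's handle_stream, not part of the SDK):

```python
import asyncio
import json

async def stream_events():
    # Stand-in for adapter.handle_stream(): yields SSE-style event dicts
    for chunk in ["Hel", "lo"]:
        yield {"event": "message", "data": json.dumps({"type": "content", "content": chunk})}
    yield {"event": "done", "data": json.dumps({"status": "completed"})}

async def collect_text() -> str:
    # Accumulate content chunks until the "done" event arrives
    parts = []
    async for event in stream_events():
        payload = json.loads(event["data"])
        if event["event"] == "message" and payload.get("type") == "content":
            parts.append(payload["content"])
        elif event["event"] == "done":
            break
    return "".join(parts)

text = asyncio.run(collect_text())
print(text)  # Hello
```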

Using with LangGraph

Integrate A2A agents into LangGraph workflows:

from langgraph.graph import StateGraph
from a2a.client import A2AClient
from a2a.types import MessageSendParams

# Create A2A client
math_agent = A2AClient(base_url="http://localhost:9000")

# Use in LangGraph node
async def call_math_agent(state):
    response = await math_agent.send_message(
        MessageSendParams(messages=[...])
    )
    return {"result": response}

# Add to graph
graph = StateGraph(...)
graph.add_node("math", call_math_agent)

See examples/06_langgraph_single_agent.py for complete example.

Configuration

N8n Adapter

{
    "adapter": "n8n",
    "webhook_url": "https://n8n.example.com/webhook/agent",  # Required
    "timeout": 30,  # Optional, default: 30
    "headers": {    # Optional
        "Authorization": "Bearer token"
    }
}

CrewAI Adapter

{
    "adapter": "crewai",
    "crew": crew_instance,  # Required: CrewAI Crew object
    "inputs_key": "inputs"  # Optional, default: "inputs"
}

LangChain Adapter

{
    "adapter": "langchain",
    "runnable": chain,       # Required: Any Runnable
    "input_key": "input",    # Optional, default: "input"
    "output_key": None       # Optional, extracts specific key from output
}

Callable Adapter

{
    "adapter": "callable",
    "callable": async_function,      # Required: async function
    "supports_streaming": False      # Optional, default: False
}

Examples

The examples/ directory contains complete working examples:

  • 01_single_n8n_agent.py - N8n workflow agent
  • 02_single_crewai_agent.py - CrewAI multi-agent crew
  • 03_single_langchain_agent.py - LangChain streaming agent
  • 04_single_agent_client.py - A2A client for testing
  • 05_custom_adapter.py - Custom adapter implementations
  • 06_langgraph_single_agent.py - LangGraph + A2A integration

Run any example:

# Start an agent server
python examples/01_single_n8n_agent.py

# In another terminal, test with client
python examples/04_single_agent_client.py

Testing

# Install dev dependencies
pip install a2a-adapter[dev]

# Run unit tests
pytest tests/unit/

# Run integration tests (requires framework dependencies)
pytest tests/integration/

# Run all tests
pytest

API Reference

Core Functions

load_a2a_agent(config: Dict[str, Any]) -> BaseAgentAdapter

Factory function to create an adapter from configuration.

Args:

  • config: Dictionary with "adapter" key and framework-specific options

Returns:

  • Configured BaseAgentAdapter instance

Raises:

  • ValueError: If adapter type is unknown or required config is missing
  • ImportError: If required framework package is not installed
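That error contract can be illustrated with a stripped-down factory; the registry and make_adapter function below are a sketch of the dispatch-on-"adapter"-key idea, not the SDK's actual implementation:

```python
# Hypothetical mini-factory mirroring load_a2a_agent's error contract
ADAPTER_REGISTRY = {"n8n", "crewai", "langchain", "callable"}

def make_adapter(config: dict) -> str:
    adapter_type = config.get("adapter")
    if adapter_type not in ADAPTER_REGISTRY:
        # Unknown "adapter" key -> ValueError, as documented above
        raise ValueError(f"Unknown adapter type: {adapter_type!r}")
    if adapter_type == "n8n" and "webhook_url" not in config:
        # Missing required framework-specific option -> ValueError
        raise ValueError("n8n adapter requires 'webhook_url'")
    return adapter_type  # the real factory returns a BaseAgentAdapter instance

try:
    make_adapter({"adapter": "autogen"})
except ValueError as exc:
    print(exc)  # Unknown adapter type: 'autogen'
```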

build_agent_app(agent_card: AgentCard, adapter: BaseAgentAdapter) -> ASGIApp

Build an ASGI application for serving an A2A agent.

Args:

  • agent_card: A2A AgentCard describing the agent
  • adapter: Adapter instance

Returns:

  • ASGI application ready to be served

serve_agent(agent_card, adapter, host="0.0.0.0", port=9000, **kwargs)

Start serving an A2A agent (convenience function).

Args:

  • agent_card: A2A AgentCard
  • adapter: Adapter instance
  • host: Host address (default: "0.0.0.0")
  • port: Port number (default: 9000)
  • **kwargs: Additional arguments passed to uvicorn.run()

BaseAgentAdapter

Abstract base class for all adapters.

Methods

async def handle(params: MessageSendParams) -> Message | Task

Handle a non-streaming A2A message request.

async def handle_stream(params: MessageSendParams) -> AsyncIterator[Dict]

Handle a streaming A2A message request. Override in subclasses that support streaming.

@abstractmethod async def to_framework(params: MessageSendParams) -> Any

Convert A2A message parameters to framework-specific input.

@abstractmethod async def call_framework(framework_input: Any, params: MessageSendParams) -> Any

Execute the underlying agent framework.

@abstractmethod async def from_framework(framework_output: Any, params: MessageSendParams) -> Message | Task

Convert framework output to A2A Message or Task.

def supports_streaming() -> bool

Check if this adapter supports streaming responses.

Framework Support

Framework  | Adapter               | Non-Streaming | Streaming  | Status
n8n        | N8nAgentAdapter       | ✅            | 🔜 Planned | ✅ Stable
CrewAI     | CrewAIAgentAdapter    | 🔜 Planned    | 🔜 Planned | 🔜 Planned
LangChain  | LangChainAgentAdapter | 🔜 Planned    | 🔜 Planned | 🔜 Planned

๐Ÿค Contributing

We welcome contributions from the community! Whether you're fixing bugs, adding features, or improving documentation, your help makes this project better.

Ways to contribute:

  • ๐Ÿ› Report bugs - Help us improve by reporting issues
  • ๐Ÿ’ก Suggest features - Share your ideas for new adapters or improvements
  • ๐Ÿ”ง Add adapters - Integrate new agent frameworks (AutoGen, Semantic Kernel, etc.)
  • ๐Ÿ“ Improve docs - Make documentation clearer and more helpful
  • ๐Ÿงช Write tests - Increase test coverage and reliability

Quick start contributing:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Make your changes
  4. Run tests (pytest)
  5. Submit a pull request

📖 Read our Contributing Guide → for detailed instructions, coding standards, and development setup.

Roadmap

  • Core adapter abstraction
  • N8n adapter
  • CrewAI adapter
  • LangChain adapter with streaming
  • Callable adapter
  • Comprehensive examples
  • Task support (async execution pattern)
  • Artifact support (file uploads/downloads)
  • AutoGen adapter
  • Semantic Kernel adapter
  • Haystack adapter
  • Middleware system (logging, metrics, rate limiting)
  • Configuration validation with Pydantic
  • Docker images for quick deployment

FAQ

Q: Can I run multiple agents in one process?

A: This SDK is designed for single-agent-per-process. For multi-agent systems, run multiple A2A servers and orchestrate them externally using the A2A protocol or tools like LangGraph.
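That external orchestration can be sketched without any framework: run each agent as its own A2A server, then fan calls out from an orchestrator. The stub coroutines below stand in for A2AClient.send_message calls to two separately hosted agents (the names, ports, and question are illustrative):

```python
import asyncio

async def call_math_agent(question: str) -> str:
    # In practice: await A2AClient(base_url="http://localhost:9000").send_message(...)
    return f"math answer to: {question}"

async def call_search_agent(question: str) -> str:
    # In practice: a second single-agent A2A server on another port, e.g. :9001
    return f"search results for: {question}"

async def orchestrate(question: str) -> dict:
    # Fan out to both single-agent servers concurrently, then combine
    math, search = await asyncio.gather(
        call_math_agent(question), call_search_agent(question)
    )
    return {"math": math, "search": search}

result = asyncio.run(orchestrate("What is 2 + 2?"))
print(result["math"])  # math answer to: What is 2 + 2?
```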

Q: Does this support the latest A2A protocol version?

A: Yes, we use the official A2A SDK which stays up-to-date with protocol changes.

Q: Can I use this with my custom agent framework?

A: Absolutely! Use the CallableAgentAdapter for simple cases or subclass BaseAgentAdapter for full control.

Q: What about authentication and rate limiting?

A: These concerns are handled at the infrastructure level (reverse proxy, API gateway) or by the official A2A SDK. Adapters focus solely on framework integration.

Q: How do I debug adapter issues?

A: Set log_level="debug" in serve_agent() and check logs. Each adapter logs framework calls and responses.

License

Apache-2.0 License - see LICENSE file for details.

Credits

Built with ❤️ by HYBRO AI

Powered by the A2A Protocol

โญ Star this repo if you find it useful! โญ

