Synqed - A wrapper around A2A for simplified multi-agent systems interaction and communication
Synqed Python API library
Synqed enables true AI-to-AI interaction and multi-agent collaboration.
Agents can talk to each other, collaborate, coordinate, delegate tasks, and solve problems together—letting you build actual multi-agent systems where agents truly work as a team.
🤝 True Collaboration, Not Just Delegation
Unlike traditional multi-agent systems that just assign tasks in parallel, Synqed enables genuine collaboration where agents:
- 👀 See what other agents are working on
- 💬 Provide feedback to each other
- 🔄 Refine their work based on peer input
- 🎯 Create integrated, cohesive solutions together
All seamless. All autonomous.
Synqed also lets agents from any provider—OpenAI, Anthropic, Google, or local models—communicate as part of the same system.
🌐 Universal Substrate
Synqed acts as a universal substrate for AI agents. Any agent that speaks the A2A (Agent-to-Agent) protocol can join a Synqed workspace, regardless of how it was built:
- ✅ Mix Synqed agents with agents built using the a2a-python SDK
- ✅ Mix Synqed agents with agents from ANY framework that implements A2A
- ✅ Route transparently - agents don't know if peers are local or remote
- ✅ No wrapping or adaptation needed - just routing!
See examples/universal_demo/ for a working demo mixing local Synqed agents with remote A2A agents.
🚀 Quick Links
- Complete Examples - Working code in the examples/ directory
- Getting Started - Install and run your first agent
- Multi-Agent Collaboration - Agent-to-agent communication
- Execution Patterns - Sequential, parallel, and hierarchical
- API Documentation - Full API reference
Documentation
For full API documentation, see here
Installation
# Install from PyPI
pip install synqed
Synqed works with the following LLM providers. Install your preferred provider:
pip install openai # For OpenAI (GPT-4, GPT-4o, etc.)
pip install anthropic # For Anthropic (Claude)
pip install google-generativeai # For Google (Gemini)
Environment Setup
Most examples use environment variables for API keys. Create a .env file:
# For OpenAI examples
OPENAI_API_KEY='your-openai-api-key'
# For Anthropic examples (most examples use this)
ANTHROPIC_API_KEY='your-anthropic-api-key'
# For Google examples
GOOGLE_API_KEY='your-google-api-key'
Install python-dotenv to load environment variables:
pip install python-dotenv
Usage
Quick Start: Your First Agent
The fastest way to get started is with the included examples:
# Clone or navigate to the examples directory
cd examples/intro
# Start your first agent (Terminal 1)
python synqed_agent.py
# Connect a client (Terminal 2)
python synqed_client.py
Congratulations! You just ran your first AI agent.
Want to build from scratch? Here's a minimal example:
import asyncio
import os

import synqed

async def agent_logic(context):
    """Your agent's brain - this is where the magic happens."""
    user_message = context.get_user_input()

    # Use any LLM you want
    from openai import AsyncOpenAI
    client = AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY"))

    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message}
        ]
    )
    return response.choices[0].message.content

async def main():
    # Create your agent
    agent = synqed.Agent(
        name="MyFirstAgent",
        description="A helpful AI assistant",
        skills=["general_assistance", "question_answering"],
        executor=agent_logic
    )

    # Start the server
    server = synqed.AgentServer(agent, port=8000)
    print(f"Agent running at {agent.url}")
    await server.start()

if __name__ == "__main__":
    asyncio.run(main())
See examples/intro/synqed_agent.py for a complete working example with detailed comments.
Understanding Agent Logic Functions
Your agent logic function is where you define your agent's behavior. For single-agent use cases, it receives a context object and returns a response string. For multi-agent collaboration, it returns a structured dict.
Single-Agent Logic (with executor parameter):
async def agent_logic(context):
    """
    Args:
        context: RequestContext with methods:
            - get_user_input() → str: User's message
            - get_task() → Task: Full task object
            - get_message() → Message: Full message object

    Returns:
        str: Agent's response
    """
    user_message = context.get_user_input()

    # Implement any logic:
    # - Call LLMs (OpenAI, Anthropic, Google)
    # - Query databases
    # - Call external APIs

    return "Agent response"

# Create agent with executor parameter
agent = synqed.Agent(
    name="MyAgent",
    description="A helpful assistant",
    executor=agent_logic  # Single-agent mode
)
Multi-Agent Logic (with logic parameter):
async def agent_logic(context: synqed.AgentLogicContext) -> dict:
    """
    Args:
        context: AgentLogicContext with:
            - latest_message: Latest incoming message
            - memory: Agent's message history
            - get_conversation_history(): Formatted conversation
            - build_response(): Helper to build responses
            - workspace: Current workspace
            - agent_name: Agent's name

    Returns:
        dict: {"send_to": "TargetAgent", "content": "message"}
    """
    latest = context.latest_message
    if not latest:
        return context.build_response("OtherAgent", "Ready!")

    # Get conversation history
    history = context.get_conversation_history()

    # Use any LLM to generate response
    # ... (call your LLM here)

    # Return structured response
    return context.build_response("TargetAgent", "My response")

# Create agent with logic parameter
agent = synqed.Agent(
    name="MyAgent",
    description="Collaborative agent",
    logic=agent_logic,  # Multi-agent mode
    default_target="OtherAgent"
)
See examples/intro/synqed_agent.py for single-agent examples and examples/intro/workspace.py for multi-agent examples.
Client Configuration
The client allows your agents to interact with other agents.
import synqed

# Default configuration
client = synqed.Client("http://localhost:8000")

# Custom timeout
client = synqed.Client(
    agent_url="http://localhost:8000",
    timeout=120.0  # 2 minutes (default is 60)
)

# Disable streaming
client = synqed.Client(
    agent_url="http://localhost:8000",
    streaming=False
)

# Override per-request
async with synqed.Client("http://localhost:8000") as client:
    response = await client.with_options(timeout=30.0).ask("Quick question")
🤝 Multi-Agent Collaboration
Synqed's workspace-based messaging system enables true agent-to-agent communication where agents:
- Maintain their own server-side message memory
- Exchange structured messages within workspaces
- Collaborate naturally through iterative communication
- Work together without conversation history blobs
Architecture
The system consists of core components:
- Agent: Agent with built-in memory and logic functions
- Workspace: Logical routing domain where agents collaborate
- WorkspaceExecutionEngine: Executes agents with event-driven scheduling
- AgentLogicContext: Provides conversation history and message building helpers
Basic Two-Agent Collaboration
See examples/intro/workspace.py for a complete working example of Writer and Editor collaborating:
cd examples/intro
python workspace.py
Here's a simplified version showing the key concepts:
import asyncio
import os

from synqed import Agent, AgentLogicContext

async def writer_logic(context: AgentLogicContext) -> dict:
    """Writer agent logic."""
    latest = context.latest_message
    if not latest:
        return context.build_response("Editor", "I'm ready!")

    # Get conversation history automatically
    conversation_text = context.get_conversation_history()

    # Use any LLM to generate response
    # ... (call your LLM here)

    # Return structured response
    return context.build_response("Editor", "Here's my draft...")

async def editor_logic(context: AgentLogicContext) -> dict:
    """Editor agent logic."""
    latest = context.latest_message
    if not latest:
        return context.build_response("Writer", "I'm ready!")

    # Get conversation history
    conversation_text = context.get_conversation_history()

    # Process and provide feedback
    return context.build_response("Writer", "Great work! Here's feedback...")

# For complete setup and execution, see examples/intro/workspace.py
Agent Logic Functions
Agent logic functions receive an AgentLogicContext with:
- context.memory: Agent's message memory
- context.latest_message: Latest incoming message
- context.get_conversation_history(): Auto-formatted conversation history
- context.build_response(): Helper for structured responses
- context.workspace: Current workspace reference
- context.agent_name: The agent's name
Logic functions must return a dict with "send_to" and "content" keys:
async def agent_logic(context: AgentLogicContext) -> dict:
    # Access memory
    latest = context.latest_message
    all_messages = context.memory.get_messages()

    # Get formatted conversation history (includes parsing of JSON messages)
    conversation_text = context.get_conversation_history()

    # Use any LLM to generate response
    # ... (your LLM call here)

    # Build response using helper
    return context.build_response("TargetAgent", "Message content")
See examples/intro/workspace.py for complete examples of agent logic functions.
Key Benefits
✅ True Agent-to-Agent Communication: Agents send structured messages directly to each other
✅ Server-Side Memory: Each agent maintains its own message history
✅ Workspace Routing: Messages are routed through workspaces, enabling hierarchical collaboration
✅ Structured Responses: All responses follow JSON format with send_to and content
✅ Event-Driven Execution: WorkspaceExecutionEngine runs agents efficiently with automatic scheduling
✅ Parallel Execution: Multiple workspaces can execute simultaneously for true parallelism
See examples/intro/workspace.py for a complete two-agent collaboration example.
See examples/multi-agentic/ for advanced multi-team examples.
Modern Orchestration Pattern
The modern approach uses WorkspaceExecutionEngine with PlannerLLM for intelligent multi-agent orchestration:
import os
from pathlib import Path

import synqed

# Create planner for intelligent task routing
planner = synqed.PlannerLLM(
    provider="anthropic",
    api_key=os.environ["ANTHROPIC_API_KEY"],
    model="claude-sonnet-4-5"
)

# Create workspace manager
workspace_manager = synqed.WorkspaceManager(
    workspaces_root=Path("/tmp/synqed_workspaces")
)

# Create execution engine
execution_engine = synqed.WorkspaceExecutionEngine(
    planner=planner,
    workspace_manager=workspace_manager,
    enable_display=True,
    max_agent_turns=10
)

# Execute multi-agent collaboration
await execution_engine.run(workspace_id)
See examples/multi-agentic/sequential_two_teams.py and examples/multi-agentic/parallel_three_teams.py for complete examples.
Legacy Orchestrator API
Note: The Orchestrator class below is deprecated. For new projects, use the WorkspaceExecutionEngine pattern shown above and in the examples.
The legacy Orchestrator uses an LLM to analyze tasks and intelligently route them to the most suitable agents.
Basic Orchestration
import os

import synqed

# Create orchestrator with LLM-powered routing
orchestrator = synqed.Orchestrator(
    provider=synqed.LLMProvider.OPENAI,
    api_key=os.environ.get("OPENAI_API_KEY"),
    model="gpt-4o"
)

# Register your specialized agents with the orchestrator
orchestrator.register_agent(research_agent.card, "http://localhost:8001")
orchestrator.register_agent(coding_agent.card, "http://localhost:8002")
orchestrator.register_agent(writing_agent.card, "http://localhost:8003")

# Orchestrator automatically selects the best agent(s) for the task
result = await orchestrator.orchestrate(
    "Research recent AI developments and write a technical summary"
)

print(f"Selected: {result.selected_agents[0].agent_name}")
print(f"Confidence: {result.selected_agents[0].confidence:.0%}")
print(f"Reasoning: {result.selected_agents[0].reasoning}")
Supported LLM Providers
import os

import synqed

# OpenAI
synqed.Orchestrator(
    provider=synqed.LLMProvider.OPENAI,
    api_key=os.environ.get("OPENAI_API_KEY"),
    model="model-here"
)

# Anthropic
synqed.Orchestrator(
    provider=synqed.LLMProvider.ANTHROPIC,
    api_key=os.environ.get("ANTHROPIC_API_KEY"),
    model="model-here"
)

# Google
synqed.Orchestrator(
    provider=synqed.LLMProvider.GOOGLE,
    api_key=os.environ.get("GOOGLE_API_KEY"),
    model="model-here"
)
Orchestration Configuration
import os

import synqed

orchestrator = synqed.Orchestrator(
    provider=synqed.LLMProvider.OPENAI,
    api_key=os.environ.get("OPENAI_API_KEY"),
    model="gpt-4o",
    temperature=0.7,  # Creativity level (0.0 - 1.0)
    max_tokens=2000   # Maximum response length
)
Multi-Agent Delegation
The TaskDelegator coordinates multiple agents working together on complex tasks:
import os

import synqed

# Create orchestrator for intelligent routing
orchestrator = synqed.Orchestrator(
    provider=synqed.LLMProvider.OPENAI,
    api_key=os.environ.get("OPENAI_API_KEY"),
    model="gpt-4o"
)

# Create delegator
delegator = synqed.TaskDelegator(orchestrator=orchestrator)

# Register specialized agents (local or remote)
delegator.register_agent(agent=research_agent)
delegator.register_agent(agent=coding_agent)
delegator.register_agent(agent=writing_agent)

# Agents automatically collaborate on complex tasks
result = await delegator.submit_task(
    "Research microservices patterns and write implementation guide"
)
🤝 Agent Collaboration (NEW!)
Beyond simple delegation, Synqed enables true agent collaboration where agents actively interact, provide feedback, and refine their work together.
Collaborative Workspace
The OrchestratedWorkspace creates a temporary environment where agents collaborate through structured phases:
import os

import synqed

# Create orchestrator
orchestrator = synqed.Orchestrator(
    provider=synqed.LLMProvider.OPENAI,
    api_key=os.environ.get("OPENAI_API_KEY"),
    model="gpt-4o"
)

# Create collaborative workspace
workspace = synqed.OrchestratedWorkspace(
    orchestrator=orchestrator,
    enable_agent_discussion=True  # 🔑 Enables true collaboration!
)

# Register specialized agents
workspace.register_agent(research_agent)
workspace.register_agent(design_agent)
workspace.register_agent(development_agent)

# Agents will collaborate in 4 phases:
# 1. Share initial proposals
# 2. Provide peer feedback
# 3. Refine based on feedback
# 4. Produce integrated solution
result = await workspace.execute_task(
    "Design a new mobile app feature for habit tracking"
)
Collaboration Phases
When enable_agent_discussion=True, agents go through structured collaboration:
Phase 1: Kickoff - All agents see the full context and team assignments
Phase 2: Proposals - Each agent shares their initial approach
🔬 Researcher: "I'll analyze user behavior patterns..."
🎨 Designer: "I'll create an intuitive daily tracking interface..."
💻 Developer: "I'll implement a notification system..."
Phase 3: Peer Feedback - Agents review and provide feedback
🔬 Researcher → Designer: "Great UI! Consider gamification based on my findings..."
🎨 Designer → Developer: "Can we use push notifications for streak reminders?"
💻 Developer → Researcher: "Your data suggests we need offline sync..."
Phase 4: Refinement - Agents refine work based on feedback
Each agent incorporates peer insights into their final deliverable
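The four phases can be pictured as a simple pass over the team (a framework-free sketch in plain Python; the agent names and canned strings are illustrative stand-ins, not part of the Synqed API):

```python
# Phase-by-phase collaboration, simulated with plain data structures.

def run_collaboration(task, agents):
    # Phase 1: kickoff - every agent sees the task and the team roster
    roster = list(agents)

    # Phase 2: proposals - each agent drafts an initial approach
    proposals = {name: f"{name} proposal for {task!r}" for name in roster}

    # Phase 3: peer feedback - every agent comments on every other proposal
    feedback = {
        name: [f"{peer} feedback on {name}" for peer in roster if peer != name]
        for name in roster
    }

    # Phase 4: refinement - each agent folds peer feedback into its deliverable
    return {
        name: proposals[name] + " (refined using " + ", ".join(feedback[name]) + ")"
        for name in roster
    }

result = run_collaboration("habit tracker", ["Researcher", "Designer", "Developer"])
for name, deliverable in result.items():
    print(deliverable)
```

In Synqed the orchestrator drives these phases and real LLM calls replace the canned strings, but the data flow is the same: proposals fan out, feedback fans in, and refinement consumes both.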
Delegation vs. Collaboration
# ❌ Traditional delegation (parallel, independent)
workspace = synqed.OrchestratedWorkspace(
    orchestrator=orchestrator,
    enable_agent_discussion=False  # Faster, but no interaction
)

# ✅ True collaboration (sequential phases, interactive)
workspace = synqed.OrchestratedWorkspace(
    orchestrator=orchestrator,
    enable_agent_discussion=True  # Slower, but higher quality
)
Accessing Collaboration Data
result = await workspace.execute_task(task)

# View all agent interactions
for msg in result.workspace_messages:
    print(f"{msg['sender_name']}: {msg['content']}")

# Count feedback exchanges
feedback_count = len([
    m for m in result.workspace_messages
    if 'feedback' in m.get('metadata', {})
])
print(f"Agents exchanged {feedback_count} feedback messages")
When to Use Collaboration
✅ Use collaboration when:
- Task requires multiple perspectives
- Quality matters more than speed
- Agents have complementary skills
- Integration is important
❌ Use delegation when:
- Tasks are independent
- Speed is critical
- Simple, straightforward tasks
📚 Learn More: See AGENT_COLLABORATION_GUIDE.md for detailed documentation.
Remote Agent Registration
Register agents running anywhere:
# Register remote agent
delegator.register_agent(
    agent_url="https://specialist-agent.example.com",
    agent_card=agent_card  # Optional pre-loaded card
)
Workspace Architecture
Synqed uses Workspaces as the fundamental unit of agent collaboration. A workspace is a logical routing domain where agents communicate and coordinate.
Core Components
- Workspace: Container for agents and their message routing
- WorkspaceManager: Creates and manages workspace lifecycle
- WorkspaceExecutionEngine: Executes agents with event-driven scheduling
- AgentRuntimeRegistry: Global registry for agent prototypes
Working with Workspaces
The modern workspace pattern (see examples/intro/workspace.py and examples/multi-agentic/):
import os
from pathlib import Path

import synqed

# Step 1: Register agent prototypes
synqed.AgentRuntimeRegistry.register("Agent1", agent1)
synqed.AgentRuntimeRegistry.register("Agent2", agent2)

# Step 2: Create workspace manager
workspace_manager = synqed.WorkspaceManager(
    workspaces_root=Path("/tmp/synqed_workspaces")
)

# Step 3: Create planner for orchestration
planner = synqed.PlannerLLM(
    provider="anthropic",
    api_key=os.environ["ANTHROPIC_API_KEY"],
    model="claude-sonnet-4-5"
)

# Step 4: Create execution engine
execution_engine = synqed.WorkspaceExecutionEngine(
    planner=planner,
    workspace_manager=workspace_manager,
    enable_display=True,
    max_agent_turns=10
)

# Step 5: Create workspace and send initial message
workspace = await workspace_manager.create_workspace(
    task_tree_node=task_node,
    parent_workspace_id=None
)
await workspace.route_message("USER", "Agent1", "Task description", manager=workspace_manager)

# Step 6: Execute
await execution_engine.run(workspace.workspace_id)
See complete examples in examples/multi-agentic/ for full implementations.
Legacy Workspace API
Note: The basic Workspace class below has been replaced by WorkspaceManager + WorkspaceExecutionEngine. For new projects, use the pattern shown above.
The legacy Workspace provides a collaborative environment where agents can work together, share resources, and coordinate on complex tasks.
import synqed

# Create a workspace
workspace = synqed.Workspace(
    name="Content Creation",
    description="Collaborative space for research and writing"
)

# Add agents to workspace
workspace.add_agent(research_agent)
workspace.add_agent(writing_agent)

# Start collaboration
await workspace.start()

# Execute collaborative task
results = await workspace.collaborate(
    "Research AI trends and write a comprehensive article"
)

# View results
for agent_name, response in results.items():
    print(f"{agent_name}: {response}")

# Clean up
await workspace.close()
Hierarchical Workspaces
Synqed supports parent-child workspace relationships for complex orchestration:
# Create root workspace
root_workspace = await workspace_manager.create_workspace(
    task_tree_node=root_task_node,
    parent_workspace_id=None
)

# Create child workspaces
child_workspace_1 = await workspace_manager.create_workspace(
    task_tree_node=child_task_node_1,
    parent_workspace_id=root_workspace.workspace_id
)
child_workspace_2 = await workspace_manager.create_workspace(
    task_tree_node=child_task_node_2,
    parent_workspace_id=root_workspace.workspace_id
)
See examples/multi-agentic/sequential_two_teams.py and parallel_three_teams.py for complete hierarchical workspace examples.
Legacy Workspace Features
# Create workspace with orchestrator for intelligent routing
orchestrator = synqed.Orchestrator(
    provider=synqed.LLMProvider.OPENAI,
    api_key=os.environ.get("OPENAI_API_KEY"),
    model="gpt-4o"
)

workspace = synqed.Workspace(
    name="Smart Collaboration",
    enable_persistence=True,  # Save workspace state
    auto_cleanup=False        # Keep artifacts
)

workspace.add_agent(agent1)
workspace.add_agent(agent2)
workspace.add_agent(agent3)

await workspace.start()

# Orchestrator selects best agents for the task
results = await workspace.collaborate(
    "Complex multi-step task",
    orchestrator=orchestrator
)
Sharing Artifacts and State
# Share data between agents
workspace.add_artifact(
    name="data.json",
    artifact_type="data",
    content={"key": "value"},
    created_by="agent1"
)

# Set shared state
workspace.set_shared_state("project_id", "proj-123")

# Get artifacts
artifacts = workspace.get_artifacts(artifact_type="data")

# Get shared state
project_id = workspace.get_shared_state("project_id")
Direct Agent Communication
# Send message to specific agent
response = await workspace.send_message_to_agent(
    participant_id="agent-123",
    message="Analyze this data"
)

# Broadcast to all agents
responses = await workspace.broadcast_message(
    "Please provide status updates"
)
For detailed workspace documentation, see the Workspace Guide.
Execution Patterns
Synqed supports different execution patterns for multi-agent collaboration:
Sequential Collaboration
Agents work together in turn-based cycles, passing work sequentially:
USER → Agent1 → Agent2 → Agent3 → USER
Use when: Tasks have dependencies, agents need to build on each other's work
Example: examples/multi-agentic/sequential_two_teams.py
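The turn-based cycle can be sketched with the same {"send_to", "content"} message shape that Synqed logic functions return (plain Python, no Synqed runtime; the three agents and the routing loop here are illustrative):

```python
# Minimal sequential router: each logic function receives the incoming
# content and returns {"send_to": ..., "content": ...}, as in Synqed.

def agent1(content):
    return {"send_to": "Agent2", "content": content + " -> drafted"}

def agent2(content):
    return {"send_to": "Agent3", "content": content + " -> reviewed"}

def agent3(content):
    return {"send_to": "USER", "content": content + " -> finalized"}

logic = {"Agent1": agent1, "Agent2": agent2, "Agent3": agent3}

def route(start_agent, content):
    """Pass the message along until an agent addresses the USER."""
    target, payload = start_agent, content
    while target != "USER":
        msg = logic[target](payload)
        target, payload = msg["send_to"], msg["content"]
    return payload

print(route("Agent1", "task"))  # task -> drafted -> reviewed -> finalized
```

Synqed's WorkspaceExecutionEngine plays the role of route(): it inspects each response's send_to field and wakes the addressed agent next.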
Parallel Execution
Multiple agents or teams work simultaneously using broadcast delegation:
                  ┌─→ Team1 (works in parallel)
Coordinator ──────┼─→ Team2 (works in parallel)
                  └─→ Team3 (works in parallel)
                        ↓
                  Coordinator synthesizes
Use when: Tasks are independent, speed is important
Example: examples/multi-agentic/parallel_three_teams.py
Speedup: up to 3x with three parallel teams
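The broadcast-then-synthesize shape can be sketched with plain asyncio (the team names and the join-based synthesis step are illustrative; Synqed's engine handles this scheduling across real workspaces):

```python
import asyncio

async def team(name, subtask):
    # Stand-in for a whole team workspace doing real work
    await asyncio.sleep(0.01)
    return f"{name}: finding for {subtask!r}"

async def coordinator(task):
    teams = ["Team1", "Team2", "Team3"]
    # Broadcast: all three teams run concurrently
    findings = await asyncio.gather(*(team(t, task) for t in teams))
    # Synthesize: the coordinator merges the parallel results
    return " | ".join(findings)

result = asyncio.run(coordinator("microservices"))
print(result)
```

Because the three team coroutines are awaited together via asyncio.gather, total wall-clock time is bounded by the slowest team rather than the sum of all three.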
Hierarchical Workspaces
Organize agents in parent-child workspaces for complex orchestration:
Root Workspace (Orchestrator)
├─ Child Workspace 1 (Team A)
└─ Child Workspace 2 (Team B)
Use when: Large teams, natural hierarchy, subteam isolation
Example: Both sequential_two_teams.py and parallel_three_teams.py
Mixed Local/Remote Agents
Combine agents built with Synqed and external A2A agents in the same workspace:
Synqed Workspace
├─ Local Agent (Synqed)
├─ Local Agent (Synqed)
└─ Remote Agent (A2A protocol, any framework)
Use when: Integrating existing A2A agents, cross-ecosystem collaboration
Example: examples/universal_demo/universal_substrate_demo.py
Complete Examples
The examples/ directory contains fully working examples demonstrating different aspects of Synqed:
📚 Getting Started (examples/intro/)
Basic Agent Setup:
- synqed_agent.py - Create and run your first AI agent with streaming support
- synqed_client.py - Connect to agents using both ask() and stream() methods
- agent_card.py - Fetch and display agent capabilities and metadata
Multi-Agent Collaboration:
- workspace.py - Two agents (Writer + Editor) collaborating in a workspace using the inbox-based messaging system
# Run the basic examples
cd examples/intro
python synqed_agent.py # Terminal 1 - start the agent
python synqed_client.py # Terminal 2 - connect as client
# Run workspace collaboration
python workspace.py
🚀 Advanced Multi-Agent Systems (examples/multi-agentic/)
Parallel Three Teams (parallel_three_teams.py)
- Demonstrates TRUE parallel execution with broadcast delegation
- 1 coordinator broadcasts to 3 research teams simultaneously
- Each team has 3 agents (Lead + Senior + Junior) who collaborate internally
- Teams work in parallel for 3x speedup potential
- Total: 10 agents across 4 workspaces
cd examples/multi-agentic
python parallel_three_teams.py
Sequential Two Teams (sequential_two_teams.py)
- Orchestrator pattern with hierarchical workspace delegation
- Project Manager coordinates Research Team and Development Team
- Each team has 3 specialized agents working together
- Total: 7 agents across 3 workspaces (1 root + 2 child teams)
cd examples/multi-agentic
python sequential_two_teams.py
🌐 Universal Substrate (examples/universal_demo/)
Key Concept: Synqed is a universal substrate that can route to ANY agent speaking A2A protocol, regardless of how it was built.
Code Review A2A Agent (code_review_a2a_agent.py)
- A standalone A2A agent built with the a2a-python SDK (NOT Synqed)
- Runs as an independent HTTP server on port 8001
- Demonstrates that Synqed can route to agents from ANY ecosystem
Universal Substrate Demo (universal_substrate_demo.py)
- Mixes local Synqed agents with remote A2A agents in the same workspace
- Coordinator (Synqed) → LocalWriter (Synqed) → RemoteCodeAgent (A2A)
- Shows transparent routing across different agent frameworks
- No wrapping or adaptation needed - just routing!
cd examples/universal_demo
python universal_substrate_demo.py
📋 Example Requirements
All examples require:
pip install synqed anthropic python-dotenv
Universal substrate examples additionally require:
pip install a2a-sdk aiohttp
Create a .env file in the example directory:
ANTHROPIC_API_KEY='your-key-here'
🚀 Deploying Global MCP Server to Fly.io
Synqed includes a Global MCP Server that enables universal agent-to-agent communication through the Model Context Protocol (MCP). This server can be deployed to Fly.io alongside your existing agent registry infrastructure.
What is the Global MCP Server?
The Global MCP Server provides:
- Universal MCP Tool Registry: Exposes all agent capabilities as MCP tools
- Cross-Agent Communication: Agents can call each other via MCP protocol
- Cloud-Native: Designed for production deployment on Fly.io
- Shared Infrastructure: Runs on the same Fly.io app as your email-based agent registry
Prerequisites
1. Install flyctl:
   curl -L https://fly.io/install.sh | sh
2. Log in to Fly.io:
   flyctl auth login
3. Create the Fly.io app (if not already created):
   flyctl apps create synqed
Deploying to Fly.io
Deploy the MCP server using the provided script:
./scripts/deploy_mcp_fly.sh
This will:
- Build and deploy the MCP server Docker image
- Configure health checks for /health and /mcp/tools
- Set up auto-scaling and HTTPS
- Deploy to the same synqed app as your email registry
Testing the Deployment
Once deployed, test the endpoints:
# Health check
curl https://synqed.fly.dev/health
# List available MCP tools
curl https://synqed.fly.dev/mcp/tools
# List registered agents
curl https://synqed.fly.dev/mcp/agents
# Call an MCP tool
curl -X POST https://synqed.fly.dev/mcp/call_tool \
-H 'Content-Type: application/json' \
-d '{"tool":"salesforce.query_leads","arguments":{"query":"SELECT * FROM Lead"}}'
Using the Cloud MCP Server
Configure your agents to use the cloud MCP server:
# Set environment variables
export SYNQ_MCP_MODE=cloud
export SYNQ_MCP_ENDPOINT=https://synqed.fly.dev/mcp
# Run your agent
python your_agent.py
Or in your Python code:
import os
os.environ["SYNQ_MCP_MODE"] = "cloud"
os.environ["SYNQ_MCP_ENDPOINT"] = "https://synqed.fly.dev/mcp"
# Your agent code here
# MCP middleware will automatically use the cloud endpoint
Architecture
The deployment uses a unified architecture:
┌─────────────────────────────────────┐
│ Fly.io App: synqed │
│ │
│ ┌───────────────────────────────┐ │
│ │ Email Agent Registry │ │
│ │ (existing) │ │
│ └───────────────────────────────┘ │
│ │
│ ┌───────────────────────────────┐ │
│ │ Global MCP Server │ │
│ │ - GET /health │ │
│ │ - GET /mcp/tools │ │
│ │ - GET /mcp/agents │ │
│ │ - POST /mcp/call_tool │ │
│ │ - POST /mcp/register_agent │ │
│ └───────────────────────────────┘ │
│ │
│ Shared: Router, Registry, Redis │
└─────────────────────────────────────┘
Local Development Mode
For local development, you can run without the cloud server:
# Default local mode (no environment variables needed)
python universal_mcp_demo.py
In local mode:
- MCP server runs in-process
- No network overhead
- Faster for development
- No cloud deployment required
Environment Variables
| Variable | Default | Description |
|---|---|---|
| SYNQ_MCP_MODE | local | Set to cloud for production |
| SYNQ_MCP_ENDPOINT | https://synqed.fly.dev/mcp | Cloud MCP server URL |
| SYNQ_MCP_HOST | 0.0.0.0 | Server bind address |
| SYNQ_MCP_PORT | 8080 | Server port |
Monitoring
Monitor your MCP server:
# View logs
flyctl logs -a synqed
# Check status
flyctl status -a synqed
# Scale up/down
flyctl scale count 2 -a synqed
Summary
Synqed provides a complete framework for building multi-agent AI systems:
🎯 Key Features
- True Multi-Agent Collaboration: Agents communicate, provide feedback, and refine work together
- Flexible Execution: Sequential, parallel, and hierarchical patterns
- Universal Substrate: Route to any A2A-compliant agent
- Memory Management: Each agent maintains its own conversation history
- Event-Driven: Efficient execution with automatic scheduling
📚 Learning Path
- Start with
examples/intro/synqed_agent.py- Create your first agent - Try
examples/intro/workspace.py- Two agents collaborating - Explore
examples/multi-agentic/sequential_two_teams.py- Hierarchical teams - Learn
examples/multi-agentic/parallel_three_teams.py- Parallel execution - Discover
examples/universal_demo/universal_substrate_demo.py- Cross-framework integration
🔗 Resources
- Complete Examples - Working code in the examples/ directory
- API Documentation - Full API reference
- GitHub Repository - Source code and issues
Copyright © 2025 Synq Team. All rights reserved.