Synqed - A wrapper around A2A for simplified multi-agent systems interaction and communication
Synqed Python API library
Synqed enables true AI-to-AI interaction and multi-agent collaboration.
Agents can talk to each other, collaborate, coordinate, delegate tasks, and solve problems together, so you can build multi-agent systems where agents truly work as a team.
🤝 True Collaboration, Not Just Delegation
Unlike traditional multi-agent systems that just assign tasks in parallel, Synqed enables genuine collaboration where agents:
- 👀 See what other agents are working on
- 💬 Provide feedback to each other
- 🔄 Refine their work based on peer input
- 🎯 Create integrated, cohesive solutions together
All seamless. All autonomous.
Synqed also lets agents from any provider (OpenAI, Anthropic, Google, or local models) communicate as part of the same system.
Documentation
For full API documentation, see here
Installation
```bash
# install from PyPI
pip install synqed
```
Synqed works with the following LLM providers. Install your preferred provider:
```bash
pip install openai                 # For OpenAI (GPT-4, GPT-4o, etc.)
pip install anthropic              # For Anthropic (Claude)
pip install google-generativeai    # For Google (Gemini)
```
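If you are unsure which provider SDKs are already installed in your environment, a quick check with Python's standard library (this helper is not part of Synqed) can tell you:

```python
import importlib.util


def has_module(name: str) -> bool:
    """Return True if the module can be located without fully importing it."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # A parent package (e.g. "google") is missing entirely.
        return False


for mod in ["openai", "anthropic", "google.generativeai"]:
    print(f"{mod}: {'installed' if has_module(mod) else 'missing'}")
```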
Usage
Quick Start: Your First Agent
Here's the fastest way to get started.
Step 1: Create Your Agent
Create a file my_agent.py:
```python
import asyncio
import os

import synqed


async def agent_logic(context):
    """Your agent's brain - this is where the magic happens."""
    user_message = context.get_user_input()

    # Use any LLM you want
    from openai import AsyncOpenAI
    client = AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY"))
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content


async def main():
    # Create your agent
    agent = synqed.Agent(
        name="MyFirstAgent",
        description="A helpful AI assistant",
        skills=["general_assistance", "question_answering"],
        executor=agent_logic,
    )

    # Start the server
    server = synqed.AgentServer(agent, port=8000)
    print(f"Agent running at {agent.url}")
    await server.start()


if __name__ == "__main__":
    asyncio.run(main())
```
Step 2: Connect a Client
Create a file client.py:
```python
import asyncio

import synqed


async def main():
    async with synqed.Client("http://localhost:8000") as client:
        # Option 1: Simple request-response
        response = await client.ask("What are the top 3 most popular songs of all time?")
        print(f"Agent: {response}")

        # Option 2: Streaming response (like ChatGPT typing)
        print("Streaming: ", end="")
        async for chunk in client.stream("Tell me a joke"):
            print(chunk, end="", flush=True)
        print()


if __name__ == "__main__":
    asyncio.run(main())
```
Step 3: Run It
```bash
# Terminal 1 - Start your agent
python my_agent.py

# Terminal 2 - Connect your client
python client.py
```
Congratulations! You just built and deployed your first AI agent.
Understanding Executor Functions
The executor is where you define your agent's behavior. It receives a context object and returns a response:
```python
async def agent_logic(context):
    """
    Args:
        context: RequestContext with methods:
            - get_user_input() -> str: User's message
            - get_task() -> Task: Full task object
            - get_message() -> Message: Full message object

    Returns:
        str or Message: Agent's response
    """
    user_message = context.get_user_input()

    # Implement any logic:
    # - Call LLMs (OpenAI, Anthropic, Google)
    # - Query databases
    # - Call external APIs
    # - Delegate to other agents
    return "Agent response"
```
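Executors don't have to call an LLM. As a minimal illustration, here is a purely rule-based executor that follows the same contract (context in, string out); the `StubContext` class is a test double for local experimentation, not part of Synqed:

```python
import asyncio


# A tiny rule-based executor: same contract as an LLM-backed one,
# but with no network calls, so it is trivial to unit-test.
async def faq_executor(context):
    question = context.get_user_input().lower()
    if "price" in question:
        return "Our service is free during the beta."
    if "hours" in question:
        return "We are available 24/7."
    return "Sorry, I don't know that one yet."


# Minimal stand-in for the real RequestContext, for local testing only.
class StubContext:
    def __init__(self, text):
        self._text = text

    def get_user_input(self):
        return self._text


if __name__ == "__main__":
    reply = asyncio.run(faq_executor(StubContext("What is the price?")))
    print(reply)  # → Our service is free during the beta.
```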
Client Configuration
The client allows your agents to interact with other agents.
```python
import synqed

# Default configuration
client = synqed.Client("http://localhost:8000")

# Custom timeout
client = synqed.Client(
    agent_url="http://localhost:8000",
    timeout=120.0,  # 2 minutes (default is 60)
)

# Disable streaming
client = synqed.Client(
    agent_url="http://localhost:8000",
    streaming=False,
)

# Override per-request
async with synqed.Client("http://localhost:8000") as client:
    response = await client.with_options(timeout=30.0).ask("Quick question")
```
Agent Collaboration with Orchestrator
The Orchestrator uses an LLM to analyze tasks and intelligently route them to the most suitable agents.
Basic Orchestration
```python
import os

import synqed

# Create orchestrator with LLM-powered routing
orchestrator = synqed.Orchestrator(
    provider=synqed.LLMProvider.OPENAI,
    api_key=os.environ.get("OPENAI_API_KEY"),
    model="gpt-4o",
)

# Register your specialized agents with the orchestrator
orchestrator.register_agent(research_agent.card, "http://localhost:8001")
orchestrator.register_agent(coding_agent.card, "http://localhost:8002")
orchestrator.register_agent(writing_agent.card, "http://localhost:8003")

# Orchestrator automatically selects the best agent(s) for the task
result = await orchestrator.orchestrate(
    "Research recent AI developments and write a technical summary"
)

print(f"Selected: {result.selected_agents[0].agent_name}")
print(f"Confidence: {result.selected_agents[0].confidence:.0%}")
print(f"Reasoning: {result.selected_agents[0].reasoning}")
```
Supported LLM Providers
```python
import os

import synqed

# OpenAI
synqed.Orchestrator(
    provider=synqed.LLMProvider.OPENAI,
    api_key=os.environ.get("OPENAI_API_KEY"),
    model="model-here",
)

# Anthropic
synqed.Orchestrator(
    provider=synqed.LLMProvider.ANTHROPIC,
    api_key=os.environ.get("ANTHROPIC_API_KEY"),
    model="model-here",
)

# Google
synqed.Orchestrator(
    provider=synqed.LLMProvider.GOOGLE,
    api_key=os.environ.get("GOOGLE_API_KEY"),
    model="model-here",
)
```
Orchestration Configuration
```python
import os

import synqed

orchestrator = synqed.Orchestrator(
    provider=synqed.LLMProvider.OPENAI,
    api_key=os.environ.get("OPENAI_API_KEY"),
    model="gpt-4o",
    temperature=0.7,  # Creativity level (0.0 - 1.0)
    max_tokens=2000,  # Maximum response length
)
```
Multi-Agent Delegation
The TaskDelegator coordinates multiple agents working together on complex tasks:
```python
import os

import synqed

# Create orchestrator for intelligent routing
orchestrator = synqed.Orchestrator(
    provider=synqed.LLMProvider.OPENAI,
    api_key=os.environ.get("OPENAI_API_KEY"),
    model="gpt-4o",
)

# Create delegator
delegator = synqed.TaskDelegator(orchestrator=orchestrator)

# Register specialized agents (local or remote)
delegator.register_agent(agent=research_agent)
delegator.register_agent(agent=coding_agent)
delegator.register_agent(agent=writing_agent)

# Agents automatically collaborate on complex tasks
result = await delegator.submit_task(
    "Research microservices patterns and write implementation guide"
)
```
🤝 Agent Collaboration (NEW!)
Beyond simple delegation, Synqed enables true agent collaboration where agents actively interact, provide feedback, and refine their work together.
Collaborative Workspace
The OrchestratedWorkspace creates a temporary environment where agents collaborate through structured phases:
```python
import os

import synqed

# Create orchestrator
orchestrator = synqed.Orchestrator(
    provider=synqed.LLMProvider.OPENAI,
    api_key=os.environ.get("OPENAI_API_KEY"),
    model="gpt-4o",
)

# Create collaborative workspace
workspace = synqed.OrchestratedWorkspace(
    orchestrator=orchestrator,
    enable_agent_discussion=True,  # 🔑 Enables true collaboration!
)

# Register specialized agents
workspace.register_agent(research_agent)
workspace.register_agent(design_agent)
workspace.register_agent(development_agent)

# Agents will collaborate in 4 phases:
# 1. Share initial proposals
# 2. Provide peer feedback
# 3. Refine based on feedback
# 4. Produce integrated solution
result = await workspace.execute_task(
    "Design a new mobile app feature for habit tracking"
)
```
Collaboration Phases
When enable_agent_discussion=True, agents go through structured collaboration:
Phase 1: Kickoff - All agents see the full context and team assignments.
Phase 2: Proposals - Each agent shares their initial approach:
- 🔬 Researcher: "I'll analyze user behavior patterns..."
- 🎨 Designer: "I'll create an intuitive daily tracking interface..."
- 💻 Developer: "I'll implement a notification system..."
Phase 3: Peer Feedback - Agents review each other's work and provide feedback:
- 🔬 Researcher → Designer: "Great UI! Consider gamification based on my findings..."
- 🎨 Designer → Developer: "Can we use push notifications for streak reminders?"
- 💻 Developer → Researcher: "Your data suggests we need offline sync..."
Phase 4: Refinement - Each agent incorporates peer insights into their final deliverable.
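If you want to inspect which phase each message came from, one approach is to bucket the transcript by a phase label in each message's metadata. Note the `phase` metadata key used below is an assumption for illustration; check what metadata your Synqed version actually records:

```python
from collections import defaultdict


def group_by_phase(messages):
    """Bucket workspace messages by the phase recorded in their metadata.

    Messages without a phase label land under 'unknown'.
    """
    buckets = defaultdict(list)
    for msg in messages:
        phase = msg.get("metadata", {}).get("phase", "unknown")
        buckets[phase].append(msg["content"])
    return dict(buckets)


# Hypothetical transcript shaped like result.workspace_messages
transcript = [
    {"sender_name": "Researcher", "content": "Initial proposal",
     "metadata": {"phase": "proposals"}},
    {"sender_name": "Designer", "content": "Feedback on research",
     "metadata": {"phase": "feedback"}},
    {"sender_name": "Researcher", "content": "Revised plan",
     "metadata": {"phase": "refinement"}},
]

print(group_by_phase(transcript))
```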
Delegation vs. Collaboration
```python
# ❌ Traditional delegation (parallel, independent)
workspace = synqed.OrchestratedWorkspace(
    orchestrator=orchestrator,
    enable_agent_discussion=False,  # Faster, but no interaction
)

# ✅ True collaboration (sequential phases, interactive)
workspace = synqed.OrchestratedWorkspace(
    orchestrator=orchestrator,
    enable_agent_discussion=True,  # Slower, but higher quality
)
```
Accessing Collaboration Data
```python
result = await workspace.execute_task(task)

# View all agent interactions
for msg in result.workspace_messages:
    print(f"{msg['sender_name']}: {msg['content']}")

# Count feedback exchanges
feedback_count = len([
    m for m in result.workspace_messages
    if "feedback" in m.get("metadata", {})
])
print(f"Agents exchanged {feedback_count} feedback messages")
```
When to Use Collaboration
✅ Use collaboration when:
- Task requires multiple perspectives
- Quality matters more than speed
- Agents have complementary skills
- Integration is important
❌ Use delegation when:
- Tasks are independent
- Speed is critical
- Simple, straightforward tasks
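These rules of thumb can be encoded as a small helper that picks the `enable_agent_discussion` flag for you. This is a hypothetical heuristic sketched for illustration, not part of Synqed:

```python
def should_collaborate(*, perspectives_needed: int,
                       quality_over_speed: bool,
                       tasks_independent: bool) -> bool:
    """Heuristic: True when collaborative (discussion) mode is likely
    worth its extra latency, False when plain delegation suffices."""
    if tasks_independent:
        # Independent subtasks gain little from peer discussion.
        return False
    return perspectives_needed > 1 or quality_over_speed


# Usage: feed the result straight into the workspace flag
discussion = should_collaborate(
    perspectives_needed=3,
    quality_over_speed=True,
    tasks_independent=False,
)
print(discussion)  # → True
```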
📚 Learn More: See AGENT_COLLABORATION_GUIDE.md for detailed documentation.
Remote Agent Registration
Register agents running anywhere:
```python
# Register remote agent
delegator.register_agent(
    agent_url="https://specialist-agent.example.com",
    agent_card=agent_card,  # Optional pre-loaded card
)
```
Workspace & Collaboration
Basic Workspace
The Workspace provides a collaborative environment where agents can work together, share resources, and coordinate on complex tasks.
```python
import synqed

# Create a workspace
workspace = synqed.Workspace(
    name="Content Creation",
    description="Collaborative space for research and writing",
)

# Add agents to the workspace
workspace.add_agent(research_agent)
workspace.add_agent(writing_agent)

# Start collaboration
await workspace.start()

# Execute a collaborative task
results = await workspace.collaborate(
    "Research AI trends and write a comprehensive article"
)

# View results
for agent_name, response in results.items():
    print(f"{agent_name}: {response}")

# Clean up
await workspace.close()
```
Orchestrated Workspace (Advanced)
The OrchestratedWorkspace automatically breaks complex tasks into subtasks, assigns them to the best agents, and orchestrates execution in a temporary environment.
```python
import os

import synqed

# Create orchestrator
orchestrator = synqed.Orchestrator(
    provider=synqed.LLMProvider.OPENAI,
    api_key=os.environ.get("OPENAI_API_KEY"),
    model="gpt-4o",
)

# Create orchestrated workspace
orchestrated = synqed.OrchestratedWorkspace(
    orchestrator=orchestrator,
    enable_agent_discussion=True,
)

# Register specialized agents
orchestrated.register_agent(research_agent)
orchestrated.register_agent(coding_agent)
orchestrated.register_agent(writing_agent)
orchestrated.register_agent(review_agent)

# Execute a complex task - the workspace automatically:
# 1. Breaks it into subtasks
# 2. Assigns them to the best agents
# 3. Creates a temporary workspace
# 4. Executes in parallel where possible
# 5. Synthesizes the final result
result = await orchestrated.execute_task(
    "Research REST API best practices, write a FastAPI implementation, "
    "create documentation, and review everything for quality"
)

print(f"Success: {result.success}")
print(f"Subtasks: {len(result.plan.subtasks)}")
print(f"Final result: {result.final_result}")
```
Advanced Workspace Features
```python
import os

import synqed

# Create a workspace with an orchestrator for intelligent routing
orchestrator = synqed.Orchestrator(
    provider=synqed.LLMProvider.OPENAI,
    api_key=os.environ.get("OPENAI_API_KEY"),
    model="gpt-4o",
)

workspace = synqed.Workspace(
    name="Smart Collaboration",
    enable_persistence=True,  # Save workspace state
    auto_cleanup=False,       # Keep artifacts
)

workspace.add_agent(agent1)
workspace.add_agent(agent2)
workspace.add_agent(agent3)

await workspace.start()

# The orchestrator selects the best agents for the task
results = await workspace.collaborate(
    "Complex multi-step task",
    orchestrator=orchestrator,
)
```
Sharing Artifacts and State
```python
# Share data between agents
workspace.add_artifact(
    name="data.json",
    artifact_type="data",
    content={"key": "value"},
    created_by="agent1",
)

# Set shared state
workspace.set_shared_state("project_id", "proj-123")

# Get artifacts
artifacts = workspace.get_artifacts(artifact_type="data")

# Get shared state
project_id = workspace.get_shared_state("project_id")
```
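Conceptually, shared state and artifacts behave like a namespaced in-memory store attached to the workspace. This toy model (not Synqed's actual implementation) shows the access pattern the methods above follow:

```python
class ToyWorkspaceStore:
    """Toy model of a workspace's shared state and artifact store."""

    def __init__(self):
        self._state = {}
        self._artifacts = []

    def set_shared_state(self, key, value):
        self._state[key] = value

    def get_shared_state(self, key, default=None):
        return self._state.get(key, default)

    def add_artifact(self, name, artifact_type, content, created_by):
        self._artifacts.append({
            "name": name,
            "type": artifact_type,
            "content": content,
            "created_by": created_by,
        })

    def get_artifacts(self, artifact_type=None):
        # No filter: return everything; otherwise filter by type.
        if artifact_type is None:
            return list(self._artifacts)
        return [a for a in self._artifacts if a["type"] == artifact_type]


store = ToyWorkspaceStore()
store.set_shared_state("project_id", "proj-123")
store.add_artifact("data.json", "data", {"key": "value"}, "agent1")
print(store.get_shared_state("project_id"))  # → proj-123
```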
Direct Agent Communication
```python
# Send a message to a specific agent
response = await workspace.send_message_to_agent(
    participant_id="agent-123",
    message="Analyze this data",
)

# Broadcast to all agents
responses = await workspace.broadcast_message(
    "Please provide status updates"
)
```
For detailed workspace documentation, see the Workspace Guide.
Complete Examples
Ready to dive deeper? Check out the complete, runnable examples here
Copyright © 2025 Synq Team. All rights reserved.