
Synqed - A wrapper around A2A for simplified multi-agent systems interaction and communication

Project description

Synqed Python API library


Synqed enables true AI-to-AI interaction.

Agents can talk to each other, collaborate, coordinate, delegate tasks, and solve problems together—letting you build actual multi-agent systems.

All seamless. All autonomous.

Synqed also lets agents from any provider—OpenAI, Anthropic, Google, or local models—communicate as part of the same system.

Documentation

For full API documentation, see here

Installation

# install from PyPI
pip install synqed

Synqed works with the following LLM providers. Install your preferred provider:

pip install openai                  # For OpenAI (GPT-4, GPT-4o, etc.)
pip install anthropic               # For Anthropic (Claude)
pip install google-generativeai     # For Google (Gemini)

Usage

Quick Start: Your First Agent

Here's the fastest way to get started:

Step 1: Create Your Agent

Create a file my_agent.py:

import asyncio
import os
import synqed

async def agent_logic(context):
    """Your agent's brain - this is where the magic happens."""
    user_message = context.get_user_input()
    
    # Use any LLM you want
    from openai import AsyncOpenAI
    client = AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY"))
    
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message}
        ]
    )
    
    return response.choices[0].message.content

async def main():
    # Create your agent
    agent = synqed.Agent(
        name="MyFirstAgent",
        description="A helpful AI assistant",
        skills=["general_assistance", "question_answering"],
        executor=agent_logic
    )
    
    # Start the server
    server = synqed.AgentServer(agent, port=8000)
    print(f"Agent running at {agent.url}")
    await server.start()

if __name__ == "__main__":
    asyncio.run(main())

Step 2: Connect a Client

Create a file client.py:

import asyncio
import synqed

async def main():
    async with synqed.Client("http://localhost:8000") as client:
        # Option 1: Simple request-response
        response = await client.ask("What are the top 3 most popular songs of all time?")
        print(f"Agent: {response}")
        
        # Option 2: Streaming response (like ChatGPT typing)
        print("Streaming: ", end="")
        async for chunk in client.stream("Tell me a joke"):
            print(chunk, end="", flush=True)
        print()

if __name__ == "__main__":
    asyncio.run(main())
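If you want the full text as well as the live stream, accumulate chunks as they arrive. Here is a sketch of the pattern; fake_stream is a hypothetical stand-in that only simulates client.stream for illustration:

```python
import asyncio

async def fake_stream(prompt):
    """Hypothetical stand-in for client.stream(): yields a response in chunks."""
    for chunk in ["Why did the ", "chicken cross ", "the road?"]:
        yield chunk

async def collect(stream):
    """Print chunks as they arrive and return the assembled text."""
    parts = []
    async for chunk in stream:
        print(chunk, end="", flush=True)
        parts.append(chunk)
    print()
    return "".join(parts)

full = asyncio.run(collect(fake_stream("Tell me a joke")))
# full now holds the complete response text
```

With a real client, you would pass client.stream("Tell me a joke") to collect() instead of fake_stream.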

Step 3: Run It

# Terminal 1 - Start your agent
python my_agent.py

# Terminal 2 - Connect your client
python client.py

Congratulations! You just built and deployed your first AI agent.


Understanding Executor Functions

The executor is where you define your agent's behavior. It receives a context object and returns a response:

async def agent_logic(context):
    """
    Args:
        context: RequestContext with methods:
            - get_user_input() → str: User's message
            - get_task() → Task: Full task object
            - get_message() → Message: Full message object
    
    Returns:
        str or Message: Agent's response
    """
    user_message = context.get_user_input()
    
    # Implement any logic:
    # - Call LLMs (OpenAI, Anthropic, Google)
    # - Query databases
    # - Call external APIs
    # - Delegate to other agents
    
    return "Agent response"
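Because an executor is just an async function, you can unit-test it locally with a stub context before attaching it to an Agent. A minimal sketch — StubContext is a hypothetical test double that only mirrors RequestContext.get_user_input and is not part of synqed:

```python
import asyncio

async def calculator_logic(context):
    """A deterministic executor: evaluates simple additions like '2 + 3'."""
    text = context.get_user_input()
    try:
        left, right = text.split("+")
        return str(int(left) + int(right))
    except ValueError:
        return "I can only add two integers, e.g. '2 + 3'."

class StubContext:
    """Hypothetical test double mirroring RequestContext.get_user_input."""
    def __init__(self, message):
        self._message = message

    def get_user_input(self):
        return self._message

result = asyncio.run(calculator_logic(StubContext("2 + 3")))
print(result)  # → 5
```

Once the logic behaves as expected, pass the same function as executor= when constructing the Agent.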

Client Configuration

The Client connects to a running agent and lets you, or other agents, send it requests.

import synqed

# Default configuration
client = synqed.Client("http://localhost:8000")

# Custom timeout
client = synqed.Client(
    agent_url="http://localhost:8000",
    timeout=120.0  # 2 minutes (default is 60)
)

# Disable streaming
client = synqed.Client(
    agent_url="http://localhost:8000",
    streaming=False
)

# Override per-request
async with synqed.Client("http://localhost:8000") as client:
    response = await client.with_options(timeout=30.0).ask("Quick question")

Agent Collaboration with Orchestrator

The Orchestrator uses an LLM to analyze tasks and intelligently route them to the most suitable agents.

Basic Orchestration

import synqed
import os

# Create orchestrator with LLM-powered routing
orchestrator = synqed.Orchestrator(
    provider=synqed.LLMProvider.OPENAI,
    api_key=os.environ.get("OPENAI_API_KEY"),
    model="gpt-4o"
)

# Register your specialized agents to the orchestrator
orchestrator.register_agent(research_agent.card, "http://localhost:8001")
orchestrator.register_agent(coding_agent.card, "http://localhost:8002")
orchestrator.register_agent(writing_agent.card, "http://localhost:8003")

# Orchestrator automatically selects the best agent(s) for the task
result = await orchestrator.orchestrate(
    "Research recent AI developments and write a technical summary"
)

print(f"Selected: {result.selected_agents[0].agent_name}")
print(f"Confidence: {result.selected_agents[0].confidence:.0%}")
print(f"Reasoning: {result.selected_agents[0].reasoning}")

Supported LLM Providers

import synqed

# OpenAI
synqed.Orchestrator(
    provider=synqed.LLMProvider.OPENAI,
    api_key=os.environ.get("OPENAI_API_KEY"),
    model="model-here" 
)

# Anthropic
synqed.Orchestrator(
    provider=synqed.LLMProvider.ANTHROPIC,
    api_key=os.environ.get("ANTHROPIC_API_KEY"),
    model="model-here"
)

# Google
synqed.Orchestrator(
    provider=synqed.LLMProvider.GOOGLE,
    api_key=os.environ.get("GOOGLE_API_KEY"),
    model="model-here"
)

Orchestration Configuration

import synqed

orchestrator = synqed.Orchestrator(
    provider=synqed.LLMProvider.OPENAI,
    api_key=os.environ.get("OPENAI_API_KEY"),
    model="gpt-4o",
    temperature=0.7,     # Creativity level (0.0 - 1.0)
    max_tokens=2000      # Maximum response length
)

Multi-Agent Delegation

The TaskDelegator coordinates multiple agents working together on complex tasks:

import synqed
import os

# Create orchestrator for intelligent routing
orchestrator = synqed.Orchestrator(
    provider=synqed.LLMProvider.OPENAI,
    api_key=os.environ.get("OPENAI_API_KEY"),
    model="gpt-4o"
)

# Create delegator
delegator = synqed.TaskDelegator(orchestrator=orchestrator)

# Register specialized agents (local or remote)
delegator.register_agent(agent=research_agent)
delegator.register_agent(agent=coding_agent)
delegator.register_agent(agent=writing_agent)

# Agents automatically collaborate on complex tasks
result = await delegator.submit_task(
    "Research microservices patterns and write implementation guide"
)

Remote Agent Registration

Register agents running anywhere:

# Register remote agent
delegator.register_agent(
    agent_url="https://specialist-agent.example.com",
    agent_card=agent_card  # Optional pre-loaded card
)

Workspace & Collaboration

Basic Workspace

The Workspace provides a collaborative environment where agents can work together, share resources, and coordinate on complex tasks.

import synqed

# Create a workspace
workspace = synqed.Workspace(
    name="Content Creation",
    description="Collaborative space for research and writing"
)

# Add agents to workspace
workspace.add_agent(research_agent)
workspace.add_agent(writing_agent)

# Start collaboration
await workspace.start()

# Execute collaborative task
results = await workspace.collaborate(
    "Research AI trends and write a comprehensive article"
)

# View results
for agent_name, response in results.items():
    print(f"{agent_name}: {response}")

# Clean up
await workspace.close()

Orchestrated Workspace (Advanced)

The OrchestratedWorkspace automatically breaks complex tasks into subtasks, assigns them to the best agents, and orchestrates execution in a temporary environment.

import synqed

# Create orchestrator
orchestrator = synqed.Orchestrator(
    provider=synqed.LLMProvider.OPENAI,
    api_key=os.environ.get("OPENAI_API_KEY"),
    model="gpt-4o"
)

# Create orchestrated workspace
orchestrated = synqed.OrchestratedWorkspace(
    orchestrator=orchestrator,
    enable_agent_discussion=True
)

# Register specialized agents
orchestrated.register_agent(research_agent)
orchestrated.register_agent(coding_agent)
orchestrated.register_agent(writing_agent)
orchestrated.register_agent(review_agent)

# Execute complex task - automatically:
# 1. Breaks into subtasks
# 2. Assigns to best agents
# 3. Creates temporary workspace
# 4. Executes in parallel where possible
# 5. Synthesizes final result
result = await orchestrated.execute_task(
    "Research REST API best practices, write a FastAPI implementation, "
    "create documentation, and review everything for quality"
)

print(f"Success: {result.success}")
print(f"Subtasks: {len(result.plan.subtasks)}")
print(f"Final result: {result.final_result}")

Advanced Workspace Features

# Create an orchestrator to pass to collaborate() for intelligent routing
orchestrator = synqed.Orchestrator(
    provider=synqed.LLMProvider.OPENAI,
    api_key=os.environ.get("OPENAI_API_KEY"),
    model="gpt-4o"
)

workspace = synqed.Workspace(
    name="Smart Collaboration",
    enable_persistence=True,  # Save workspace state
    auto_cleanup=False        # Keep artifacts
)

workspace.add_agent(agent1)
workspace.add_agent(agent2)
workspace.add_agent(agent3)

await workspace.start()

# Orchestrator selects best agents for the task
results = await workspace.collaborate(
    "Complex multi-step task",
    orchestrator=orchestrator
)

Sharing Artifacts and State

# Share data between agents
workspace.add_artifact(
    name="data.json",
    artifact_type="data",
    content={"key": "value"},
    created_by="agent1"
)

# Set shared state
workspace.set_shared_state("project_id", "proj-123")

# Get artifacts
artifacts = workspace.get_artifacts(artifact_type="data")

# Get shared state
project_id = workspace.get_shared_state("project_id")

Direct Agent Communication

# Send message to specific agent
response = await workspace.send_message_to_agent(
    participant_id="agent-123",
    message="Analyze this data"
)

# Broadcast to all agents
responses = await workspace.broadcast_message(
    "Please provide status updates"
)
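Conceptually, a broadcast is a fan-out: one message sent to every participant concurrently, with replies collected by agent. The pattern can be sketched with plain asyncio — StubAgent is hypothetical, standing in for real workspace participants:

```python
import asyncio

class StubAgent:
    """Hypothetical stand-in for a workspace participant."""
    def __init__(self, name):
        self.name = name

    async def respond(self, message):
        await asyncio.sleep(0)  # simulate network latency
        return f"{self.name}: received '{message}'"

async def broadcast(agents, message):
    """Send one message to every agent concurrently; collect replies by name."""
    replies = await asyncio.gather(*(a.respond(message) for a in agents))
    return dict(zip((a.name for a in agents), replies))

agents = [StubAgent("research"), StubAgent("writing")]
responses = asyncio.run(broadcast(agents, "status update"))
```

Running the fan-out with asyncio.gather means slow agents don't serialize the round trip: total latency is roughly that of the slowest participant, not the sum.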

For detailed workspace documentation, see the Workspace Guide.


Complete Examples

Ready to dive deeper? Check out the complete, runnable examples here


Copyright © 2025 Synq Team. All rights reserved.


Download files

Download the file for your platform.

Source Distribution

synqed-1.0.10.tar.gz (126.0 kB)

Built Distribution


synqed-1.0.10-py3-none-any.whl (38.9 kB)

File details

Details for the file synqed-1.0.10.tar.gz.

File metadata

  • Download URL: synqed-1.0.10.tar.gz
  • Upload date:
  • Size: 126.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.0

File hashes

Hashes for synqed-1.0.10.tar.gz

  • SHA256: 33c1f8c6b9f8216bcece9bfa0b4ad5fcb5c5f5532fcc5f7125a07067ac302382
  • MD5: db2a1b22d9fb265a40b4723d1afa8191
  • BLAKE2b-256: a4af61083f5aa6283bfa6c2f2da22d0ecd5a268dea880754fff20aa9b8b8ecb7


File details

Details for the file synqed-1.0.10-py3-none-any.whl.

File metadata

  • Download URL: synqed-1.0.10-py3-none-any.whl
  • Upload date:
  • Size: 38.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.0

File hashes

Hashes for synqed-1.0.10-py3-none-any.whl

  • SHA256: e9961e94ee1f367fccbd24348b3c3134118f0d7b0256d1a9386f074c66b3263d
  • MD5: 9692b55f22ef5ce7b57289a914b87af5
  • BLAKE2b-256: c706a78fc096bd8d01f8cd283e488e5d9e88d684cad5ff33362f5f863d8d735b

