## Development

```shell
# Host
python run_host.py

# Worker Runtime
uvicorn src.AgentOpera.main:app --host 0.0.0.0 --port 8000 --reload

# Release
rm -rf dist build
hatch build
twine upload dist/*

# Chat
curl -X POST http://localhost:8000/api/chat -H "Content-Type: application/json" -d '{
  "id": "ZWToutqeUawzfaR7",
  "messages": [{
    "role": "user",
    "content": "introduce TensorOpera AI, please use 10000 words",
    "parts": [{
      "type": "text",
      "text": "tensoropera ai"
    }]
  }],
  "model": "chainopera-default",
  "group": "extreme"
}'
```
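The chat payload above has a nested shape. As a sketch of how that shape can be consumed, the helper below (hypothetical; `latest_user_text` is not part of the AgentOpera API, and the field names are taken from the example request) pulls out the most recent user message:

```python
import json

def latest_user_text(payload: dict) -> str:
    """Return the content of the most recent user message in a chat payload.

    Field names ("messages", "role", "content") mirror the example request
    above; the authoritative schema is defined by the AgentOpera app itself.
    """
    user_msgs = [m for m in payload.get("messages", []) if m.get("role") == "user"]
    if not user_msgs:
        return ""
    return user_msgs[-1].get("content", "")

request_body = json.loads('''{
    "id": "ZWToutqeUawzfaR7",
    "messages": [{"role": "user", "content": "introduce TensorOpera AI"}],
    "model": "chainopera-default",
    "group": "extreme"
}''')
print(latest_user_text(request_body))  # introduce TensorOpera AI
```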
# Multi-Agent Orchestration: Distributed Agent Runtime Example

This repository is an example of how to run a distributed agent runtime. The system is composed of three main components:

- The agent host runtime, which manages the eventing engine and the pub/sub messaging system.
- The worker runtime, which manages the lifecycle of the distributed agents, including the "semantic router".
- The user proxy, which manages the user interface and the user's interactions with the agents.
## Example Scenario

In this example, we have a simple scenario with a set of distributed agents (an "HR" agent and a "Finance" agent) that an enterprise might use to manage its HR and Finance operations. Each of these agents is independent and can run on a different machine. While many multi-agent systems are built to have the agents collaborate on a difficult task, the goal of this example is to show how an enterprise can manage a large set of agents, each suited to an individual task, and how to route a user to the most relevant agent for the task at hand.

The way this system is designed, when a user initiates a session, the semantic router agent identifies the intent of the user (currently using the overly simple method of string matching), identifies the most relevant agent, and routes the user to that agent. The agent then manages the conversation with the user, and the user can interact with the agent in a conversational manner.

While the logic of the agents is simple in this example, the goal is to show how the distributed runtime capabilities of AutoGen support this scenario independently of the capabilities of the agents themselves.
## Getting Started

- Install `autogen-core` and its dependencies
### To run

Since this example is meant to demonstrate a distributed runtime, its components are meant to run in different processes, i.e. different terminals.

In 2 separate terminals, run:

```shell
# Terminal 1, to run the Agent Host Runtime
python run_host.py

# Terminal 2, to run the Worker Runtime
python run_semantic_router.py
```
The first terminal should log a series of events as the various agents are registered against the runtime.

In the second terminal, you may enter a request related to finance or HR scenarios. In our simple example, this means using one of the following keywords in your request:

- For the finance agent: "finance", "money", "budget"
- For the HR agent: "hr", "human resources", "employee"

You will then see the host and worker runtimes send messages back and forth, routing to the correct agent, before the final response is printed.

The conversation can then continue with the selected agent until the user sends a message containing "END", at which point the agent is disconnected from the user and a new conversation can start.
## Message Flow

Using the "Topic" feature of the agent host runtime, the message flow of the system is as follows:

```mermaid
sequenceDiagram
    participant User
    participant Closure_Agent
    participant User_Proxy_Agent
    participant Semantic_Router
    participant Worker_Agent
    User->>User_Proxy_Agent: Send initial message
    User_Proxy_Agent->>Semantic_Router: Forward message for intent classification
    Semantic_Router->>Worker_Agent: Route message to appropriate agent
    Worker_Agent->>User_Proxy_Agent: Respond to user message
    User_Proxy_Agent->>Closure_Agent: Forward message to externally facing Closure Agent
    Closure_Agent->>User: Expose the response to the User
    User->>Worker_Agent: Directly send follow up message
    Worker_Agent->>User_Proxy_Agent: Respond to user message
    User_Proxy_Agent->>Closure_Agent: Forward message to externally facing Closure Agent
    Closure_Agent->>User: Return response
    User->>Worker_Agent: Send "END" message
    Worker_Agent->>User_Proxy_Agent: Confirm session end
    User_Proxy_Agent->>Closure_Agent: Confirm session end
    Closure_Agent->>User: Display session end message
```
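The topic-based flow above can be sketched with a minimal in-memory pub/sub broker. This is illustrative only: the topic names are made up, and the real system uses the agent host runtime's eventing engine over gRPC rather than in-process callbacks.

```python
from collections import defaultdict
from typing import Callable

class Broker:
    """Toy topic-based pub/sub broker: handlers subscribe to a topic name,
    and publish fans a message out to every handler on that topic."""

    def __init__(self) -> None:
        self._subs: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        for handler in self._subs[topic]:
            handler(message)

broker = Broker()
log: list[str] = []

# Wire up the participants from the sequence diagram (hypothetical topics).
broker.subscribe("user_proxy", lambda m: (log.append("proxy"), broker.publish("router", m)))
broker.subscribe("router", lambda m: (log.append("router"), broker.publish("finance_agent", m)))
broker.subscribe("finance_agent", lambda m: (log.append("agent"), broker.publish("closure", m)))
broker.subscribe("closure", lambda m: log.append("closure"))

broker.publish("user_proxy", {"content": "budget question"})
print(log)  # ['proxy', 'router', 'agent', 'closure']
```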
## 🚀 Launching the Docker Container for Agents

### ✅ Build the Docker Image

Ensure you're in the project root directory:

```shell
docker build -t streaming_agent_app .
```

### ✅ Run the Docker Container

Launch the container and expose the required ports:

```shell
docker run -d -p 8000:8000 -p 50051:50051 --name streaming_agent_container streaming_agent_app
```

### ✅ Check Logs and Running Services

To check logs:

```shell
docker logs streaming_agent_container
```

To verify running services:

```shell
docker exec -it streaming_agent_container supervisorctl status
```
## 🔗 cURL Command to Test the Endpoint

Send a POST request to the `/chat` endpoint to test the service:

### ✅ Single-line Command

```shell
curl -X POST http://localhost:8000/chat/stream -H "Content-Type: application/json" -d '{"message": "Research history of AI?", "id": "test"}'
```
### ✅ Multi-line Command for zsh/bash

```shell
curl -X POST http://localhost:8000/chat \
  -H "Content-Type: application/json" \
  -d '{
    "message": "What is our company'\''s vacation policy?",
    "user_id": "test"
  }'
```
### ✅ Expected Response

If everything works correctly, you should see a JSON response like:

```json
{
  "message": "Our vacation policy allows employees to take up to 20 days of paid leave annually.",
  "status": "completed",
  "is_final": true,
  "user_id": "test",
  "conversation_id": "1234-5678"
}
```
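When scripting against the endpoint, a small check can confirm that a reply is completed and final. The helper below is hypothetical; only the field names are taken from the example response above, and the authoritative schema is defined by the service.

```python
def is_final_response(resp: dict) -> bool:
    """Return True when a /chat response signals a completed, final answer."""
    return resp.get("status") == "completed" and resp.get("is_final") is True

sample = {
    "message": "Our vacation policy allows employees to take up to 20 days of paid leave annually.",
    "status": "completed",
    "is_final": True,
    "user_id": "test",
    "conversation_id": "1234-5678",
}
print(is_final_response(sample))  # True
```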
## ❓ Troubleshooting

- Ensure the container is running:

  ```shell
  docker ps
  ```

- Check for errors in the logs:

  ```shell
  docker logs streaming_agent_container
  ```

- Verify that the `/chat` endpoint is accessible:

  ```shell
  curl -I http://localhost:8000/chat
  ```
## File details

Details for the file `agentopera-0.0.3.tar.gz`.

### File metadata

- Download URL: agentopera-0.0.3.tar.gz
- Upload date:
- Size: 356.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.2

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `f411d807a8f30c7d6d87081cc8c8377691c6b4a654c358c73b6e473229290daf` |
| MD5 | `990c7e5f47d60f9cd6fbd379b766aa9e` |
| BLAKE2b-256 | `7524b877cd3840c9dbf00fd4fabe7e136d4f110a568390155c9aa31a814b4bce` |
## File details

Details for the file `agentopera-0.0.3-py3-none-any.whl`.

### File metadata

- Download URL: agentopera-0.0.3-py3-none-any.whl
- Upload date:
- Size: 467.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.2

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `6fa3be4b7d6ca7053af3996399f5d4c99c5308c762e3184d72d72957802b5de2` |
| MD5 | `4420f191e72e573b601219a754413bdf` |
| BLAKE2b-256 | `4f34f19d8a0d5c1be803d483a3607db7de8c860ac0ec766c689d6d628990dd13` |