
Project description

Development

# Host
python run_host.py

# Worker Runtime
uvicorn src.AgentOpera.main:app --host 0.0.0.0 --port 8000 --reload

# Chat
curl -X POST http://localhost:8000/api/chat -H "Content-Type: application/json" -d '{
	"id": "ZWToutqeUawzfaR7",
	"messages": [{
		"role": "user",
		"content": "introduce TensorOpera AI, please use 10000 words",
		"parts": [{
			"type": "text",
			"text": "tensoropera ai"
		}]
	}],
	"model": "chainopera-default",
	"group": "extreme"
}'
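The same request can be issued from Python. Below is a minimal sketch using only the standard library; the endpoint, field names, and default values are copied from the curl example above, not from a documented client API, and the helper names are hypothetical:

```python
import json
import urllib.request


def build_chat_request(message: str, chat_id: str,
                       model: str = "chainopera-default",
                       group: str = "extreme") -> dict:
    """Assemble a request body matching the curl example above.

    Simplification: the same message string is used for both "content"
    and the text part, unlike the curl example which uses two strings.
    """
    return {
        "id": chat_id,
        "messages": [{
            "role": "user",
            "content": message,
            "parts": [{"type": "text", "text": message}],
        }],
        "model": model,
        "group": group,
    }


def post_chat(body: dict, url: str = "http://localhost:8000/api/chat") -> str:
    """POST the body as JSON; requires the worker runtime to be running."""
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```

Calling `post_chat(build_chat_request("tensoropera ai", "demo-1"))` with the worker runtime running should return the chat response body.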

Multi-Agent Orchestration: Distributed Agent Runtime Example

This repository is an example of how to run a distributed agent runtime. The system is composed of three main components:

  1. The agent host runtime, which is responsible for managing the eventing engine and the pub/sub message system.
  2. The worker runtime, which is responsible for the lifecycle of the distributed agents, including the "semantic router".
  3. The user proxy, which is responsible for managing the user interface and the user interactions with the agents.

Example Scenario

In this example, we have a simple scenario with a set of distributed agents (an "HR" agent and a "Finance" agent) that an enterprise might use to manage its HR and Finance operations. Each of these agents is independent and can run on a different machine. While many multi-agent systems are built to have the agents collaborate on a difficult task, the goal of this example is to show how an enterprise may manage a large set of agents, each suited to an individual task, and how to route a user to the most relevant agent for the task at hand.

The way this system is designed, when a user initiates a session, the semantic router agent identifies the user's intent (currently using the overly simple method of string matching), selects the most relevant agent, and routes the user to it. That agent then manages the conversation, and the user interacts with it in a conversational manner.

While the logic of the agents is simple in this example, the goal is to show how the distributed runtime capabilities of autogen support this scenario independently of the capabilities of the agents themselves.

Getting Started

  1. Install autogen-core and its dependencies

To run

Since this example is meant to demonstrate a distributed runtime, the components of this example are meant to run in different processes - i.e. different terminals.

In 2 separate terminals, run:

# Terminal 1, to run the Agent Host Runtime
python run_host.py
# Terminal 2, to run the Worker Runtime
python run_semantic_router.py

The first terminal should log a series of events as the various agents are registered against the runtime.

In the second terminal, you may enter a request related to finance or hr scenarios. In our simple example here, this means using one of the following keywords in your request:

  • For the finance agent: "finance", "money", "budget"
  • For the hr agent: "hr", "human resources", "employee"

You will then see the host and worker runtimes send messages back and forth, routing to the correct agent, before the final response is printed.

The conversation can then continue with the selected agent until the user sends a message containing "END", at which point the agent is disconnected from the user and a new conversation can start.

Message Flow

Using the "Topic" feature of the agent host runtime, the message flow of the system is as follows:

sequenceDiagram
    participant User
    participant Closure_Agent
    participant User_Proxy_Agent
    participant Semantic_Router
    participant Worker_Agent

    User->>User_Proxy_Agent: Send initial message
    Semantic_Router->>Worker_Agent: Route message to appropriate agent
    Worker_Agent->>User_Proxy_Agent: Respond to user message
    User_Proxy_Agent->>Closure_Agent: Forward message to externally facing Closure Agent
    Closure_Agent->>User: Expose the response to the User
    User->>Worker_Agent: Directly send follow up message
    Worker_Agent->>User_Proxy_Agent: Respond to user message
    User_Proxy_Agent->>Closure_Agent: Forward message to externally facing Closure Agent
    Closure_Agent->>User: Return response
    User->>Worker_Agent: Send "END" message
    Worker_Agent->>User_Proxy_Agent: Confirm session end
    User_Proxy_Agent->>Closure_Agent: Confirm session end
    Closure_Agent->>User: Display session end message
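The topic-based delivery underlying this flow can be illustrated with a toy in-memory broker. This stands in for the host runtime's actual pub/sub engine (which is not used here) and only shows the subscribe/publish pattern:

```python
from collections import defaultdict
from typing import Callable


class ToyBroker:
    """Minimal topic-based pub/sub, illustrating the delivery model only."""

    def __init__(self) -> None:
        self._subs: dict[str, list[Callable[[str], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[str], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, message: str) -> None:
        # Deliver the message to every handler subscribed to this topic.
        for handler in self._subs[topic]:
            handler(message)


broker = ToyBroker()
received: list[str] = []

# The finance worker agent subscribes to its own topic...
broker.subscribe("finance", lambda m: received.append(f"finance agent got: {m}"))
# ...and the semantic router publishes the routed user message to that topic.
broker.publish("finance", "What is this year's budget?")
```

A message published to a topic with no subscribers is simply dropped, which mirrors why agents must be registered against the runtime before routing can succeed.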

🚀 Launching the Docker Container for Agents

Build the Docker Image

Ensure you're in the project root directory:

docker build -t streaming_agent_app .

Run the Docker Container

Launch the container and expose required ports:

docker run -d -p 8000:8000 -p 50051:50051 --name streaming_agent_container streaming_agent_app

Check Logs and Running Services

To check logs:

docker logs streaming_agent_container

To verify running services:

docker exec -it streaming_agent_container supervisorctl status

🔗 cURL Command to Test the Endpoint

Send a POST request to the /chat endpoint to test the service:

Single-line Command

curl -X POST http://localhost:8000/chat/stream -H "Content-Type: application/json" -d '{"message": "Research history of AI?", "id": "test"}'



Multi-line Command for zsh/bash

curl -X POST http://localhost:8000/chat \
-H "Content-Type: application/json" \
-d '{
  "message": "What is our company'\''s vacation policy?",
  "user_id": "test"
}'

Expected Response

If everything works correctly, you should see a JSON response like:

{
  "message": "Our vacation policy allows employees to take up to 20 days of paid leave annually.",
  "status": "completed",
  "is_final": true,
  "user_id": "test",
  "conversation_id": "1234-5678"
}
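A client can inspect these fields to decide whether the reply is complete. The field names below are taken from the expected response above; the helper itself is illustrative, not part of the package:

```python
import json


def is_final_reply(raw: str) -> bool:
    """True when the response reports a completed, final message."""
    resp = json.loads(raw)
    return resp.get("status") == "completed" and resp.get("is_final") is True


sample = (
    '{"message": "Our vacation policy allows employees to take up to 20 days '
    'of paid leave annually.", "status": "completed", "is_final": true, '
    '"user_id": "test", "conversation_id": "1234-5678"}'
)
```

Here `is_final_reply(sample)` returns `True`; an in-progress response (e.g. `"is_final": false`) would return `False`.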

Troubleshooting

  • Ensure the container is running with:

    docker ps
    
  • Check for errors in the logs:

    docker logs streaming_agent_container
    
  • Verify that the /chat endpoint is accessible:

    curl -I http://localhost:8000/chat
    

Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distribution

agentopera-0.0.2-py3-none-any.whl (467.6 kB), uploaded for Python 3

File details

Details for the file agentopera-0.0.2-py3-none-any.whl.

File metadata

  • File name: agentopera-0.0.2-py3-none-any.whl
  • Size: 467.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.2

File hashes

Hashes for agentopera-0.0.2-py3-none-any.whl:

  • SHA256: 53b1a6e55fcf720e5ced0d09d890b08fe164162d28da7a54c46171452e89991f
  • MD5: fb189b5a48318f5b0886e463ca99d852
  • BLAKE2b-256: 0a7d0aa575b568d794a000b5d7f4b9285ac1d0f6ef084d0d40993cf3c67d68ab
