LLM orchestration frameworks for model-agnostic AI agents that handle complex outbound workflows
Project description
Overview
An LLM orchestration framework for deploying multi-agent systems that handle complex outbound tasks.
Visit:
- PyPI
- GitHub (LLM orchestration framework)
- GitHub (Test client app)
- Use case / Quick demo
- Documentation *Some components are under review.
Table of Contents
- Key Features
- Quick Start
- Technologies Used
- Project Structure
- Setup
- Contributing
- Troubleshooting
- Frequently Asked Questions (FAQ)
Key Features
Generate multi-agent systems suited to the complexity of the task, and execute the task with the agents of your choice.
Model-agnostic agents can handle tools (including RAG tools), callbacks, and knowledge sharing among agents; a minimal sketch follows.
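As a quick illustration of model agnosticism, the same agent definition can point at different models via the `llm` field. This is a minimal sketch; the model identifier strings are illustrative, and the Quick Start below uses the placeholder `llm-of-your-choice`.

```python
from versionhq.agent.model import Agent

# Same role and goal, different underlying models (identifiers illustrative).
email_agent_a = Agent(role="email agent", goal="draft promo messages", llm="gpt-4o")
email_agent_b = Agent(role="email agent", goal="draft promo messages", llm="llm-of-your-choice")
```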
Agent formation
Depending on the task complexity, agents can take different formations.
You can specify which formation you want them to generate, or let the agents decide if you don't have a clear plan.
| | Solo Agent | Supervising | Network | Random |
|---|---|---|---|---|
| Use case | An email agent drafts a promo message for the given audience. | The leader agent strategizes an outbound campaign plan and assigns components such as media mix or message creation to subordinate agents. | An email agent and a social media agent share product knowledge and deploy a multi-channel outbound campaign. | 1. An email agent drafts a promo message for the given audience, asking other email agents that oversee other clusters for insights on tone. 2. An agent calls an external agent to deploy the campaign. |
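For example, a Network formation can be sketched with the `Team` API shown in Case 2 of the Quick Start below: two peer agents, neither acting as manager. This is a hedged sketch; the knowledge-sharing mechanics are handled by the framework and not shown here.

```python
from versionhq.agent.model import Agent
from versionhq.task.model import Task
from versionhq.team.model import Team, TeamMember

email_agent = Agent(role="email agent", goal="deploy email campaigns", llm="llm-of-your-choice")
social_agent = Agent(role="social media agent", goal="deploy social campaigns", llm="llm-of-your-choice")

# Network formation: peer agents with their own tasks and no manager.
team = Team(
    members=[
        TeamMember(agent=email_agent, is_manager=False, task=Task(description="Draft the email sequence.")),
        TeamMember(agent=social_agent, is_manager=False, task=Task(description="Draft the social posts.")),
    ],
)
res = team.kickoff()
```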
Quick Start
Install the versionhq package:
pip install versionhq
(Python >= 3.13)
Case 1. Solo Agent:
Return structured output with a summary string.
from pydantic import BaseModel
from versionhq.agent.model import Agent
from versionhq.task.model import Task


class CustomOutput(BaseModel):
    test1: str
    test2: list[str]


def dummy_func(message: str, test1: str, test2: list[str]) -> str:
    # Callback that combines the task output with the given message.
    return f"{message}: {test1}, {', '.join(test2)}"


agent = Agent(role="demo", goal="amazing project goal")

task = Task(
    description="Amazing task",
    pydantic_custom_output=CustomOutput,
    callback=dummy_func,
    callback_kwargs=dict(message="Hi! Here is the result"),
)

res = task.execute_sync(agent=agent, context="amazing context to consider.")
print(res)
This will return a TaskOutput object that stores the response as a raw string, a JSON dict, and an instance of the CustomOutput Pydantic model, together with the callback result:
res == TaskOutput(
    raw="{\"test1\": \"random str\", \"test2\": [\"item1\", \"item2\"]}",
    json_dict={"test1": "random str", "test2": ["item1", "item2"]},
    pydantic=CustomOutput(test1="random str", test2=["item1", "item2"]),
    callback_output="Hi! Here is the result: random str, item1, item2",
)
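The individual representations can then be read off the attributes shown above:

```python
# Reading the TaskOutput attributes shown above
print(res.raw)                  # raw LLM response string
print(res.json_dict["test1"])   # parsed JSON dict
print(res.pydantic.test2)       # validated CustomOutput instance
print(res.callback_output)      # return value of dummy_func
```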
Case 2. Supervising:
from versionhq.agent.model import Agent
from versionhq.task.model import Task, ResponseField
from versionhq.team.model import Team, TeamMember

agent_a = Agent(role="agent a", goal="My amazing goals", llm="llm-of-your-choice")
agent_b = Agent(role="agent b", goal="My amazing goals", llm="llm-of-your-choice")

task_1 = Task(
    description="Analyze the client's business model.",
    response_fields=[ResponseField(title="test1", data_type=str, required=True)],
    allow_delegation=True,
)

task_2 = Task(
    description="Define the cohort.",
    response_fields=[ResponseField(title="test1", data_type=int, required=True)],
    allow_delegation=False,
)

team = Team(
    members=[
        TeamMember(agent=agent_a, is_manager=False, task=task_1),
        TeamMember(agent=agent_b, is_manager=True, task=task_2),
    ],
)

res = team.kickoff()
This will return a list of dictionaries whose keys are defined in the ResponseField of each task.
Tasks can be delegated to a team manager, to peers in the team, or to a completely new agent.
Technologies Used
Schema, Database, Data Validation
- Pydantic: Data validation and serialization library for Python
- Pydantic_core: Core functions for Pydantic
- Chroma DB: Vector database for storing and querying usage data
- SQLite: C-language library that implements a small, fast SQL database engine
- Upstage: Document processor for ML tasks (uses the Document Parser API to extract data from documents)
LLM-curation
- OpenAI GPT-4: Advanced language model for analysis and recommendations
- LiteLLM: Platform to access multiple LLMs through a unified interface (see the sketch below)
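For reference, a standalone LiteLLM call (independent of versionhq) looks like the following; the model string is illustrative and can be swapped for any supported provider/model.

```python
from litellm import completion

# LiteLLM mirrors the OpenAI chat interface across providers.
response = completion(
    model="gpt-4o",  # illustrative; any LiteLLM-supported model string works
    messages=[{"role": "user", "content": "Draft a one-line promo message."}],
)
print(response.choices[0].message.content)
```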
Tools
- Composio: Connect RAG agents with external tools, apps, and APIs to perform actions and receive triggers. We use tools and RAG tools from the Composio toolset.
Deployment
- Python: Primary programming language. v3.13 is recommended.
- uv: Python package installer and resolver
- pre-commit: Manage and maintain pre-commit hooks
- setuptools: Build python modules
Project Structure
.
├── src/
│   └── versionhq/       # Orchestration frameworks on Pydantic
│       ├── agent/
│       ├── llm/
│       ├── task/
│       ├── team/
│       ├── tool/
│       ├── clients/     # Classes to store client-related information
│       ├── cli/         # CLI commands
│       ├── db/          # Database files (e.g., chroma.sqlite3)
│       └── ...
├── tests/
│   ├── cli/
│   ├── team/
│   └── ...
└── uploads/             # Uploaded files for the project
Setup
- Install the uv package manager:

  brew install uv

- Install dependencies:

  uv venv
  source .venv/bin/activate
  uv pip sync

- In case of an AssertionError or module version mismatch, pin the Python version with pyenv:

  pyenv install 3.13.1
  pyenv global 3.13.1
  uv python pin 3.13.1

  (Optional: run `pyenv global system` to return to the system default version.)

- Set up environment variables: create a `.env` file in the project root and add the following (a loading sketch follows this list):

  OPENAI_API_KEY=your-openai-api-key
  LITELLM_API_KEY=your-litellm-api-key
  UPSTAGE_API_KEY=your-upstage-api-key
  COMPOSIO_API_KEY=your-composio-api-key
  COMPOSIO_CLI_KEY=your-composio-cli-key
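To make these variables visible at runtime, load the `.env` file before constructing agents. A minimal sketch, assuming the `python-dotenv` package (the Troubleshooting section below also relies on `load_dotenv()`):

```python
import os

from dotenv import load_dotenv  # python-dotenv

load_dotenv()  # reads .env from the project root
assert os.getenv("OPENAI_API_KEY"), "OPENAI_API_KEY is missing from .env"
```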
Contributing
- Create your feature branch (`git checkout -b feature/your-amazing-feature`).
- Create amazing features.
- Test the features using the `tests` directory (a sample test follows this list):
  - Add a test function for the respective component in the `tests` directory.
  - Add your `LITELLM_API_KEY`, `OPENAI_API_KEY`, `COMPOSIO_API_KEY`, and `DEFAULT_USER_ID` to the GitHub repository secrets located at Settings > Secrets and variables > Actions.
  - Run the tests:

    uv run pytest tests -vv --cache-clear

    or simply:

    pytest

  - When adding a new file to `tests`, end the file name with `_test.py`.
  - When adding a new test function to a file, start the function name with `test_`.
- Pull the latest version of the source code from the main branch (`git pull origin main`) and resolve conflicts if any.
- Commit your changes (`git add .` / `git commit -m 'Add your-amazing-feature'`).
- Push to the branch (`git push origin feature/your-amazing-feature`).
- Open a pull request.
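A hypothetical test following the conventions above; the file path and assertions are illustrative, and the API mirrors Case 1 of the Quick Start.

```python
# tests/task/task_test.py  (hypothetical path; file name ends with _test.py)
from versionhq.agent.model import Agent
from versionhq.task.model import Task, ResponseField


def test_task_returns_required_field():  # function name starts with test_
    agent = Agent(role="demo", goal="amazing project goal")
    task = Task(
        description="Say hello.",
        response_fields=[ResponseField(title="greeting", data_type=str, required=True)],
    )
    res = task.execute_sync(agent=agent, context="smoke test")
    assert res.json_dict.get("greeting")
```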
Optional
- Flag improvements with `#! REFINEME` and errors with `#! FIXME`.
- Run the React demo app to check the framework from the client endpoint:

  npm i
  npm start

  The frontend will be available at http://localhost:3000.
- A production use case is available at https://versi0n.io. Currently, we are running an alpha test.
Customizing AI Agents
To add an agent, use the `sample` directory to add a new project. You can define an agent with a specific role, goal, and set of tools.
Your new agent needs to follow the Agent model defined in `versionhq/agent/model.py`.
You can also add fields and functions to the Agent model universally by modifying `versionhq/agent/model.py`; a subclassing sketch follows.
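Alternatively, for a project-local extension, subclassing is a lightweight option. A hedged sketch, assuming Agent is a Pydantic model (the framework is built on Pydantic); `sales_region` is a hypothetical field:

```python
from versionhq.agent.model import Agent


class OutboundAgent(Agent):
    sales_region: str = "NA"  # hypothetical custom field


agent = OutboundAgent(
    role="email agent",
    goal="draft promo messages",
    llm="llm-of-your-choice",
    sales_region="EMEA",
)
```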
Modifying RAG Functionality
The RAG system uses Chroma DB to store and query past campaign datasets. To update the knowledge base (a hedged ingestion sketch follows this list):
- Add new files to the `uploads/` directory. (These are not pushed to GitHub.)
- Modify the `tools.py` file to update the ingestion process if necessary.
- Run the ingestion process to update the Chroma DB.
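A hedged sketch of the ingestion step using the `chromadb` client API; the collection name and file handling are illustrative, not the project's actual pipeline:

```python
from pathlib import Path

import chromadb

client = chromadb.PersistentClient(path="db")  # matches the db/ folder in Project Structure
collection = client.get_or_create_collection(name="campaign_data")

# Ingest text files dropped into uploads/ (illustrative file type)
for f in Path("uploads").glob("*.txt"):
    collection.add(documents=[f.read_text()], ids=[f.name])

print(collection.count())
```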
Package Management with uv
- Add a package: `uv add <package>`
- Remove a package: `uv remove <package>`
- Run a command in the virtual environment: `uv run <command>`
- After updating dependencies, update `requirements.txt` accordingly, or run `uv pip freeze > requirements.txt`.
Pre-Commit Hooks
- Install pre-commit hooks: `uv run pre-commit install`
- Run pre-commit checks manually: `uv run pre-commit run --all-files`
- To skip pre-commit hooks (NOT RECOMMENDED): `git commit --no-verify -m "your-commit-message"`

Pre-commit hooks help maintain code quality by running checks for formatting, linting, and other issues before each commit.
Troubleshooting
Common issues and solutions:
- API key errors: Ensure all API keys in the `.env` file are correct and up to date. Call `load_dotenv()` at the top of the Python file to apply the latest environment values.
- Database connection issues: Check that the Chroma DB is properly initialized and accessible.
- Memory errors: If processing large contracts, you may need to increase the available memory for the Python process.
- Dependency issues: Run `rm -rf uv.lock`, `uv cache clean`, `uv venv`, and then `uv pip install -r requirements.txt -v`.
- Issues related to the AI agents or RAG system: Check the `output.log` file for detailed error messages and stack traces.
- "Python quit unexpectedly" errors: See this Stack Overflow article.
- `reportMissingImports` error from Pyright after installing the package: This can occur when new libraries are installed while VSCode is running. Open the command palette (Ctrl + Shift + P) and run the "Python: Restart Language Server" task.
Frequently Asked Questions (FAQ)
Q. Where can I see if the agent is working?
A. You can find a frontend app here with real-world outbound use cases. You can also test features here using the React app.
Q. How do you analyze the customer?
A. We employ soft clustering for each customer, as sketched below.
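The clustering code itself isn't shown here; a common way to obtain soft (probabilistic) cluster assignments is a Gaussian mixture model, sketched below with scikit-learn on stand-in data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

X = np.random.rand(100, 4)  # stand-in customer feature matrix

gm = GaussianMixture(n_components=3, random_state=0).fit(X)
membership = gm.predict_proba(X)  # each row sums to 1 across the 3 clusters
print(membership[0])
```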
Q. When should I use a team vs an agent?
A. In essence, use a team for intricate, evolving projects, and an agent for quick, straightforward tasks.

Use a team when:
- Complex tasks: You need to complete multiple, interconnected tasks that require sequential or hierarchical processing.
- Iterative refinement: You want to iteratively improve the output through multiple rounds of feedback and revision.

Use an agent when:
- Simple tasks: You have a straightforward, one-off task that doesn't require significant complexity or iteration.
- Human input: You need to provide initial input or guidance to the agent, or you expect to review and refine the output.
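Expressed with the Quick Start API, the rule of thumb looks like this (task descriptions are illustrative):

```python
from versionhq.agent.model import Agent
from versionhq.task.model import Task
from versionhq.team.model import Team, TeamMember

email_agent = Agent(role="email agent", goal="draft promo messages")

# Simple, one-off task -> use the agent directly
res = Task(description="Draft one promo email.").execute_sync(agent=email_agent)

# Interconnected tasks -> form a team with a manager
planner = Agent(role="planner", goal="strategize outbound campaigns")
team = Team(
    members=[
        TeamMember(agent=planner, is_manager=True, task=Task(description="Plan the campaign.")),
        TeamMember(agent=email_agent, is_manager=False, task=Task(description="Draft the emails.")),
    ],
)
res = team.kickoff()
```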