LLM orchestration frameworks for model-agnostic AI agents that handle complex outbound workflows
Project description
Overview
An LLM orchestration framework for multi-agent systems with RAG that autopilots outbound workflows.
Agents are model agnostic.
Messaging workflows are created at the individual level and deployed on third-party services via Composio.
Visit:
- PyPI
- Github (LLM orchestration)
- Github (Test client app)
- Use case - client app (alpha)
Mindmap
LLM-powered agents and teams use tools and their own knowledge to complete the task given by the client or the system.
Table of Contents
- Key Features
- Usage
- Technologies Used
- Project Structure
- Setup
- Contributing
- Troubleshooting
- Frequently Asked Questions (FAQ)
Key Features
A multi-agent system with RAG that tailors messaging workflows, predicts their performance, and deploys them on third-party tools.
Agents are model agnostic. The default model is GPT-4o; we ask each client for their preference and switch models accordingly via the llm variable stored in the BaseAgent class.
Multiple agents can form a team to complete complex tasks together.
1. Analysis
- Professional agents handle the analysis tasks on each client, customer, and product.
2. Messaging Workflow Creation
- Several teams receive the analysis and design an initial messaging workflow with several layers.
- Ask the client for their inputs.
- Deploy the workflow on the third-party tools using Composio.
3. Autopiloting
- Responsible agents or teams autopilot executing and refining the messaging workflow.
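The three stages above can be sketched as a toy pipeline in plain Python. The function names and data are purely illustrative, not part of versionhq; in the real system each stage is run by LLM-powered agents and teams.

```python
# Toy sketch of the three-stage outbound pipeline (illustrative only).
def analyze(client: dict) -> dict:
    # Stage 1: agents analyze the client, customers, and product.
    return {"client": client["name"], "insight": "high-intent cohort found"}

def create_workflow(analysis: dict) -> list[str]:
    # Stage 2: teams turn the analysis into a layered messaging workflow.
    return [
        f"email: intro for {analysis['client']}",
        "follow-up: day 3",
        "break-up: day 7",
    ]

def autopilot(workflow: list[str]) -> list[str]:
    # Stage 3: agents execute and refine each step of the workflow.
    return [f"sent -> {step}" for step in workflow]

result = autopilot(create_workflow(analyze({"name": "Acme"})))
print(result[0])  # sent -> email: intro for Acme
```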
Usage
- Install the versionhq package:

```shell
uv pip install versionhq
```

- You can use the versionhq module in your Python app.
Case 1. Build an AI agent on the LLM of your choice and execute a task:

```python
from versionhq.agent.model import Agent
from versionhq.task.model import Task, ResponseField

agent = Agent(
    role="demo",
    goal="amazing project goal",
    skillsets=["skill_1", "skill_2"],
    tools=["amazing RAG tool"],  # note the comma: tools and llm are separate arguments
    llm="llm-of-your-choice",
)

task = Task(
    description="Amazing task",
    expected_output_json=True,
    expected_output_pydantic=False,
    output_field_list=[
        ResponseField(title="test1", type=str, required=True),
        ResponseField(title="test2", type=list, required=True),
    ],
    callback=None,
)

res = task.execute_sync(agent=agent, context="amazing context to consider.")
res.to_dict()
```
This will return a dictionary with the keys defined in the ResponseField items:

```json
{ "test1": "answer1", "test2": ["answer2-1", "answer2-2", "answer2-3"] }
```
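The contract that ResponseField expresses (key, type, required) can be illustrated with a minimal stand-in validator. This sketch is not versionhq's implementation; it only mirrors the title/type/required fields shown above.

```python
from dataclasses import dataclass

@dataclass
class ResponseFieldSketch:
    # Simplified stand-in for versionhq's ResponseField (illustrative only).
    title: str
    type: type
    required: bool = True

def validate_output(fields: list[ResponseFieldSketch], output: dict) -> dict:
    """Check that an LLM's dict output matches the declared fields."""
    for field in fields:
        if field.required and field.title not in output:
            raise KeyError(f"missing required field: {field.title}")
        if field.title in output and not isinstance(output[field.title], field.type):
            raise TypeError(f"{field.title} should be {field.type.__name__}")
    return output

fields = [
    ResponseFieldSketch(title="test1", type=str),
    ResponseFieldSketch(title="test2", type=list),
]
out = validate_output(fields, {"test1": "answer1", "test2": ["answer2-1"]})
```

Declaring the schema up front is what lets the framework return a predictable dict instead of free-form LLM text.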
Case 2. Form a team to handle multiple tasks:
```python
from versionhq.agent.model import Agent
from versionhq.task.model import Task, ResponseField
from versionhq.team.model import Team, TeamMember

agent_a = Agent(role="agent a", goal="My amazing goals", llm="llm-of-your-choice")
agent_b = Agent(role="agent b", goal="My amazing goals", llm="llm-of-your-choice")

task_1 = Task(
    description="Analyze the client's business model.",
    output_field_list=[ResponseField(title="test1", type=str, required=True)],
    allow_delegation=True,
)

task_2 = Task(
    description="Define the cohort.",
    output_field_list=[ResponseField(title="test1", type=int, required=True)],
    allow_delegation=False,
)

team = Team(
    members=[
        TeamMember(agent=agent_a, is_manager=False, task=task_1),
        TeamMember(agent=agent_b, is_manager=True, task=task_2),
    ],
)
res = team.kickoff()
```
This will return a list of dictionaries with the keys defined in the ResponseField of each task.
Tasks can be delegated to a team manager, peers in the team, or a completely new agent.
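The delegation rule can be sketched as a small routing function. This is a simplified illustration of the behavior described above, not the library's actual routing logic; here a delegatable task goes to the first manager found.

```python
# Simplified sketch of task delegation routing (illustrative only).
def assign(task: dict, members: list[dict]) -> str:
    """Route a delegatable task to a manager; otherwise keep the owner."""
    if task.get("allow_delegation"):
        managers = [m["agent"] for m in members if m["is_manager"]]
        if managers:
            return managers[0]
    return task["owner"]

members = [
    {"agent": "agent a", "is_manager": False},
    {"agent": "agent b", "is_manager": True},
]
print(assign({"owner": "agent a", "allow_delegation": True}, members))   # agent b
print(assign({"owner": "agent a", "allow_delegation": False}, members))  # agent a
```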
Technologies Used
Schema, Database, Data Validation
- Pydantic: Data validation and serialization library for Python
- Pydantic_core: Core functionality package for Pydantic
- Chroma DB: Vector database for storing and querying usage data
- SQLite: C-language library that implements a small SQL database engine
- Upstage: Document processor for ML tasks (we use its Document Parser API to extract data from documents)
LLM-curation
- OpenAI GPT-4: Advanced language model for analysis and recommendations
- LiteLLM: Curation platform to access LLMs
Tools
- Composio: Connect RAG agents with external tools, apps, and APIs to perform actions and receive triggers. We use tools and RAG tools from the Composio toolset.
Deployment
- Python: Primary programming language. We use Python 3.12 in this project
- uv: Python package installer and resolver
- pre-commit: Manage and maintain pre-commit hooks
- setuptools: Build Python modules
Project Structure
```
.
├── src/
│   ├── versionHQ/       # Orchestration frameworks on Pydantic
│   │   ├── agent/
│   │   ├── llm/
│   │   ├── task/
│   │   ├── team/
│   │   ├── tool/
│   │   ├── clients/     # Classes to store the client-related information
│   │   ├── cli/         # CLI commands
│   │   └── ...
│   └── db/              # Database files
│       ├── chroma.sqlite3
│       └── ...
├── tests/
│   ├── cli/
│   ├── team/
│   └── ...
└── uploads/             # Uploaded files for the project
```
Setup
- Install the uv package manager:

```shell
brew install uv
```

- Install dependencies:

```shell
uv venv
source .venv/bin/activate
uv pip sync
```

- In case of an AssertionError or module mismatch, control the Python version using pyenv:

```shell
pyenv install 3.13.1
pyenv global 3.13.1
uv python pin 3.13.1
```

(Optional: run pyenv global system to get back to the system default version.)

- Set up environment variables: create a .env file in the project root and add the following:

```
OPENAI_API_KEY=your-openai-api-key
LITELLM_API_KEY=your-litellm-api-key
UPSTAGE_API_KEY=your-upstage-api-key
COMPOSIO_API_KEY=your-composio-api-key
COMPOSIO_CLI_KEY=your-composio-cli-key
```
Contributing
- Fork the repository
- Create your feature branch (git checkout -b feature/your-amazing-feature)
- Create amazing features
- Test the features using the tests directory:
  - Add a test function to the respective component in the tests directory.
  - Add your LITELLM_API_KEY and OPENAI_API_KEY to the GitHub repository secrets at Settings > Secrets and variables > Actions.
  - Run the tests:

```shell
uv run pytest tests -vv
```

  - When adding a new file to tests, name the file so it ends with _test.py.
  - When adding a new test function to a file, name it so it starts with test_.
- Pull the latest version of the source code from the main branch (git pull origin main). Address conflicts if any.
- Commit your changes (git add . then git commit -m 'Add your-amazing-feature')
- Push to the branch (git push origin feature/your-amazing-feature)
- Open a pull request
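Following the test conventions above, a new file in tests/ might look like the sketch below. The file name and helper are hypothetical examples; only the naming rules (_test.py suffix, test_ prefix) come from this README.

```python
# tests/agent/agent_sample_test.py  (hypothetical file name and content)
# Convention: file names end with _test.py; test functions start with test_.

def build_role(role: str) -> str:
    # Hypothetical helper standing in for real versionhq behavior.
    return role.strip().lower()

def test_build_role_normalizes_case():
    assert build_role("  Demo ") == "demo"
```

pytest discovers such functions automatically when you run uv run pytest tests -vv.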
Optional
- Flag any improvements needed with #! REFINEME and any errors with #! FIXME.
- Run the React demo app to check it on the client endpoint:

```shell
npm i
npm start
```

  The frontend will be available at http://localhost:3000.
- Production is available at https://versi0n.io. Currently, we are running an alpha test.
Customizing AI Agents
To add an agent, use the sample directory to add a new project. You can define an agent with a specific role, goal, and set of tools.
Your new agent needs to follow the Agent model defined in versionhq.agent.model.py.
You can also add fields and functions to the Agent model universally by modifying versionhq.agent.model.py.
Modifying RAG Functionality
The RAG system uses Chroma DB to store and query past campaign datasets. To update the knowledge base:
- Add new files to the uploads/ directory. (These will not be pushed to GitHub.)
- Modify the tools.py file to update the ingestion process if necessary.
- Run the ingestion process to update the Chroma DB.
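The steps above can be sketched as follows, assuming a plain-text corpus. A dict stands in for the Chroma DB collection, and the fixed-size chunking is an assumption; the real ingestion lives in tools.py.

```python
from pathlib import Path
import tempfile

def chunk(text: str, size: int = 40) -> list[str]:
    # Naive fixed-size chunking (real pipelines use smarter splitting).
    return [text[i:i + size] for i in range(0, len(text), size)]

def ingest(uploads_dir: Path, store: dict) -> int:
    """Read each uploaded file, chunk it, and add the chunks to the store.
    `store` is an in-memory stand-in for the Chroma DB collection."""
    count = 0
    for path in sorted(uploads_dir.glob("*.txt")):
        for i, piece in enumerate(chunk(path.read_text())):
            store[f"{path.name}:{i}"] = piece
            count += 1
    return count

# Demo with a temporary "uploads/" directory.
with tempfile.TemporaryDirectory() as tmp:
    uploads = Path(tmp)
    (uploads / "campaign.txt").write_text("past campaign data " * 5)
    store: dict = {}
    n = ingest(uploads, store)
    print(n)  # 3 chunks ingested
```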
Package Management with uv
- Add a package: uv add <package>
- Remove a package: uv remove <package>
- Run a command in the virtual environment: uv run <command>
- After updating dependencies, update requirements.txt accordingly, or run uv pip freeze > requirements.txt
Pre-Commit Hooks
- Install pre-commit hooks: uv run pre-commit install
- Run pre-commit checks manually: uv run pre-commit run --all-files

Pre-commit hooks help maintain code quality by running checks for formatting, linting, and other issues before each commit.

- To skip pre-commit hooks (NOT RECOMMENDED): git commit --no-verify -m "your-commit-message"
Troubleshooting
Common issues and solutions:
- API key errors: Ensure all API keys in the .env file are correct and up to date. Make sure to call load_dotenv() at the top of the Python file so the latest environment values are applied.
- Database connection issues: Check that the Chroma DB is properly initialized and accessible.
- Memory errors: If processing large contracts, you may need to increase the available memory for the Python process.
- Dependency issues: Run rm -rf uv.lock, uv cache clean, uv venv, and then uv pip install -r requirements.txt -v.
- Issues with the AI agents or RAG system: Check the output.log file for detailed error messages and stack traces.
- "Python quit unexpectedly": See the related Stack Overflow article.
- reportMissingImports error from Pyright after installing the package: This can occur when installing new libraries while VSCode is running. Open the command palette (Ctrl + Shift + P) and run the Python: Restart Language Server task.
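For the API-key errors above, a small pre-flight check can catch unset keys early. The key list mirrors the Setup section; the helper itself is a suggestion, not part of the package.

```python
import os

REQUIRED_KEYS = [
    "OPENAI_API_KEY",
    "LITELLM_API_KEY",
    "UPSTAGE_API_KEY",
    "COMPOSIO_API_KEY",
    "COMPOSIO_CLI_KEY",
]

def missing_keys(env=os.environ) -> list[str]:
    """Return the required environment variables that are unset or empty."""
    return [k for k in REQUIRED_KEYS if not env.get(k)]

# Example: report anything missing before the app starts.
problems = missing_keys({"OPENAI_API_KEY": "sk-..."})
print(problems)
```

Run a check like this right after load_dotenv() so a typo in .env fails fast instead of surfacing as a cryptic agent error.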
Frequently Asked Questions (FAQ)
Q. Where can I see if the agent is working?
A. You can find a frontend app here with real-world outbound use cases. You can also test features here using React app.
Q. How do you analyze the customer?
A. We employ soft clustering for each customer.
Q. When should I use a team vs an agent?
A. In essence, use a team for intricate, evolving projects, and agents for quick, straightforward tasks.
Use a team when:
- Complex tasks: You need to complete multiple, interconnected tasks that require sequential or hierarchical processing.
- Iterative refinement: You want to iteratively improve the output through multiple rounds of feedback and revision.

Use an agent when:
- Simple tasks: You have a straightforward, one-off task that doesn't require significant complexity or iteration.
- Human input: You need to provide initial input or guidance to the agent, or you expect to review and refine the output.
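That rule of thumb can be condensed into a tiny decision helper; this is only an illustration of the guidance above, not an API.

```python
def choose_unit(multiple_tasks: bool, needs_iteration: bool) -> str:
    """Pick 'team' for interconnected or iterative work, else 'agent'."""
    return "team" if (multiple_tasks or needs_iteration) else "agent"

print(choose_unit(multiple_tasks=True, needs_iteration=False))   # team
print(choose_unit(multiple_tasks=False, needs_iteration=False))  # agent
```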
Project details
Download files
Source Distribution
Built Distribution
File details
Details for the file versionhq-1.1.8.1.tar.gz.
File metadata
- Download URL: versionhq-1.1.8.1.tar.gz
- Upload date:
- Size: 124.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.0.1 CPython/3.12.8
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 1f41b699a8fe04a256ef561e79712dfa83ca9d7303a8c130dacd645c59d49125 |
| MD5 | 164cbb94b203640c39b531a1a30bc966 |
| BLAKE2b-256 | 07309d807ddcfc4e0a76179d2057d856b12ef8c9d5cd87cd6fb1b9f401ae8779 |
File details
Details for the file versionhq-1.1.8.1-py3-none-any.whl.
File metadata
- Download URL: versionhq-1.1.8.1-py3-none-any.whl
- Upload date:
- Size: 48.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.0.1 CPython/3.12.8
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | e60227b7751dc0e27778f980489e66bcafc920499350885754001ec3209108c2 |
| MD5 | 548dd6d60526f40f4dd9e5dc20976581 |
| BLAKE2b-256 | 268dce8813b3166883c8016d084e2f5f91a426b91a800602037758c5d1ec0e04 |