
LLM orchestration framework for model-agnostic AI agents that handle complex outbound workflows

Project description

Overview


An LLM orchestration framework for multi-agent systems with RAG that autopilots outbound workflows.

Agents are model agnostic.

Messaging workflows are created at the individual level and deployed on third-party services via Composio.


Mindmap

LLM-powered agents and teams use tools and their own knowledge to complete the task given by the client or the system.





Key Features

A multi-agent system with RAG that tailors messaging workflows, predicts their performance, and deploys them on third-party tools.

The agent is model agnostic. The default model is GPT-4o. We ask the client for their preference and switch models accordingly via the llm variable stored in the BaseAgent class.
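A minimal sketch of what such a model-agnostic base agent could look like. This is a hypothetical dataclass for illustration; the real BaseAgent is defined inside the package and may differ:

```python
from dataclasses import dataclass

# Illustrative default; the project defaults to GPT-4o.
DEFAULT_LLM = "gpt-4o"


@dataclass
class BaseAgent:
    """Hypothetical sketch of a model-agnostic agent."""
    role: str
    llm: str = DEFAULT_LLM  # swapped per client preference

    def switch_llm(self, preferred: str) -> None:
        """Switch the underlying model to the client's preferred one."""
        self.llm = preferred


agent = BaseAgent(role="analyst")
print(agent.llm)  # gpt-4o
agent.switch_llm("claude-3-5-sonnet")
print(agent.llm)  # claude-3-5-sonnet
```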

Multiple agents can form a team to complete complex tasks together.

1. Analysis

  • Professional agents handle the analysis tasks for each client, customer, and product.

2. Messaging Workflow Creation

  • Several teams receive the analysis and design an initial messaging workflow with several layers.
  • Ask the client for their input.
  • Deploy the workflow on third-party tools using Composio.

3. Autopiloting

  • Designated agents or teams autonomously execute and refine the messaging workflow.
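The three phases above can be sketched, in heavily simplified form, as plain functions. All names and payloads here are illustrative, not the package's real API:

```python
# Hypothetical sketch of the analysis -> creation -> autopilot flow.

def analyze(client: dict) -> dict:
    """Phase 1: agents produce an analysis for each client."""
    return {"client": client["name"], "segment": "smb"}


def create_workflow(analysis: dict, client_inputs: dict) -> list[str]:
    """Phase 2: teams design a layered messaging workflow from the analysis."""
    steps = [f"intro email for {analysis['client']}", "follow-up after 3 days"]
    if client_inputs.get("include_call"):
        steps.append("schedule discovery call")
    return steps


def autopilot(workflow: list[str]) -> list[str]:
    """Phase 3: agents execute and refine each step of the workflow."""
    return [f"executed: {step}" for step in workflow]


analysis = analyze({"name": "Acme"})
workflow = create_workflow(analysis, {"include_call": True})
results = autopilot(workflow)
```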

Technologies Used

Schema, Database, Data Validation

  • Pydantic: Data validation and serialization library for Python
  • Pydantic_core: Core functionality package for Pydantic
  • Chroma DB: Vector database for storing and querying usage data
  • SQLite: C-language library that implements a small SQL database engine
  • Upstage: Document processor for ML tasks (we use its Document Parser API to extract data from documents)
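The SQLite bullet above can be illustrated with Python's built-in sqlite3 module. The campaigns table is a made-up example, not the project's schema:

```python
import sqlite3

# In-memory database for illustration; the project stores its data on disk.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE campaigns (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO campaigns (name) VALUES (?)", ("spring-outreach",))

row = conn.execute("SELECT name FROM campaigns").fetchone()
print(row[0])  # spring-outreach
conn.close()
```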

LLM-curation

  • OpenAI GPT-4: Advanced language model for analysis and recommendations
  • LiteLLM: Unified interface for calling multiple LLM providers
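LiteLLM routes a call by its model string (e.g. "gpt-4o" or "anthropic/claude-3-5-sonnet-20240620"). A minimal sketch of how a client's provider preference might resolve to such a string; the mapping below is illustrative, not the project's actual configuration:

```python
# Hypothetical preference -> LiteLLM model-string mapping.
MODEL_MAP = {
    "openai": "gpt-4o",
    "anthropic": "anthropic/claude-3-5-sonnet-20240620",
    "gemini": "gemini/gemini-1.5-pro",
}


def resolve_model(preference: str) -> str:
    """Fall back to the default model when the preference is unknown."""
    return MODEL_MAP.get(preference, MODEL_MAP["openai"])


print(resolve_model("anthropic"))       # anthropic/claude-3-5-sonnet-20240620
print(resolve_model("unknown-vendor"))  # gpt-4o
```

The resolved string would then be passed straight to a LiteLLM completion call, which dispatches to the right provider.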

Tools

  • Composio: Connects RAG agents with external tools, apps, and APIs to perform actions and receive triggers. We use tools and RAG tools from the Composio toolset.

Deployment

  • Python: Primary programming language; this project uses 3.12
  • uv: Python package installer and resolver
  • pre-commit: Manages and maintains pre-commit hooks
  • setuptools: Builds Python modules

Project Structure

.
├── src/
│   └── versionHQ/              # Orchestration framework on Pydantic
│       ├── agent/
│       ├── llm/
│       ├── task/
│       ├── team/
│       ├── tool/
│       ├── clients/            # Classes to store the client-related information
│       ├── cli/                # CLI commands
│       ├── ...
│       └── db/                 # Database files (chroma.sqlite3, ...)
└── tests/
    ├── cli/
    ├── team/
    ├── ...
    └── uploads/                # Uploaded files for the project

Setup

  1. Install the uv package manager:

    brew install uv
    
  2. Install dependencies:

    uv venv
    source .venv/bin/activate
    
    uv pip install -r requirements.txt -v
    
  • In case of an AssertionError or module mismatch, pin the Python version with pyenv:

    pyenv install 3.13.1
    pyenv global 3.13.1    # optional: `pyenv global system` to return to the system default
    uv python pin 3.13.1

  3. Set up environment variables: Create a .env file in the project root and add the following:

    OPENAI_API_KEY=your-openai-api-key
    LITELLM_API_KEY=your-litellm-api-key
    UPSTAGE_API_KEY=your-upstage-api-key
    COMPOSIO_API_KEY=your-composio-api-key
    COMPOSIO_CLI_KEY=your-composio-cli-key
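These variables are typically loaded at startup with python-dotenv's load_dotenv(). As a sketch of what that loading does, here is a stdlib-only version; load_env_file and EXAMPLE_API_KEY are hypothetical names used only for this illustration:

```python
import os
import tempfile

def load_env_file(path: str) -> None:
    """Read KEY=value lines from a .env-style file into os.environ.

    Existing environment variables win, mirroring load_dotenv's default.
    """
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())


# Demo with a throwaway .env file.
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as fh:
    fh.write("# demo file\nEXAMPLE_API_KEY=example-value\n")
    env_path = fh.name

load_env_file(env_path)
print(os.environ["EXAMPLE_API_KEY"])  # example-value
```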
    

Usage

  1. Add features.

  2. Test the features using the tests directory.

  • Add a file to the tests directory.
  • Run a test:

    uv run <your file name>

  • All .py file names in the tests directory must end with _test.py.

  3. Run the React demo app to check it on the client endpoint:

    npm i
    npm start

    The frontend will be available at http://localhost:3000.

  4. The production app is available at https://versi0n.io. We are currently running a beta.
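As noted above, test file names must end with _test.py so the runner picks them up. A minimal sketch of such a file; build_subject_line is a hypothetical function under test, not part of the package:

```python
# tests/subject_line_test.py — the file name must end with _test.py.

def build_subject_line(client: str) -> str:
    """Hypothetical unit under test."""
    return f"Quick question for {client}"


def test_build_subject_line():
    assert build_subject_line("Acme") == "Quick question for Acme"


if __name__ == "__main__":
    # Allows `uv run tests/subject_line_test.py` to execute the check directly.
    test_build_subject_line()
    print("ok")
```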

Installing as a Package Module (Alpha)

  1. Open another terminal, set your repository as root, and run:

    uv pip install git+https://github.com/versionHQ/multi-agent-system.git#egg=versionhq

  2. Use the versionhq module in your Python app:

    from versionhq.agent.model import Agent
    agent = Agent(llm="your-llm"...)

Contributing & Customizing

  1. Fork the repository

  2. Create your feature branch (git checkout -b feature/your-amazing-feature)

  3. Pull the latest source code from the main branch (git pull origin main) and resolve any conflicts

  4. Commit your changes (git add . / git commit -m 'Add your-amazing-feature')

  5. Push to the branch (git push origin feature/your-amazing-feature)

  6. Open a pull request

  7. Flag improvements with #! REFINEME and errors with #! FIXME

Customizing AI Agents

To add an agent, create a new project under the sample directory. You can define an agent with a specific role, goal, and set of tools.

Your new agent must follow the Agent model defined in versionhq/agent/model.py.

You can also add fields and functions to the Agent model universally by modifying versionhq/agent/model.py.
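As a sketch of the shape such an agent definition takes — an illustrative dataclass only; the real Agent model in versionhq/agent/model.py is Pydantic-based and richer:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Hypothetical stand-in for the package's Agent model."""
    role: str
    goal: str
    tools: list[str] = field(default_factory=list)
    llm: str = "gpt-4o"  # default model, swapped per client preference


reviewer = Agent(
    role="campaign reviewer",
    goal="flag risky messaging before deployment",
    tools=["rag_search"],
)
print(reviewer.llm)  # gpt-4o
```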

Modifying RAG Functionality

The RAG system uses Chroma DB to store and query past campaign datasets. To update the knowledge base:

  1. Add new files to the uploads/ directory. (These will not be pushed to GitHub.)
  2. Modify the tools.py file to update the ingestion process if necessary.
  3. Run the ingestion process to update the Chroma DB.
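A hedged sketch of what the ingestion step could look like, with the actual Chroma DB indexing call omitted so the example stays self-contained. chunk_text and ingest are hypothetical helpers, not the project's tools.py API:

```python
import pathlib
import tempfile

def chunk_text(text: str, size: int = 40) -> list[str]:
    """Split a document into fixed-size chunks before indexing."""
    return [text[i:i + size] for i in range(0, len(text), size)]


def ingest(uploads_dir: pathlib.Path) -> dict[str, list[str]]:
    """Read each uploaded .txt file and chunk it.

    A real ingestion step would then add these chunks to a Chroma DB
    collection; that call is omitted here.
    """
    chunks = {}
    for path in sorted(uploads_dir.glob("*.txt")):
        chunks[path.name] = chunk_text(path.read_text())
    return chunks


# Demo with a throwaway uploads directory.
uploads = pathlib.Path(tempfile.mkdtemp())
(uploads / "campaign.txt").write_text("Past campaign notes " * 5)
index = ingest(uploads)
print(len(index["campaign.txt"]))  # 3
```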

Package Management with uv

  • Add a package: uv add <package>
  • Remove a package: uv remove <package>
  • Run a command in the virtual environment: uv run <command>
  • After updating dependencies, update requirements.txt accordingly or run uv pip freeze > requirements.txt

Pre-Commit Hooks

  1. Install pre-commit hooks:

    uv run pre-commit install
    
  2. Run pre-commit checks manually:

    uv run pre-commit run --all-files
    

Pre-commit hooks help maintain code quality by running checks for formatting, linting, and other issues before each commit.

  • To skip pre-commit hooks (NOT RECOMMENDED)
    git commit --no-verify -m "your-commit-message"
    

Troubleshooting

Common issues and solutions:

  • API key errors: Ensure all API keys in the .env file are correct and up to date. Make sure to call load_dotenv() at the top of the Python file to load the latest environment values.
  • Database connection issues: Check if the Chroma DB is properly initialized and accessible.
  • Memory errors: If processing large contracts, you may need to increase the available memory for the Python process.
  • Issues related to dependencies: run rm -rf .venv uv.lock and uv cache clean, then run uv run pip install -r requirements.txt -v.
  • Issues related to the AI agents or RAG system: Check the output.log file for detailed error messages and stack traces.
  • Issues related to Python quitting unexpectedly: Check this Stack Overflow article.

Frequently Asked Questions (FAQ)

Q. Where can I see if the agent is working?

A. You can find a frontend app here with real-world outbound use cases. You can also test features here using React app.

Q. How do you analyze the customer?

A. We employ soft clustering for each customer.

Q. When should I use a team vs an agent?

A. In essence, use a team for intricate, evolving projects, and agents for quick, straightforward tasks.

Use a team when:

Complex tasks: You need to complete multiple, interconnected tasks that require sequential or hierarchical processing.

Iterative refinement: You want to iteratively improve upon the output through multiple rounds of feedback and revision.

Use an agent when:

Simple tasks: You have a straightforward, one-off task that doesn't require significant complexity or iteration.

Human input: You need to provide initial input or guidance to the agent, or you expect to review and refine the output.

<--- Remaining tasks --->

  • llm handling - agent

  • more llms integration

  • simpler prompting

  • broader knowledge

  • utils - log

  • utils - time

  • end to end client app test


Download files


Source Distribution

versionhq-1.1.6.3.tar.gz (116.2 kB)

Uploaded Source

Built Distribution


versionhq-1.1.6.3-py3-none-any.whl (41.9 kB)

Uploaded Python 3

File details

Details for the file versionhq-1.1.6.3.tar.gz.

File metadata

  • Download URL: versionhq-1.1.6.3.tar.gz
  • Upload date:
  • Size: 116.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.0.1 CPython/3.12.8

File hashes

Hashes for versionhq-1.1.6.3.tar.gz:

  • SHA256: 082955e2f46a66613734a3dfe43156b78e39747aa1e75f47965b73c40a9433fd
  • MD5: bcf394177a3f1f63396b14c1617c1893
  • BLAKE2b-256: ae3ed78def32a4cd01db0b60c574879fa99e939c772ebbbf462c2998aa84273b


File details

Details for the file versionhq-1.1.6.3-py3-none-any.whl.

File metadata

  • Download URL: versionhq-1.1.6.3-py3-none-any.whl
  • Upload date:
  • Size: 41.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.0.1 CPython/3.12.8

File hashes

Hashes for versionhq-1.1.6.3-py3-none-any.whl:

  • SHA256: 36f7f2ffbd1009449cbf73084c08f499f907ccbd2a636960e60ed975d629ae46
  • MD5: d7a0dd7fe2bb210a3b6ec35987c8a585
  • BLAKE2b-256: 8f3c86b80b8c92c075c548c43acbb6cfbb2502f730d14ff96915bc30390075aa

