
An agentic orchestration framework for multi-agent systems that share memory, knowledge bases, and RAG tools.


Overview


Agentic orchestration framework to deploy agent networks and handle complex task automation.

Key Features

versionhq is a Python framework for agent networks that handle complex task automation without human interaction.

Agents are model-agnostic and improve task output, while optimizing token cost and job latency, by sharing their memory, knowledge base, and RAG tools with other agents in the network.

Agent formation

Agents adapt their formation based on task complexity.

You can specify a desired formation or allow the agents to determine it autonomously (default).

Solo Agent
  • Usage: A single agent with tools, knowledge, and memory. When self-learning mode is on, it turns into the Random formation.
  • Use case: An email agent drafts a promo message for the given audience.

Supervising
  • Usage: A leader agent gives directions while sharing its knowledge and memory. Subordinates can be solo agents or networks.
  • Use case: The leader agent strategizes an outbound campaign plan and assigns components such as media mix or message creation to subordinate agents.

Network
  • Usage: Tasks, knowledge, and memory are shared among network members.
  • Use case: An email agent and a social media agent share the product knowledge and deploy a multi-channel outbound campaign.

Random
  • Usage: A single agent handles tasks, asking for help from other agents without sharing its memory or knowledge.
  • Use case: 1. An email agent drafts a promo message for the given audience, asking other email agents that oversee other clusters for insights on tone. 2. An agent calls an external agent to deploy the campaign.

Quick Start

Install versionhq package:

pip install versionhq

(Python 3.11 or higher)

Generate agent networks and launch task execution:

from versionhq import form_agent_network

network = form_agent_network(
   task="YOUR AMAZING TASK OVERVIEW",
   expected_outcome="YOUR OUTCOME EXPECTATION",
)
res = network.launch()

This will form a network of multiple agents in the chosen formation and return a TaskOutput object that contains the output in JSON, plain text, and Pydantic model formats, along with an evaluation.

Solo Agent:

Returns a structured output along with a summary string.

from pydantic import BaseModel
from versionhq import Agent, Task

class CustomOutput(BaseModel):
   test1: str
   test2: list[str]

def dummy_func(message: str, test1: str, test2: list[str]) -> str:
   return f"{message}: {test1}, {', '.join(test2)}"


agent = Agent(role="demo", goal="amazing project goal")

task = Task(
   description="Amazing task",
   pydantic_output=CustomOutput,
   callback=dummy_func,
   callback_kwargs=dict(message="Hi! Here is the result: ")
)

res = task.execute_sync(agent=agent, context="amazing context to consider.")
print(res)

This will return a TaskOutput instance that stores the response in plain text, as a JSON-serializable dict, and as the Pydantic model CustomOutput, together with the callback result, tool output (if given), and evaluation results (if given).

res == TaskOutput(
   task_id=UUID('<TASK UUID>'),
   raw='{\"test1\":\"random str\", \"test2\":[\"str item 1\", \"str item 2\", \"str item 3\"]}',
   json_dict={'test1': 'random str', 'test2': ['str item 1', 'str item 2', 'str item 3']},
   pydantic=<class '__main__.CustomOutput'>,
   tool_output=None,
   callback_output='Hi! Here is the result: random str, str item 1, str item 2, str item 3',
   evaluation=None
)

Supervising:

from versionhq import Agent, Task, ResponseField, Team, TeamMember

agent_a = Agent(role="agent a", goal="My amazing goals", llm="llm-of-your-choice")
agent_b = Agent(role="agent b", goal="My amazing goals", llm="llm-of-your-choice")

task_1 = Task(
   description="Analyze the client's business model.",
   response_fields=[ResponseField(title="test1", data_type=str, required=True),],
   allow_delegation=True
)

task_2 = Task(
   description="Define the cohort.",
   response_fields=[ResponseField(title="test1", data_type=int, required=True),],
   allow_delegation=False
)

team = Team(
   members=[
      TeamMember(agent=agent_a, is_manager=False, task=task_1),
      TeamMember(agent=agent_b, is_manager=True, task=task_2),
   ],
)
res = team.kickoff()

This will return a list of dictionaries whose keys are defined in the ResponseField of each task.

Tasks can be delegated to the team manager, peers in the team, or a completely new agent.
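The result shape described above might look like the following sketch (the values are hypothetical, and the assumption is one dict per task keyed by the titles of that task's ResponseField definitions):

```python
# Hypothetical shape of the kickoff result described above: one dict per
# task, keyed by the titles of that task's ResponseField definitions.
kickoff_result = [
    {"test1": "analysis of the client's business model"},  # task_1 (str field)
    {"test1": 42},                                         # task_2 (int field)
]

# Each entry can then be consumed like any plain dict.
cohort_size = kickoff_result[1]["test1"]
```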


Technologies Used

Schema, Data Validation

  • Pydantic: Data validation and serialization library for Python.
  • Upstage: Document processor for ML tasks. (The Document Parser API is used to extract data from documents.)
  • Docling: Document parsing.

Storage

  • mem0ai: Agents' memory storage and management.
  • Chroma DB: Vector database for storing and querying usage data.
  • SQLite: C-language library that implements a small, fast SQL database engine.

LLM-curation

  • LiteLLM: Curation platform to access LLMs

Tools

  • Composio: Connect RAG agents with external tools, apps, and APIs to perform actions and receive triggers. We use tools and RAG tools from the Composio toolset.

Deployment

  • Python: Primary programming language. v3.12.x is recommended (see Troubleshooting for Python 3.13 caveats).
  • uv: Python package installer and resolver.
  • pre-commit: Manage and maintain pre-commit hooks.
  • setuptools: Build Python modules.

Project Structure

.
├── .github/
│   └── workflows/            # Github actions
├── src/
│   └── versionhq/            # Orchestration framework
│       ├── agent/            # Components
│       ├── llm/
│       ├── task/
│       ├── team/
│       ├── tool/
│       ├── cli/
│       ├── ...
│       └── db/               # Storage
│           ├── chroma.sqlite3
│           └── ...
├── tests/                    # Pytest
│   ├── agent/
│   ├── llm/
│   └── ...
└── uploads/                  # Local directory to store uploaded files


Setup

Set up a project

  1. Install uv package manager:

    For MacOS:

    brew install uv
    

    For Ubuntu/Debian:

    sudo apt-get install uv
    
  2. Install dependencies:

    uv venv
    source .venv/bin/activate
    uv lock --upgrade
    uv sync --all-extras
    
  • In case of an AssertionError or module mismatch, pin the Python version with pyenv:
    pyenv install 3.12.8
    pyenv global 3.12.8   (optional: `pyenv global system` to return to the system default version)
    uv python pin 3.12.8
    echo 3.12.8 > .python-version
    
  3. Set up environment variables: Create a .env file in the project root and add the following:
    LITELLM_API_KEY=your-litellm-api-key
    OPENAI_API_KEY=your-openai-api-key
    COMPOSIO_API_KEY=your-composio-api-key
    COMPOSIO_CLI_KEY=your-composio-cli-key
    [LLM_INTERFACE_PROVIDER_OF_YOUR_CHOICE]_API_KEY=your-api-key
    

Contributing

  1. Create your feature branch (git checkout -b feature/your-amazing-feature)

  2. Create amazing features

  3. Test the features using the tests directory.

    • Add a test function to respective components in the tests directory.
    • Add your LITELLM_API_KEY, OPENAI_API_KEY, COMPOSIO_API_KEY, DEFAULT_USER_ID to the Github repository secrets located at settings > secrets & variables > Actions.
    • Run the tests:
      uv run pytest tests -vv --cache-clear

    • When adding a new file to tests, name it so that it ends with _test.py.
    • When adding a new test function, name it so that it starts with test_.
  4. Pull the latest source from the main branch (git pull origin main) and address any conflicts.

  5. Commit your changes (git add . / git commit -m 'Add your-amazing-feature')

  6. Push to the branch (git push origin feature/your-amazing-feature)

  7. Open a pull request
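The naming conventions from step 3 might look like this in practice. This is a minimal, hypothetical test file; the helper under test is invented for illustration, and real tests would import from versionhq:

```python
# tests/agent/agent_test.py -- hypothetical example: the file name ends
# with _test.py, and each test function name starts with test_.

def normalize_role(role: str) -> str:
    """Invented helper for illustration; real tests exercise versionhq components."""
    return role.strip().lower()

def test_normalize_role():
    # pytest discovers this function because its name starts with test_.
    assert normalize_role("  Demo ") == "demo"
```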

Optional

  • Flag with #! REFINEME for any improvements needed and #! FIXME for any errors.
  • A production use case is available at https://versi0n.io. We are currently running an alpha test.

Documentation

  • To edit the documentation, see docs repository and edit the respective component.

  • We use mkdocs to maintain the docs. You can serve them locally at http://127.0.0.1:8000/:

    uv run python3 -m mkdocs serve --clean
    

Customizing AI Agents

To add an agent, use the sample directory to add a new project. You can define an agent with a specific role, goal, and set of tools.

Your new agent needs to follow the Agent model defined in versionhq/agent/model.py.

You can also add fields and functions to the Agent model universally by modifying versionhq/agent/model.py.

Modifying RAG Functionality

The RAG system uses Chroma DB to store and query past campaign datasets. To update the knowledge base:

  1. Add new files to the uploads/ directory. (This will not be pushed to Github.)
  2. Modify the tools.py file to update the ingestion process if necessary.
  3. Run the ingestion process to update the Chroma DB.
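The ingestion step above can be sketched as follows. This is a minimal, stdlib-only illustration: the function names and chunk sizes are invented, and the real pipeline in tools.py embeds the chunks into Chroma DB rather than returning them.

```python
# Minimal ingestion sketch (illustrative only; the real pipeline in
# tools.py embeds chunks into Chroma DB).
from pathlib import Path

def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks so retrieval keeps local context."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

def collect_chunks(upload_dir: str = "uploads") -> list[str]:
    """Read plain-text files from the uploads/ directory and chunk them."""
    chunks = []
    for path in Path(upload_dir).glob("*.txt"):
        chunks.extend(chunk_text(path.read_text()))
    return chunks
```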

Package Management with uv

  • Add a package: uv add <package>
  • Remove a package: uv remove <package>
  • Run a command in the virtual environment: uv run <command>
  • After updating dependencies, update requirements.txt accordingly or run uv pip freeze > requirements.txt

Pre-Commit Hooks

  1. Install pre-commit hooks:

    uv run pre-commit install
    
  2. Run pre-commit checks manually:

    uv run pre-commit run --all-files
    

Pre-commit hooks help maintain code quality by running checks for formatting, linting, and other issues before each commit.

  • To skip pre-commit hooks (NOT RECOMMENDED)
    git commit --no-verify -m "your-commit-message"
    

Troubleshooting

Common issues and solutions:

  • API key errors: Ensure all API keys in the .env file are correct and up to date. Make sure to call load_dotenv() at the top of the Python file so the latest environment values are applied.
  • Database connection issues: Check if the Chroma DB is properly initialized and accessible.
  • Memory errors: If processing large contracts, you may need to increase the available memory for the Python process.
  • Issues related to the Python version: Docling/Pytorch is not ready for Python 3.13 as of Jan 2025. Use Python 3.12.x as default by running uv venv --python 3.12.8 and uv python pin 3.12.8.
  • Issues related to dependencies: rm -rf uv.lock, uv cache clean, uv venv, and run uv pip install -r requirements.txt -v.
  • Issues related to the AI agents or RAG system: Check the output.log file for detailed error messages and stack traces.
  • Python quitting unexpectedly: Check this stackoverflow article.
  • reportMissingImports error from pyright after installing the package: This might occur when installing new libraries while VSCode is running. Open the command palette (ctrl + shift + p) and run the Python: Restart language server task.

Frequently Asked Questions (FAQ)

Q. Where can I see if the agent is working?

A. You can find a frontend app here with real-world outbound use cases. You can also test features here using the React app.

Q. How do you analyze the customer?

A. We employ soft clustering for each customer.
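As a toy illustration of what soft clustering means (each customer gets a membership weight for every cluster rather than a single hard label), consider the following sketch. The centroids and distance measure are invented; this is not the production pipeline:

```python
# Toy soft-clustering illustration: assign a membership weight to each
# cluster via a softmax over negative squared distances, instead of a
# single hard label. Centroids here are 1-D and invented.
import math

def soft_assign(point: float, centroids: list[float]) -> list[float]:
    scores = [math.exp(-(point - c) ** 2) for c in centroids]
    total = sum(scores)
    return [s / total for s in scores]  # weights sum to 1.0
```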

Q. When should I use a team vs an agent?

A. In essence, use a team for intricate, evolving projects, and agents for quick, straightforward tasks.

Use a team when:

Complex tasks: You need to complete multiple, interconnected tasks that require sequential or hierarchical processing.

Iterative refinement: You want to iteratively improve upon the output through multiple rounds of feedback and revision.

Use an agent when:

Simple tasks: You have a straightforward, one-off task that doesn't require significant complexity or iteration.

Human input: You need to provide initial input or guidance to the agent, or you expect to review and refine the output.
