AllegroAgent

A lightweight, extensible Python framework for building stateful AI agents backed by local or remote LLMs. Agent is the single public entry point — configure it with a model string and a list of tools, and call run(). The framework handles conversation history, provider routing, and the tool-calling loop for you.


Features

  • Stateful agents with tool use — Agent.run() manages conversation history and automatically executes tools the LLM requests, looping until it produces a final answer.
  • Resumable conversations — pass prior messages via Agent(history=...) to pick up where another session left off; export the current state any time with agent.history().
  • Pluggable providers — add new backends by implementing a single generate() method. Ollama ships in the box.
  • JSON Schema tools — subclass BaseTool, declare parameters, and the agent exposes them to any function-calling-capable model.
  • Typed errors — a small hierarchy of framework exceptions for clean error handling.
  • Zero heavy deps — just requests at runtime.

Architecture

┌─────────────────────────────────────────────┐
│  Agent            (public, stateful loop)   │
├─────────────────────────────────────────────┤
│  Providers        (Ollama, …)               │
└─────────────────────────────────────────────┘

Agent talks to providers through a small internal router (_Client) that parses the provider:model string and caches provider instances. You never need to touch it directly.
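The routing described above can be sketched in a few lines of standalone Python: split the provider:model string at the first colon, look the provider up in a registry, and cache instances so repeated calls reuse the same backend object. The function and variable names here are illustrative, not the framework's actual internals.

```python
class InvalidModelFormatError(ValueError):
    """Model string is missing the provider: prefix."""

def parse_model(spec: str) -> tuple[str, str]:
    # "ollama:llama3" -> ("ollama", "llama3"); split only at the first colon
    # so tags like "ollama:llama3:latest" keep the tag in the model part.
    provider, sep, model = spec.partition(":")
    if not sep or not provider or not model:
        raise InvalidModelFormatError(f"expected 'provider:model', got {spec!r}")
    return provider, model

_cache: dict[str, object] = {}

def get_cached(provider_name: str, registry: dict) -> object:
    # Instantiate each provider once, then reuse it across calls.
    if provider_name not in _cache:
        _cache[provider_name] = registry[provider_name]()
    return _cache[provider_name]
```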

allegro_agent/
├── agent.py              # Agent — the public entry point
├── _client.py            # internal — provider router used by Agent
├── exceptions.py         # FrameworkError hierarchy
├── providers/
│   ├── base.py           # BaseProvider + ProviderResponse dataclass
│   ├── ollama.py         # OllamaProvider
│   └── __init__.py       # Provider registry (get/register_provider)
└── tools/
    ├── base.py           # BaseTool with JSON Schema + to_schema()
    └── file_write.py     # FileWriteTool (reference implementation)

Installation

Requires Python 3.10+.

Install from PyPI:

pip install allegro-agent

Or install from source (for development):

git clone https://github.com/ajithraghavan/AllegroAgent.git
cd AllegroAgent
pip install -e ".[dev]"

For the default Ollama provider, install and start Ollama locally:

ollama serve
ollama pull llama3

Quick Start

from allegro_agent import Agent, FileWriteTool

agent = Agent(
    name="Writer",
    model="ollama:llama3",
    temperature=0.1,
    system_prompt="You are a helpful writing assistant.",
    tools=[FileWriteTool()],
)

print(agent.run("Write a haiku about the ocean to ocean.txt"))
print(agent.run("Now write one about mountains to mountains.txt"))
agent.reset()  # clear history

What happens on each run():

  1. The agent sends the prompt + your tool schemas to the LLM.
  2. If the LLM returns tool_calls, the agent executes each tool and feeds the results back.
  3. Steps 1–2 repeat until the LLM returns a plain-text answer, up to MAX_TOOL_ROUNDS = 10 iterations.
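The three steps above can be sketched as a standalone loop, with the LLM and the tools stubbed out as plain callables. The message shapes and helper names here are illustrative assumptions, not the framework's actual internals.

```python
MAX_TOOL_ROUNDS = 10  # cap from step 3 above

def run(prompt, llm, tools, history):
    history.append({"role": "user", "content": prompt})
    for _ in range(MAX_TOOL_ROUNDS):
        reply = llm(history)  # dict with "content" and optional "tool_calls"
        if not reply.get("tool_calls"):
            # Plain text -> final answer; record it and stop looping.
            history.append({"role": "assistant", "content": reply["content"]})
            return reply["content"]
        for call in reply["tool_calls"]:
            # Execute each requested tool and feed the result back.
            result = tools[call["name"]](**call["arguments"])
            history.append({"role": "tool", "content": result})
    raise RuntimeError("tool loop exceeded MAX_TOOL_ROUNDS")
```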

Resuming a conversation

Agent keeps conversation state in memory only. To carry a conversation across processes or requests, capture it with agent.history() and seed a new Agent with history=...; which storage backend holds the snapshot in between is entirely up to you.

import json
from allegro_agent import Agent

# Session 1 — have a conversation, then capture state.
agent = Agent(model="ollama:llama3", system_prompt="You are helpful.")
agent.run("My name is John.")
agent.run("Remember that.")

snapshot = agent.history()                   # list[dict], JSON-serializable
with open("conv.json", "w") as f:            # caller picks the storage
    json.dump(snapshot, f)

# Session 2 — possibly in a fresh process — resume.
with open("conv.json") as f:
    prior = json.load(f)
agent = Agent(model="ollama:llama3", system_prompt="You are helpful.", history=prior)
print(agent.run("What is my name?"))         # → references "John"

Notes:

  • history() returns a deep copy, so mutating the returned list never affects the agent.
  • history is validated on construction. Each item must be a dict with role (one of "user", "assistant", "tool") and content (str). Malformed payloads raise InvalidHistoryError.
  • The "system" role is not stored in history — system prompts continue to flow through the existing system_prompt kwarg, which the caller re-supplies (or omits) when constructing the resumed agent.
  • The framework owns no I/O. Use JSON, a database, Redis, or anything else — it's just a list of plain dicts.
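The validation described in the notes can be sketched as a standalone function: every item must be a dict with a recognized role and string content. The exact checks and error messages here are assumptions based on the notes above, not the library's source.

```python
VALID_ROLES = {"user", "assistant", "tool"}  # "system" is deliberately excluded

class InvalidHistoryError(ValueError):
    """Raised when a history payload is malformed."""

def validate_history(history):
    for i, msg in enumerate(history):
        if not isinstance(msg, dict):
            raise InvalidHistoryError(f"item {i} is not a dict")
        if msg.get("role") not in VALID_ROLES:
            raise InvalidHistoryError(f"item {i} has invalid role {msg.get('role')!r}")
        if not isinstance(msg.get("content"), str):
            raise InvalidHistoryError(f"item {i} content must be a str")
    return history
```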

Configuration

Agent accepts the following keyword arguments (all keyword-only):

Argument       Type            Description
─────────────  ──────────────  ───────────────────────────────────────────
model          str             Required. Format: provider:model
name           str             Display name (default "Agent")
temperature    float           Sampling temperature
max_tokens     int             Max response tokens
system_prompt  str             System instruction prepended to every call
tools          list[BaseTool]  Tools the agent may invoke
history        list[dict]      Prior conversation messages to resume from

Extending the Framework

Adding a new provider

Subclass BaseProvider and register it:

from allegro_agent.providers.base import BaseProvider, ProviderResponse
from allegro_agent import register_provider

class OpenAIProvider(BaseProvider):
    def generate(self, messages, **kwargs) -> ProviderResponse:
        # call the API, translate the response
        return ProviderResponse(
            content="...",
            model=kwargs["model"],
            provider="openai",
            tool_calls=None,
        )

register_provider("openai", OpenAIProvider)
# now usable as model="openai:gpt-4"

Adding a new tool

Subclass BaseTool, define JSON Schema parameters, implement execute:

from allegro_agent import BaseTool

class AddTool(BaseTool):
    name = "add"
    description = "Add two integers."
    parameters = {
        "type": "object",
        "properties": {
            "a": {"type": "integer"},
            "b": {"type": "integer"},
        },
        "required": ["a", "b"],
    }

    def execute(self, **kwargs) -> str:
        return str(kwargs["a"] + kwargs["b"])

Pass an instance via the agent's tools list — the framework calls to_schema() to expose it to the LLM.
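For intuition, the schema to_schema() emits for the AddTool above plausibly looks like an OpenAI-style function-calling payload built from the class attributes. The exact output shape below is an assumption for illustration, not the library's documented format.

```python
def to_schema(name, description, parameters):
    # Assumed shape: wrap the tool's JSON Schema in an OpenAI-style
    # "function" envelope that function-calling models understand.
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": parameters,
        },
    }
```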


Exceptions

All framework errors inherit from FrameworkError:

  • ProviderError — provider-level failures (network, API errors)
  • ProviderNotFoundError — unknown provider name in the registry
  • InvalidModelFormatError — model string missing provider: prefix
  • ToolError — raised by tools on execution failure
  • InvalidHistoryError — the history argument to Agent is malformed
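The payoff of a single base class is that callers can handle specific failures where they care and fall back to FrameworkError everywhere else. The minimal hierarchy and helper below are a standalone sketch of that pattern; only the subclass relationships come from the list above.

```python
# Minimal stand-ins mirroring the hierarchy listed above.
class FrameworkError(Exception): ...
class ProviderError(FrameworkError): ...
class ToolError(FrameworkError): ...

def safe_run(fn):
    # Handle provider failures specially; one catch-all for the rest.
    try:
        return fn()
    except ProviderError as e:
        return f"provider failed: {e}"
    except FrameworkError as e:
        return f"framework error: {e}"
```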

Testing

pytest

The reference end-to-end script in tests/test_agent.py exercises Agent against a live Ollama instance.


Project Metadata

  • Package: allegro-agent v0.3.0
  • Python: ≥ 3.10
  • Runtime deps: requests
  • License: see LICENSE

Download files


Source Distribution

allegro_agent-0.3.0.tar.gz (24.3 kB)

Built Distribution

allegro_agent-0.3.0-py3-none-any.whl (21.8 kB)

File details

Details for the file allegro_agent-0.3.0.tar.gz.

File metadata

  • Download URL: allegro_agent-0.3.0.tar.gz
  • Upload date:
  • Size: 24.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.0

File hashes

Hashes for allegro_agent-0.3.0.tar.gz
Algorithm Hash digest
SHA256 22e8a2eee21d0e0369443cdbd2db0c5bfe55f922530ea311acd3a6c900e7fdf1
MD5 dfd091169190cef38b7aeebdc3c8bb52
BLAKE2b-256 375bf9d94c47321d6c609f8d41262df5055f5590cbb0d4920a93b590f0580101


File details

Details for the file allegro_agent-0.3.0-py3-none-any.whl.

File metadata

  • Download URL: allegro_agent-0.3.0-py3-none-any.whl
  • Upload date:
  • Size: 21.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.0

File hashes

Hashes for allegro_agent-0.3.0-py3-none-any.whl
Algorithm Hash digest
SHA256 6e1c95bd22591ffd79f60686d4afcce56b0cd3bb0ae4caa1a9bffb7ca250f90a
MD5 9d7d519205bcdaeb36ff306de0aaddd6
BLAKE2b-256 fcccab40927ee5de74d00daacbab11158e9ed11404f3ee10362828c387336b9a

