
Long Running Agents

A Pydantic AI agent that remembers across sessions, can create its own tools, and delegates to specialists. Memory, sandbox, and tasks in one stack.

Why this exists

Problem: Most agents forget between runs. They can't recall past conversations, and they can't extend their own tool set.

Solution: Long Running Agents gives you persistent memory (SQLite + ChromaDB), cross-session recall, and dynamic tool generation—so your agent remembers, learns, and adapts over time.

Audience: Developers building long-lived, memory-aware agents with Pydantic AI.

What makes it different

| Feature | What it does | Why it stands out |
|---|---|---|
| Cross-session memory | get_recent_conversations(all_sessions=True) + search_memory | Most agents forget between runs; this one recalls past turns across sessions. |
| Dynamic tool generation | Agent creates new tools at runtime via generate_tool | Extends its own tool set with AST validation and persistence. |
| Subagent delegation | Code and research specialists | Routes work to focused subagents instead of one monolithic agent. |
| Hybrid retrieval | Vector + keyword module | Optional hybrid search (semantic + keyword) available in the memory layer. |
| Pydantic AI | Typed agent framework | Uses Pydantic AI instead of LangChain/LlamaIndex. |
| Sandbox + memory + tasks | All in one stack | Memory, code execution, and task tracking in a single package. |

What it is (and isn't)

  • Is: A chat-driven AI agent with persistent memory, code execution, and task tracking. You interact via a terminal loop; the agent responds using its tools; context is saved across runs.
  • Isn't: A workflow automation engine. No schedules, triggers, or DAGs. It's an interactive assistant, not Zapier or Airflow.

Install

Prerequisites: Python 3.10+, OpenAI API key

# From PyPI
pip install long-running-agents

# From GitHub
pip install git+https://github.com/prith27/lra.git

# From source (clone first)
git clone https://github.com/prith27/lra.git
cd lra
pip install -e .

Quick start

export OPENAI_API_KEY=sk-your-key-here   # or use lra init to create .env
lra chat

Or run lra init first to create a .env file and see setup instructions.

Note: Basic chat and memory work without the sandbox. For code execution or dynamic tool creation, start the sandbox first in a separate terminal (see Sandbox).

How it works

  1. Run lra chat to start the chat loop.
  2. Type a message; the agent may call tools (search memory, run code, create tasks, delegate to subagents).
  3. Each turn is persisted to SQLite and ChromaDB so the agent can recall past context in future runs.
  4. Each run gets a new session ID, but get_recent_conversations(all_sessions=True) and search_memory allow cross-session recall.

Agent tools

| Tool | Purpose |
|---|---|
| search_memory | Semantic search over past summaries and facts (ChromaDB) |
| write_memory | Store facts or summaries for later recall |
| get_recent_conversations | Fetch recent turns (current session or all sessions) |
| create_task, update_task_status, list_tasks | Track multi-step work |
| create_sandbox, execute_code | Run Python in isolated containers |
| delegate_code_task, delegate_research_task | Hand off to specialist subagents |
| generate_tool | Create new tools at runtime when no existing tool fits |

Library usage

import asyncio
import os

from long_running_agents import run_agent, AgentDeps, StructuredMemoryStore, VectorMemoryStore

# Sandbox URL from the environment; the default matches the Configuration table
SANDBOX_URL = os.getenv("SANDBOX_URL", "http://localhost:8000")

async def main():
    structured = StructuredMemoryStore()
    vector = VectorMemoryStore()
    await structured.init_db()

    deps = AgentDeps(
        session_id="my-session",
        structured_store=structured,
        vector_store=vector,
        sandbox_base_url=SANDBOX_URL,
    )

    output, messages = await run_agent("What can you do?", deps)
    print(output)
    await structured.close()

asyncio.run(main())

CLI

lra init              # Create .env and show setup instructions
lra chat              # Start the agent chat loop
lra list-tools        # List static and dynamic tools
lra inspect-tool X    # Inspect a dynamic tool
lra list-memory -s SESSION  # List memory for a session

Framework commands

Create and run custom agents with their own system prompts:

lra create-agent [name]     # Create agent dir (default: my_agent). Use --prompt or enter interactively
lra run [path]              # Run a custom agent (path to agent dir or main.py)
lra list-agents             # List agent directories
lra config                  # Show config

Create and manage tools:

lra create-tool             # Create a tool interactively
lra create-tool --file X    # Create a tool from a Python file
lra export-tools [-o path]  # Export dynamic tools to static file
lra validate-tool FILE      # Validate a tool file in sandbox

Note: my_agent/ and *_agent/ are in .gitignore by default so user-created agents are not committed. Add your own pattern to .gitignore if you want to ignore different agent dirs.

Configuration

| Variable | Description | Default |
|---|---|---|
| OPENAI_API_KEY | OpenAI API key (required) | (none) |
| SANDBOX_URL | Sandbox API base URL | http://localhost:8000 |
| DATABASE_URL | SQLAlchemy async URL | sqlite+aiosqlite:///./data/agent_memory.db |
| VECTOR_STORE_PATH | ChromaDB path | ./data/chroma_db |
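A minimal .env for a local setup might look like this. All values except OPENAI_API_KEY are the defaults from the table above, and the key shown is a placeholder:

```shell
# .env — created by `lra init`; adjust values as needed
OPENAI_API_KEY=sk-your-key-here
SANDBOX_URL=http://localhost:8000
DATABASE_URL=sqlite+aiosqlite:///./data/agent_memory.db
VECTOR_STORE_PATH=./data/chroma_db
```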

Sandbox (optional)

The sandbox enables code execution and tool validation. It includes requests and httpx for HTTP-fetching tools. Basic chat and memory work without it.

Prerequisites: Docker must be installed and running. On macOS, open Docker Desktop and wait until it's ready before starting the sandbox. The sandbox spawns isolated containers for code execution.

Run order

  1. Start Docker (e.g. open Docker Desktop on Mac).
  2. Start the sandbox in one terminal.
  3. Run the agent in another terminal.
# Terminal 1: Ensure Docker is running, then start sandbox (keep running)
python -m uvicorn sandbox.server:app --reload --port 8000

# Terminal 2: Run agent
lra chat

Options

| Option | Command | When to use |
|---|---|---|
| A: Local | python -m uvicorn sandbox.server:app --reload --port 8000 | Development; run from project root with deps installed |
| B: Docker Compose | docker compose up sandbox | Fully containerized; no local Python needed for sandbox |

First run

On first start, the kernel image (longrunningagents-kernel:latest) is built automatically. This may take a minute.

Rebuild kernel

If you updated sandbox/Dockerfile (e.g. added packages), rebuild the kernel:

docker rmi longrunningagents-kernel:latest
# Then restart the sandbox

Project structure

├── agents/               # Main agent and subagents
├── tools/                # Memory, sandbox, task tools
├── sandbox/              # Sandbox API and kernel
├── memory/               # Structured and vector stores
├── schemas/              # Pydantic models
├── long_running_agents/  # Package exports
├── cli.py                # CLI entry point
├── config.py
├── main.py
└── pyproject.toml

Development

pip install -e ".[dev]"
pytest tests/ -v
mypy agents tools memory schemas

License

MIT
