# Long Running Agents

A Pydantic AI agent that remembers across sessions, can create its own tools, and delegates to specialists. Memory, sandbox, and tasks in one stack.
## Why this exists

**Problem:** Most agents forget between runs. They can't recall past conversations, and they can't extend their own tool set.

**Solution:** Long Running Agents gives you persistent memory (SQLite + ChromaDB), cross-session recall, and dynamic tool generation, so your agent remembers, learns, and adapts over time.

**Audience:** Developers building long-lived, memory-aware agents with Pydantic AI.
## What makes it different

| Feature | What it does | Why it stands out |
|---|---|---|
| Cross-session memory | `get_recent_conversations(all_sessions=True)` + `search_memory` | Most agents forget between runs; this one recalls past turns across sessions. |
| Dynamic tool generation | Agent creates new tools at runtime via `generate_tool` | Extends its own tool set with AST validation and persistence. |
| Subagent delegation | Code and research specialists | Routes work to focused subagents instead of one monolithic agent. |
| Hybrid retrieval | Vector + keyword module | Optional hybrid search (semantic + keyword) available in the memory layer. |
| Pydantic AI | Typed agent framework | Uses Pydantic AI instead of LangChain/LlamaIndex. |
| Sandbox + memory + tasks | All in one stack | Memory, code execution, and task tracking in a single package. |
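The hybrid retrieval mentioned above blends a semantic (vector) score with a keyword score. The package's actual implementation isn't shown here; the stdlib-only sketch below illustrates the general technique with toy 2-d "embeddings" and made-up function names:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query, text):
    """Fraction of query terms that appear in the text."""
    q = set(query.lower().split())
    t = set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def hybrid_rank(query, query_vec, docs, alpha=0.5):
    """Blend semantic and keyword scores; higher alpha favors the vector side."""
    scored = []
    for text, vec in docs:
        score = alpha * cosine(query_vec, vec) + (1 - alpha) * keyword_score(query, text)
        scored.append((score, text))
    return [text for _, text in sorted(scored, reverse=True)]

docs = [
    ("the user prefers dark mode", [0.9, 0.1]),
    ("sandbox started on port 8000", [0.1, 0.9]),
]
print(hybrid_rank("dark mode preference", [0.85, 0.2], docs)[0])
# → "the user prefers dark mode"
```

In a real memory layer the vectors would come from an embedding model and the keyword side from something like BM25; the blend parameter `alpha` is the usual knob between the two signals.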
## What it is (and isn't)
- Is: A chat-driven AI agent with persistent memory, code execution, and task tracking. You interact via a terminal loop; the agent responds using its tools; context is saved across runs.
- Isn't: A workflow automation engine. No schedules, triggers, or DAGs. It's an interactive assistant, not Zapier or Airflow.
## Install
Prerequisites: Python 3.10+, OpenAI API key
```bash
# From PyPI
pip install long-running-agents

# From GitHub
pip install git+https://github.com/prith27/lra.git

# From source (clone first)
git clone https://github.com/prith27/lra.git
cd lra
pip install -e .
```
## Setup
Set your OpenAI API key (required). Choose one:
```bash
# Option A: export in your shell
export OPENAI_API_KEY=sk-your-key-here

# Option B: create a .env file in your project directory
echo "OPENAI_API_KEY=sk-your-key-here" > .env

# Option C: use `lra init` to create a .env template, then add your key
lra init
```
Get a key at platform.openai.com/api-keys.
## Quick start

```bash
lra chat
```
Note: Basic chat and memory work without the sandbox. For code execution or dynamic tool creation, start the sandbox first in a separate terminal (see Sandbox).
## How it works

- Run `lra chat` to start the chat loop.
- Type a message; the agent may call tools (search memory, run code, create tasks, delegate to subagents).
- Each turn is persisted to SQLite and ChromaDB so the agent can recall past context in future runs.
- Each run gets a new session ID, but `get_recent_conversations(all_sessions=True)` and `search_memory` allow cross-session recall.
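The recall model above — per-session storage with an opt-in view across all sessions — can be sketched with stdlib `sqlite3`. The schema and function below are illustrative only; the package's actual tables and `get_recent_conversations` signature may differ:

```python
import sqlite3
import uuid

# Illustrative schema only; the package's actual SQLite schema may differ.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE turns (session_id TEXT, role TEXT, content TEXT)")

def save_turn(session_id, role, content):
    db.execute("INSERT INTO turns VALUES (?, ?, ?)", (session_id, role, content))

def recent_turns(session_id=None, all_sessions=False, limit=10):
    """Mimic cross-session recall: one session, or every session."""
    if all_sessions:
        rows = db.execute(
            "SELECT content FROM turns ORDER BY rowid DESC LIMIT ?", (limit,)
        )
    else:
        rows = db.execute(
            "SELECT content FROM turns WHERE session_id = ? ORDER BY rowid DESC LIMIT ?",
            (session_id, limit),
        )
    return [r[0] for r in rows]

# Two separate runs get two different session IDs...
s1, s2 = str(uuid.uuid4()), str(uuid.uuid4())
save_turn(s1, "user", "my name is Ada")
save_turn(s2, "user", "what is my name?")

print(recent_turns(session_id=s2))      # only the current session
print(recent_turns(all_sessions=True))  # recall across sessions
```

The point of the two query paths is that session scoping is a filter, not a storage boundary: every turn lands in one table, so widening recall is just dropping the `WHERE` clause.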
## Agent tools

| Tool | Purpose |
|---|---|
| `search_memory` | Semantic search over past summaries and facts (ChromaDB) |
| `write_memory` | Store facts or summaries for later recall |
| `get_recent_conversations` | Fetch recent turns (current session or all sessions) |
| `create_task`, `update_task_status`, `list_tasks` | Track multi-step work |
| `create_sandbox`, `execute_code` | Run Python in isolated containers |
| `delegate_code_task`, `delegate_research_task` | Hand off to specialist subagents |
| `generate_tool` | Create new tools at runtime when no existing tool fits |
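`generate_tool` validates generated code with AST checks before persisting it. The exact checks aren't documented here, so the sketch below shows the general idea with the stdlib `ast` module; the function name and the rejection policy are illustrative, not the package's:

```python
import ast

BANNED_CALLS = {"eval", "exec", "__import__"}  # illustrative policy, not the package's

def validate_tool_source(source: str) -> tuple[bool, str]:
    """Parse generated tool code and apply simple structural checks."""
    try:
        tree = ast.parse(source)
    except SyntaxError as e:
        return False, f"syntax error: {e.msg}"
    # Require at least one top-level function to register as the tool.
    if not any(isinstance(n, ast.FunctionDef) for n in tree.body):
        return False, "no top-level function definition"
    # Reject obviously dangerous direct calls.
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in BANNED_CALLS:
                return False, f"banned call: {node.func.id}"
    return True, "ok"

print(validate_tool_source("def add(a, b):\n    return a + b\n"))   # accepted
print(validate_tool_source("def run(s):\n    return eval(s)\n"))    # rejected
```

Parsing to an AST (rather than regex-scanning the source) means the check sees real call nodes and function definitions, and a syntactically broken tool is rejected before it ever runs.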
## Library usage

```python
import asyncio

from long_running_agents import run_agent, AgentDeps, StructuredMemoryStore, VectorMemoryStore
from config import SANDBOX_URL  # config.py at the project root defines SANDBOX_URL

async def main():
    structured = StructuredMemoryStore()
    vector = VectorMemoryStore()
    await structured.init_db()
    deps = AgentDeps(
        session_id="my-session",
        structured_store=structured,
        vector_store=vector,
        sandbox_base_url=SANDBOX_URL,
    )
    output, messages = await run_agent("What can you do?", deps)
    print(output)
    await structured.close()

asyncio.run(main())
```
## Examples

Examples are included in the package. After installing:

```bash
python -m long_running_agents.examples.01_basic_chat
python -m long_running_agents.examples.02_single_turn "Your question here"
```
| Example | Description |
|---|---|
| `01_basic_chat` | Chat loop: multiple turns, memory persists across runs |
| `02_single_turn` | One-off query: ask a question, get a response, exit |

When installed from source, see `long_running_agents/examples/README.md` for details.
## CLI

```bash
lra init                    # Create .env and show setup instructions
lra chat                    # Start the agent chat loop
lra list-tools              # List static and dynamic tools
lra inspect-tool X          # Inspect a dynamic tool
lra list-memory -s SESSION  # List memory for a session
```
### Framework commands

Create and run custom agents with their own system prompts:

```bash
lra create-agent [name]  # Create agent dir (default: my_agent). Use --prompt or enter interactively
lra run [path]           # Run a custom agent (path to agent dir or main.py)
lra list-agents          # List agent directories
lra config               # Show config
```
Create and manage tools:

```bash
lra create-tool            # Create a tool interactively
lra create-tool --file X   # Create a tool from a Python file
lra export-tools [-o path] # Export dynamic tools to a static file
lra validate-tool FILE     # Validate a tool file in the sandbox
```
Note: `my_agent/` and `*_agent/` are in `.gitignore` by default, so user-created agents are not committed. Add your own pattern to `.gitignore` if you want to ignore different agent directories.
## Configuration
| Variable | Description | Default |
|---|---|---|
| OPENAI_API_KEY | OpenAI API key | (required) |
| SANDBOX_URL | Sandbox API base URL | http://localhost:8000 |
| DATABASE_URL | SQLAlchemy async URL | sqlite+aiosqlite:///./data/agent_memory.db |
| VECTOR_STORE_PATH | ChromaDB path | ./data/chroma_db |
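These variables can live in the `.env` file that `lra init` creates. An example (values are the documented defaults, with a placeholder key):

```bash
# .env
OPENAI_API_KEY=sk-your-key-here
SANDBOX_URL=http://localhost:8000
DATABASE_URL=sqlite+aiosqlite:///./data/agent_memory.db
VECTOR_STORE_PATH=./data/chroma_db
```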
## Sandbox (optional)
The sandbox enables code execution and tool validation. It includes requests and httpx for HTTP-fetching tools. Basic chat and memory work without it.
Prerequisites: Docker must be installed and running. On macOS, open Docker Desktop and wait until it's ready before starting the sandbox. The sandbox spawns isolated containers for code execution.
### Run order

1. Start Docker (e.g. open Docker Desktop on macOS).
2. Start the sandbox in one terminal.
3. Run the agent in another terminal.
```bash
# Terminal 1: ensure Docker is running, then start the sandbox (keep it running)
python -m uvicorn sandbox.server:app --reload --port 8000

# Terminal 2: run the agent
lra chat
```
### Options

| Option | Command | When to use |
|---|---|---|
| A: Local | `python -m uvicorn sandbox.server:app --reload --port 8000` | Development; run from the project root with deps installed |
| B: Docker Compose | `docker compose up sandbox` | Fully containerized; no local Python needed for the sandbox |
### First run

On first start, the kernel image (`longrunningagents-kernel:latest`) is built automatically. This may take a minute.
### Rebuild kernel

If you updated `sandbox/Dockerfile` (e.g. added packages), rebuild the kernel:

```bash
docker rmi longrunningagents-kernel:latest
# Then restart the sandbox; the image is rebuilt automatically
```
## Project structure

```
├── agents/                       # Main agent and subagents
├── tools/                        # Memory, sandbox, task tools
├── sandbox/                      # Sandbox API and kernel
├── memory/                       # Structured and vector stores
├── schemas/                      # Pydantic models
├── long_running_agents/          # Package exports
├── long_running_agents/examples/ # Example recipes (shipped with package)
├── cli.py                        # CLI entry point
├── config.py
├── main.py
└── pyproject.toml
```
## Development

```bash
pip install -e ".[dev]"
pytest tests/ -v
mypy agents tools memory schemas
```
## License
MIT