File storage and sandbox backends for AI agents
pydantic-ai-backend
File storage, sandbox backends, and console toolset for pydantic-ai agents.
Looking for a complete agent framework? Check out pydantic-deep, a full-featured deep agent framework with planning, subagents, and a skills system.
Need task planning? Check out pydantic-ai-todo, a standalone task-planning toolset for any pydantic-ai agent.
Documentation
Full Documentation - Installation, concepts, examples, and API reference.
Architecture
Installation
```bash
# Core library
pip install pydantic-ai-backend

# With console toolset (requires pydantic-ai)
pip install pydantic-ai-backend[console]

# With Docker sandbox support
pip install pydantic-ai-backend[docker]

# Everything
pip install pydantic-ai-backend[console,docker]
```
Quick Start
Console Toolset with pydantic-ai
The easiest way to add file operations to your agent:
```python
from dataclasses import dataclass

from pydantic_ai import Agent
from pydantic_ai_backends import LocalBackend, create_console_toolset


@dataclass
class Deps:
    backend: LocalBackend


# Create agent with console tools
agent = Agent(
    "openai:gpt-4o",
    deps_type=Deps,
    toolsets=[create_console_toolset()],
    system_prompt="You are a coding assistant. Use the tools to read, write, and execute code.",
)

# Run agent
backend = LocalBackend(root_dir="./workspace")
result = agent.run_sync(
    "Create a Python script that calculates fibonacci numbers and run it",
    deps=Deps(backend=backend),
)
print(result.output)
```
Docker Sandbox for Safe Execution
For untrusted code or multi-user applications:
```python
from dataclasses import dataclass

from pydantic_ai import Agent
from pydantic_ai_backends import DockerSandbox, create_console_toolset


@dataclass
class Deps:
    backend: DockerSandbox


# Create isolated sandbox
sandbox = DockerSandbox(runtime="python-datascience")
sandbox.start()

# Create agent
agent = Agent(
    "openai:gpt-4o",
    deps_type=Deps,
    toolsets=[create_console_toolset()],
    system_prompt="You are a data analysis assistant.",
)

result = agent.run_sync(
    "Write a script that generates random data and plots a histogram",
    deps=Deps(backend=sandbox),
)

sandbox.stop()
```
In-Memory Backend for Testing
Perfect for unit tests and ephemeral sessions:
```python
from dataclasses import dataclass

from pydantic_ai import Agent
from pydantic_ai_backends import StateBackend, create_console_toolset


@dataclass
class Deps:
    backend: StateBackend


# In-memory storage - no files on disk
backend = StateBackend()

agent = Agent(
    "openai:gpt-4o",
    deps_type=Deps,
    toolsets=[create_console_toolset(include_execute=False)],  # No shell in StateBackend
)

result = agent.run_sync(
    "Create a config.json file with database settings",
    deps=Deps(backend=backend),
)

# Access files programmatically
print(backend.files)  # {'/config.json': {...}}
```
Multi-User Web Application
Session manager for web apps with isolated user environments:
```python
from dataclasses import dataclass

from fastapi import FastAPI
from pydantic_ai import Agent
from pydantic_ai_backends import DockerSandbox, SessionManager, create_console_toolset

app = FastAPI()

# Session manager with persistent storage
manager = SessionManager(
    default_runtime="python-datascience",
    workspace_root="/app/user_workspaces",  # Files persist here
)


@dataclass
class Deps:
    backend: DockerSandbox


agent = Agent(
    "openai:gpt-4o",
    deps_type=Deps,
    toolsets=[create_console_toolset()],
)


@app.post("/run/{user_id}")
async def run_code(user_id: str, prompt: str):
    # Each user gets their own isolated sandbox
    sandbox = await manager.get_or_create(user_id)
    result = await agent.run(prompt, deps=Deps(backend=sandbox))
    return {"output": result.output}


@app.on_event("shutdown")
async def shutdown():
    await manager.shutdown()
```
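The get-or-create pattern used above can be sketched independently of the library: a keyed cache that lazily builds one resource per user and tears everything down on shutdown. Everything below (`FakeSandbox`, `SimpleSessionManager`) is illustrative stand-in code, not the library's actual implementation.

```python
import asyncio


class FakeSandbox:
    """Stand-in for a per-user sandbox; start/stop mimic a container lifecycle."""

    def __init__(self, user_id: str):
        self.user_id = user_id
        self.running = False

    async def start(self):
        self.running = True

    async def stop(self):
        self.running = False


class SimpleSessionManager:
    """Illustrative per-user session cache (not the library's code)."""

    def __init__(self):
        self._sessions: dict[str, FakeSandbox] = {}
        self._lock = asyncio.Lock()  # avoid racing double-creation

    async def get_or_create(self, user_id: str) -> FakeSandbox:
        async with self._lock:
            if user_id not in self._sessions:
                sandbox = FakeSandbox(user_id)
                await sandbox.start()
                self._sessions[user_id] = sandbox
            return self._sessions[user_id]

    async def shutdown(self):
        async with self._lock:
            for sandbox in self._sessions.values():
                await sandbox.stop()
            self._sessions.clear()


async def demo() -> bool:
    manager = SimpleSessionManager()
    a1 = await manager.get_or_create("alice")
    a2 = await manager.get_or_create("alice")  # same sandbox is reused
    b = await manager.get_or_create("bob")    # different user, different sandbox
    assert a1 is a2 and a1 is not b
    await manager.shutdown()
    return a1.running


print(asyncio.run(demo()))  # False: all sandboxes stopped on shutdown
```

The lock matters in a web context: two concurrent requests for the same user must not each start their own container.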
Composite Backend for Complex Routing
Route different paths to different backends:
```python
from dataclasses import dataclass

from pydantic_ai import Agent
from pydantic_ai_backends import (
    CompositeBackend,
    LocalBackend,
    StateBackend,
    create_console_toolset,
)


@dataclass
class Deps:
    backend: CompositeBackend


# Different backends for different purposes
backend = CompositeBackend(
    default=StateBackend(),  # Temp files in memory
    routes={
        "/project/": LocalBackend("/home/user/myproject"),
        "/data/": LocalBackend("/shared/datasets", enable_execute=False),
    },
)

agent = Agent(
    "openai:gpt-4o",
    deps_type=Deps,
    toolsets=[create_console_toolset()],
)

result = agent.run_sync(
    "Read the CSV from /data/sales.csv, analyze it, and save results to /project/report.md",
    deps=Deps(backend=backend),
)
```
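Conceptually, routing like this is prefix matching over the route table, with the default backend as the fallback. A simplified sketch (the `resolve_route` helper is illustrative; the library's actual resolution logic may differ):

```python
def resolve_route(path: str, routes: dict[str, str], default: str) -> str:
    """Pick the backend whose route prefix matches `path`; longest prefix wins."""
    best: tuple[str, str] | None = None
    for prefix, backend in routes.items():
        if path.startswith(prefix) and (best is None or len(prefix) > len(best[0])):
            best = (prefix, backend)
    return best[1] if best else default


routes = {"/project/": "local-project", "/data/": "local-data"}
print(resolve_route("/data/sales.csv", routes, "memory"))   # local-data
print(resolve_route("/tmp/scratch.txt", routes, "memory"))  # memory (no prefix matched)
```

Longest-prefix-wins lets a more specific route (e.g. `/data/raw/`) shadow a broader one without the agent needing to know which backend serves which path.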
Console Toolset Configuration
```python
from pydantic_ai_backends import create_console_toolset

# Default: all tools, execute requires approval
toolset = create_console_toolset()

# Without shell execution
toolset = create_console_toolset(include_execute=False)

# Require approval for write operations
toolset = create_console_toolset(
    require_write_approval=True,
    require_execute_approval=True,
)

# Custom toolset ID
toolset = create_console_toolset(id="file-tools")

# Include hidden files by default when using grep
toolset = create_console_toolset(default_ignore_hidden=False)
```
Available tools: `ls`, `read_file`, `write_file`, `edit_file`, `glob`, `grep`, `execute`.
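To make concrete what a tool like `grep` does over a backend, here is a toy version that searches an in-memory path-to-content mapping and returns `(path, line_number, line)` matches. This is purely illustrative; the library's `grep` tool returns its own `GrepMatch` type and supports globs and hidden-file filtering.

```python
import re


def grep_files(pattern: str, files: dict[str, str]) -> list[tuple[str, int, str]]:
    """Return (path, line_number, line) for each line matching `pattern`."""
    regex = re.compile(pattern)
    matches = []
    for path, content in files.items():
        for lineno, line in enumerate(content.splitlines(), start=1):
            if regex.search(line):
                matches.append((path, lineno, line))
    return matches


files = {
    "/app.py": "import os\nDEBUG = True\n",
    "/config.py": "DEBUG = False\nPORT = 8000\n",
}
print(grep_files(r"DEBUG", files))
# [('/app.py', 2, 'DEBUG = True'), ('/config.py', 1, 'DEBUG = False')]
```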
Permission System
Fine-grained access control for file operations and shell commands:
```python
from pydantic_ai_backends import LocalBackend, create_console_toolset
from pydantic_ai_backends.permissions import (
    DEFAULT_RULESET,
    READONLY_RULESET,
    PermissionRuleset,
    OperationPermissions,
    PermissionRule,
)

# Use pre-configured presets
backend = LocalBackend(root_dir="/workspace", permissions=DEFAULT_RULESET)

# Read-only mode
backend = LocalBackend(root_dir="/workspace", permissions=READONLY_RULESET)

# Custom rules
custom_permissions = PermissionRuleset(
    default="ask",
    read=OperationPermissions(
        default="allow",
        rules=[
            PermissionRule(pattern="**/.env*", action="deny"),
            PermissionRule(pattern="**/secrets/**", action="deny"),
        ],
    ),
    write=OperationPermissions(
        default="ask",
        rules=[
            PermissionRule(pattern="**/*.py", action="allow"),
        ],
    ),
    execute=OperationPermissions(
        default="deny",
        rules=[
            PermissionRule(pattern="git *", action="allow"),
            PermissionRule(pattern="python *", action="allow"),
        ],
    ),
)
backend = LocalBackend(root_dir="/workspace", permissions=custom_permissions)

# With console toolset
toolset = create_console_toolset(permissions=DEFAULT_RULESET)
```
Available presets:
| Preset | Description |
|---|---|
| `DEFAULT_RULESET` | Allow reads (except secrets), ask for writes/executes |
| `PERMISSIVE_RULESET` | Allow most operations, deny dangerous commands |
| `READONLY_RULESET` | Allow reads only, deny all writes and executes |
| `STRICT_RULESET` | Everything requires approval |
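Conceptually, a ruleset like the one above resolves each request against the operation's rule list, falling back to the operation default when no pattern matches. A simplified sketch using `fnmatch`-style patterns (illustrative only; the library's actual matcher and precedence may differ):

```python
from fnmatch import fnmatch


def resolve(target: str, rules: list[tuple[str, str]], default: str) -> str:
    """Return the action of the first rule whose pattern matches, else the default."""
    for pattern, action in rules:
        if fnmatch(target, pattern):
            return action
    return default


# Rules mirroring the custom `read` permissions above: allow by default, deny secrets
read_rules = [("**/.env*", "deny"), ("**/secrets/**", "deny")]

print(resolve("/workspace/.env.local", read_rules, "allow"))  # deny
print(resolve("/workspace/main.py", read_rules, "allow"))     # allow
```

Putting deny rules ahead of a permissive default is the usual shape for secret-file protection: the default covers the common case, and the rule list handles exceptions.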
Built-in Docker Runtimes
```python
from pydantic_ai_backends import DockerSandbox

# Pre-configured environments
sandbox = DockerSandbox(runtime="python-datascience")
```
| Runtime | Base Image | Packages |
|---|---|---|
| `python-minimal` | python:3.12-slim | (none) |
| `python-datascience` | python:3.12-slim | pandas, numpy, matplotlib, scikit-learn, seaborn |
| `python-web` | python:3.12-slim | fastapi, uvicorn, sqlalchemy, httpx |
| `node-minimal` | node:20-slim | (none) |
| `node-react` | node:20-slim | typescript, vite, react, react-dom |
Custom runtime:
```python
from pydantic_ai_backends import DockerSandbox, RuntimeConfig

runtime = RuntimeConfig(
    name="ml-env",
    base_image="python:3.12-slim",
    packages=["torch", "transformers", "accelerate"],
    env_vars={"PYTHONUNBUFFERED": "1"},
)
sandbox = DockerSandbox(runtime=runtime)
```
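A `RuntimeConfig` like this roughly corresponds to an image built from the base image, the environment variables, and a package-install layer. The `render_dockerfile` helper below is hypothetical, only meant to show what such a config conceptually expands to; the library's actual build process may differ.

```python
def render_dockerfile(base_image: str, packages: list[str], env_vars: dict[str, str]) -> str:
    """Render an illustrative Dockerfile for a runtime definition."""
    lines = [f"FROM {base_image}"]
    for key, value in env_vars.items():
        lines.append(f"ENV {key}={value}")
    if packages:
        lines.append(f"RUN pip install --no-cache-dir {' '.join(packages)}")
    return "\n".join(lines)


dockerfile = render_dockerfile(
    "python:3.12-slim",
    ["torch", "transformers", "accelerate"],
    {"PYTHONUNBUFFERED": "1"},
)
print(dockerfile)
```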
Backend Protocol
All backends implement BackendProtocol:
```python
class BackendProtocol(Protocol):
    def ls_info(self, path: str) -> list[FileInfo]: ...
    def read(self, path: str, offset: int = 0, limit: int = 2000) -> str: ...
    def write(self, path: str, content: str | bytes) -> WriteResult: ...
    def edit(self, path: str, old: str, new: str, replace_all: bool = False) -> EditResult: ...
    def glob_info(self, pattern: str, path: str = "/") -> list[FileInfo]: ...
    def grep_raw(
        self,
        pattern: str,
        path: str | None = None,
        glob: str | None = None,
        ignore_hidden: bool = True,
    ) -> list[GrepMatch] | str: ...
```
LocalBackend and DockerSandbox also provide shell execution:
```python
def execute(self, command: str, timeout: int | None = None) -> ExecuteResponse: ...
```
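A custom backend only needs to implement these methods with compatible signatures. A minimal in-memory sketch of the read/write/edit subset, with plain strings standing in for the library's `WriteResult`/`EditResult` types (illustrative, not a drop-in protocol implementation):

```python
class DictBackend:
    """Toy backend keeping files in a dict; shows the protocol shape only."""

    def __init__(self):
        self.files: dict[str, str] = {}

    def read(self, path: str, offset: int = 0, limit: int = 2000) -> str:
        # Return up to `limit` lines starting at `offset`, like the protocol's read
        lines = self.files[path].splitlines()
        return "\n".join(lines[offset:offset + limit])

    def write(self, path: str, content: str) -> str:
        self.files[path] = content
        return path  # stand-in for the library's WriteResult

    def edit(self, path: str, old: str, new: str, replace_all: bool = False) -> str:
        # Replace the first occurrence, or all occurrences when replace_all=True
        count = -1 if replace_all else 1
        self.files[path] = self.files[path].replace(old, new, count)
        return path  # stand-in for the library's EditResult


backend = DictBackend()
backend.write("/config.json", '{"debug": true}')
backend.edit("/config.json", "true", "false")
print(backend.read("/config.json"))  # {"debug": false}
```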
Examples
Full working examples in examples/:
| Example | Description | Backend |
|---|---|---|
| local_cli | CLI coding assistant | LocalBackend |
| web_production | Multi-user web app with UI | DockerSandbox + SessionManager |
Development
```bash
git clone https://github.com/vstorm-co/pydantic-ai-backend.git
cd pydantic-ai-backend
make install
make test
```
Related Projects
- pydantic-ai - Agent framework by Pydantic
- pydantic-deep - Full agent framework (uses this library)
- pydantic-ai-todo - Task planning toolset
License
MIT License - see LICENSE for details.
File details
Details for the file pydantic_ai_backend-0.1.2.tar.gz.
File metadata
- Download URL: pydantic_ai_backend-0.1.2.tar.gz
- Upload date:
- Size: 6.7 MB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `13bcc5ad5de86511972cc665dff394592e5f197b15211fcc13fabcc41903a082` |
| MD5 | `4865377cea7f5d1963ad96a0c1e1b6b2` |
| BLAKE2b-256 | `27c924c63d414e26a8404a8e25f1c51ccf7019a64f9c09e23481cdceee51d334` |
Provenance
The following attestation bundles were made for pydantic_ai_backend-0.1.2.tar.gz:
Publisher: publish.yml on vstorm-co/pydantic-ai-backend
Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: pydantic_ai_backend-0.1.2.tar.gz
- Subject digest: 13bcc5ad5de86511972cc665dff394592e5f197b15211fcc13fabcc41903a082
- Sigstore transparency entry: 843981211
- Sigstore integration time:
- Permalink: vstorm-co/pydantic-ai-backend@171d69c1e873db27c8fdc2438533c85f885cc7a6
- Branch / Tag: refs/tags/0.1.2
- Owner: https://github.com/vstorm-co
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@171d69c1e873db27c8fdc2438533c85f885cc7a6
- Trigger Event: release
File details
Details for the file pydantic_ai_backend-0.1.2-py3-none-any.whl.
File metadata
- Download URL: pydantic_ai_backend-0.1.2-py3-none-any.whl
- Upload date:
- Size: 43.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `f4c590b336a52d8dbf9d7ae711b1c2e2c19ee0e1eb5843f44b38137d2e0f6ccc` |
| MD5 | `806fc531241e642811a21ad786a76a68` |
| BLAKE2b-256 | `f0dea56dcfebfec1a77d3fe408ef2bea1e06f9970b1dee374cb79b4f77ad6d7b` |
Provenance
The following attestation bundles were made for pydantic_ai_backend-0.1.2-py3-none-any.whl:
Publisher: publish.yml on vstorm-co/pydantic-ai-backend
Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: pydantic_ai_backend-0.1.2-py3-none-any.whl
- Subject digest: f4c590b336a52d8dbf9d7ae711b1c2e2c19ee0e1eb5843f44b38137d2e0f6ccc
- Sigstore transparency entry: 843981215
- Sigstore integration time:
- Permalink: vstorm-co/pydantic-ai-backend@171d69c1e873db27c8fdc2438533c85f885cc7a6
- Branch / Tag: refs/tags/0.1.2
- Owner: https://github.com/vstorm-co
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@171d69c1e873db27c8fdc2438533c85f885cc7a6
- Trigger Event: release