# File Storage & Sandbox Backends for Pydantic AI

Console Toolset, Docker Sandbox, and Permission System for AI Agents.

**Console Toolset** — `ls`, `read`, `write`, `edit`, `grep`, `execute` • **Docker Sandbox** — isolated code execution • **Permission System** — fine-grained access control
File Storage & Sandbox Backends provides everything your Pydantic AI agent needs to work with files and execute code safely. Choose from in-memory, local-filesystem, or Docker-isolated backends.

Looking for a full framework? Check out Pydantic Deep Agents — a complete agent framework with planning, filesystem, subagents, and skills.
## Use Cases
| What You Want to Build | How This Library Helps |
|---|---|
| AI Coding Assistant | Console toolset with file ops + code execution |
| Multi-User Web App | Docker sandboxes with session isolation |
| Code Review Bot | Read-only backend with grep/glob search |
| Secure Execution | Permission system blocks dangerous operations |
| Testing/CI | In-memory StateBackend for fast, isolated tests |
## Installation

```bash
pip install pydantic-ai-backend
```

Or with uv:

```bash
uv add pydantic-ai-backend
```

Optional extras (quoted so the brackets survive shells like zsh):

```bash
# Console toolset (requires pydantic-ai)
pip install "pydantic-ai-backend[console]"

# Docker sandbox support
pip install "pydantic-ai-backend[docker]"

# Everything
pip install "pydantic-ai-backend[console,docker]"
```
## Quick Start — ConsoleCapability (Recommended)

The simplest way to give your agent filesystem tools:

```python
from pydantic_ai import Agent
from pydantic_ai_backends import ConsoleCapability

agent = Agent("openai:gpt-4.1", capabilities=[ConsoleCapability()])
```
### With Permissions

```python
from pydantic_ai import Agent
from pydantic_ai_backends import ConsoleCapability
from pydantic_ai_backends.permissions import READONLY_RULESET

# Read-only agent — write/edit/execute tools are hidden from the model
agent = Agent("openai:gpt-4.1", capabilities=[ConsoleCapability(permissions=READONLY_RULESET)])
```
### Alternative: Toolset API

```python
from dataclasses import dataclass

from pydantic_ai import Agent
from pydantic_ai_backends import LocalBackend, create_console_toolset


@dataclass
class Deps:
    backend: LocalBackend


agent = Agent(
    "openai:gpt-4.1",
    deps_type=Deps,
    toolsets=[create_console_toolset()],
)

backend = LocalBackend(root_dir="./workspace")
result = agent.run_sync(
    "Create a Python script that calculates fibonacci and run it",
    deps=Deps(backend=backend),
)
print(result.output)
```
That's it. Your agent can now:

- List files and directories (`ls`)
- Read and write files (`read_file`, `write_file`)
- Edit files with string replacement (`edit_file`)
- Search with glob patterns and regex (`glob`, `grep`)
- Execute shell commands (`execute`)
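The string-replacement semantics behind tools like `edit_file` can be sketched as follows. This is a hypothetical stand-in for illustration, not the library's actual implementation: the old string must match exactly once, so ambiguous edits fail loudly instead of changing the wrong occurrence.

```python
def edit_text(content: str, old: str, new: str) -> str:
    """String-replacement edit: `old` must appear exactly once in `content`."""
    count = content.count(old)
    if count == 0:
        raise ValueError("old string not found in file")
    if count > 1:
        raise ValueError(f"old string is ambiguous ({count} matches); add more context")
    return content.replace(old, new, 1)
```

Requiring a unique match is what makes string-replacement editing safe for agents: the model must quote enough surrounding context to pin down a single edit site.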
## Available Backends

| Backend | Storage | Execution | Use Case |
|---|---|---|---|
| `StateBackend` | In-memory | No | Testing, ephemeral sessions |
| `LocalBackend` | Filesystem | Yes | Local development, CLI tools |
| `DockerSandbox` | Container | Yes | Multi-user, untrusted code |
| `CompositeBackend` | Routed | Varies | Complex multi-source setups |
### In-Memory (StateBackend)

```python
from pydantic_ai_backends import StateBackend

backend = StateBackend()
# Files stored in memory, perfect for tests
```
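Conceptually, an in-memory backend is just a path-to-bytes mapping. A minimal sketch of the idea (method names here are illustrative and are not StateBackend's real API):

```python
class InMemoryStore:
    """Toy in-memory file store: paths map to bytes, nothing touches disk."""

    def __init__(self) -> None:
        self._files: dict[str, bytes] = {}

    def write_file(self, path: str, data: bytes) -> None:
        self._files[path] = data

    def read_file(self, path: str) -> bytes:
        return self._files[path]

    def ls(self, prefix: str = "") -> list[str]:
        # List stored paths under a prefix, sorted for stable output
        return sorted(p for p in self._files if p.startswith(prefix))
```

Because nothing touches disk, every test starts from a clean slate and runs fast — which is exactly why an in-memory backend suits CI.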
### Local Filesystem (LocalBackend)

```python
from pydantic_ai_backends import LocalBackend

backend = LocalBackend(
    root_dir="/workspace",
    allowed_directories=["/workspace", "/shared"],
    enable_execute=True,
)
```
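The `allowed_directories` idea rests on path confinement: resolve the requested path and verify it stays under an allowed root, so `..` segments and symlinks cannot escape the workspace. A hedged sketch of that check (not LocalBackend's actual code):

```python
from pathlib import Path


def is_allowed(path: str, allowed_dirs: list[str]) -> bool:
    """True if `path`, after resolving '..' and symlinks, sits under an allowed root."""
    resolved = Path(path).resolve()
    for root in allowed_dirs:
        try:
            # relative_to raises ValueError when `resolved` is outside `root`
            resolved.relative_to(Path(root).resolve())
            return True
        except ValueError:
            continue
    return False
```

Resolving *before* comparing is the important step: a naive prefix check on the raw string would accept `/workspace/../etc/passwd`.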
### Docker Sandbox (DockerSandbox)

```python
from pydantic_ai_backends import DockerSandbox

sandbox = DockerSandbox(runtime="python-datascience")
sandbox.start()
# Fully isolated container environment
sandbox.stop()
```
### Reusable Named Container

```python
from pydantic_ai_backends import DockerSandbox

# Named container persists between sessions (packages survive restarts)
sandbox = DockerSandbox(
    image="python:3.12-slim",
    container_name="my-dev-env",  # implies auto_remove=False
    volumes={"/my/project": "/workspace"},
)
# Next time: finds the existing container and reattaches
```
## Console Toolset

Ready-to-use tools for pydantic-ai agents:

```python
from pydantic_ai_backends import create_console_toolset

# All tools enabled
toolset = create_console_toolset()

# Without shell execution
toolset = create_console_toolset(include_execute=False)

# With approval requirements
toolset = create_console_toolset(
    require_write_approval=True,
    require_execute_approval=True,
)

# With custom tool descriptions
toolset = create_console_toolset(
    descriptions={
        "execute": "Run shell commands in the workspace",
        "read_file": "Read file contents from the workspace",
    }
)
```

Available tools: `ls`, `read_file`, `write_file`, `edit_file`, `glob`, `grep`, `execute`
### Image Support

For multimodal models, enable image file handling:

```python
toolset = create_console_toolset(image_support=True)
# Now read_file on .png/.jpg/.gif/.webp returns BinaryContent
# that multimodal models (GPT-4o, Claude, etc.) can see directly
```
## Permission System

Fine-grained access control:

```python
from pydantic_ai_backends import LocalBackend
from pydantic_ai_backends.permissions import DEFAULT_RULESET, READONLY_RULESET

# Safe defaults (allow reads, ask for writes)
backend = LocalBackend(root_dir="/workspace", permissions=DEFAULT_RULESET)

# Read-only mode
backend = LocalBackend(root_dir="/workspace", permissions=READONLY_RULESET)
```
| Preset | Description |
|---|---|
| `DEFAULT_RULESET` | Allow reads (except secrets), ask for writes/executes |
| `PERMISSIVE_RULESET` | Allow most operations, deny dangerous commands |
| `READONLY_RULESET` | Allow reads only, deny all writes and executes |
| `STRICT_RULESET` | Everything requires approval |
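Pattern-based rulesets like these are usually evaluated as an ordered list of (operation, pattern, decision) rules where the first match wins. A hypothetical sketch of how a preset resembling `DEFAULT_RULESET` could be expressed — the rule format and names here are assumptions, not the library's API:

```python
from fnmatch import fnmatch

# (operation, path pattern, decision) — first matching rule wins
RULES = [
    ("read", "*.env", "deny"),   # never expose secrets
    ("read", "*", "allow"),      # reads are safe by default
    ("write", "*", "ask"),       # writes need approval
    ("execute", "*", "ask"),     # shell commands need approval
]


def decide(operation: str, path: str, rules=RULES) -> str:
    """Return 'allow', 'ask', or 'deny' for an operation on a path."""
    for op, pattern, decision in rules:
        if op == operation and fnmatch(path, pattern):
            return decision
    return "deny"  # default-deny when nothing matches
```

Rule order carries the policy: putting the `*.env` deny before the catch-all read allow is what keeps secrets out of a "reads are safe" default.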
## Docker Runtimes

Pre-configured environments:

| Runtime | Base Image | Packages |
|---|---|---|
| `python-minimal` | `python:3.12-slim` | (none) |
| `python-datascience` | `python:3.12-slim` | pandas, numpy, matplotlib, scikit-learn |
| `python-web` | `python:3.12-slim` | fastapi, uvicorn, sqlalchemy, httpx |
| `node-minimal` | `node:20-slim` | (none) |
| `node-react` | `node:20-slim` | typescript, vite, react |
Custom runtime:

```python
from pydantic_ai_backends import DockerSandbox, RuntimeConfig

runtime = RuntimeConfig(
    name="ml-env",
    base_image="python:3.12-slim",
    packages=["torch", "transformers"],
)
sandbox = DockerSandbox(runtime=runtime)
```
## Session Manager

For multi-user web applications:

```python
from pydantic_ai_backends import SessionManager

# Docker (default)
manager = SessionManager(
    default_runtime="python-datascience",
    workspace_root="/app/workspaces",
)

# Each user gets an isolated sandbox
sandbox = await manager.get_or_create("user-123")
```
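The get-or-create pattern boils down to a keyed cache guarded by a lock, so concurrent requests for the same session id don't race and create two sandboxes. A self-contained sketch of the idea (the real SessionManager's internals may differ):

```python
import asyncio


class ToySessionManager:
    """Maps session ids to lazily created sandbox objects, one per id."""

    def __init__(self, factory):
        self._factory = factory          # callable: session_id -> sandbox
        self._sessions: dict[str, object] = {}
        self._lock = asyncio.Lock()      # serializes creation across coroutines

    async def get_or_create(self, session_id: str):
        async with self._lock:
            if session_id not in self._sessions:
                self._sessions[session_id] = self._factory(session_id)
            return self._sessions[session_id]
```

Repeated calls with the same id return the same object, which is what makes workspaces persist across requests within a session.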
### Custom Sandbox Factory

Use any sandbox backend (Daytona, custom, etc.):

```python
from pydantic_ai_backends import DaytonaSandbox, SessionManager

def daytona_factory(session_id: str) -> DaytonaSandbox:
    return DaytonaSandbox(sandbox_id=session_id)

manager = SessionManager(sandbox_factory=daytona_factory)
sandbox = await manager.get_or_create("user-123")
```
## Why Choose This Library?
| Feature | Description |
|---|---|
| Multiple Backends | In-memory, filesystem, Docker — same interface |
| Console Toolset | Ready-to-use tools for pydantic-ai agents |
| Permission System | Pattern-based access control with presets |
| Docker Isolation | Safe execution of untrusted code |
| Session Management | Multi-user support with workspace persistence |
| Image Support | Multimodal models can see images via BinaryContent |
| Pre-built Runtimes | Python and Node.js environments ready to go |
## Related Projects
| Package | Description |
|---|---|
| Pydantic Deep Agents | Full agent framework (uses this library) |
| pydantic-ai-todo | Task planning toolset |
| subagents-pydantic-ai | Multi-agent orchestration |
| summarization-pydantic-ai | Context management |
| pydantic-ai | The foundation — agent framework by Pydantic |
## Contributing

```bash
git clone https://github.com/vstorm-co/pydantic-ai-backend.git
cd pydantic-ai-backend
make install
make test  # 100% coverage required
```
## License

MIT — see LICENSE