# pydantic-ai-filesystem-sandbox

Filesystem sandbox toolset for PydanticAI agents with LLM-friendly errors.
## Why This Package?

When building LLM agents that interact with the filesystem, you need:

- **Sandboxing**: Restrict which directories the agent can access
- **Read/Write Control**: Fine-grained permissions per path
- **LLM-Friendly Errors**: Error messages that help the LLM correct its behavior
- **Approval Integration**: Works with human-in-the-loop approval flows
## Architecture

- **`Sandbox`**: Security boundary for permission checking and path resolution
- **`FileSystemToolset`**: File I/O tools built on `Sandbox`
- **`ApprovalToolset`**: Optional wrapper for human-in-the-loop approval
## Installation

```bash
pip install pydantic-ai-filesystem-sandbox
```
## Quick Start

```python
from pydantic_ai import Agent
from pydantic_ai_filesystem_sandbox import (
    FileSystemToolset,
    Sandbox,
    SandboxConfig,
    Mount,
)

# Configure sandbox with Docker-style mounts
config = SandboxConfig(mounts=[
    Mount(host_path="./data/input", mount_point="/input", mode="ro"),    # Read-only
    Mount(host_path="./data/output", mount_point="/output", mode="rw"),  # Read-write
])

# Create the sandbox (security boundary)
sandbox = Sandbox(config)

# Create the toolset (file I/O tools)
toolset = FileSystemToolset(sandbox)

# Use with a PydanticAI agent
agent = Agent("openai:gpt-4", toolsets=[toolset])
```
The agent can now access files using virtual paths like `/input/data.txt` and `/output/results.json`.
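The mount-point model above can be sketched in plain Python. This is a simplified illustration, not the package's actual implementation; `MOUNTS` and `resolve_virtual` are hypothetical names:

```python
from pathlib import Path, PurePosixPath

# Hypothetical mount table mirroring the Quick Start config:
# virtual mount point -> (host directory, mode)
MOUNTS = {
    "/input": (Path("./data/input"), "ro"),
    "/output": (Path("./data/output"), "rw"),
}

def resolve_virtual(path: str) -> Path:
    """Map a virtual path like '/input/data.txt' onto a host path,
    rejecting anything outside the configured mounts."""
    virt = PurePosixPath(path)
    if ".." in virt.parts:
        raise ValueError(f"Cannot access {path!r}: '..' is not allowed")
    for mount_point, (host_root, _mode) in MOUNTS.items():
        try:
            rel = virt.relative_to(mount_point)  # raises if not under this mount
        except ValueError:
            continue
        return host_root / rel
    raise ValueError(
        f"Cannot access {path!r}: path is outside sandbox. "
        f"Known mounts: {', '.join(MOUNTS)}"
    )
```

The key design point is that the agent never sees host paths: every tool call goes through a translation step that can refuse, which is what makes the mount table a security boundary rather than a naming convenience.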
## Simple Usage

For simple cases, use the factory method:

```python
from pydantic_ai import Agent
from pydantic_ai_filesystem_sandbox import FileSystemToolset

# Single directory with read-write access
toolset = FileSystemToolset.create_default("./data", mode="rw")
agent = Agent("openai:gpt-4", toolsets=[toolset])
```
## Configuration

See the API Reference for complete details.
### Mount Options

| Option | Type | Default | Description |
|---|---|---|---|
| `host_path` | `Path` | required | Host directory to mount |
| `mount_point` | `str` | required | Virtual path (e.g., `"/docs"`, `"/data"`) |
| `mode` | `"ro" \| "rw"` | `"ro"` | Access mode |
| `suffixes` | `list[str] \| None` | `None` | Allowed file extensions (`None` = all) |
| `max_file_bytes` | `int \| None` | `None` | Maximum file size limit |
| `write_approval` | `bool` | `True` | Require approval for writes |
| `read_approval` | `bool` | `False` | Require approval for reads |
### Example Configuration

```python
from pydantic_ai_filesystem_sandbox import Sandbox, SandboxConfig, Mount

config = SandboxConfig(mounts=[
    # Read-only input - any file type
    Mount(
        host_path="./data/input",
        mount_point="/input",
        mode="ro",
    ),
    # Read-write output - only markdown and text
    Mount(
        host_path="./data/output",
        mount_point="/output",
        mode="rw",
        suffixes=[".md", ".txt"],
        max_file_bytes=1_000_000,  # 1MB limit
    ),
    # Config files - read-only, requires approval
    Mount(
        host_path="./config",
        mount_point="/config",
        mode="ro",
        read_approval=True,
    ),
])
```
### Single Root Mount

For projects where you want the entire working directory exposed as the virtual root `/`:

```python
from pydantic_ai_filesystem_sandbox import Sandbox, SandboxConfig, Mount

config = SandboxConfig(mounts=[
    Mount(host_path=".", mount_point="/", mode="rw"),
])

sandbox = Sandbox(config)
sandbox.resolve("/src/main.py")  # -> <cwd>/src/main.py
```
### Deriving Child Sandboxes

Use `Sandbox.derive()` to create a restricted child sandbox. By default the child has no access; you must explicitly allow paths.

```python
parent = Sandbox(config)

# Empty child (secure by default)
child = parent.derive()

# Allow read-only access to a subtree
reader = parent.derive(allow_read="/output/reports")

# Allow read/write access to a subtree
writer = parent.derive(allow_write="/output/reports")
```
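The deny-by-default behavior can be modeled as a prefix check over allowed roots. This is a toy sketch with made-up names (`ToySandbox`), not the real class:

```python
from dataclasses import dataclass, field

@dataclass
class ToySandbox:
    """Minimal model of derive(): a child starts with no access
    and only gains the subtrees explicitly granted to it."""
    read_roots: list = field(default_factory=list)
    write_roots: list = field(default_factory=list)

    def derive(self, allow_read=None, allow_write=None):
        child = ToySandbox()
        # A grant only takes effect if the parent itself holds it,
        # so a child can never exceed its parent's permissions.
        if allow_write and self.can_write(allow_write):
            child.write_roots.append(allow_write)
            child.read_roots.append(allow_write)  # writable implies readable
        if allow_read and self.can_read(allow_read):
            child.read_roots.append(allow_read)
        return child

    def can_read(self, path):
        return any(path == r or path.startswith(r + "/") for r in self.read_roots)

    def can_write(self, path):
        return any(path == r or path.startswith(r + "/") for r in self.write_roots)

parent = ToySandbox(read_roots=["/input", "/output"], write_roots=["/output"])
child = parent.derive()                              # no access at all
reader = parent.derive(allow_read="/output/reports")
writer = parent.derive(allow_write="/output/reports")
```

Starting children empty means forgetting a grant fails closed, which is the safer default for agent code.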
## Available Tools

The toolset provides seven tools to the agent.
### read_file

Read a text file from the sandbox.

Note: `read_file` currently reads the entire file into memory (even when using `max_chars`); use `max_file_bytes` to bound file size.

Path format: `/mount/path` (e.g., `/docs/file.txt`)

Parameters:
- `path: str` (required)
- `max_chars: int` (default: 20,000)
- `offset: int` (default: 0)
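The `offset`/`max_chars` semantics can be sketched as a slice over the loaded content. This illustrates the documented behavior rather than the tool's source (the whole file is loaded before slicing, which is why `max_file_bytes` matters); `read_chunk` is a hypothetical helper:

```python
def read_chunk(text: str, max_chars: int = 20_000, offset: int = 0) -> dict:
    """Slice file content the way read_file's parameters describe:
    start at `offset`, return at most `max_chars` characters, and
    report whether more content remains after this chunk."""
    chunk = text[offset : offset + max_chars]
    return {
        "content": chunk,
        "offset": offset,
        "chars_read": len(chunk),
        "total_chars": len(text),
        "truncated": offset + len(chunk) < len(text),
    }
```

An agent can keep requesting chunks, advancing `offset` by `chars_read`, until `truncated` is false.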
### write_file

Write a text file to the sandbox (requires `mode="rw"`). Parent directories are created automatically.

Path format: `/mount/path` (e.g., `/output/file.txt`)

Parameters:
- `path: str` (required)
- `content: str` (required)
### edit_file

Edit a file by replacing exact text (requires `mode="rw"`).

Note: `edit_file` reads the entire file into memory; use `max_file_bytes` to bound file size.

Path format: `/mount/path`

Parameters:
- `path: str` (required)
- `old_text: str` (required) - must match exactly and be unique
- `new_text: str` (required)
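The "must match exactly and be unique" contract can be sketched in a few lines; `apply_edit` is an illustrative stand-in, not the tool's implementation:

```python
def apply_edit(content: str, old_text: str, new_text: str) -> str:
    """Replace old_text with new_text, enforcing the edit_file contract:
    the target must appear in the file, and exactly once."""
    count = content.count(old_text)
    if count == 0:
        raise ValueError(f"text not found in file. Searched for: {old_text!r}")
    if count > 1:
        raise ValueError(
            f"text occurs {count} times; include more surrounding "
            "context so the match is unique"
        )
    return content.replace(old_text, new_text, 1)
```

Requiring a unique match forces the model to quote enough surrounding context that the edit lands exactly where intended.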
### delete_file

Delete a file from the sandbox (requires `mode="rw"`).

Path format: `/mount/path`

Parameters:
- `path: str` (required)
### move_file

Move or rename a file within the sandbox (requires `mode="rw"` for both source and destination). Parent directories are created automatically.

Path format: `/mount/path`

Parameters:
- `source: str` (required)
- `destination: str` (required)
### copy_file

Copy a file within the sandbox. The source can be read-only; the destination requires `mode="rw"`. Parent directories are created automatically.

Path format: `/mount/path`

Parameters:
- `source: str` (required)
- `destination: str` (required)
### list_files

List files matching a glob pattern.

Parameters:
- `path: str` (default: `"/"` for all mounts)
- `pattern: str` (default: `"**/*"`)
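Glob listing over a mount's host directory might look like the following sketch, built on `pathlib` (a hypothetical `list_host_files`, not the tool itself):

```python
import tempfile
from pathlib import Path

def list_host_files(root: Path, pattern: str = "**/*") -> list:
    """Glob files under a mount's host directory, returning sorted
    relative paths; directories are excluded from the listing."""
    return sorted(
        str(p.relative_to(root)).replace("\\", "/")
        for p in root.glob(pattern)
        if p.is_file()
    )

# Demo against a throwaway directory
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "notes").mkdir()
    (root / "notes" / "a.md").write_text("hi")
    (root / "b.txt").write_text("ok")
    everything = list_host_files(root)            # all files, recursively
    markdown = list_host_files(root, "**/*.md")   # only markdown
```

Returning paths relative to the mount keeps host directory names out of what the agent sees.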
## LLM-Friendly Errors

All errors include guidance on what *is* allowed:

```text
# PathNotInSandboxError
"Cannot access '/secret/file.txt': path is outside sandbox.
 Readable paths: /input, /output"

# PathNotWritableError
"Cannot write to '/input/file.txt': path is read-only.
 Writable paths: /output"

# SuffixNotAllowedError
"Cannot access '/output/data.json': suffix '.json' not allowed.
 Allowed suffixes: .md, .txt"

# FileTooLargeError
"Cannot read '/output/huge.txt': file too large (5,000,000 bytes).
 Maximum allowed: 1,000,000 bytes"

# EditError
"Cannot edit '/output/file.txt': text not found in file.
 Searched for: 'old text...'"
```
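The pattern behind these messages (state what failed, then what *is* allowed) can be sketched as a small helper; the names here are illustrative, not the package's exception classes:

```python
class SandboxError(Exception):
    """Base for errors that tell the model how to correct itself."""

def path_not_writable(path: str, writable_roots: list) -> SandboxError:
    # Pair the refusal with the allowed alternatives, so the model
    # can retry with a valid path instead of guessing blindly.
    return SandboxError(
        f"Cannot write to '{path}': path is read-only.\n"
        f"Writable paths: {', '.join(writable_roots)}"
    )
```

A bare "permission denied" gives the model nothing to work with; naming the writable roots turns the error into a usable next step.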
## Approval Integration

Works with `pydantic-ai-blocking-approval` for human-in-the-loop flows:

```python
from pydantic_ai import Agent
from pydantic_ai_filesystem_sandbox import (
    ApprovableFileSystemToolset, Sandbox, SandboxConfig, Mount,
)
from pydantic_ai_blocking_approval import ApprovalToolset, ApprovalController

# Create sandbox and toolset
config = SandboxConfig(mounts=[
    Mount(host_path="./output", mount_point="/output", mode="rw", write_approval=True),
])
sandbox = Sandbox(config)
toolset = ApprovableFileSystemToolset(sandbox)

# Wrap with approval
controller = ApprovalController(mode="interactive")
approved_toolset = ApprovalToolset(
    inner=toolset,
    approval_callback=controller.approval_callback,
    memory=controller.memory,
)

agent = Agent(..., toolsets=[approved_toolset])
```
`ApprovableFileSystemToolset` extends `FileSystemToolset` with `needs_approval()` (returning an `ApprovalResult`) and `get_approval_description()` for the approval UI.
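The gate this wiring produces can be modeled in isolation. The sketch below uses hypothetical names (`guarded_write`, in-memory stand-ins) to show the flow, not the real `ApprovalToolset` API:

```python
def guarded_write(path, content, needs_approval, approve, do_write):
    """Model of the human-in-the-loop gate: writes that the sandbox
    flags (write_approval=True) only proceed if the callback approves."""
    if needs_approval(path) and not approve(f"Write to {path}?"):
        return False          # denied: nothing is written
    do_write(path, content)
    return True

# Demo with an in-memory "filesystem"
store = {}
ok = guarded_write(
    "/output/report.md", "draft",
    needs_approval=lambda p: p.startswith("/output"),
    approve=lambda prompt: True,   # a human (or policy) would answer here
    do_write=store.__setitem__,
)
```

The important property is that the sandbox decides *whether* approval is needed while the callback decides *what to do about it*, so the policy and the UI stay independent.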
## Using the Sandbox Directly

The `Sandbox` class can be used independently for permission checking:

```python
sandbox = Sandbox(config)

# Check permissions
if sandbox.can_write("/output/file.txt"):
    resolved = sandbox.resolve("/output/file.txt")
    # ... perform operation

# Query boundaries
print(sandbox.readable_roots)  # ["/input", "/output"]
print(sandbox.writable_roots)  # ["/output"]

# Check approval requirements
sandbox.needs_write_approval("/output/file.txt")  # True/False
```
## ReadResult

The `read_file` tool returns a `ReadResult` object:

```python
class ReadResult(BaseModel):
    content: str      # The file content read
    truncated: bool   # True if more content exists
    total_chars: int  # Total file size in characters
    offset: int       # Starting position used
    chars_read: int   # Characters actually returned
```

This allows agents to handle large files by reading in chunks:

```python
# First read
result = toolset.read("/input/large.txt", max_chars=10000)

if result.truncated:
    # Continue reading
    result2 = toolset.read("/input/large.txt", max_chars=10000, offset=10000)
```
## API Reference

See `docs/api.md` for full API documentation.
## License

MIT
## File Details

### Source distribution: `pydantic_ai_filesystem_sandbox-0.9.0.tar.gz`

- Size: 205.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.5.21

| Algorithm | Hash digest |
|---|---|
| SHA256 | `d8d81f01d5c9330e4df24302055c989427e349634d553912dd16e9b45258b69e` |
| MD5 | `8210dd03a7b2475ae08da0bc873b760d` |
| BLAKE2b-256 | `5fcf6d044f1baed81f36a0e42580163432841351b96b4cd8309246ea66236ad8` |

### Built distribution: `pydantic_ai_filesystem_sandbox-0.9.0-py3-none-any.whl`

- Size: 20.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.5.21

| Algorithm | Hash digest |
|---|---|
| SHA256 | `285a86d06b07258b4e44f746759cb279c976643c17995047067182404fa41748` |
| MD5 | `8a7618133499378438603548b29beb98` |
| BLAKE2b-256 | `7942cddd0538e526aa209a3cda5342683f1114de11f458276ab92ea9a0f0feb7` |