# pydantic-ai-filesystem-sandbox

Filesystem sandbox toolset for PydanticAI agents with LLM-friendly errors.
## Why This Package?

When building LLM agents that interact with the filesystem, you need:

- **Sandboxing** - restrict which directories the agent can access
- **Read/Write Control** - fine-grained permissions per path
- **LLM-Friendly Errors** - error messages that help the LLM correct its behavior
- **Approval Integration** - works with human-in-the-loop approval flows

This package provides a `FileSandboxImpl` toolset that implements all of these as a PydanticAI `AbstractToolset`.
## Installation

```bash
pip install pydantic-ai-filesystem-sandbox
```
## Quick Start

```python
from pydantic_ai import Agent
from pydantic_ai_filesystem_sandbox import (
    FileSandboxImpl,
    FileSandboxConfig,
    PathConfig,
)

# Configure sandbox paths
config = FileSandboxConfig(paths={
    "input": PathConfig(root="./data/input", mode="ro"),    # Read-only
    "output": PathConfig(root="./data/output", mode="rw"),  # Read-write
})

# Create the sandbox toolset
sandbox = FileSandboxImpl(config)

# Use with a PydanticAI agent
agent = Agent("openai:gpt-4", toolsets=[sandbox])
```
## Configuration

### PathConfig Options

| Option | Type | Default | Description |
|---|---|---|---|
| `root` | `str` | required | Root directory path |
| `mode` | `"ro" \| "rw"` | `"ro"` | Access mode |
| `suffixes` | `list[str] \| None` | `None` | Allowed file extensions (`None` = all) |
| `max_file_bytes` | `int \| None` | `None` | Maximum file size limit |
| `write_approval` | `bool` | `True` | Require approval for writes |
| `read_approval` | `bool` | `False` | Require approval for reads |
### Example Configuration

```python
config = FileSandboxConfig(paths={
    # Read-only input - any file type
    "input": PathConfig(
        root="./data/input",
        mode="ro",
    ),
    # Read-write output - only markdown and text
    "output": PathConfig(
        root="./data/output",
        mode="rw",
        suffixes=[".md", ".txt"],
        max_file_bytes=1_000_000,  # 1 MB limit
    ),
    # Config files - read-only, requires approval
    "config": PathConfig(
        root="./config",
        mode="ro",
        read_approval=True,
    ),
})
```
## Available Tools

The sandbox provides three tools to the agent:

### read_file

Read a text file from the sandbox. Path format: `sandbox_name/relative/path`.

Parameters:

- `path`: `str` (required)
- `max_chars`: `int` (default: 20,000)
- `offset`: `int` (default: 0)

### write_file

Write a text file to the sandbox (requires `mode="rw"`). Path format: `sandbox_name/relative/path`.

Parameters:

- `path`: `str` (required)
- `content`: `str` (required)

### list_files

List files matching a glob pattern.

Parameters:

- `path`: `str` (default: `"."` for all sandboxes)
- `pattern`: `str` (default: `"**/*"`)
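The sandboxing behind these tools amounts to resolving the requested path against the configured root and rejecting anything that escapes it. A minimal sketch of that check (the helper name and logic are illustrative, not the package's actual internals):

```python
from pathlib import Path

def resolve_in_sandbox(root: str, relative: str) -> Path:
    """Resolve `relative` under `root`, rejecting traversal outside the root.

    Illustrative only - the real package's implementation may differ.
    """
    root_path = Path(root).resolve()
    candidate = (root_path / relative).resolve()
    # A path like "../../etc/passwd" resolves outside the root and is rejected.
    if not candidate.is_relative_to(root_path):
        raise PermissionError(f"{relative!r} escapes the sandbox root")
    return candidate
```

Note that `Path.is_relative_to` requires Python 3.9+; resolving both sides first defeats `..` traversal and symlink-free aliasing.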
## LLM-Friendly Errors

All errors include guidance on what *is* allowed:

```python
# PathNotInSandboxError
"Cannot access 'secret/file.txt': path is outside sandbox.
Readable paths: input, output"

# PathNotWritableError
"Cannot write to 'input/file.txt': path is read-only.
Writable paths: output"

# SuffixNotAllowedError
"Cannot access 'output/data.json': suffix '.json' not allowed.
Allowed suffixes: .md, .txt"

# FileTooLargeError
"Cannot read 'output/huge.txt': file too large (5,000,000 bytes).
Maximum allowed: 1,000,000 bytes"
```
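The pattern behind these messages is simple: each exception knows both what was attempted and what the configuration allows, and renders them into one corrective string. A hedged sketch of that pattern (the class and field names are illustrative, not the package's actual error classes):

```python
class PathNotWritable(Exception):
    """Illustrative error carrying corrective guidance for the LLM."""

    def __init__(self, path: str, writable: list[str]):
        self.path = path
        self.writable = writable
        # The message states the failure AND the allowed alternatives,
        # so the model can retry with a valid path instead of guessing.
        super().__init__(
            f"Cannot write to '{path}': path is read-only.\n"
            f"Writable paths: {', '.join(writable)}"
        )
```

Keeping the allowed set in the message is what makes the error "LLM-friendly": the model's next tool call can be corrected without another round trip of trial and error.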
## Approval Integration

Works with `pydantic-ai-blocking-approval` for human-in-the-loop control:

```python
from pydantic_ai_filesystem_sandbox import FileSandboxImpl, FileSandboxConfig, PathConfig
from pydantic_ai_blocking_approval import ApprovalToolset, ApprovalController

# Create sandbox
config = FileSandboxConfig(paths={
    "output": PathConfig(root="./output", mode="rw", write_approval=True),
})
sandbox = FileSandboxImpl(config)

# Wrap with approval
controller = ApprovalController(mode="interactive", approval_callback=my_prompt_fn)
approved_sandbox = ApprovalToolset(
    inner=sandbox,
    prompt_fn=controller.approval_callback,
    memory=controller.memory,
    require_approval=["write_file", "read_file"],
)

agent = Agent(..., toolsets=[approved_sandbox])
```

The sandbox implements `needs_approval()` and `present_for_approval()` for fine-grained approval control.
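The `my_prompt_fn` above is user-supplied. Its exact signature depends on `pydantic-ai-blocking-approval` and is not shown here; purely as an assumption, a callback receiving the tool name and its arguments and returning a boolean might look like:

```python
def my_prompt_fn(tool_name: str, tool_args: dict) -> bool:
    """Hypothetical approval callback: auto-approve reads, prompt for writes.

    The (tool_name, tool_args) -> bool signature is an assumption for
    illustration, not the library's documented interface.
    """
    if tool_name == "read_file":
        return True  # reads are low-risk, approve automatically
    answer = input(f"Allow {tool_name} on {tool_args.get('path')}? [y/N] ")
    return answer.strip().lower() == "y"
```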
## ReadResult

The `read_file` tool returns a `ReadResult` object:

```python
class ReadResult(BaseModel):
    content: str      # The file content read
    truncated: bool   # True if more content exists
    total_chars: int  # Total file size in characters
    offset: int       # Starting position used
    chars_read: int   # Characters actually returned
```

This allows agents to handle large files by reading in chunks:

```python
# First read
result = sandbox.read("input/large.txt", max_chars=10000)
if result.truncated:
    # Continue reading from where the first read stopped
    result2 = sandbox.read("input/large.txt", max_chars=10000, offset=10000)
```
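Reading an entire large file then becomes a loop over this metadata: advance `offset` by `chars_read` until `truncated` is false. A runnable sketch against a stand-in reader (the dataclass and `fake_read` merely simulate the `ReadResult` fields over an in-memory string; they are not the package's code):

```python
from dataclasses import dataclass

@dataclass
class FakeReadResult:
    # Mirrors the ReadResult fields above; stand-in for demonstrating the loop.
    content: str
    truncated: bool
    total_chars: int
    offset: int
    chars_read: int

def fake_read(text: str, max_chars: int = 20_000, offset: int = 0) -> FakeReadResult:
    """Stand-in for sandbox reads, slicing an in-memory string."""
    chunk = text[offset:offset + max_chars]
    return FakeReadResult(
        content=chunk,
        truncated=offset + len(chunk) < len(text),
        total_chars=len(text),
        offset=offset,
        chars_read=len(chunk),
    )

def read_all(text: str, max_chars: int) -> str:
    """Accumulate chunks until the reader reports no more content."""
    parts, offset = [], 0
    while True:
        result = fake_read(text, max_chars=max_chars, offset=offset)
        parts.append(result.content)
        if not result.truncated:
            return "".join(parts)
        offset += result.chars_read
```

Advancing by `chars_read` rather than by `max_chars` keeps the loop correct even if a read returns fewer characters than requested.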
## API Reference

### Configuration

- `FileSandboxConfig` - top-level configuration with named paths
- `PathConfig` - configuration for a single sandbox path

### Toolset

- `FileSandboxImpl` - PydanticAI `AbstractToolset` implementation

### Errors

- `FileSandboxError` - base class for all sandbox errors
- `PathNotInSandboxError` - path outside sandbox boundaries
- `PathNotWritableError` - write to a read-only path
- `SuffixNotAllowedError` - file extension not allowed
- `FileTooLargeError` - file exceeds size limit

### Types

- `ReadResult` - result of read operations with metadata

## License

MIT