Open-source agent simulation and benchmarking platform

Sandboxy

Open-source framework for developing, testing, and benchmarking AI agents in simulated environments.

What is Sandboxy?

Sandboxy provides a local development environment for building and testing AI agent scenarios. Define scenarios in YAML, run them against any LLM, and evaluate the results.

Use cases:

  • Agent Development - Build and iterate on AI agent behaviors locally
  • Evaluation & Testing - Run scenarios against models and score their performance
  • Dataset Benchmarking - Run models over datasets of test cases with parallel execution
  • Red-teaming - Test for prompt injection, policy violations, and edge cases

Quick Start

Installation

# Using uv (recommended)
pip install uv
uv pip install sandboxy

# Or with pip
pip install sandboxy

Set up API keys

# Add your API key (OpenRouter gives access to 400+ models)
echo "OPENROUTER_API_KEY=your-key-here" >> .env

Initialize a project

mkdir my-evals && cd my-evals
sandboxy init

This creates:

my-evals/
├── scenarios/     # Your scenario YAML files
├── tools/         # Custom tool definitions
├── agents/        # Agent configurations (optional)
├── datasets/      # Test case datasets
└── runs/          # Output from runs

Run a scenario

# Run with a specific model
sandboxy run scenarios/my_scenario.yml -m openai/gpt-4o

# Compare multiple models
sandboxy run scenarios/my_scenario.yml -m openai/gpt-4o -m anthropic/claude-3.5-sonnet

# Run against a dataset
sandboxy run scenarios/my_scenario.yml --dataset datasets/cases.yml -m openai/gpt-4o
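
Dataset runs execute many test cases with parallel execution. Sandboxy's internals aren't documented here, but conceptually a dataset run is a fan-out over cases; a minimal sketch in Python, where `run_case` is a hypothetical stand-in for scoring one case:

```python
from concurrent.futures import ThreadPoolExecutor

def run_case(case: dict) -> dict:
    """Hypothetical stand-in for running one dataset case through a model."""
    # In a real run this would execute the scenario with the case's inputs;
    # here we return a fake score just to show the fan-out shape.
    return {"id": case["id"], "score": 100 if case.get("expect_pass") else 0}

def run_dataset(cases: list[dict], max_workers: int = 4) -> list[dict]:
    # Fan each case out to a worker thread; pool.map preserves input order.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_case, cases))

results = run_dataset([
    {"id": "case-1", "expect_pass": True},
    {"id": "case-2", "expect_pass": False},
])
```

This is a conceptual sketch only; `run_case` and `run_dataset` are illustrative names, not part of the Sandboxy API.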

Local development UI

# Start the local dev server with UI
sandboxy open

Opens a browser with a local UI for browsing scenarios, running them, and viewing results.

Writing Scenarios

Scenarios are YAML files that define agent interactions. Sandboxy supports two modes:

Single-turn mode

Use prompt: for simple request/response scenarios without tool use:

id: simple-qa
name: "Simple Q&A"

system_prompt: |
  You are a helpful assistant.

prompt: |
  What is the capital of France?

evaluation:
  max_score: 100
  goals:
    - id: correct_answer
      name: "Correct Answer"
      points: 100
      detection:
        type: agent_contains
        patterns:
          - "Paris"

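The `agent_contains` detection above awards a goal's points when one of its patterns appears in the agent's output. Sandboxy's actual matcher may differ (case handling, regex support, partial credit); the sketch below assumes simple case-insensitive substring matching:

```python
def score_goals(agent_output: str, goals: list[dict]) -> int:
    """Sum points for each goal whose patterns match the agent output.

    Assumes case-insensitive substring matching for `agent_contains`.
    """
    total = 0
    text = agent_output.lower()
    for goal in goals:
        detection = goal["detection"]
        if detection["type"] == "agent_contains":
            if any(p.lower() in text for p in detection["patterns"]):
                total += goal["points"]
    return total

# The single goal from the scenario above.
goals = [{
    "id": "correct_answer",
    "points": 100,
    "detection": {"type": "agent_contains", "patterns": ["Paris"]},
}]
print(score_goals("The capital of France is Paris.", goals))  # 100
```
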
Agentic mode

Use steps: for multi-turn scenarios with tool support:

id: customer-support
name: "Customer Support Test"
description: "Test how an agent handles a refund request"

system_prompt: |
  You are a customer support agent for TechCo.
  Be helpful but follow company policy.

steps:
  - id: user_request
    action: inject_user
    params:
      content: "I want a refund for my purchase. Order #12345."
  - id: agent_response
    action: await_agent

# Tools are only available in agentic mode (with steps)
tools:
  lookup_order:
    description: "Look up order details"
    actions:
      call:
        params:
          order_id:
            type: string
            required: true
        returns: "Order details for {{order_id}}"

evaluation:
  max_score: 100
  goals:
    - id: acknowledged_request
      name: "Acknowledged Request"
      description: "Agent acknowledged the refund request"
      points: 50
      detection:
        type: agent_contains
        patterns:
          - "refund"

    - id: looked_up_order
      name: "Looked Up Order"
      description: "Agent used the lookup tool"
      points: 50
      detection:
        type: tool_called
        tool: lookup_order
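
The tool's `returns` string templates call parameters back into the result (`{{order_id}}` above). Assuming plain `{{name}}` placeholder substitution, a stub renderer might look like this (`render_returns` is an illustrative name, not Sandboxy's):

```python
import re

def render_returns(template: str, params: dict) -> str:
    """Replace each {{name}} placeholder with the matching call parameter."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(params[m.group(1)]), template)

# The lookup_order tool from the scenario above.
template = "Order details for {{order_id}}"
print(render_returns(template, {"order_id": "12345"}))  # Order details for 12345
```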

CLI Reference

# Run scenarios
sandboxy run <file.yml> -m <model>           # Run a scenario
sandboxy run <file.yml> -m <model> --runs 5  # Multiple runs
sandboxy run <file.yml> --dataset <data.yml> # Run against dataset

# Development
sandboxy open                    # Start local UI
sandboxy serve                   # API server only (no browser)
sandboxy init                    # Initialize project structure

# Scaffolding
sandboxy new scenario <name>     # Create scenario template
sandboxy new tool <name>         # Create tool library template

# Information
sandboxy list-models             # List available models
sandboxy list-tools              # List available tool libraries
sandboxy info <file.yml>         # Show scenario details

# MCP Integration
sandboxy mcp inspect <command>   # Inspect MCP server tools
sandboxy mcp list                # List known MCP servers

Models

Sandboxy supports 400+ models via OpenRouter, plus direct provider access:

# OpenRouter models (recommended)
sandboxy run scenario.yml -m openai/gpt-4o
sandboxy run scenario.yml -m anthropic/claude-3.5-sonnet
sandboxy run scenario.yml -m google/gemini-pro
sandboxy run scenario.yml -m meta-llama/llama-3-70b

# List available models
sandboxy list-models
sandboxy list-models --search claude
sandboxy list-models --free

MLflow Integration

Export scenario run results to MLflow for experiment tracking and model comparison.

# Install with MLflow support
pip install sandboxy[mlflow]

# Export a run to MLflow
sandboxy run scenarios/test.yml -m openai/gpt-4o --mlflow-export

# Custom experiment name
sandboxy run scenarios/test.yml -m openai/gpt-4o --mlflow-export --mlflow-experiment "my-evals"

Or enable in scenario YAML:

id: my-scenario
name: "My Test"

mlflow:
  enabled: true
  experiment: "agent-evals"
  tags:
    team: "support"

system_prompt: |
  ...

Set the MLFLOW_TRACKING_URI environment variable to point exports at your MLflow tracking server.

Configuration

Environment variables (in ~/.sandboxy/.env or project .env):

Variable              Description
OPENROUTER_API_KEY    OpenRouter API key (400+ models)
OPENAI_API_KEY        Direct OpenAI access
ANTHROPIC_API_KEY     Direct Anthropic access
MLFLOW_TRACKING_URI   MLflow tracking server URI
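
Variables can live in a global `~/.sandboxy/.env` or a project-local `.env`. The precedence between the two isn't spelled out above; the sketch below assumes project-local values win, using a minimal `KEY=value` parser rather than a dotenv library:

```python
from pathlib import Path

def parse_env(path: Path) -> dict:
    """Parse simple KEY=value lines, skipping blanks and # comments."""
    env = {}
    if not path.exists():
        return env
    for line in path.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

def load_config(global_env: Path, project_env: Path) -> dict:
    # Later updates override earlier ones, so project-local values win.
    config = parse_env(global_env)
    config.update(parse_env(project_env))
    return config
```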

Project Structure

sandboxy/
├── sandboxy/           # Python package
│   ├── core/           # Runner, state management
│   ├── scenarios/      # Unified scenario runner
│   ├── datasets/       # Dataset benchmarking
│   ├── agents/         # Agent loading and execution
│   ├── tools/          # Tool loading (YAML tools)
│   ├── providers/      # LLM provider integrations
│   ├── api/            # Local dev API server
│   ├── cli/            # Command-line interface
│   ├── local/          # Local project context
│   └── mcp/            # MCP client integration
└── local-ui/           # Local development UI (React)

Contributing

Contributions welcome! See CONTRIBUTING.md.

License

Apache 2.0 - see LICENSE.
