
Sandboxy

Open-source framework for developing, testing, and benchmarking AI agents in simulated environments.

What is Sandboxy?


Sandboxy provides a local development environment for building and testing AI agent scenarios. Define scenarios in YAML, run them against any LLM, and evaluate the results.

Use cases:

  • Agent Development - Build and iterate on AI agent behaviors locally
  • Evaluation & Testing - Run scenarios against models and score their performance
  • Dataset Benchmarking - Test models against datasets of cases with parallel execution
  • Red-teaming - Test for prompt injection, policy violations, and edge cases

Quick Start

Installation

# Using uv (recommended)
pip install uv
uv pip install sandboxy

# Or with pip
pip install sandboxy

Set up API keys

# Add your API key (OpenRouter gives access to 400+ models)
echo "OPENROUTER_API_KEY=your-key-here" >> .env

Initialize a project

mkdir my-evals && cd my-evals
sandboxy init

This creates:

my-evals/
├── scenarios/     # Your scenario YAML files
├── tools/         # Custom tool definitions
├── agents/        # Agent configurations (optional)
├── datasets/      # Test case datasets
└── runs/          # Output from runs

Run a scenario

# Run with a specific model
sandboxy run scenarios/my_scenario.yml -m openai/gpt-4o

# Compare multiple models
sandboxy run scenarios/my_scenario.yml -m openai/gpt-4o -m anthropic/claude-3.5-sonnet

# Run against a dataset
sandboxy run scenarios/my_scenario.yml --dataset datasets/cases.yml -m openai/gpt-4o
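
The structure of a dataset file isn't shown above. As a sketch only, a `datasets/cases.yml` might pair per-case variables with an expected output, something like the following (all field names here are hypothetical, not Sandboxy's actual schema):

```yaml
# datasets/cases.yml -- illustrative structure, not the documented schema
cases:
  - id: capital-france
    vars:
      question: "What is the capital of France?"
    expected: "Paris"
  - id: capital-japan
    vars:
      question: "What is the capital of Japan?"
    expected: "Tokyo"
```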

Local development UI

# Start the local dev server with UI
sandboxy open

Opens a browser with a local UI for browsing scenarios, running them, and viewing results.

Writing Scenarios

Scenarios are YAML files that define agent interactions. Sandboxy supports two modes:

Single-turn mode

Use prompt: for simple request/response scenarios without tool use:

id: simple-qa
name: "Simple Q&A"

system_prompt: |
  You are a helpful assistant.

prompt: |
  What is the capital of France?

evaluation:
  max_score: 100
  goals:
    - id: correct_answer
      name: "Correct Answer"
      points: 100
      detection:
        type: agent_contains
        patterns:
          - "Paris"
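
The `agent_contains` detection above is presumably a substring match over the agent's reply. A minimal sketch of that idea (an assumption about the semantics, not Sandboxy's actual implementation):

```python
def agent_contains(reply: str, patterns: list[str]) -> bool:
    """Return True if any pattern occurs in the agent's reply.

    Illustrative only; case-insensitive matching is an assumption.
    """
    reply_lower = reply.lower()
    return any(p.lower() in reply_lower for p in patterns)
```

Under this reading, a reply of "The capital of France is Paris." would satisfy the `correct_answer` goal, while a reply that never mentions Paris would not.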

Agentic mode

Use steps: for multi-turn scenarios with tool support:

id: customer-support
name: "Customer Support Test"
description: "Test how an agent handles a refund request"

system_prompt: |
  You are a customer support agent for TechCo.
  Be helpful but follow company policy.

steps:
  - id: user_request
    action: inject_user
    params:
      content: "I want a refund for my purchase. Order #12345."
  - id: agent_response
    action: await_agent

# Tools are only available in agentic mode (with steps)
tools:
  lookup_order:
    description: "Look up order details"
    actions:
      call:
        params:
          order_id:
            type: string
            required: true
        returns: "Order details for {{order_id}}"

evaluation:
  max_score: 100
  goals:
    - id: acknowledged_request
      name: "Acknowledged Request"
      description: "Agent acknowledged the refund request"
      points: 50
      detection:
        type: agent_contains
        patterns:
          - "refund"

    - id: looked_up_order
      name: "Looked Up Order"
      description: "Agent used the lookup tool"
      points: 50
      detection:
        type: tool_called
        tool: lookup_order
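
Scoring in these scenarios appears to be additive: each goal whose detection fires contributes its `points`, bounded by `max_score`. A rough sketch of that interpretation (an assumption about Sandboxy's scorer, for illustration):

```python
def score_run(goals: list[dict], fired: set[str], max_score: int) -> int:
    """Sum the points of goals whose detection fired, capped at max_score.

    `fired` holds the ids of goals whose detection matched. The additive,
    capped model here is an assumption, not the documented behavior.
    """
    total = sum(g["points"] for g in goals if g["id"] in fired)
    return min(total, max_score)
```

With the two goals above, an agent that only acknowledges the refund would score 50, and one that also calls `lookup_order` would score the full 100.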

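The `returns: "Order details for {{order_id}}"` line in the tool definition suggests simple placeholder substitution from the call's parameters. A guess at those templating semantics (illustrative, not Sandboxy's code):

```python
import re

def render_return(template: str, args: dict[str, str]) -> str:
    """Fill {{name}} placeholders in a tool's `returns` template.

    Unknown placeholders are left untouched; that fallback is an assumption.
    """
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: args.get(m.group(1), m.group(0)),
        template,
    )
```

For example, a call with `order_id="12345"` would yield "Order details for 12345" as the tool's return value.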
CLI Reference

# Run scenarios
sandboxy run <file.yml> -m <model>           # Run a scenario
sandboxy run <file.yml> -m <model> --runs 5  # Multiple runs
sandboxy run <file.yml> --dataset <data.yml> # Run against dataset

# Development
sandboxy open                    # Start local UI
sandboxy serve                   # API server only (no browser)
sandboxy init                    # Initialize project structure

# Scaffolding
sandboxy new scenario <name>     # Create scenario template
sandboxy new tool <name>         # Create tool library template

# Information
sandboxy list-models             # List available models
sandboxy list-tools              # List available tool libraries
sandboxy info <file.yml>         # Show scenario details

# MCP Integration
sandboxy mcp inspect <command>   # Inspect MCP server tools
sandboxy mcp list                # List known MCP servers

Models

Sandboxy supports 400+ models via OpenRouter, plus direct provider access:

# OpenRouter models (recommended)
sandboxy run scenario.yml -m openai/gpt-4o
sandboxy run scenario.yml -m anthropic/claude-3.5-sonnet
sandboxy run scenario.yml -m google/gemini-pro
sandboxy run scenario.yml -m meta-llama/llama-3-70b

# List available models
sandboxy list-models
sandboxy list-models --search claude
sandboxy list-models --free

MLflow Integration

Export scenario run results to MLflow for experiment tracking and model comparison.

# Install with MLflow support
pip install sandboxy[mlflow]

# Export run to MLflow
sandboxy run scenarios/test.yml -m openai/gpt-4o --mlflow-export

# Custom experiment name
sandboxy run scenarios/test.yml -m openai/gpt-4o --mlflow-export --mlflow-experiment "my-evals"

Or enable in scenario YAML:

id: my-scenario
name: "My Test"

mlflow:
  enabled: true
  experiment: "agent-evals"
  tags:
    team: "support"

system_prompt: |
  ...

Set the MLFLOW_TRACKING_URI environment variable to choose which MLflow tracking server results are sent to.

Configuration

Environment variables (in ~/.sandboxy/.env or project .env):

Variable             Description
OPENROUTER_API_KEY   OpenRouter API key (400+ models)
OPENAI_API_KEY       Direct OpenAI access
ANTHROPIC_API_KEY    Direct Anthropic access
MLFLOW_TRACKING_URI  MLflow tracking server URI

Project Structure

sandboxy/
├── sandboxy/           # Python package
│   ├── core/           # Runner, state management
│   ├── scenarios/      # Unified scenario runner
│   ├── datasets/       # Dataset benchmarking
│   ├── agents/         # Agent loading and execution
│   ├── tools/          # Tool loading (YAML tools)
│   ├── providers/      # LLM provider integrations
│   ├── api/            # Local dev API server
│   ├── cli/            # Command-line interface
│   ├── local/          # Local project context
│   └── mcp/            # MCP client integration
└── local-ui/           # Local development UI (React)

Contributing

Contributions welcome! See CONTRIBUTING.md.

License

Apache 2.0 - see LICENSE.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

sandboxy-0.0.5.tar.gz (439.3 kB view details)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

sandboxy-0.0.5-py3-none-any.whl (272.5 kB view details)

Uploaded Python 3

File details

Details for the file sandboxy-0.0.5.tar.gz.

File metadata

  • Download URL: sandboxy-0.0.5.tar.gz
  • Upload date:
  • Size: 439.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for sandboxy-0.0.5.tar.gz
Algorithm Hash digest
SHA256 513830b07a91a963c564f064229759d319bd3a672fb4c3f81eb3b6934939a014
MD5 9ea8ed13c405e54da59b3e4ef39ff49a
BLAKE2b-256 1232024f9f9190d8ebb2b8912972e2ced68319cf0aba6ca2ff67abb9a6adbd90

See more details on using hashes here.

Provenance

The following attestation bundles were made for sandboxy-0.0.5.tar.gz:

Publisher: publish.yml on sandboxy-ai/sandboxy

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file sandboxy-0.0.5-py3-none-any.whl.

File metadata

  • Download URL: sandboxy-0.0.5-py3-none-any.whl
  • Upload date:
  • Size: 272.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for sandboxy-0.0.5-py3-none-any.whl
Algorithm Hash digest
SHA256 087e67820d57f68355c2ac6f0572fd8e47c5eca34d917d8c3f97753558852f22
MD5 9eb39b71ed30ed2c9f9b107a4e53ad4a
BLAKE2b-256 95cbd73d9ada64342f0cc5f392809a57df2747b3655afef734cf30b246a2c633

See more details on using hashes here.

Provenance

The following attestation bundles were made for sandboxy-0.0.5-py3-none-any.whl:

Publisher: publish.yml on sandboxy-ai/sandboxy

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

Supported by

AWS Cloud computing and Security Sponsor Datadog Monitoring Depot Continuous Integration Fastly CDN Google Download Analytics Pingdom Monitoring Sentry Error logging StatusPage Status page