
Open-source agent simulation and benchmarking platform


Sandboxy

Open-source framework for developing, testing, and benchmarking AI agents in simulated environments.

What is Sandboxy?


Sandboxy provides a local development environment for building and testing AI agent scenarios. Define scenarios in YAML, run them against any LLM, and evaluate the results.

Use cases:

  • Agent Development - Build and iterate on AI agent behaviors locally
  • Evaluation & Testing - Run scenarios against models and score their performance
  • Dataset Benchmarking - Test models against datasets of cases with parallel execution
  • Red-teaming - Test for prompt injection, policy violations, and edge cases

Quick Start

Installation

# Using uv (recommended)
pip install uv
uv pip install sandboxy

# Or with pip
pip install sandboxy

Set up API keys

# Add your API key (OpenRouter gives access to 400+ models)
echo "OPENROUTER_API_KEY=your-key-here" >> .env

Initialize a project

mkdir my-evals && cd my-evals
sandboxy init

This creates:

my-evals/
├── scenarios/     # Your scenario YAML files
├── tools/         # Custom tool definitions
├── agents/        # Agent configurations (optional)
├── datasets/      # Test case datasets
└── runs/          # Output from runs

Run a scenario

# Run with a specific model
sandboxy run scenarios/my_scenario.yml -m openai/gpt-4o

# Compare multiple models
sandboxy run scenarios/my_scenario.yml -m openai/gpt-4o -m anthropic/claude-3.5-sonnet

# Run against a dataset
sandboxy run scenarios/my_scenario.yml --dataset datasets/cases.yml -m openai/gpt-4o
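
The dataset file's schema isn't documented on this page; as a rough sketch, a dataset is a collection of cases whose fields parameterize the scenario. The `cases`/`vars` field names below are assumptions for illustration, not the documented schema — check the `sandboxy new` templates or project docs for the real format:

```yaml
# datasets/cases.yml -- hypothetical sketch; actual schema may differ
cases:
  - id: refund-simple
    vars:
      order_id: "12345"
  - id: refund-missing-order
    vars:
      order_id: "99999"
```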

Local development UI

# Start the local dev server with UI
sandboxy open

Opens a browser with a local UI for browsing scenarios, running them, and viewing results.

Writing Scenarios

Scenarios are YAML files that define agent interactions. Sandboxy supports two modes:

Single-turn mode

Use prompt: for simple request/response scenarios without tool use:

id: simple-qa
name: "Simple Q&A"

system_prompt: |
  You are a helpful assistant.

prompt: |
  What is the capital of France?

evaluation:
  max_score: 100
  goals:
    - id: correct_answer
      name: "Correct Answer"
      points: 100
      detection:
        type: agent_contains
        patterns:
          - "Paris"

Agentic mode

Use steps: for multi-turn scenarios with tool support:

id: customer-support
name: "Customer Support Test"
description: "Test how an agent handles a refund request"

system_prompt: |
  You are a customer support agent for TechCo.
  Be helpful but follow company policy.

steps:
  - id: user_request
    action: inject_user
    params:
      content: "I want a refund for my purchase. Order #12345."
  - id: agent_response
    action: await_agent

# Tools are only available in agentic mode (with steps)
tools:
  lookup_order:
    description: "Look up order details"
    actions:
      call:
        params:
          order_id:
            type: string
            required: true
        returns: "Order details for {{order_id}}"

evaluation:
  max_score: 100
  goals:
    - id: acknowledged_request
      name: "Acknowledged Request"
      description: "Agent acknowledged the refund request"
      points: 50
      detection:
        type: agent_contains
        patterns:
          - "refund"

    - id: looked_up_order
      name: "Looked Up Order"
      description: "Agent used the lookup tool"
      points: 50
      detection:
        type: tool_called
        tool: lookup_order
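
Putting the two detection types together, the evaluation implied by this YAML walks the run transcript, checks each goal, and sums the points. The sketch below is a hypothetical reading of those semantics (the transcript event shape and field names are assumptions, not Sandboxy's actual data model):

```python
def evaluate(transcript: list[dict], goals: list[dict]) -> int:
    """Score a run transcript against goal definitions.

    Hypothetical sketch of the semantics implied by the YAML above;
    Sandboxy's actual evaluator and event format may differ.
    """
    agent_text = " ".join(
        e["content"] for e in transcript if e["type"] == "agent_message"
    )
    tools_used = {e["tool"] for e in transcript if e["type"] == "tool_call"}

    score = 0
    for goal in goals:
        det = goal["detection"]
        if det["type"] == "agent_contains":
            if any(p in agent_text for p in det["patterns"]):
                score += goal["points"]
        elif det["type"] == "tool_called":
            if det["tool"] in tools_used:
                score += goal["points"]
    return score

transcript = [
    {"type": "tool_call", "tool": "lookup_order"},
    {"type": "agent_message", "content": "I can process your refund for order #12345."},
]
goals = [
    {"points": 50, "detection": {"type": "agent_contains", "patterns": ["refund"]}},
    {"points": 50, "detection": {"type": "tool_called", "tool": "lookup_order"}},
]
print(evaluate(transcript, goals))  # 100
```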

CLI Reference

# Run scenarios
sandboxy run <file.yml> -m <model>           # Run a scenario
sandboxy run <file.yml> -m <model> --runs 5  # Multiple runs
sandboxy run <file.yml> --dataset <data.yml> # Run against dataset

# Development
sandboxy open                    # Start local UI
sandboxy serve                   # API server only (no browser)
sandboxy init                    # Initialize project structure

# Scaffolding
sandboxy new scenario <name>     # Create scenario template
sandboxy new tool <name>         # Create tool library template

# Information
sandboxy list-models             # List available models
sandboxy list-tools              # List available tool libraries
sandboxy info <file.yml>         # Show scenario details

# MCP Integration
sandboxy mcp inspect <command>   # Inspect MCP server tools
sandboxy mcp list                # List known MCP servers

Models

Sandboxy supports 400+ models via OpenRouter, plus direct provider access:

# OpenRouter models (recommended)
sandboxy run scenario.yml -m openai/gpt-4o
sandboxy run scenario.yml -m anthropic/claude-3.5-sonnet
sandboxy run scenario.yml -m google/gemini-pro
sandboxy run scenario.yml -m meta-llama/llama-3-70b

# List available models
sandboxy list-models
sandboxy list-models --search claude
sandboxy list-models --free

MLflow Integration

Export scenario run results to MLflow for experiment tracking and model comparison.

# Install with MLflow support
pip install sandboxy[mlflow]

# Export run to MLflow
sandboxy run scenarios/test.yml -m openai/gpt-4o --mlflow-export

# Custom experiment name
sandboxy run scenarios/test.yml -m openai/gpt-4o --mlflow-export --mlflow-experiment "my-evals"

Or enable in scenario YAML:

id: my-scenario
name: "My Test"

mlflow:
  enabled: true
  experiment: "agent-evals"
  tags:
    team: "support"

system_prompt: |
  ...

Set the MLFLOW_TRACKING_URI environment variable to point the exporter at your MLflow tracking server.

Configuration

Environment variables (in ~/.sandboxy/.env or project .env):

Variable             Description
OPENROUTER_API_KEY   OpenRouter API key (400+ models)
OPENAI_API_KEY       Direct OpenAI access
ANTHROPIC_API_KEY    Direct Anthropic access
MLFLOW_TRACKING_URI  MLflow tracking server URI
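
For example, a project-level `.env` enabling OpenRouter plus MLflow export might look like this (all values are placeholders):

```
# .env
OPENROUTER_API_KEY=your-openrouter-key
# Optional: direct provider access
OPENAI_API_KEY=your-openai-key
ANTHROPIC_API_KEY=your-anthropic-key
# Optional: MLflow tracking server for --mlflow-export
MLFLOW_TRACKING_URI=http://localhost:5000
```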

Project Structure

sandboxy/
├── sandboxy/           # Python package
│   ├── core/           # Runner, state management
│   ├── scenarios/      # Unified scenario runner
│   ├── datasets/       # Dataset benchmarking
│   ├── agents/         # Agent loading and execution
│   ├── tools/          # Tool loading (YAML tools)
│   ├── providers/      # LLM provider integrations
│   ├── api/            # Local dev API server
│   ├── cli/            # Command-line interface
│   ├── local/          # Local project context
│   └── mcp/            # MCP client integration
└── local-ui/           # Local development UI (React)

Contributing

Contributions welcome! See CONTRIBUTING.md.

License

Apache 2.0 - see LICENSE.

Download files


Source Distribution

sandboxy-0.0.8.tar.gz (523.4 kB)


Built Distribution


sandboxy-0.0.8-py3-none-any.whl (277.5 kB)


File details

Details for the file sandboxy-0.0.8.tar.gz.

File metadata

  • Download URL: sandboxy-0.0.8.tar.gz
  • Upload date:
  • Size: 523.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for sandboxy-0.0.8.tar.gz

Algorithm    Hash digest
SHA256       69c7eae66b0d1e2c4edf97e7351e397a24b85536c02d8e2ca8477b79a285772d
MD5          a8a30b4012ba243e9a2cdc94a2e32ede
BLAKE2b-256  df8fe5a77eec092eb0c26d16ed491b95c4f8360d9bdc7ad41b5c899f04786fb3


Provenance

The following attestation bundles were made for sandboxy-0.0.8.tar.gz:

Publisher: publish.yml on sandboxy-ai/sandboxy

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file sandboxy-0.0.8-py3-none-any.whl.

File metadata

  • Download URL: sandboxy-0.0.8-py3-none-any.whl
  • Upload date:
  • Size: 277.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for sandboxy-0.0.8-py3-none-any.whl

Algorithm    Hash digest
SHA256       aa522873ddf61be8d92d89791ebf4b5a154e48ef63c2a046efe8c1cff63100b6
MD5          9f630088ca58bee9c1648564c69718d5
BLAKE2b-256  d63449b08036131c6e2cecc370d25d13e031a237eb7c1748fd3994969192d0ae


Provenance

The following attestation bundles were made for sandboxy-0.0.8-py3-none-any.whl:

Publisher: publish.yml on sandboxy-ai/sandboxy

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
