
Firework 🎆

A high-level, local-only client library for managing Firecracker/Docker sandboxes. No authentication, no remote APIs – just pure local execution with E2B/Daytona ergonomics.


Features

  • Zero-config by default: Works out-of-the-box with sensible local defaults
  • Backend agnostic: Abstracts Firecracker, Docker, or custom runtimes
  • Async-first: All operations are non-blocking with streaming support
  • Resource-aware: Automatic cleanup, lifecycle management, metrics
  • Pre-built environments: Python ML, PyTorch, TensorFlow, Node.js, and more
  • No network required: Everything runs on localhost

Installation

pip install firework-sandbox

No API keys, no daemon required.

Quick Start

import asyncio
from firework import Sandbox

async def main():
    # Create a sandbox with auto-cleanup
    async with Sandbox.create(template="python-ml") as sandbox:
        # Install packages
        await sandbox.process.exec("pip install numpy pandas")
        
        # Execute Python code
        result = await sandbox.process.exec("python -c 'import numpy; print(numpy.__version__)'")
        print(result.stdout)  # the installed numpy version, e.g. "1.24.0\n"
        
        # Upload a file
        await sandbox.filesystem.write("/app/script.py", """
import pandas as pd
df = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})
df.to_csv('/app/output.csv', index=False)
print(df.describe())
""")
        
        # Run the script
        result = await sandbox.process.exec("python /app/script.py")
        print(result.stdout)
        
        # Download the result
        await sandbox.filesystem.download("/app/output.csv", "./output.csv")

asyncio.run(main())

Pre-built Environments

Firework comes with pre-configured environments for common use cases:

| Environment | Description | Packages |
| --- | --- | --- |
| base | Minimal Python 3.11 | pip, setuptools |
| python-ml | Machine Learning | numpy, pandas, scikit-learn, matplotlib |
| python-torch | Deep Learning (PyTorch) | torch, torchvision, numpy |
| python-tensorflow | Deep Learning (TensorFlow) | tensorflow, keras, numpy |
| python-data | Data Engineering | polars, duckdb, pyarrow, sqlalchemy |
| python-web | Web Development | fastapi, uvicorn, httpx, pydantic |
| python-llm | LLM/AI Applications | openai, anthropic, langchain, transformers |
| nodejs | Node.js 20 | Node.js runtime |
| nodejs-full | Node.js with tools | typescript, ts-node |

# Use a specific environment
sandbox = await Sandbox.create(template="python-torch")

# List all available environments
from firework import list_environments
for env in list_environments():
    print(f"{env['name']}: {env['description']}")

Sandbox Lifecycle

Create

from firework import Sandbox

# Minimal (recommended)
sandbox = await Sandbox.create(template="base")

# Full control
sandbox = await Sandbox.create(
    template="python-ml",
    name="data-analysis-123",
    environment={"DEBUG": "true", "PYTHONUNBUFFERED": "1"},
    vcpu=2,
    memory_mb=1024,
    timeout_seconds=3600,
)

print(f"ID: {sandbox.id}")           # sbx_abc123...
print(f"Root: {sandbox.root_path}")  # ~/.firework/sandboxes/sbx_abc123

Reconnect to Existing Sandbox

# Reconnect to a running sandbox
sandbox = await Sandbox.reconnect("sbx_abc123")

State Management

# Check state
state = await sandbox.get_state()  # "running", "paused", "stopped"

# Pause/Resume (saves resources)
await sandbox.pause()
await sandbox.resume()

# Destroy (cleanup)
await sandbox.destroy()

Auto-cleanup with Context Manager

async with Sandbox.create(template="base") as sandbox:
    result = await sandbox.process.exec("echo Hello")
    # Automatically destroyed on exit

Process Execution

Blocking Execution

result = await sandbox.process.exec(
    command="python /app/analyze.py",
    cwd="/workspace",
    environment={"INPUT": "/data/file.csv"},
    timeout_seconds=30
)

print(result.stdout)
print(result.stderr)
print(f"Exit code: {result.exit_code}")
print(f"Runtime: {result.runtime_seconds}s")

Streaming Execution

stream = await sandbox.process.exec_stream(
    command="python long_running.py",
    cwd="/app"
)

async for event in stream:
    match event.type:
        case "stdout":
            print(f"OUT: {event.content}", end="")
        case "stderr":
            print(f"ERR: {event.content}", end="")
        case "exit":
            print(f"DONE: exit code {event.exit_code}")

Background Processes

# Start a server in the background
server = await sandbox.process.start(
    command="python -m http.server 8000",
    cwd="/public",
    background=True
)

# Check if running
running = await server.is_running()

# Wait for completion or kill
try:
    exit_code = await server.wait(timeout_seconds=60)
except asyncio.TimeoutError:
    await server.kill()
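An alternative to a single blocking `wait()` is to poll the process and enforce a deadline yourself. This sketch (not part of firework — it assumes only the documented `is_running()` and `kill()` coroutines) is handy when you want to do other work between polls:

```python
import asyncio

async def poll_until_stopped(proc, interval: float = 0.5, max_wait: float = 10.0) -> bool:
    """Poll a background process; kill it if it outlives max_wait.

    Returns True if the process stopped on its own, False if it was killed.
    """
    waited = 0.0
    while waited < max_wait:
        if not await proc.is_running():
            return True
        await asyncio.sleep(interval)
        waited += interval
    await proc.kill()
    return False
```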

Batch Execution

# Run multiple commands sequentially
results = await sandbox.process.batch_exec([
    "pip install -r requirements.txt",
    "python setup.py",
    "python train.py",
    "python evaluate.py"
], stop_on_error=True)

for i, result in enumerate(results):
    print(f"Command {i+1}: exit code {result.exit_code}")

Filesystem Operations

Read/Write Files

# Read text
content = await sandbox.filesystem.read("/config.json")

# Read binary
data = await sandbox.filesystem.read_bytes("/model.bin")

# Write text
await sandbox.filesystem.write("/output.txt", "Hello World")

# Write binary
await sandbox.filesystem.write_bytes("/model.weights", model_bytes)

Upload/Download Files

# Upload single file
await sandbox.filesystem.upload("./local.csv", "/data/input.csv")

# Download single file
await sandbox.filesystem.download("/data/output.json", "./result.json")

# Upload entire directory
await sandbox.filesystem.upload_dir("./project", "/workspace")

# Download entire directory
await sandbox.filesystem.download_dir("/results", "./local_results")

Directory Operations

# List directory contents
files = await sandbox.filesystem.list("/data")
for f in files:
    print(f"{f.name}: {f.size} bytes ({f.type})")

# Create directory
await sandbox.filesystem.mkdir("/workspace/output", recursive=True)

# Remove file
await sandbox.filesystem.remove("/tmp/cache.txt")

# Remove directory
await sandbox.filesystem.remove_dir("/tmp/old_data")

# Check existence
exists = await sandbox.filesystem.exists("/app/script.py")

Observability

Sandbox Metrics

metrics = await sandbox.get_metrics()
print(f"CPU: {metrics.cpu_percent}%")
print(f"Memory: {metrics.memory_mb} MB")
print(f"Disk: {metrics.disk_mb_used} MB")
print(f"Uptime: {metrics.uptime_seconds}s")
print(f"Processes: {metrics.process_count}")

Events

def on_created(sandbox):
    print(f"Sandbox created: {sandbox.id}")

def on_destroyed(sandbox):
    print(f"Sandbox destroyed: {sandbox.id}")

sandbox.on("created", on_created)
sandbox.on("destroyed", on_destroyed)

Metrics System (Host-Level Monitoring)

Firework includes a comprehensive metrics and watchdog system for monitoring sandbox hosts:

from firework.metrics import MetricsSystem, MetricsConfig
from firework.backend import get_backend

# Configure metrics
config = MetricsConfig(
    metrics_port=9090,
    memory_pressure_threshold=0.85,
    idle_timeout_minutes=30,
    cloudwatch_enabled=False,
)

# Start metrics system
backend = get_backend()
metrics_system = MetricsSystem(backend, config)
await metrics_system.start()

# Access components
collector = metrics_system.collector
watchdog = metrics_system.watchdog

# Register alert callbacks
watchdog.on_alert(lambda alert: print(f"Alert: {alert.alert_type}"))

# Get current metrics
host_metrics = collector.get_latest_metrics()
print(f"Active sandboxes: {host_metrics.sandbox_active_count}")
print(f"Capacity score: {host_metrics.capacity_score}")

# Stop when done
await metrics_system.stop()

Prometheus Metrics Endpoint

The metrics exporter exposes a /metrics endpoint in Prometheus format:

# Start metrics server via CLI
firework metrics start --port 9090

# Or check current status
firework metrics status

Available metrics:

  • firework_sandbox_active_count - Number of active sandboxes
  • firework_sandbox_capacity_total - Maximum sandbox capacity
  • firework_host_cpu_percent - Host CPU utilization
  • firework_host_memory_percent - Host memory utilization
  • firework_capacity_score - Available capacity (0-1)
  • firework_sandbox_creation_duration_seconds - Histogram of creation times
  • firework_sandbox_command_duration_seconds - Histogram of command execution times

Watchdog Alerts

The watchdog monitors for unhealthy conditions:

  • memory_pressure - Host memory exceeds threshold (default 85%)
  • capacity_exhausted - No available sandbox slots
  • idle_sandbox - Sandbox idle beyond timeout (default 30 min)

CloudWatch Integration

Enable CloudWatch publishing for ASG integration:

config = MetricsConfig(
    cloudwatch_enabled=True,
    capacity_low_threshold=0.2,  # Trigger alarm when < 20% capacity
)

Publishes the CapacityScore and SandboxActiveCount metrics, along with a CapacityLow alarm metric.

Error Handling

from firework import (
    SandboxError,
    SandboxNotFound,
    SandboxTimeout,
    ProcessExecutionError,
    FilesystemError
)

try:
    result = await sandbox.process.exec("nonexistent_command")
except ProcessExecutionError as e:
    print(f"Command failed: exit code {e.exit_code}")
    print(f"Stderr: {e.stderr}")
except SandboxTimeout as e:
    print(f"Operation timed out after {e.timeout_seconds}s")
except SandboxNotFound as e:
    print(f"Sandbox not found: {e.sandbox_id}")
except FilesystemError as e:
    print(f"Filesystem error: {e.operation} on {e.path}")
except SandboxError as e:
    print(f"General sandbox error: {e}")
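For transient failures (e.g. a flaky network call inside the sandbox), the exceptions above compose naturally with a retry wrapper. This is a sketch, not part of firework; in practice you would pass `retry_on=(ProcessExecutionError,)`:

```python
import asyncio

async def exec_with_retry(sandbox, command: str, attempts: int = 3,
                          delay: float = 1.0, retry_on=(Exception,)):
    """Retry sandbox.process.exec with a fixed delay between attempts.

    Re-raises the last exception if all attempts fail.
    """
    for attempt in range(attempts):
        try:
            return await sandbox.process.exec(command)
        except retry_on:
            if attempt == attempts - 1:
                raise
            await asyncio.sleep(delay)
```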

CLI

# Create a sandbox
firework create --template python-ml --name my-sandbox

# List running sandboxes
firework list

# Execute command in sandbox
firework exec sbx_abc123 "python --version"

# Destroy sandbox
firework destroy sbx_abc123

# List available environments
firework env list

# Show environment details
firework env info python-ml

# Build an environment
firework env build python-torch --size 4096

# Start metrics server (Prometheus endpoint)
firework metrics start --port 9090 --host 0.0.0.0

# Show current metrics
firework metrics status

Configuration

Programmatic Configuration

from firework import LocalConfig, set_config

config = LocalConfig(
    runtime_dir="/custom/path/sandboxes",
    env_dir="/custom/path/environments",
    default_template="python-ml",
    default_timeout=120,
    default_vcpu=2,
    default_memory_mb=1024,
    log_level="DEBUG"
)
set_config(config)

Environment Variables

| Variable | Description | Default |
| --- | --- | --- |
| FIREWORK_RUNTIME_DIR | Sandbox runtime directory | ~/.firework/sandboxes |
| FIREWORK_ENV_DIR | Built environments directory | ~/.firework/environments |
| FIREWORK_DEFAULT_TEMPLATE | Default template | base |
| FIREWORK_DEFAULT_TIMEOUT | Default timeout (seconds) | 60 |
| FIREWORK_DEFAULT_VCPU | Default vCPU count | 1 |
| FIREWORK_DEFAULT_MEMORY_MB | Default memory (MB) | 512 |
| FIREWORK_LOG_LEVEL | Log level | INFO |

Metrics Configuration

| Variable | Description | Default |
| --- | --- | --- |
| METRICS_COLLECTION_INTERVAL_SECONDS | Collection interval | 15 |
| METRICS_PORT | Prometheus endpoint port | 9090 |
| METRICS_HOST | Prometheus endpoint host | 0.0.0.0 |
| CAPACITY_LOW_THRESHOLD | Capacity alarm threshold | 0.2 |
| MEMORY_PRESSURE_THRESHOLD | Memory alert threshold | 0.85 |
| IDLE_TIMEOUT_MINUTES | Idle sandbox timeout | 30 |
| CLOUDWATCH_ENABLED | Enable CloudWatch publishing | false |

export FIREWORK_RUNTIME_DIR=/data/sandboxes
export FIREWORK_DEFAULT_MEMORY_MB=2048
export METRICS_PORT=9090

Complete Examples

Data Analysis Pipeline

import json

async def analyze_csv(csv_path: str) -> dict:
    async with Sandbox.create(template="python-ml") as sandbox:
        # Upload data
        await sandbox.filesystem.upload(csv_path, "/data/input.csv")
        
        # Write analysis script
        await sandbox.filesystem.write("/app/analyze.py", """
import pandas as pd
import json

df = pd.read_csv('/data/input.csv')
results = {
    'rows': len(df),
    'columns': list(df.columns),
    'summary': df.describe().to_dict()
}

with open('/data/results.json', 'w') as f:
    json.dump(results, f, indent=2)

print('Analysis complete!')
""")
        
        # Run analysis
        result = await sandbox.process.exec("python /app/analyze.py")
        print(result.stdout)
        
        # Download results
        await sandbox.filesystem.download("/data/results.json", "./results.json")
        
        # Read and return results
        content = await sandbox.filesystem.read("/data/results.json")
        return json.loads(content)

Machine Learning Training

async def train_model(data_path: str, epochs: int = 10):
    async with Sandbox.create(
        template="python-torch",
        vcpu=2,
        memory_mb=2048,
        timeout_seconds=3600
    ) as sandbox:
        # Upload training data
        await sandbox.filesystem.upload_dir(data_path, "/data")
        
        # Upload training script
        await sandbox.filesystem.upload("./train.py", "/app/train.py")
        
        # Train with streaming output
        stream = await sandbox.process.exec_stream(
            f"python /app/train.py --epochs {epochs}",
            cwd="/app"
        )
        
        async for event in stream:
            if event.type == "stdout":
                print(event.content, end="")
        
        # Download trained model
        await sandbox.filesystem.download("/app/model.pt", "./model.pt")

Web Scraping

import json

async def scrape_urls(urls: list[str]) -> list[dict]:
    async with Sandbox.create(template="python-web") as sandbox:
        # Install additional packages
        await sandbox.process.exec("pip install beautifulsoup4 lxml")
        
        # Write scraper script
        await sandbox.filesystem.write("/app/scraper.py", f"""
import httpx
from bs4 import BeautifulSoup
import json

urls = {urls}
results = []

for url in urls:
    try:
        resp = httpx.get(url, timeout=10)
        soup = BeautifulSoup(resp.text, 'lxml')
        results.append({{
            'url': url,
            'title': soup.title.string if soup.title else None,
            'status': resp.status_code
        }})
    except Exception as e:
        results.append({{'url': url, 'error': str(e)}})

with open('/app/results.json', 'w') as f:
    json.dump(results, f, indent=2)
""")
        
        # Run scraper
        await sandbox.process.exec("python /app/scraper.py", timeout_seconds=120)
        
        # Get results
        content = await sandbox.filesystem.read("/app/results.json")
        return json.loads(content)

Parallel Processing with Worker Pool

async def parallel_process(items: list[str]) -> list[str]:
    # Create a pool of sandboxes
    pool = await asyncio.gather(*[
        Sandbox.create(template="base") for _ in range(4)
    ])
    
    try:
        # Distribute work across pool
        async def process_item(sandbox, item):
            result = await sandbox.process.exec(f"echo 'Processing: {item}'")
            return result.stdout.strip()
        
        tasks = [
            process_item(pool[i % len(pool)], item)
            for i, item in enumerate(items)
        ]
        
        return await asyncio.gather(*tasks)
    finally:
        # Cleanup all sandboxes
        await asyncio.gather(*(s.destroy() for s in pool))
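An alternative to indexing into a fixed pool is to bound concurrency with a semaphore, which works for any async function and keeps results in input order. A sketch of the pattern (pure asyncio, no firework assumptions):

```python
import asyncio

async def map_bounded(fn, items, limit: int = 4):
    """Apply async fn to each item with at most `limit` calls running concurrently.

    Results are returned in the same order as the input items.
    """
    sem = asyncio.Semaphore(limit)

    async def run(item):
        async with sem:
            return await fn(item)

    return await asyncio.gather(*(run(item) for item in items))
```

With the pool example above, `fn` would acquire a sandbox, run the command, and return its output.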

Requirements

  • Python 3.10+
  • Docker (for Docker backend)
  • Firecracker (optional, for microVM backend)

License

MIT License - see LICENSE for details.

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

Download files

Source Distribution

firework_sandbox-0.2.0.tar.gz (37.9 kB)

Built Distribution

firework_sandbox-0.2.0-py3-none-any.whl (36.4 kB)

File details: firework_sandbox-0.2.0.tar.gz

  • Size: 37.9 kB
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.5

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | b4e0fa2acf94e1be7ba52c2eb0517f1002b31a868995ad24295d2891d3801020 |
| MD5 | 54c58dcd8cedf4ff306bf7324eae80fc |
| BLAKE2b-256 | 6aaf84b984ab4f4997ff7d4dbeb08ef065431fad1a0e178d7a03b8be2a0f17de |

File details: firework_sandbox-0.2.0-py3-none-any.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 1ae8a57b1c8b88dc1d2392ec720fc2043bbe29850f360c4bd6460619b493708c |
| MD5 | 09c9333efddd7fc0e1802125976fe34a |
| BLAKE2b-256 | 5594b402cb80f2b74e4d67603265235a1ff9c94681d3970f95d645e232e8305d |
