# Chalk Sandbox SDK

SDK for Chalk sandboxes, containers, and volumes.

Python SDK for the Chalk Sandbox gRPC service. Create sandboxes, execute commands, and stream output over bidirectional gRPC streams.
## Install

```shell
pip install grpcio protobuf
```
## Quick start

```python
from sandbox import SandboxClient

client = SandboxClient("localhost:50051")

# Create a sandbox from a pre-built image
sandbox = client.create(image="ubuntu:latest", name="my-sandbox")

# Run a command
result = sandbox.exec("echo", "hello world")
print(result.stdout_text)  # "hello world"
print(result.exit_code)    # 0

# Clean up
sandbox.terminate()
```
## Declarative images

Build custom container images with a fluent API instead of writing Dockerfiles. The image spec is serialized as protobuf and transmitted to the sandbox service, which builds and caches the image before starting the container.

```python
from sandbox import SandboxClient
from image import Image

client = SandboxClient("localhost:50051")

# Build a data-science image declaratively
img = (
    Image.debian_slim("3.12")
    .pip_install(["pandas", "numpy", "scikit-learn"])
    .run_commands(
        "apt-get update && apt-get install -y git curl",
    )
    .workdir("/home/user/app")
    .env({"PYTHONDONTWRITEBYTECODE": "1"})
)

sandbox = client.create(image=img, name="data-science")
result = sandbox.exec("python", "-c", "import pandas; print(pandas.__version__)")
print(result.stdout_text)
```
### Base images

```python
# Arbitrary base image
img = Image.base("node:22-slim")

# Convenience: python + debian slim
img = Image.debian_slim("3.12")  # python:3.12-slim-bookworm

# From an existing Dockerfile (contents are inlined, so you can chain more steps)
img = Image.from_dockerfile("Dockerfile").pip_install(["extra-dep"])
```
### Build steps

```python
img = (
    Image.debian_slim("3.12")
    # Install Python packages
    .pip_install(["requests>=2.28", "flask"])
    # Install from a requirements.txt (read locally, inlined into the spec)
    .pip_install_from_requirements("requirements.txt")
    # Run shell commands (each becomes a Docker RUN layer)
    .run_commands(
        "apt-get update && apt-get install -y git",
        "mkdir -p /app/data",
    )
    # Add local files into the image
    .add_local_file("config.yaml", "/app/config.yaml")
    .add_local_file("entrypoint.sh", "/app/entrypoint.sh", mode=0o755)
    .add_local_dir("src", "/app/src")
    # Raw Dockerfile instructions
    .dockerfile_commands(["EXPOSE 8080", "HEALTHCHECK CMD curl -f http://localhost:8080/"])
    # Image-level configuration
    .workdir("/app")
    .env({"FLASK_APP": "app:create_app"})
    .entrypoint(["/app/entrypoint.sh"])
    .cmd(["serve"])
)
```
### Immutable composition

Each builder method returns a new Image, so intermediate images can be shared:

```python
base = Image.debian_slim("3.12").pip_install(["requests"])

# Two different images that share the same base
api_image = base.pip_install(["flask"]).workdir("/api")
worker_image = base.pip_install(["celery"]).workdir("/worker")

api_sandbox = client.create(image=api_image, name="api")
worker_sandbox = client.create(image=worker_image, name="worker")
```
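The copy-on-write behavior can be sketched with a frozen dataclass. This is a simplified illustration of the pattern, not the SDK's internal representation; `MiniImage` is hypothetical.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class MiniImage:
    base: str
    steps: tuple = ()

    def pip_install(self, pkgs):
        # Return a NEW image with one extra step; self is never mutated.
        return replace(self, steps=self.steps + (("pip_install", tuple(pkgs)),))

base = MiniImage("python:3.12-slim-bookworm")
api = base.pip_install(["flask"])
worker = base.pip_install(["celery"])

assert base.steps == ()           # the shared base is unchanged
assert api.steps != worker.steps  # each branch diverges independently
```

Because every chain step allocates a fresh object, any intermediate image is safe to reuse as a branch point.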
## Connecting

```python
from sandbox import SandboxClient
import grpc

# Insecure (local dev)
client = SandboxClient("localhost:50051")

# With TLS
creds = grpc.ssl_channel_credentials()
client = SandboxClient("sandbox.example.com:443", credentials=creds)

# As a context manager
with SandboxClient("localhost:50051") as client:
    ...
```
## Sandbox lifecycle

```python
# Create with resource limits
sandbox = client.create(
    image="ubuntu:latest",
    name="dev-sandbox",
    cpu="2",
    memory="4Gi",
    env={"DEBIAN_FRONTEND": "noninteractive"},
)

# List all sandboxes
for info in client.list():
    print(f"{info.id} {info.state} {info.name}")

# Get a handle to an existing sandbox by ID
sandbox = client.get(id="550e8400-e29b-41d4-a716-446655440000")

# Fetch info from server
print(sandbox.info.state)
sandbox.refresh()  # force re-fetch

# Terminate
sandbox.terminate()
sandbox.terminate(grace_period_seconds=30)
```
## Executing commands

### Run and wait

```python
result = sandbox.exec("ls", "-la", "/tmp")

for line in result.stdout:
    print(line)
for line in result.stderr:
    print(f"ERR: {line}")

print(f"exit code: {result.exit_code}")

# Or get the full text at once
print(result.stdout_text)
print(result.stderr_text)
```
### Stream output in real time

```python
import sys

for event in sandbox.exec_stream("make", "build", workdir="/app"):
    if event.stdout:
        print(event.stdout, end="")
    if event.stderr:
        print(event.stderr, end="", file=sys.stderr)
    if event.is_exited:
        print(f"\nDone: exit code {event.exit_code}")
```
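When you want the accumulated output and the final exit code together, the loop above generalizes to a small helper. The `ExecEvent` stub below is a stand-in that only mimics the event fields used in this README (`stdout`, `stderr`, `is_exited`, `exit_code`); the real event type comes from the SDK.

```python
from dataclasses import dataclass
from typing import Iterable, Optional, Tuple

@dataclass
class ExecEvent:
    # Stand-in for the SDK's stream event, limited to the fields shown above.
    stdout: str = ""
    stderr: str = ""
    is_exited: bool = False
    exit_code: Optional[int] = None

def drain(events: Iterable[ExecEvent]) -> Tuple[str, str, Optional[int]]:
    """Accumulate stdout/stderr chunks and capture the final exit code."""
    out, err, code = [], [], None
    for ev in events:
        if ev.stdout:
            out.append(ev.stdout)
        if ev.stderr:
            err.append(ev.stderr)
        if ev.is_exited:
            code = ev.exit_code
    return "".join(out), "".join(err), code

stdout, stderr, code = drain([
    ExecEvent(stdout="building...\n"),
    ExecEvent(stderr="warning: deprecated API\n"),
    ExecEvent(is_exited=True, exit_code=0),
])
```

With the real SDK you would pass `sandbox.exec_stream(...)` directly as `events`.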
### Interactive processes (stdin + signals)

```python
process = sandbox.exec_start("bash")

process.write_stdin("echo hello\n")
process.write_stdin("exit\n")
process.close_stdin()

for event in process.output():
    if event.stdout:
        print(event.stdout, end="")
```

Send signals to running processes:

```python
import signal

process = sandbox.exec_start("sleep", "300")
process.send_signal(signal.SIGTERM)
result = process.wait()
```
### Options

All exec methods accept the same keyword arguments:

```python
result = sandbox.exec(
    "python", "train.py",
    workdir="/app",                     # working directory
    timeout_secs=3600,                  # kill after 1 hour
    env={"CUDA_VISIBLE_DEVICES": "0"},  # environment variables
)
```
## Examples

### Clone a GitHub repo into a sandbox

```python
from sandbox import SandboxClient

client = SandboxClient("localhost:50051")
sandbox = client.create(image="ubuntu:latest", name="repo-sandbox")

# Install git
sandbox.exec("apt-get", "update")
sandbox.exec("apt-get", "install", "-y", "git")

# Clone
result = sandbox.exec(
    "git", "clone", "https://github.com/chalk-ai/chalk.git", "/workspace/chalk"
)

if result.exit_code != 0:
    print(f"Clone failed: {result.stderr_text}")
else:
    # List what we got
    result = sandbox.exec("ls", "-la", "/workspace/chalk")
    for line in result.stdout:
        print(line)
```
### Spawn an OpenCode agent in a sandbox

OpenCode is a terminal-based AI coding agent. You can run it inside a sandbox to give it an isolated environment to work in.

```python
import sys

from sandbox import SandboxClient

client = SandboxClient("localhost:50051")
sandbox = client.create(
    image="ubuntu:latest",
    name="opencode-agent",
    cpu="2",
    memory="4Gi",
    env={
        "ANTHROPIC_API_KEY": "sk-ant-...",
    },
)

# Install dependencies
sandbox.exec("apt-get", "update")
sandbox.exec("apt-get", "install", "-y", "git", "curl", "build-essential")

# Install Go (opencode is a Go binary)
sandbox.exec("bash", "-c", "curl -fsSL https://go.dev/dl/go1.24.1.linux-amd64.tar.gz | tar -C /usr/local -xz")
sandbox.exec("bash", "-c", "echo 'export PATH=$PATH:/usr/local/go/bin:/root/go/bin' >> /root/.bashrc")

# Install opencode
sandbox.exec("bash", "-c", "export PATH=$PATH:/usr/local/go/bin:/root/go/bin && go install github.com/opencode-ai/opencode@latest")

# Clone a repo to work on
sandbox.exec("git", "clone", "https://github.com/your-org/your-repo.git", "/workspace/repo")

# Run opencode non-interactively with a prompt
result = sandbox.exec(
    "bash", "-c",
    "export PATH=$PATH:/usr/local/go/bin:/root/go/bin && cd /workspace/repo && opencode -p 'fix the failing tests in pkg/auth'",
    timeout_secs=600,
)
print(result.stdout_text)

# Or run it interactively and feed it commands
process = sandbox.exec_start(
    "bash", "-c",
    "export PATH=$PATH:/usr/local/go/bin:/root/go/bin && cd /workspace/repo && opencode",
)

# Stream its output
for event in process.output():
    if event.stdout:
        print(event.stdout, end="")
    if event.stderr:
        print(event.stderr, end="", file=sys.stderr)
    if event.is_exited:
        break
```
### Long-running build with real-time output

```python
import sys

sandbox = client.create(image="node:22", name="build")
sandbox.exec("git", "clone", "https://github.com/your-org/frontend.git", "/app")
sandbox.exec("npm", "install", workdir="/app")

# Stream the build output as it happens
for event in sandbox.exec_stream("npm", "run", "build", workdir="/app"):
    if event.stdout:
        print(event.stdout, end="")
    if event.stderr:
        print(event.stderr, end="", file=sys.stderr)
    if event.is_exited and event.exit_code != 0:
        print(f"Build failed with exit code {event.exit_code}")

sandbox.terminate()
```
## CLI tools

`sandbox_exec.py` - run a single command:

```shell
python sandbox_exec.py --target localhost:50051 --sandbox-id <id> --exec "ls -la"
```

`sandbox_stdout.py` - interactive shell:

```shell
echo "echo hello" | python sandbox_stdout.py --target localhost:50051 --sandbox-id <id> --exec "bash"
```
## Regenerating proto stubs

If the proto definition changes, regenerate the Python stubs:

```shell
pip install grpcio-tools
./generate.sh
```
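The contents of `generate.sh` are not shown here, but a typical stub-generation script for a single proto file looks like the sketch below. The `protos/sandbox.proto` path is an assumption; adjust it to the repository's actual layout.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Regenerate *_pb2.py (messages) and *_pb2_grpc.py (service stubs)
# from the proto definition using grpcio-tools.
python -m grpc_tools.protoc \
  -I protos \
  --python_out=. \
  --grpc_python_out=. \
  protos/sandbox.proto
```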