Kubernetes sandbox provider for the DeepAgents framework


langchain-kubernetes

Kubernetes sandbox provider for DeepAgents. Run AI agent code in isolated, stateful Kubernetes sandboxes.

Two backend modes are supported — pick the one that fits your cluster:

agent-sandbox (default)
  When to use: full-featured sandboxes (warm pools, gVisor/Kata, sub-second startup); requires the kubernetes-sigs/agent-sandbox controller.
  Install: pip install "langchain-kubernetes[agent-sandbox]"

raw
  When to use: works on any cluster with no extra controllers or CRDs; direct Pod management.
  Install: pip install "langchain-kubernetes[raw]"

Installation

# Quote the extras so shells like zsh don't expand the brackets.

# agent-sandbox mode (recommended when you can install the controller)
pip install "langchain-kubernetes[agent-sandbox]"

# raw mode (any cluster, no controller required)
pip install "langchain-kubernetes[raw]"

# both modes
pip install "langchain-kubernetes[all]"

agent-sandbox Mode

Prerequisites

This mode does not install or manage the agent-sandbox controller. The following must already be deployed in your cluster:

  1. agent-sandbox controller + CRDs — manages Sandbox, SandboxTemplate, SandboxClaim, and SandboxWarmPool resources.
  2. sandbox-router — HTTP gateway that routes traffic from the SDK to sandbox Pods.
  3. A SandboxTemplate CR — defines the sandbox blueprint (image, resources, runtime class, security).

Install the controller:

export VERSION="v0.1.0"
kubectl apply -f https://github.com/kubernetes-sigs/agent-sandbox/releases/download/${VERSION}/manifest.yaml
kubectl apply -f https://github.com/kubernetes-sigs/agent-sandbox/releases/download/${VERSION}/extensions.yaml
kubectl apply -f https://github.com/kubernetes-sigs/agent-sandbox/releases/download/${VERSION}/sandbox-router.yaml

Full guide: https://agent-sandbox.sigs.k8s.io/docs/getting_started/

Create a SandboxTemplate:

kubectl apply -f examples/k8s/sandbox-template.yaml

Example template (see examples/k8s/sandbox-template.yaml):

apiVersion: extensions.agents.x-k8s.io/v1alpha1
kind: SandboxTemplate
metadata:
  name: python-sandbox-template
  namespace: default
spec:
  podTemplate:
    spec:
      runtimeClassName: gvisor
      containers:
        - name: sandbox
          image: python:3.12-slim
          ports:
            - containerPort: 8888
          resources:
            requests:
              cpu: 250m
              memory: 512Mi

Quick Start

from langchain_kubernetes import KubernetesProvider, KubernetesProviderConfig

provider = KubernetesProvider(
    KubernetesProviderConfig(
        template_name="python-sandbox-template",
    )
)

sandbox = provider.get_or_create()
try:
    result = sandbox.execute("python3 -c 'print(2 + 2)'")
    print(result.output)     # "4\n"
    print(result.exit_code)  # 0
finally:
    provider.delete(sandbox_id=sandbox.id)

Configuration

KubernetesProviderConfig(
    # Required: SandboxTemplate CR name (must exist in the cluster)
    template_name="python-sandbox-template",

    # Kubernetes namespace where sandboxes are created
    namespace="default",

    # How to connect to the sandbox-router:
    #   "tunnel"  — auto port-forward via kubectl (default, good for local dev)
    #   "gateway" — route through a Kubernetes Gateway resource
    #   "direct"  — connect to an explicit URL (for in-cluster or custom domains)
    connection_mode="tunnel",

    # For gateway mode
    gateway_name=None,
    gateway_namespace="default",

    # For direct mode
    api_url=None,

    # Port the sandbox runtime listens on
    server_port=8888,

    # Seconds to wait for sandbox to become ready
    startup_timeout_seconds=120,

    # Default per-execute() timeout in seconds
    default_exec_timeout=1800,
)

Connection modes:

tunnel (default)
  When to use: local dev, kubectl available
  Required field: none
gateway
  When to use: production with a Kubernetes Gateway resource
  Required field: gateway_name
direct
  When to use: in-cluster agents or a custom sandbox-router URL
  Required field: api_url

Optional: Warm Pools

Pre-warm a pool of sandbox Pods to eliminate cold-start latency:

kubectl apply -f examples/k8s/warm-pool.yaml
# examples/k8s/warm-pool.yaml
apiVersion: extensions.agents.x-k8s.io/v1alpha1
kind: SandboxWarmPool
metadata:
  name: python-warm-pool
  namespace: default
spec:
  templateRef:
    name: python-sandbox-template
  size: 3  # keep 3 Pods warm at all times

Raw Mode

Use this when you cannot install the agent-sandbox controller — locked-down OpenShift clusters, environments where CRD installation requires lengthy approval processes, or air-gapped clusters without access to controller images.

Raw mode directly creates and manages ephemeral Pods via the Kubernetes API. No CRDs, no controllers, no sandbox-router. All work happens through the Kubernetes exec API.

Tradeoffs vs agent-sandbox mode:

Feature               agent-sandbox                raw
Controller required   Yes                          No
CRDs required         Yes                          No
Warm pools            Yes                          No
gVisor / Kata         Yes (via SandboxTemplate)    Depends on cluster
Startup time          Sub-second (warm)            ~5–30s (Pod scheduling)
Pod-level config      In the SandboxTemplate CR    In KubernetesProviderConfig

RBAC

The process running this package needs a ServiceAccount / kubeconfig with:

rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec", "pods/log", "namespaces"]
  verbs: ["get", "list", "create", "delete", "watch"]
- apiGroups: ["networking.k8s.io"]
  resources: ["networkpolicies"]
  verbs: ["get", "create", "delete"]
- apiGroups: [""]
  resources: ["resourcequotas"]
  verbs: ["get", "create", "delete"]

The sandbox Pod's own ServiceAccount has no RBAC bindings (automountServiceAccountToken: false).
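For reference, the rules above could be packaged as a namespaced Role and bound to a ServiceAccount roughly as follows. The ServiceAccount and Role names here are illustrative, not defined by this package:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: deepagents-runner            # illustrative name
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deepagents-sandbox-manager   # illustrative name
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec", "pods/log", "namespaces"]
  verbs: ["get", "list", "create", "delete", "watch"]
- apiGroups: ["networking.k8s.io"]
  resources: ["networkpolicies"]
  verbs: ["get", "create", "delete"]
- apiGroups: [""]
  resources: ["resourcequotas"]
  verbs: ["get", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deepagents-sandbox-manager
  namespace: default
subjects:
- kind: ServiceAccount
  name: deepagents-runner
  namespace: default
roleRef:
  kind: Role
  name: deepagents-sandbox-manager
  apiGroup: rbac.authorization.k8s.io
```

Note that namespaces are cluster-scoped, so if you enable namespace_per_sandbox you would need a ClusterRole/ClusterRoleBinding instead of the namespaced Role shown here.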

Quick Start

from langchain_kubernetes import KubernetesProvider, KubernetesProviderConfig

provider = KubernetesProvider(
    KubernetesProviderConfig(
        mode="raw",
        namespace="default",
        image="python:3.12-slim",
    )
)

sandbox = provider.get_or_create()
try:
    result = sandbox.execute("python3 -c 'print(2 + 2)'")
    print(result.output)     # "4\n"
    print(result.exit_code)  # 0
finally:
    provider.delete(sandbox_id=sandbox.id)

Configuration

KubernetesProviderConfig(
    mode="raw",

    # Kubernetes namespace where Pods are created
    namespace="default",

    # Container image
    image="python:3.12-slim",
    image_pull_policy="IfNotPresent",
    image_pull_secrets=[],          # list of Secret names

    # Working directory inside the container
    workdir="/workspace",

    # Pod entrypoint (default: sleep infinity — all work via exec)
    command=["sleep", "infinity"],

    # Environment variables
    env={"MY_VAR": "value"},

    # Resource requests and limits
    cpu_request="100m",
    cpu_limit="1000m",
    memory_request="256Mi",
    memory_limit="1Gi",
    ephemeral_storage_limit="5Gi",

    # Network isolation: deny-all NetworkPolicy (strongly recommended)
    block_network=True,

    # Security context
    run_as_user=1000,
    run_as_group=1000,
    seccomp_profile="RuntimeDefault",  # or "Localhost"

    # Per-sandbox namespace (stronger isolation, slower, more RBAC)
    namespace_per_sandbox=False,

    # ServiceAccount for the sandbox Pod (default: none)
    service_account=None,

    # Scheduling
    node_selector={},
    tolerations=[],

    # Extra volumes / mounts
    volumes=[],
    volume_mounts=[],
    init_containers=[],

    # Low-level Pod spec overrides (deep-merged into spec)
    pod_template_overrides=None,

    # Pod annotations
    extra_annotations={},

    # Shell script run as first exec after creation
    setup_script=None,

    # Timeouts
    startup_timeout_seconds=120,
    default_exec_timeout=1800,
)

Security defaults

Raw mode Pods always enforce:

automountServiceAccountToken: false
securityContext:
  runAsNonRoot: true
  runAsUser: 1000       # configurable
  runAsGroup: 1000      # configurable
containers:
  - securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
      seccompProfile:
        type: RuntimeDefault   # configurable
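As a quick sanity check, a sketch like the following can assert those invariants against a Pod spec dict (e.g. parsed from kubectl get pod ... -o json). The checker and sample spec are illustrative, not part of this package:

```python
# Sketch: verify the raw-mode security invariants on a Pod spec dict.
def check_sandbox_security(pod_spec: dict) -> list:
    """Return a list of violated invariants (empty means compliant)."""
    problems = []
    if pod_spec.get("automountServiceAccountToken") is not False:
        problems.append("automountServiceAccountToken must be false")
    if pod_spec.get("securityContext", {}).get("runAsNonRoot") is not True:
        problems.append("runAsNonRoot must be true")
    for c in pod_spec.get("containers", []):
        csec = c.get("securityContext", {})
        if csec.get("allowPrivilegeEscalation") is not False:
            problems.append(f"{c['name']}: allowPrivilegeEscalation must be false")
        if "ALL" not in csec.get("capabilities", {}).get("drop", []):
            problems.append(f"{c['name']}: capabilities.drop must include ALL")
        if csec.get("seccompProfile", {}).get("type") not in ("RuntimeDefault", "Localhost"):
            problems.append(f"{c['name']}: seccompProfile.type must be set")
    return problems

# Example Pod spec matching the defaults above (illustrative).
spec = {
    "automountServiceAccountToken": False,
    "securityContext": {"runAsNonRoot": True, "runAsUser": 1000, "runAsGroup": 1000},
    "containers": [{
        "name": "sandbox",
        "securityContext": {
            "allowPrivilegeEscalation": False,
            "capabilities": {"drop": ["ALL"]},
            "seccompProfile": {"type": "RuntimeDefault"},
        },
    }],
}
assert check_sandbox_security(spec) == []
```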

Usage with DeepAgents

Works the same regardless of mode:

from langchain_anthropic import ChatAnthropic
from deepagents import create_agent
from langchain_kubernetes import KubernetesProvider, KubernetesProviderConfig

# agent-sandbox mode
provider = KubernetesProvider(
    KubernetesProviderConfig(template_name="python-sandbox-template")
)

# or raw mode
provider = KubernetesProvider(
    KubernetesProviderConfig(mode="raw", image="python:3.12-slim")
)

sandbox = provider.get_or_create()
llm = ChatAnthropic(model="claude-opus-4-5")
agent = create_agent(llm, backend=sandbox)

result = agent.invoke({
    "messages": [("user", "Write and run a Python script that prints the Fibonacci sequence")]
})
print(result)

provider.delete(sandbox_id=sandbox.id)

Usage with CLI

# agent-sandbox mode
deepagents --sandbox kubernetes --template-name python-sandbox-template

# gateway mode
deepagents --sandbox kubernetes \
  --template-name python-sandbox-template \
  --connection-mode gateway \
  --gateway-name my-gateway

# raw mode
deepagents --sandbox kubernetes --mode raw

Sandbox Lifecycle

# Create
sandbox = provider.get_or_create()
print(sandbox.id)  # e.g. "python-sandbox-template-a1b2c3d4" or "a1b2c3d4"

# Reconnect to an existing sandbox (within the same provider instance)
sandbox = provider.get_or_create(sandbox_id="a1b2c3d4")

# List active sandboxes (current provider instance only)
sandboxes = provider.list()

# Delete (idempotent)
provider.delete(sandbox_id=sandbox.id)
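Since every create should be paired with a delete, a small context manager can make that pairing automatic. This is a sketch, not part of the package; the Fake* classes below stand in for KubernetesProvider and its sandbox handle:

```python
from contextlib import contextmanager

@contextmanager
def managed_sandbox(provider):
    """Create a sandbox and guarantee deletion on exit, even on error."""
    sandbox = provider.get_or_create()
    try:
        yield sandbox
    finally:
        provider.delete(sandbox_id=sandbox.id)

# --- Minimal stubs standing in for the real provider, for illustration ---
class FakeSandbox:
    def __init__(self, id):
        self.id = id

class FakeProvider:
    def __init__(self):
        self.deleted = []
    def get_or_create(self):
        return FakeSandbox("a1b2c3d4")
    def delete(self, sandbox_id):
        self.deleted.append(sandbox_id)

provider = FakeProvider()
with managed_sandbox(provider) as sb:
    pass  # sb.execute(...) would go here
assert provider.deleted == ["a1b2c3d4"]
```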

Execute Commands

result = sandbox.execute("echo hello")
print(result.output)     # "hello\n"
print(result.exit_code)  # 0
print(result.truncated)  # False

# Per-call timeout
result = sandbox.execute("sleep 60", timeout=5)
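Note that execute() reports failures via exit_code rather than raising, so callers that want exceptions can wrap it. A hedged sketch follows; the ExecResult stub mirrors the output/exit_code fields shown above but is not the package's actual class:

```python
from dataclasses import dataclass

@dataclass
class ExecResult:        # stub mirroring the result fields shown above
    output: str
    exit_code: int
    truncated: bool = False

class SandboxCommandError(RuntimeError):
    pass

def run_checked(sandbox, command, timeout=None):
    """Run a command and raise if it exits non-zero; return its output."""
    if timeout is not None:
        result = sandbox.execute(command, timeout=timeout)
    else:
        result = sandbox.execute(command)
    if result.exit_code != 0:
        raise SandboxCommandError(f"{command!r} exited {result.exit_code}: {result.output}")
    return result.output

# Illustration with a fake sandbox:
class FakeSandbox:
    def execute(self, command, timeout=None):
        return ExecResult("hello\n", 0) if "echo" in command else ExecResult("boom", 1)

sb = FakeSandbox()
assert run_checked(sb, "echo hello") == "hello\n"
```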

File Operations

All BaseSandbox filesystem helpers work via execute() and are inherited automatically:

# Write / read files
sandbox.write("/tmp/script.py", "print('hello')\n")
content = sandbox.read("/tmp/script.py")

# Edit (string replacement)
sandbox.edit("/tmp/script.py", "hello", "world")

# List directory
entries = sandbox.ls_info("/tmp")

# Glob
matches = sandbox.glob_info("**/*.py", path="/app")

# Grep
hits = sandbox.grep_raw("def main", path="/app")

# Batch upload (bytes)
sandbox.upload_files([
    ("/app/data.csv", b"col1,col2\n1,2\n"),
    ("/app/config.json", b'{"key": "val"}'),
])

# Batch download (bytes)
responses = sandbox.download_files(["/app/output.txt"])
print(responses[0].content)
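Because upload_files takes (path, bytes) pairs, staging a whole local directory is just a matter of reading files into that shape. A sketch in pure Python, independent of the sandbox API (the helper name is illustrative):

```python
from pathlib import Path

def build_upload_payload(local_dir, remote_root):
    """Map every file under local_dir to (remote_path, content) pairs."""
    base = Path(local_dir)
    return [
        (f"{remote_root}/{p.relative_to(base).as_posix()}", p.read_bytes())
        for p in sorted(base.rglob("*"))
        if p.is_file()
    ]

# Illustration with a temporary directory:
import tempfile
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "data.csv").write_bytes(b"col1,col2\n1,2\n")
    payload = build_upload_payload(d, "/app")
    assert payload == [("/app/data.csv", b"col1,col2\n1,2\n")]
# sandbox.upload_files(payload) would then ship the whole directory.
```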

Async Usage

import asyncio
from langchain_kubernetes import KubernetesProvider, KubernetesProviderConfig

async def main():
    provider = KubernetesProvider(
        KubernetesProviderConfig(template_name="python-sandbox-template")
    )
    sandbox = await provider.aget_or_create()
    try:
        result = await sandbox.aexecute("echo async")
        print(result.output)
    finally:
        await provider.adelete(sandbox_id=sandbox.id)

asyncio.run(main())

Troubleshooting

ImportError: ... requires the 'k8s-agent-sandbox' package

pip install "langchain-kubernetes[agent-sandbox]"

ImportError: ... requires the 'kubernetes' package

pip install "langchain-kubernetes[raw]"

SandboxTemplate 'my-template' not found in namespace 'default'

Create the template first:

kubectl apply -f examples/k8s/sandbox-template.yaml
# or list existing templates:
kubectl get sandboxtemplates

Cannot reach the sandbox-router (agent-sandbox mode)

Tunnel mode: Ensure kubectl is in $PATH and the sandbox-router Service exists:

kubectl get svc -l app=sandbox-router

Gateway mode: Verify the Gateway resource:

kubectl get gateway my-gateway

Direct mode: Verify api_url is reachable from your client.

Sandbox startup timeout (agent-sandbox mode)

kubectl logs -n agent-sandbox-system -l app=agent-sandbox-controller
kubectl get sandboxes -n default
kubectl describe sandbox <sandbox-name>

Increase startup_timeout_seconds if the cluster is slow.

Pod not reaching Running phase (raw mode)

kubectl get pods -n default -l app.kubernetes.io/managed-by=deepagents
kubectl describe pod deepagents-<sandbox-id> -n default
kubectl get events -n default --sort-by='.lastTimestamp'

Common causes: image pull failures, insufficient resources, PodSecurityPolicy/OPA admission rejections.

Controller or CRDs not installed (agent-sandbox mode)

kubectl get crds | grep agents.x-k8s.io
kubectl get pods -n agent-sandbox-system

Run the installation commands from the Prerequisites section above, or switch to mode="raw".


Development

# Install with dev dependencies (both modes)
uv venv .venv
uv pip install -e ".[all,dev]"

# Run unit tests (no cluster required)
.venv/bin/python -m pytest tests/unit/

# Run agent-sandbox integration tests (requires cluster with controller)
K8S_INTEGRATION=1 SANDBOX_TEMPLATE=python-sandbox-template \
  .venv/bin/python -m pytest tests/integration/ -m agent_sandbox

# Run raw mode integration tests (requires any plain Kubernetes cluster)
.venv/bin/python -m pytest tests/integration/ -m raw_k8s

License

MIT
