Secure code execution in microVMs with QEMU
exec-sandbox
Secure code execution in isolated lightweight VMs (QEMU microVMs). Python library for running untrusted Python, JavaScript, and shell code with 9-layer security isolation.
Highlights
- Hardware isolation - Each execution runs in a dedicated lightweight VM (QEMU with KVM/HVF hardware acceleration), not containers
- Fast startup - 400ms fresh start, 1-2ms with pre-started VMs (warm pool)
- Simple API - `run()` for one-shot execution, `session()` for stateful multi-step workflows with file I/O; plus `sbx` CLI for quick testing
- Streaming output - Real-time output as code runs
- Smart caching - Local + S3 remote cache for VM snapshots
- Network control - Disabled by default, optional domain allowlisting with defense-in-depth filtering (DNS + TLS SNI + DNS cross-validation to prevent spoofing)
- Memory optimization - Compressed memory (zram) + unused memory reclamation (balloon) for ~30% more capacity, ~80% smaller snapshots
Installation
```shell
uv add exec-sandbox        # Core library
uv add "exec-sandbox[s3]"  # + S3 snapshot caching

# Install QEMU runtime
brew install qemu          # macOS
apt install qemu-system    # Ubuntu/Debian
```
Quick Start
CLI
The `sbx` command provides quick access to sandbox execution from the terminal:
```shell
# Run Python code
sbx run 'print("Hello from sandbox")'

# Run JavaScript
sbx run -l javascript 'console.log("Hello from sandbox")'

# Run a file (language auto-detected from extension)
sbx run script.py
sbx run app.js

# From stdin
echo 'print(42)' | sbx run -

# With packages
sbx run -p requests -p pandas 'import pandas; print(pandas.__version__)'

# With timeout and memory limits
sbx run -t 60 -m 512 long_script.py

# Enable network with domain allowlist
sbx run --network --allow-domain api.example.com fetch_data.py

# Expose ports (guest:8080 → host:dynamic)
sbx run --expose 8080 --json 'print("ready")' | jq '.exposed_ports[0].url'

# Expose with explicit host port (guest:8080 → host:3000)
sbx run --expose 8080:3000 --json 'print("ready")' | jq '.exposed_ports[0].external'

# Start HTTP server with port forwarding (runs until timeout)
sbx run -t 60 --expose 8080 'import http.server; http.server.test(port=8080, bind="0.0.0.0")'

# JSON output for scripting
sbx run --json 'print("test")' | jq .exit_code

# Environment variables
sbx run -e API_KEY=secret -e DEBUG=1 script.py

# Multiple sources (run concurrently)
sbx run 'print(1)' 'print(2)' script.py

# Multiple inline codes
sbx run -c 'print(1)' -c 'print(2)'
```
CLI Options:
| Option | Short | Description | Default |
|---|---|---|---|
| `--language` | `-l` | `python`, `javascript`, `raw` | auto-detect |
| `--code` | `-c` | Inline code (repeatable, alternative to positional) | - |
| `--package` | `-p` | Package to install (repeatable) | - |
| `--timeout` | `-t` | Timeout in seconds | 30 |
| `--memory` | `-m` | Memory in MB | 256 |
| `--env` | `-e` | Environment variable KEY=VALUE (repeatable) | - |
| `--network` | | Enable network access | false |
| `--allow-domain` | | Allowed domain (repeatable) | - |
| `--expose` | | Expose port INTERNAL[:EXTERNAL][/PROTOCOL] (repeatable) | - |
| `--json` | | JSON output | false |
| `--quiet` | `-q` | Suppress progress output | false |
| `--no-validation` | | Skip package allowlist validation | false |
| `--upload` | | Upload file LOCAL:GUEST (repeatable) | - |
| `--download` | | Download file GUEST:LOCAL or GUEST (repeatable) | - |
Python API
Basic Execution
```python
from exec_sandbox import Scheduler

async with Scheduler() as scheduler:
    result = await scheduler.run(
        code="print('Hello, World!')",
        language="python",  # or "javascript", "raw"
    )
    print(result.stdout)     # Hello, World!
    print(result.exit_code)  # 0
```
Sessions (Stateful Multi-Step)
Sessions keep a VM alive across multiple `exec()` calls — variables, imports, and state persist.
```python
from exec_sandbox import Scheduler

async with Scheduler() as scheduler:
    async with await scheduler.session(language="python") as session:
        await session.exec("import math")
        await session.exec("x = math.pi * 2")
        result = await session.exec("print(f'{x:.4f}')")
        print(result.stdout)       # 6.2832
        print(session.exec_count)  # 3
```
Sessions support all three languages:

```python
# JavaScript/TypeScript — variables and functions persist
async with await scheduler.session(language="javascript") as session:
    await session.exec("const greet = (name: string): string => `Hello, ${name}!`")
    result = await session.exec("console.log(greet('World'))")

# Shell (Bash) — env vars, cwd, and functions persist
async with await scheduler.session(language="raw") as session:
    await session.exec("cd /tmp && export MY_VAR=hello")
    result = await session.exec("echo $MY_VAR from $(pwd)")
```
Sessions auto-close after an idle timeout (default: 300s, configurable via `session_idle_timeout_seconds`).
File I/O
Sessions support reading, writing, and listing files inside the sandbox.
```python
from pathlib import Path
from exec_sandbox import Scheduler

async with Scheduler() as scheduler:
    async with await scheduler.session(language="python") as session:
        # Write a file into the sandbox
        await session.write_file("input.csv", b"name,score\nAlice,95\nBob,87")

        # Write from a local file
        await session.write_file("model.pkl", Path("./local_model.pkl"))

        # Execute code that reads input and writes output
        await session.exec("data = open('input.csv').read().upper()")
        await session.exec("open('output.csv', 'w').write(data)")

        # Read a file back from the sandbox
        await session.read_file("output.csv", destination=Path("./output.csv"))

        # List files in a directory
        files = await session.list_files("")  # sandbox root
        for f in files:
            print(f"{f.name} {'dir' if f.is_dir else f'{f.size}B'}")
```
CLI file I/O uses sessions under the hood:
```shell
# Upload a local file, run code, download the result
sbx run --upload ./local.csv:input.csv --download output.csv:./result.csv \
  -c "open('output.csv','w').write(open('input.csv').read().upper())"

# Download to ./output.csv (shorthand, no local path)
sbx run --download output.csv -c "open('output.csv','w').write('data')"
```
With Packages
First run installs and creates snapshot; subsequent runs restore in <400ms.
```python
async with Scheduler() as scheduler:
    result = await scheduler.run(
        code="import pandas; print(pandas.__version__)",
        language="python",
        packages=["pandas==2.2.0", "numpy==1.26.0"],
    )
    print(result.stdout)  # 2.2.0
```
Streaming Output
```python
async with Scheduler() as scheduler:
    result = await scheduler.run(
        code="for i in range(5): print(i)",
        language="python",
        on_stdout=lambda chunk: print(f"[OUT] {chunk}", end=""),
        on_stderr=lambda chunk: print(f"[ERR] {chunk}", end=""),
    )
```
Network Access
```python
async with Scheduler() as scheduler:
    result = await scheduler.run(
        code="import urllib.request; print(urllib.request.urlopen('https://httpbin.org/ip').read())",
        language="python",
        allow_network=True,
        allowed_domains=["httpbin.org"],  # Domain allowlist
    )
```
Port Forwarding
Expose VM ports to the host for health checks, API testing, or service validation.
```python
from exec_sandbox import Scheduler, PortMapping

async with Scheduler() as scheduler:
    # Port forwarding without internet (isolated)
    result = await scheduler.run(
        code="print('server ready')",
        expose_ports=[PortMapping(internal=8080, external=3000)],  # Guest:8080 → Host:3000
        allow_network=False,  # No outbound internet
    )
    print(result.exposed_ports[0].url)  # http://127.0.0.1:3000

    # Dynamic port allocation (OS assigns external port)
    result = await scheduler.run(
        code="print('server ready')",
        expose_ports=[8080],  # external=None → OS assigns port
    )
    print(result.exposed_ports[0].external)  # e.g., 52341

    # Long-running server with port forwarding
    result = await scheduler.run(
        code="import http.server; http.server.test(port=8080, bind='0.0.0.0')",
        expose_ports=[PortMapping(internal=8080)],
        timeout_seconds=60,  # Server runs until timeout
    )
```
Security: Port forwarding works independently of internet access. When `allow_network=False`, guest VMs cannot initiate outbound connections (all outbound TCP/UDP blocked), but host-to-guest port forwarding still works.
Production Configuration
```python
from exec_sandbox import Scheduler, SchedulerConfig

config = SchedulerConfig(
    warm_pool_size=1,            # Pre-started VMs per language (0 disables)
    default_memory_mb=512,       # Per-VM memory
    default_timeout_seconds=60,  # Execution timeout
    s3_bucket="my-snapshots",    # Remote cache for package snapshots
    s3_region="us-east-1",
)

async with Scheduler(config) as scheduler:
    result = await scheduler.run(...)
```
Error Handling
```python
from exec_sandbox import Scheduler, VmTimeoutError, PackageNotAllowedError, SandboxError

async with Scheduler() as scheduler:
    try:
        result = await scheduler.run(code="while True: pass", language="python", timeout_seconds=5)
    except VmTimeoutError:
        print("Execution timed out")
    except PackageNotAllowedError as e:
        print(f"Package not in allowlist: {e}")
    except SandboxError as e:
        print(f"Sandbox error: {e}")
```
Asset Downloads
exec-sandbox requires VM images (kernel, initramfs, qcow2) and binaries (gvproxy-wrapper) to run. These assets are automatically downloaded from GitHub Releases on first use.
How it works
- On first `Scheduler` initialization, exec-sandbox checks if assets exist in the cache directory
- If missing, it queries the GitHub Releases API for the matching version (`v{__version__}`)
- Assets are downloaded over HTTPS, verified against SHA256 checksums (provided by GitHub API), and decompressed
- Subsequent runs use the cached assets (no re-download)
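The verification step above amounts to hashing each downloaded file and comparing the digest to the published checksum. A minimal sketch using the stdlib `hashlib` — `verify_sha256` and the file name are hypothetical, not exec-sandbox internals:

```python
import hashlib
import tempfile
from pathlib import Path

def verify_sha256(path: Path, expected_hex: str) -> bool:
    """Return True if the file's SHA256 digest matches expected_hex."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest() == expected_hex

# Self-contained demo against a known digest
with tempfile.TemporaryDirectory() as d:
    p = Path(d) / "asset.bin"
    p.write_bytes(b"hello")
    print(verify_sha256(p, hashlib.sha256(b"hello").hexdigest()))  # True
```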
Cache locations
| Platform | Location |
|---|---|
| macOS | `~/Library/Caches/exec-sandbox/` |
| Linux | `~/.cache/exec-sandbox/` (or `$XDG_CACHE_HOME/exec-sandbox/`) |
Environment variables
| Variable | Description |
|---|---|
| `EXEC_SANDBOX_CACHE_DIR` | Override cache directory |
| `EXEC_SANDBOX_OFFLINE` | Set to `1` to disable auto-download (fail if assets missing) |
| `EXEC_SANDBOX_ASSET_VERSION` | Force a specific release version |
Pre-downloading for offline use
Use `sbx prefetch` to download all assets ahead of time:

```shell
sbx prefetch                 # Download all assets for current arch
sbx prefetch --arch aarch64  # Cross-arch prefetch
sbx prefetch -q              # Quiet mode (CI/Docker)
```
Dockerfile example:
```dockerfile
FROM ghcr.io/astral-sh/uv:python3.12-bookworm
RUN uv pip install --system exec-sandbox
RUN sbx prefetch -q
ENV EXEC_SANDBOX_OFFLINE=1
# Assets cached, no network needed at runtime
```
Security
Assets are verified against SHA256 checksums and built with provenance attestations.
Documentation
- QEMU Documentation - Virtual machine emulator
- KVM - Linux hardware virtualization
- HVF - macOS hardware virtualization (Hypervisor.framework)
- cgroups v2 - Linux resource limits
- seccomp - System call filtering
Configuration
| Parameter | Default | Description |
|---|---|---|
| `warm_pool_size` | 0 | Pre-started VMs per language (Python, JavaScript, Raw). Set >0 to enable |
| `default_memory_mb` | 256 | VM memory (128 MB minimum, no upper bound). Effective ~25% higher with memory compression (zram) |
| `default_timeout_seconds` | 30 | Execution timeout (1-300s) |
| `session_idle_timeout_seconds` | 300 | Session idle timeout (10-3600s). Auto-closes inactive sessions |
| `images_dir` | auto | VM images directory |
| `snapshot_cache_dir` | `/tmp/exec-sandbox-cache` | Local snapshot cache |
| `s3_bucket` | None | S3 bucket for remote snapshot cache |
| `s3_region` | `us-east-1` | AWS region |
| `s3_prefix` | `snapshots/` | Prefix for S3 keys |
| `max_concurrent_s3_uploads` | 4 | Max concurrent background S3 uploads (1-16) |
| `memory_overcommit_ratio` | 1.5 | Memory overcommit ratio. Budget = host_total × (1 - reserve) × ratio |
| `cpu_overcommit_ratio` | 4.0 | CPU overcommit ratio. Budget = host_cpus × ratio |
| `host_memory_reserve_ratio` | 0.1 | Fraction of host memory reserved for OS (e.g., 0.1 = 10%) |
| `resource_monitor_interval_seconds` | 5.0 | Interval between resource monitor ticks (1-60s) |
| `enable_package_validation` | True | Validate against top 10k packages (PyPI for Python, npm for JavaScript) |
| `auto_download_assets` | True | Auto-download VM images from GitHub Releases |

Environment variables: `EXEC_SANDBOX_IMAGES_DIR`, `EXEC_SANDBOX_CACHE_DIR`, `EXEC_SANDBOX_OFFLINE`, etc.
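As a worked example of the overcommit budget formulas above (host figures here are hypothetical; the ratios are the documented defaults):

```python
# memory budget = host_total × (1 - reserve) × memory_overcommit_ratio
# cpu budget    = host_cpus × cpu_overcommit_ratio

host_total_mb = 16384  # assumed 16 GB host
host_cpus = 8          # assumed
reserve = 0.1          # host_memory_reserve_ratio default
mem_ratio = 1.5        # memory_overcommit_ratio default
cpu_ratio = 4.0        # cpu_overcommit_ratio default

memory_budget_mb = host_total_mb * (1 - reserve) * mem_ratio
cpu_budget = host_cpus * cpu_ratio

print(round(memory_budget_mb))        # 22118
print(round(memory_budget_mb / 256))  # 86 default-sized (256 MB) VMs
print(cpu_budget)                     # 32.0
```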
Memory Optimization
VMs include automatic memory optimization (no configuration required):
- Compressed swap (zram) - ~25% more usable memory via lz4 compression
- Memory reclamation (virtio-balloon) - 70-90% smaller snapshots
Memory Architecture
Guest RAM is a fixed budget shared between the kernel, userspace processes, and tmpfs mounts. tmpfs is demand-allocated — writing 10 MB of files consumes ~10 MB of the VM's memory budget.
```
Guest RAM (default 256 MB)
├── Kernel + slab caches       (~20 MB fixed)
├── Userspace (code execution) (variable)
├── tmpfs mounts               (on demand)
│   ├── /home/user  50% of RAM (no fixed cap) — user files, packages
│   ├── /tmp        128 MB cap — pip/uv wheel builds, temp files
│   └── /dev/shm    64 MB cap  — POSIX shared memory
└── zram compressed swap       (~25% effective bonus)
```
| Mount | Size | Purpose |
|---|---|---|
| `/home/user` | 50% of RAM | Writable home dir — installed packages, user scripts, data files |
| `/tmp` | 128 MB | Scratch space for package managers (wheel builds), temp files |
| `/dev/shm` | 64 MB | POSIX shared memory segments (Python multiprocessing semaphores) |
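Since tmpfs writes draw from the same fixed budget, a quick back-of-the-envelope check (illustrative numbers only) shows how file writes shrink userspace headroom while zram adds effective capacity:

```python
ram_mb = 256          # default VM memory
kernel_mb = 20        # ~fixed kernel + slab (from the diagram above)
tmpfs_files_mb = 10   # e.g. 10 MB written to /home/user
zram_bonus = 0.25     # ~25% effective gain from compressed swap (approximate)

userspace_headroom_mb = ram_mb - kernel_mb - tmpfs_files_mb
effective_capacity_mb = ram_mb * (1 + zram_bonus)

print(userspace_headroom_mb)  # 226
print(effective_capacity_mb)  # 320.0
```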
Execution Result
| Field | Type | Description |
|---|---|---|
| `stdout` | str | Captured output (max 1MB) |
| `stderr` | str | Captured errors (max 100KB) |
| `exit_code` | int | Process exit code (0 = success, 128+N = killed by signal N) |
| `execution_time_ms` | int | Duration reported by VM |
| `external_cpu_time_ms` | int | CPU time measured by host |
| `external_memory_peak_mb` | int | Peak memory measured by host |
| `timing.setup_ms` | int | Resource setup (filesystem, limits, network) |
| `timing.boot_ms` | int | VM boot time |
| `timing.execute_ms` | int | Code execution |
| `timing.total_ms` | int | End-to-end time |
| `warm_pool_hit` | bool | Whether a pre-started VM was used |
| `exposed_ports` | list | Port mappings with `.internal`, `.external`, `.host`, `.url` |
Exit codes follow Unix conventions: 0 = success, >128 = killed by signal N where N = exit_code - 128 (e.g., 137 = SIGKILL, 139 = SIGSEGV), -1 = internal error (could not retrieve status), other non-zero = program error.
```python
result = await scheduler.run(code="...", language="python")

if result.exit_code == 0:
    pass  # Success
elif result.exit_code > 128:
    signal_num = result.exit_code - 128  # e.g., 9 for SIGKILL
elif result.exit_code == -1:
    pass  # Internal error (see result.stderr)
else:
    pass  # Program exited with error
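The decision ladder above can be wrapped in a small helper. `classify_exit` is a hypothetical convenience function, not part of the exec-sandbox API; it uses the stdlib `signal` module to name the signal:

```python
import signal

def classify_exit(exit_code: int) -> str:
    """Classify an ExecutionResult-style exit code per the Unix convention above."""
    if exit_code == 0:
        return "success"
    if exit_code == -1:
        return "internal error"
    if exit_code > 128:
        name = signal.Signals(exit_code - 128).name  # e.g. SIGKILL
        return f"killed by {name}"
    return "program error"

print(classify_exit(0))    # success
print(classify_exit(137))  # killed by SIGKILL
print(classify_exit(139))  # killed by SIGSEGV
print(classify_exit(2))    # program error
```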
FileInfo
Returned by `Session.list_files()`.

| Field | Type | Description |
|---|---|---|
| `name` | str | File or directory name |
| `is_dir` | bool | True if entry is a directory |
| `size` | int | File size in bytes (0 for directories) |
Exceptions
| Exception | Description |
|---|---|
| `SandboxError` | Base exception for all sandbox errors |
| `TransientError` | Retryable errors — may succeed on retry |
| `PermanentError` | Non-retryable errors |
| `VmTimeoutError` | VM boot timed out |
| `VmCapacityError` | VM pool at capacity |
| `VmConfigError` | Invalid VM configuration |
| `SessionClosedError` | Session already closed |
| `CommunicationError` | Guest communication failed |
| `GuestAgentError` | Guest agent returned an error |
| `PackageNotAllowedError` | Package not in allowlist |
| `SnapshotError` | Snapshot operation failed |
| `EnvVarValidationError` | Environment variable validation failed |
| `SocketAuthError` | Socket peer authentication failed |
| `SandboxDependencyError` | Optional dependency missing (e.g., aioboto3) |
| `AssetError` | Asset download/verification failed |
Session Resilience
Sessions survive user code failures. Only VM-level communication errors close a session.
| Failure | Exit Code | Session | State | Next exec() |
|---|---|---|---|---|
| Exception (ValueError, etc.) | 1 | Alive | Preserved | Works, state intact |
| `sys.exit(n)` | n | Alive | Preserved | Works, state intact |
| Syntax error | 1 | Alive | Preserved | Works, state intact |
| `os._exit(n)` | n | Alive | Reset | Works, fresh REPL |
| Signal (SIGKILL, OOM kill) | 128 + signal | Alive | Reset | Works, fresh REPL |
| Timeout | -1 | Alive | Reset | Works, fresh REPL |
| VM communication failure | N/A | Closed | Lost | `SessionClosedError` |
Pitfalls
```python
# run() creates a fresh VM each time - state doesn't persist across calls
result1 = await scheduler.run("x = 42", language="python")
result2 = await scheduler.run("print(x)", language="python")  # NameError!

# Fix: use sessions for multi-step stateful execution
async with await scheduler.session(language="python") as session:
    await session.exec("x = 42")
    result = await session.exec("print(x)")  # Works! x persists

# Pre-started VMs (warm pool) only work without packages
config = SchedulerConfig(warm_pool_size=1)
await scheduler.run(code="...", packages=["pandas==2.2.0"])  # Bypasses warm pool, fresh start (400ms)
await scheduler.run(code="...")  # Uses warm pool (1-2ms)

# Version specifiers are required (security + caching)
packages=["pandas==2.2.0"]  # Valid, cacheable
packages=["pandas"]         # PackageNotAllowedError! Must pin version

# Streaming callbacks must be fast (they block async execution)
on_stdout=lambda chunk: time.sleep(1)         # Blocks!
on_stdout=lambda chunk: buffer.append(chunk)  # Fast

# Memory overhead: pre-started VMs use warm_pool_size × 3 languages × 192MB
# warm_pool_size=5 → 5 VMs/lang × 3 × 192MB = 2.88GB for warm pool alone

# Memory can exceed the configured limit due to compressed swap
default_memory_mb=256  # Code can actually use ~280-320MB thanks to compression
# Don't rely on memory limits for security - use timeouts for runaway allocations

# Network without domain restrictions is risky
allow_network=True  # Full internet access
allow_network=True, allowed_domains=["api.example.com"]  # Controlled

# Port forwarding binds to localhost only
expose_ports=[8080]  # Binds to 127.0.0.1, not 0.0.0.0
# If you need external access, use a reverse proxy on the host

# multiprocessing.Pool works, but a single vCPU means no CPU-bound speedup
from multiprocessing import Pool
Pool(2).map(lambda x: x**2, [1, 2, 3])  # Works (cloudpickle handles lambda serialization)
# For CPU-bound parallelism, run multiple VMs via scheduler.run() concurrently instead

# Background processes survive across session exec() calls — state accumulates
async with await scheduler.session(language="python") as session:
    await session.exec("import subprocess; subprocess.Popen(['sleep', '300'])")
    await session.exec("import subprocess; subprocess.Popen(['sleep', '300'])")
    # Both sleep processes are still running! VM process limit (RLIMIT_NPROC=1024) prevents unbounded growth
    # All processes are cleaned up when session.close() destroys the VM
```
Limits
| Resource | Limit |
|---|---|
| Max code size | 1MB |
| Max stdout | 1MB |
| Max stderr | 100KB |
| Max packages | 50 |
| Max env vars | 100 |
| Max exposed ports | 10 |
| Max file size (I/O) | 500MB |
| Max file path length | 4096 bytes (255 per component) |
| Execution timeout | 1-300s |
| VM memory | 128MB minimum (no upper bound) |
| Max concurrent VMs | Resource-aware (auto-computed from host memory + CPU) |
Security Architecture
| Layer | Technology | Protection |
|---|---|---|
| 1 | Hardware virtualization (KVM/HVF) | CPU isolation enforced by hardware |
| 2 | Custom hardened kernel | Modules disabled at compile time, io_uring compiled out, slab/memory hardening, ~300 subsystems removed |
| 3 | Unprivileged QEMU | No root privileges, minimal exposure |
| 4 | Non-root REPL (UID 1000) | Blocks mount, ptrace, raw sockets, kernel modules |
| 5 | System call filtering (seccomp) | Blocks unauthorized OS calls |
| 6 | Resource limits (cgroups v2) | Memory, CPU, process limits |
| 7 | Process isolation (namespaces) | Separate process, network, filesystem views |
| 8 | Security policies (AppArmor/SELinux) | When available |
| 9 | Socket authentication (SO_PEERCRED/LOCAL_PEERCRED) | Verifies QEMU process identity |
Guarantees:
- Fresh VM per `run()`, destroyed immediately after. Sessions reuse the same VM across `exec()` calls (same isolation, persistent state)
- Network disabled by default - requires explicit `allow_network=True`
- Domain allowlisting with 3-layer outbound filtering — DNS resolution blocked for non-allowed domains, TLS SNI inspection on port 443, and DNS cross-validation to prevent SNI spoofing
- Package validation - only top 10k Python/JavaScript packages allowed by default
- Port forwarding isolation - when `expose_ports` is used without `allow_network`, the guest cannot initiate any outbound connections (all outbound TCP/UDP blocked)
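For intuition, the DNS-layer allowlist check can be approximated by exact-plus-subdomain suffix matching. This is a sketch only — the real filter also inspects TLS SNI and cross-validates DNS answers, and its exact matching rules (e.g. whether subdomains are allowed) may differ:

```python
def domain_allowed(host: str, allowlist: list[str]) -> bool:
    """Illustrative allowlist check: exact match or subdomain of an allowed domain."""
    host = host.lower().rstrip(".")
    return any(host == d or host.endswith("." + d) for d in allowlist)

allow = ["api.example.com"]
print(domain_allowed("api.example.com", allow))                   # True
print(domain_allowed("v2.api.example.com", allow))                # True
print(domain_allowed("api.example.com.attacker.io", allow))       # False (suffix trick blocked)
```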
Requirements
| Requirement | Supported |
|---|---|
| Python | 3.12, 3.13, 3.14 (including free-threaded) |
| Linux | x64, arm64 |
| macOS | x64, arm64 |
| QEMU | 8.0+ |
| Hardware acceleration | KVM (Linux) or HVF (macOS) recommended, 10-50x faster |
Verify hardware acceleration is available:
```shell
ls /dev/kvm             # Linux
sysctl kern.hv_support  # macOS
```
Without hardware acceleration, QEMU uses software emulation (TCG), which is 10-50x slower.
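The same check can be scripted, e.g. before deciding on a warm-pool size. A best-effort sketch, not exec-sandbox's own detection logic:

```python
import os
import platform
import subprocess

def hw_accel_available() -> bool:
    """Best-effort check for KVM (Linux) or HVF (macOS)."""
    system = platform.system()
    if system == "Linux":
        return os.path.exists("/dev/kvm")
    if system == "Darwin":
        out = subprocess.run(
            ["sysctl", "-n", "kern.hv_support"],
            capture_output=True, text=True, check=False,
        )
        return out.stdout.strip() == "1"
    return False

print(hw_accel_available())
```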
Linux Setup (Optional Security Hardening)
For enhanced security on Linux, exec-sandbox can run QEMU as an unprivileged qemu-vm user. This isolates the VM process from your user account.
```shell
# Create qemu-vm system user
sudo useradd --system --no-create-home --shell /usr/sbin/nologin qemu-vm

# Add qemu-vm to kvm group (for hardware acceleration)
sudo usermod -aG kvm qemu-vm

# Add your user to qemu-vm group (for socket access)
sudo usermod -aG qemu-vm $USER

# Re-login or activate group membership
newgrp qemu-vm
```
Why is this needed? When the `qemu-vm` user exists, exec-sandbox runs QEMU as that user for process isolation. The host needs to connect to QEMU's Unix sockets (0660 permissions), which requires group membership. This follows the libvirt security model.
If the `qemu-vm` user doesn't exist, exec-sandbox runs QEMU as your user (no additional setup required, but less isolated).
VM Images
Pre-built images from GitHub Releases:
| Image | Runtime | Package Manager | Size | Description |
|---|---|---|---|---|
| `python-3.14-base` | Python 3.14 | uv | ~140MB | Full Python environment with C extension support |
| `node-1.3-base` | Bun 1.3 | bun | ~57MB | Fast JavaScript/TypeScript runtime with Node.js compatibility |
| `raw-base` | Bash | None | ~15MB | Shell scripts and custom runtimes |
All images are based on Alpine Linux 3.23 (Linux 6.18 LTS, musl libc) and include common tools for AI agent workflows.
Common Tools (all images)
| Tool | Purpose |
|---|---|
| `git` | Version control, clone repositories |
| `curl` | HTTP requests, download files |
| `jq` | JSON processing |
| `bash` | Shell scripting |
| `coreutils` | Standard Unix utilities (ls, cp, mv, etc.) |
| `tar`, `gzip`, `unzip` | Archive extraction |
| `file` | File type detection |
Python Image
| Component | Version | Notes |
|---|---|---|
| Python | 3.14 | python-build-standalone (musl) |
| uv | 0.9+ | 10-100x faster than pip (docs) |
| gcc, musl-dev | Alpine | For C extensions (numpy, pandas, etc.) |
| cloudpickle | 3.1 | Serialization for multiprocessing in REPL (docs) |
Usage notes:
- Use `uv pip install` instead of `pip install` (pip not included)
- Python 3.14 includes t-strings, deferred annotations, free-threading support
- `multiprocessing.Pool` works out of the box — cloudpickle handles serialization of REPL-defined functions, lambdas, and closures. A single vCPU means no CPU-bound speedup, but I/O-bound parallelism and `Pool`-based APIs work correctly
JavaScript Image
| Component | Version | Notes |
|---|---|---|
| Bun | 1.3 | Runtime, bundler, package manager (docs) |
Usage notes:
- Bun is a Node.js-compatible runtime (not Node.js itself)
- Built-in TypeScript/JSX support, no transpilation needed
- Use `bun install` for packages, `bun run` for scripts
- Near-complete Node.js API compatibility
Raw Image
Minimal Alpine Linux with common tools only. Use for:
- Shell script execution (`language="raw"`) — runs under GNU Bash, full bash syntax supported
- Custom runtime installation
- Lightweight workloads
Build from source:

```shell
./scripts/build-images.sh
# Output: ./images/dist/python-3.14-base.qcow2, ./images/dist/node-1.3-base.qcow2, ./images/dist/raw-base.qcow2
```
Security
- Security Policy - Vulnerability reporting
- Dependency list (SBOM) - Full list of included software, attached to releases
Contributing
Contributions welcome! Please open an issue first to discuss changes.
```shell
make install  # Setup environment
make test     # Run tests
make lint     # Format and lint
```
License
File details

Details for the file exec_sandbox-0.15.1.tar.gz.

File metadata

- Download URL: exec_sandbox-0.15.1.tar.gz
- Size: 868.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `bb204d65ce5df9a193048aa7296dceea7d5ee1d88e44c920a553af841acf208a` |
| MD5 | `dd0b6d95ead48dc93961053a4c9986cb` |
| BLAKE2b-256 | `1bd876bf1fd89233ab4f3e62ffa54822a9e86f17131a94634927e81c702f6fe4` |
Provenance

The following attestation bundles were made for exec_sandbox-0.15.1.tar.gz:

Publisher: release.yml on dualeai/exec-sandbox

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: exec_sandbox-0.15.1.tar.gz
- Subject digest: bb204d65ce5df9a193048aa7296dceea7d5ee1d88e44c920a553af841acf208a
- Sigstore transparency entry: 989287705
- Permalink: dualeai/exec-sandbox@3890b880f1f2a7c1672daaf99e540cab54448e25
- Branch / Tag: refs/tags/v0.15.1
- Owner: https://github.com/dualeai
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@3890b880f1f2a7c1672daaf99e540cab54448e25
- Trigger Event: release
File details

Details for the file exec_sandbox-0.15.1-py3-none-any.whl.

File metadata

- Download URL: exec_sandbox-0.15.1-py3-none-any.whl
- Size: 334.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `fa640d2ae75cd9eb36e1b35575e17c4eeb5db1cb87505172f6c106980db3da6c` |
| MD5 | `b7ebab224814a5070fdeb564a1675701` |
| BLAKE2b-256 | `8a28c3fd1176fe43fd71514d4d04c99351244c0c9fa889128c2c560e5c73faa1` |
Provenance

The following attestation bundles were made for exec_sandbox-0.15.1-py3-none-any.whl:

Publisher: release.yml on dualeai/exec-sandbox

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: exec_sandbox-0.15.1-py3-none-any.whl
- Subject digest: fa640d2ae75cd9eb36e1b35575e17c4eeb5db1cb87505172f6c106980db3da6c
- Sigstore transparency entry: 989287744
- Permalink: dualeai/exec-sandbox@3890b880f1f2a7c1672daaf99e540cab54448e25
- Branch / Tag: refs/tags/v0.15.1
- Owner: https://github.com/dualeai
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@3890b880f1f2a7c1672daaf99e540cab54448e25
- Trigger Event: release