Podkit - Simple Container Management Library
A Python library for sandboxed execution in Docker containers with backend abstraction
Features
Podkit implements a clean three-layer architecture for flexible container management:
- Layer 1 (Backend) provides runtime-agnostic infrastructure operations for Docker/Kubernetes, with image management and workload execution.
- Layer 2 (ContainerManager) bridges infrastructure and application logic, with container lifecycle management, project-specific mounting strategies, and host-to-container path translation.
- Layer 3 (SessionManager) delivers the user-facing API, with session lifecycle tracking, automatic activity monitoring, and cleanup of expired sessions.
This separation enables backend portability (swap Docker for Podman or Kubernetes without touching business logic), customizable project configurations (different mounting strategies per project), and independent testing of each layer.
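This layering can be sketched in a few lines of plain Python. The classes below are illustrative stand-ins, not podkit's actual API; the point is that swapping the backend implementation leaves the upper layers untouched.

```python
# Illustrative sketch of the three-layer pattern (NOT podkit's real classes).
from abc import ABC, abstractmethod

class Backend(ABC):  # Layer 1: runtime-agnostic infrastructure
    @abstractmethod
    def run(self, image: str, command: list[str]) -> str: ...

class EchoBackend(Backend):  # stand-in for a Docker/Podman/Kubernetes backend
    def run(self, image, command):
        return f"[{image}] ran: {' '.join(command)}"

class ContainerManager:  # Layer 2: bridges backend and application logic
    def __init__(self, backend: Backend):
        self.backend = backend

class SessionManager:  # Layer 3: user-facing API
    def __init__(self, manager: ContainerManager):
        self.manager = manager
    def execute(self, image, command):
        return self.manager.backend.run(image, command)

# Swapping EchoBackend for any other Backend subclass changes nothing above Layer 1.
sm = SessionManager(ContainerManager(EchoBackend()))
print(sm.execute("python:3.11-alpine", ["echo", "hi"]))
```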
Example 1 (the simplest one)
# Auto-creates a session OR reconnects to an existing running/exited container
# (the container auto-stops after 1 minute)
from podkit import get_docker_session

result = get_docker_session(user_id="bob", session_id="123").execute_command("pwd")
print(result.stdout)

# No auto-removal in this case, only auto-stopping!
# You don't need to close the session if you expect to run more commands in it later.
# Otherwise, close the session manually, as in Example 3.
Example 2 (simple with auto-cleanup)
# Auto-cleanup with a context manager (the container is removed on exit, slower than Example 1)
from podkit import get_docker_session

with get_docker_session(user_id="bob", session_id="123") as session:
    result = session.execute_command("pwd")
    print(result.stdout)

# Perfect for one-time execution: run the command and clean up resources right away
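Under the hood, a context-managed session guarantees cleanup via `__exit__`. A hypothetical sketch of the pattern (not podkit's actual implementation):

```python
# Hypothetical sketch: a session wrapper that cleans up when the block exits,
# even if the body raised an exception. Podkit's internals may differ.
class ManagedSession:
    def __init__(self, name):
        self.name = name
        self.closed = False
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc, tb):
        self.close()   # runs whether or not the body raised
        return False   # don't suppress exceptions
    def close(self):
        self.closed = True  # stand-in for stopping and removing the container

with ManagedSession("demo") as s:
    pass
print(s.closed)  # True: cleanup ran on block exit
```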
Example 3 (with port exposure)
# Expose ports from the container to the host
from podkit import get_docker_session
from podkit.core.models import ContainerConfig

# Create a config with exposed ports
config = ContainerConfig(
    image="nginx:latest",
    ports=[80, 443],  # Expose nginx on host ports 80 and 443
)

session = get_docker_session(
    user_id="bob",
    session_id="web-server",
    config=config,
)

# nginx is now accessible at http://localhost:80 and https://localhost:443
result = session.execute_command(["nginx", "-g", "daemon off;"])
session.close()
Example 4 (multiple mounts with read-only support)
# Multiple volume mounts: persistent workspace + read-only shared data
from pathlib import Path
from podkit import get_docker_session
from podkit.core.models import ContainerConfig, Mount

# Configure the container with multiple mounts
config = ContainerConfig(
    image="python:3.11-alpine",
    volumes=[
        # Read-only mount for shared datasets (prevents accidental modifications)
        Mount(type="bind", source=Path("/shared/datasets"), target=Path("/data"), read_only=True),
        # Read-only mount for shared configuration files
        Mount(type="bind", source=Path("/shared/configs"), target=Path("/etc/configs"), read_only=True),
    ],
)

session = get_docker_session(
    user_id="bob",
    session_id="123",
    workspace="/app/data/workspace",
    workspace_host="./data/workspace",  # The workspace remains read-write for outputs
    config=config,
)

# Read from shared data (read-only, safe from accidental changes)
result = session.execute_command(["cat", "/data/dataset.csv"])
print(result.stdout)

# Write to the workspace (read-write, files persist on the host)
session.write_file(Path("/workspace/results.txt"), "processing complete")
session.close()
Example 5 (production-ready configuration)
# Comprehensive example: combines networks, resource limits, read-only mounts, and environment
from pathlib import Path
from podkit.core.models import ContainerConfig, Mount
from podkit import get_docker_session

# Configure the container with multiple production features
config = ContainerConfig(
    image="python:3.11-alpine",
    # Resource limits
    cpu_limit=2.0,        # 2 CPU cores max
    memory_limit="1g",    # 1 GB RAM max
    # Network configuration (e.g., for service-to-service communication)
    networks=["execution-network"],
    # Read-only mounts for shared data and credentials
    volumes=[
        Mount(type="bind", source=Path("/shared/datasets"), target=Path("/data"), read_only=True),
        Mount(type="bind", source=Path("/shared/credentials"), target=Path("/creds"), read_only=True),
    ],
    # Environment variables
    environment={
        "ENV": "production",
        "LOG_LEVEL": "INFO",
        "DATABASE_URL": "postgresql://db.execution-network:5432/tasks",
    },
    # Custom startup command (optional - defaults to sleep with a timeout)
    command=["tail", "-f", "/dev/null"],  # Keep the container alive indefinitely
)

session = get_docker_session(
    user_id="bob",
    session_id="task-123",
    workspace="/app/workspaces",
    workspace_host="./workspaces",
    config=config,
)

# Secure, resource-controlled execution with access to shared data
result = session.execute_command([
    "python", "process_data.py",
    "--input", "/data/dataset.csv",
    "--credentials", "/creds/api_key.txt",
])

# Write results to the workspace (read-write, persists on the host)
session.write_file(Path("/workspace/results.json"), result.stdout)
session.close()
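The memory_limit string above follows Docker's size-suffix convention ("512m", "1g"). A hypothetical helper, not part of podkit, showing how such strings map to byte counts:

```python
# Hypothetical illustration of Docker-style memory size strings;
# podkit passes memory_limit through to the container runtime as-is.
UNITS = {"b": 1, "k": 1024, "m": 1024**2, "g": 1024**3}

def parse_memory_limit(limit: str) -> int:
    """Convert a size string like '1g' or '512m' to bytes."""
    limit = limit.strip().lower()
    if limit[-1].isdigit():          # plain byte count, e.g. "1048576"
        return int(limit)
    value, unit = limit[:-1], limit[-1]
    return int(float(value) * UNITS[unit])

print(parse_memory_limit("1g"))    # 1073741824
print(parse_memory_limit("512m"))  # 536870912
```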
Example 6 (full control - for advanced use cases)
Note: This example shows the low-level API for maximum control. For most use cases, use get_docker_session() instead.
from pathlib import Path
from podkit.backends.docker import DockerBackend
from podkit.core.manager import BaseContainerManager
from podkit.core.models import ContainerConfig
from podkit.core.session import BaseSessionManager
from podkit.monitors.health import ContainerHealthMonitor
from podkit.utils.paths import get_workspace_path, write_to_mounted_path

# You must provide a concrete ContainerManager implementation:
# BaseContainerManager requires implementing get_mounts() and write_file()
class MyContainerManager(BaseContainerManager):
    """Custom container manager with project-specific mount logic."""

    def get_mounts(self, user_id: str, session_id: str, config: ContainerConfig):
        """Define exactly how to mount volumes."""
        workspace_path = get_workspace_path(self.workspace_base, user_id, session_id)
        workspace_path.mkdir(parents=True, exist_ok=True)
        return [{
            "Type": "bind",
            "Source": str(workspace_path),
            "Target": "/workspace",
        }]

    def write_file(self, container_id, container_path, content, user_id, session_id):
        """Write a file to the mounted filesystem (persists on the host)."""
        return write_to_mounted_path(
            container_path,
            content,
            lambda path: self.to_host_path(path, user_id, session_id),
        )

backend = DockerBackend()
backend.connect()

container_manager = MyContainerManager(
    backend=backend,
    container_prefix="podkit",
    workspace_base=Path("/tmp/podkit_workspace"),
)

# Set up health monitoring for production deployments (optional but recommended).
# The monitor runs in a background thread and provides automatic recovery.
health_monitor = ContainerHealthMonitor(
    container_manager=container_manager,
    check_interval=30,  # Check container health every 30 seconds
    log_lines=50,       # Capture the last 50 log lines for failed containers
)

# Pass health_monitor to the session manager - it registers a handler and starts automatically
session_manager = BaseSessionManager(
    container_manager=container_manager,
    default_image="python:3.11-alpine",
    health_monitor=health_monitor,  # Auto-starts monitoring with a recovery handler
)

# The health monitor now runs in the background, providing:
# - Automatic container recovery (restart if possible, recreate if needed)
# - Session cleanup (removes expired sessions)
# - Smart failure handling (marks sessions for recreation on next use)

# Configuration with auto-shutdown (entrypoint=None, the default behavior):
# the container runs for 5 minutes, then auto-exits (via a sleep command)
sandbox_config = ContainerConfig(
    image="python:3.11-alpine",
    container_lifetime_seconds=300,  # Container auto-exits after 5 minutes
    cpu_limit=1.0,
    memory_limit="512m",
    environment={
        "PYTHONUNBUFFERED": "1",
        "LOG_LEVEL": "DEBUG",
    },
)

session = session_manager.create_session(
    user_id="user",
    session_id="session",
    config=sandbox_config,
)

# Execute commands - if the container has exited (timeout), it auto-restarts
result = session_manager.execute_command(
    user_id="user",
    session_id="session",
    command=["sh", "-c", "echo 'Hello'"],
)
print(result.stdout)

session_manager.write_file(
    user_id="user",
    session_id="session",
    container_path=Path("/workspace/file.txt"),
    content="Hello from podkit",
)

# Configuration without auto-shutdown (an explicit entrypoint disables it).
# With entrypoint=[], the container uses "sleep infinity" and runs until manually closed.
# Note: container_lifetime_seconds is IGNORED when an explicit entrypoint is set.
no_timeout_config = ContainerConfig(
    image="python:3.11-alpine",
    entrypoint=[],  # Explicit empty entrypoint = sleep infinity, no auto-shutdown
)

session2 = session_manager.create_session(
    user_id="user2",
    session_id="session2",
    config=no_timeout_config,
)

# This container runs indefinitely until manually closed (below).
# The session may still expire due to inactivity (controlled by session_inactivity_timeout_seconds).

# Cleanup
session_manager.close_session("user", "session")
session_manager.close_session("user2", "session2")
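The get_mounts()/to_host_path() pair in this example boils down to mirroring paths between the container and the host. A standalone sketch of that translation, assuming the one-workspace-directory-per-user/session layout used above (hypothetical helper, not podkit's actual function):

```python
from pathlib import Path

# Hypothetical mirror of the mount logic above: /workspace inside the
# container maps to <workspace_base>/<user_id>/<session_id> on the host.
def to_host_path(container_path: Path, workspace_base: Path,
                 user_id: str, session_id: str) -> Path:
    relative = container_path.relative_to("/workspace")
    return workspace_base / user_id / session_id / relative

host = to_host_path(Path("/workspace/file.txt"),
                    Path("/tmp/podkit_workspace"), "user", "session")
print(host)  # /tmp/podkit_workspace/user/session/file.txt
```

Writing through the host-side path is why files created with write_file() persist after the container is removed.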
Container Image Requirements
When using ProcessManager (podkit/processes/manager.py) to manage background processes, your container images must include specific system utilities:
Required Dependencies
- procps: provides the full-featured ps command with process state inspection; used for checking process status and detecting zombie processes. Note: BusyBox ps (the default in Alpine) doesn't support the required flags.
- coreutils: standard Unix utilities (mkdir, cat, tail, etc.)
- sh: shell for executing commands (typically pre-installed)
Optional Dependencies
- lsof: enables automatic port detection for running processes; if missing, port detection is gracefully skipped.
Installation Examples
Alpine Linux:
RUN apk add --no-cache procps coreutils lsof
Debian/Ubuntu:
RUN apt-get update && apt-get install -y procps coreutils lsof && rm -rf /var/lib/apt/lists/*
Note: If you're only using basic container operations (command execution, file I/O) without ProcessManager, these dependencies are not required.
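The procps requirement comes down to the STAT column: zombie processes report state Z, which BusyBox ps cannot show. A hypothetical sketch of such a check, run here against canned ps output rather than a live container:

```python
# Hypothetical zombie check: parse `ps -o pid,stat`-style output and flag
# processes whose state code starts with "Z" (defunct/zombie).
SAMPLE_PS_OUTPUT = """\
  PID STAT
    1 Ss
   42 Z
   57 R+
"""

def find_zombies(ps_output: str) -> list[int]:
    zombies = []
    for line in ps_output.splitlines()[1:]:  # skip the header row
        pid, stat = line.split()
        if stat.startswith("Z"):
            zombies.append(int(pid))
    return zombies

print(find_zombies(SAMPLE_PS_OUTPUT))  # [42]
```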
Development Setup
Prerequisites
- Docker
- uv
Installation
./scripts/install.sh
Running Tests
Integration Tests (Recommended)
Run tests in Docker container (most realistic):
./scripts/test.sh
This will:
- Build the test runner container with all dependencies
- Mount the Docker socket and test workspace
- Run pytest with the integration tests
- Clean up automatically
File details

Details for the file podkit-0.5.0.tar.gz (source distribution).

File metadata
- Size: 94.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | b70a42374959869a6ccff66563cca9e2cccab0afa9bb9027565df05ad9964665 |
| MD5 | 4dd59a8262f2f62a8cb182926f2e7b3f |
| BLAKE2b-256 | 2bcab1a5f9c77dbbcecfdb0a8118988b92072912e657d31507f74e8f6da43b03 |
File details

Details for the file podkit-0.5.0-py3-none-any.whl (built distribution).

File metadata
- Size: 45.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | b432598da34469b910ce21492599ce5894dd9c83bd58333ee37ca18c84ab4f61 |
| MD5 | d29febf2ee93226cb1e64879f6fb6d48 |
| BLAKE2b-256 | 4542236dad824b7dae3f67524cc7fb4c2784397ce05699b23b2cee5a79f03c3d |