
Container management library with backend abstraction for sandboxed execution

Project description

Podkit - Simple Container Management Library

A Python library for sandboxed execution in Docker containers with backend abstraction

Features

Podkit implements a clean three-layer architecture for flexible container management:

  • Layer 1 (Backend) provides runtime-agnostic infrastructure operations for Docker/Kubernetes, including image management and workload execution.
  • Layer 2 (ContainerManager) bridges infrastructure and application logic with container lifecycle management, project-specific mounting strategies, and host-to-container path translation.
  • Layer 3 (SessionManager) delivers the user-facing API with session lifecycle tracking, automatic activity monitoring, and cleanup of expired sessions.

This separation enables backend portability (swap Docker for Podman or Kubernetes without touching business logic), customizable project configurations (different mounting strategies per project), and independent testing of each layer.
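This separation can be sketched with a minimal, purely illustrative set of classes (these are not podkit's actual interfaces): swapping FakeBackend for any other Backend implementation leaves Layers 2 and 3 untouched.

```python
from typing import Protocol

class Backend(Protocol):
    """Layer 1: runtime-agnostic infrastructure operations."""
    def run(self, image: str, command: list[str]) -> str: ...

class FakeBackend:
    """Stand-in backend; a real one would drive Docker or Kubernetes."""
    def run(self, image: str, command: list[str]) -> str:
        return f"[{image}] {' '.join(command)}"

class ContainerManager:
    """Layer 2: bridges the backend and application logic."""
    def __init__(self, backend: Backend, image: str) -> None:
        self.backend = backend
        self.image = image

    def execute(self, command: list[str]) -> str:
        return self.backend.run(self.image, command)

class SessionManager:
    """Layer 3: user-facing API; tracks sessions and delegates execution."""
    def __init__(self, manager: ContainerManager) -> None:
        self.manager = manager
        self.sessions: set[str] = set()

    def execute(self, session_id: str, command: list[str]) -> str:
        self.sessions.add(session_id)  # activity tracking, simplified
        return self.manager.execute(command)

sm = SessionManager(ContainerManager(FakeBackend(), "python:3.11-alpine"))
print(sm.execute("s1", ["echo", "hi"]))  # [python:3.11-alpine] echo hi
```

Because only FakeBackend knows about the runtime, each layer can be tested in isolation with a stub for the layer below it.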

Example 1 (the simplest one)

# Auto-creates a session OR reconnects to an existing running/exited container (auto-stops after 1 min)

from podkit import get_docker_session

result = get_docker_session(user_id="bob", session_id="123").execute_command("pwd")
print(result.stdout)

# No auto-removal in this case, only auto-stopping!
# You can skip closing the session if you expect to run more commands in it later.
# Otherwise, close the session manually, as in Example 3.

Example 2 (simple with auto-cleanup)

# Auto-cleanup with a context manager (the container is removed, so this is slower than Example 1)

from podkit import get_docker_session

with get_docker_session(user_id="bob", session_id="123") as session:
    result = session.execute_command("pwd")
    print(result.stdout)

# Ideal for one-off execution: run the command and clean up resources right away

Example 3 (with port exposure)

# Expose ports from container to host

from pathlib import Path
from podkit.core.models import ContainerConfig
from podkit import get_docker_session

# Create config with exposed ports
config = ContainerConfig(
    image="nginx:latest",
    ports=[80, 443]  # Expose nginx on host ports 80 and 443
)

session = get_docker_session(
    user_id="bob",
    session_id="web-server",
    config=config
)

# nginx is now reachable on host ports 80 and 443
result = session.execute_command(["nginx", "-g", "daemon off;"])

session.close()
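Under Docker, a ports list like [80, 443] typically translates into a host-to-container port binding map. A rough sketch of that translation (illustrative, not podkit's internals), publishing each container port on the same host port:

```python
def ports_to_bindings(ports: list[int]) -> dict[str, int]:
    """Map each container TCP port to the same host port (1:1 publish)."""
    return {f"{port}/tcp": port for port in ports}

print(ports_to_bindings([80, 443]))  # {'80/tcp': 80, '443/tcp': 443}
```

A 1:1 mapping is convenient for examples, but it means the host ports must be free; real deployments often map to high-numbered host ports instead.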

Example 4 (multiple mounts with read-only support)

# Multiple volume mounts: persistent workspace + read-only shared data

from pathlib import Path
from podkit import get_docker_session
from podkit.core.models import ContainerConfig, Mount

# Configure container with multiple mounts
config = ContainerConfig(
    image="python:3.11-alpine",
    volumes=[
        # Read-only mount for shared datasets (prevents accidental modifications)
        Mount(type="bind", source=Path("/shared/datasets"), target=Path("/data"), read_only=True),
        # Read-only mount for shared configuration files
        Mount(type="bind", source=Path("/shared/configs"), target=Path("/etc/configs"), read_only=True),
    ]
)

session = get_docker_session(
    user_id="bob",
    session_id="123",
    workspace="/app/data/workspace",
    workspace_host="./data/workspace",  # Workspace remains read-write for outputs
    config=config
)

# Read from shared data (read-only, safe from accidental changes)
result = session.execute_command(["cat", "/data/dataset.csv"])
print(result.stdout)

# Write to workspace (read-write, files persist on host)
session.write_file(Path("/workspace/results.txt"), "processing complete")

session.close()
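The host-to-container path translation that makes write_file persist on the host can be sketched as follows (an illustrative helper, not podkit's actual to_host_path): a container path under the mount target is rebased onto the mounted host directory.

```python
from pathlib import PurePosixPath

def to_host_path(container_path: str, container_root: str, host_root: str) -> str:
    """Rebase a container path under container_root onto host_root."""
    rel = PurePosixPath(container_path).relative_to(container_root)
    return str(PurePosixPath(host_root) / rel)

# /workspace/results.txt inside the container maps to the mounted host dir
print(to_host_path("/workspace/results.txt", "/workspace", "./data/workspace"))
# → data/workspace/results.txt
```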

Example 5 (production-ready configuration)

# Comprehensive example: combines networks, resource limits, read-only mounts, and environment

from pathlib import Path
from podkit.core.models import ContainerConfig, Mount
from podkit import get_docker_session

# Configure container with multiple production features
config = ContainerConfig(
    image="python:3.11-alpine",
    # Resource limits
    cpu_limit=2.0,  # 2 CPU cores max
    memory_limit="1g",  # 1 GB RAM max
    # Network configuration (e.g., for service-to-service communication)
    networks=["execution-network"],
    # Read-only mounts for shared data and credentials
    volumes=[
        Mount(type="bind", source=Path("/shared/datasets"), target=Path("/data"), read_only=True),
        Mount(type="bind", source=Path("/shared/credentials"), target=Path("/creds"), read_only=True),
    ],
    # Environment variables
    environment={
        "ENV": "production",
        "LOG_LEVEL": "INFO",
        "DATABASE_URL": "postgresql://db.execution-network:5432/tasks",
    },
    # Custom startup command (optional - defaults to sleep with timeout)
    command=["tail", "-f", "/dev/null"],  # Keep container alive indefinitely
)

session = get_docker_session(
    user_id="bob",
    session_id="task-123",
    workspace="/app/workspaces",
    workspace_host="./workspaces",
    config=config
)

# Secure, resource-controlled execution with access to shared data
result = session.execute_command([
    "python", "process_data.py",
    "--input", "/data/dataset.csv",
    "--credentials", "/creds/api_key.txt"
])

# Write results to workspace (read-write, persists on host)
session.write_file(Path("/workspace/results.json"), result.stdout)

session.close()
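A Docker-style memory limit string such as "1g" or "512m" is typically parsed into a byte count before being handed to the runtime. A rough sketch of that parsing (illustrative, not podkit's code):

```python
def parse_memory_limit(limit: str) -> int:
    """Convert a Docker-style memory string ('512m', '1g') to bytes."""
    units = {"k": 1024, "m": 1024**2, "g": 1024**3}
    suffix = limit[-1].lower()
    if suffix in units:
        return int(limit[:-1]) * units[suffix]
    return int(limit)  # plain byte count, no suffix

print(parse_memory_limit("1g"))    # 1073741824
print(parse_memory_limit("512m"))  # 536870912
```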

Example 6 (full control - for advanced use cases)

Note: This example shows the low-level API for maximum control. For most use cases, use get_docker_session() instead.

from pathlib import Path

from podkit.backends.docker import DockerBackend
from podkit.core.manager import BaseContainerManager
from podkit.core.models import ContainerConfig
from podkit.core.session import BaseSessionManager
from podkit.utils.paths import get_workspace_path, write_to_mounted_path

# You must provide concrete ContainerManager implementation
# BaseContainerManager requires implementing get_mounts() and write_file()
class MyContainerManager(BaseContainerManager):
    """Custom container manager with project-specific mount logic."""

    def get_mounts(self, user_id: str, session_id: str, config: ContainerConfig):
        """Define how exactly to mount volumes."""

        workspace_path = get_workspace_path(self.workspace_base, user_id, session_id)
        workspace_path.mkdir(parents=True, exist_ok=True)

        return [{
            "Type": "bind",
            "Source": str(workspace_path),
            "Target": "/workspace",
        }]

    def write_file(self, container_id, container_path, content, user_id, session_id):
        """Write file to mounted filesystem (persists)."""

        return write_to_mounted_path(
            container_path,
            content,
            lambda path: self.to_host_path(path, user_id, session_id),
        )

backend = DockerBackend()
backend.connect()

container_manager = MyContainerManager(
    backend=backend,
    container_prefix="podkit",
    workspace_base=Path("/tmp/podkit_workspace"),
)

session_manager = BaseSessionManager(
    container_manager=container_manager,
    default_image="python:3.11-alpine",
)

# Configuration with auto-shutdown (entrypoint=None, default behavior)
# Container runs for 5 minutes then auto-exits (via sleep command)
sandbox_config = ContainerConfig(
    image="python:3.11-alpine",
    container_lifetime_seconds=300,  # Container auto-exits after 5 minutes
    cpu_limit=1.0,
    memory_limit="512m",
    environment={
        "PYTHONUNBUFFERED": "1",
        "LOG_LEVEL": "DEBUG",
    },
)

session = session_manager.create_session(
    user_id="user",
    session_id="session",
    config=sandbox_config,
)

# Execute commands - if container exited (timeout), it auto-restarts
result = session_manager.execute_command(
    user_id="user",
    session_id="session",
    command=["sh", "-c", "echo 'Hello'"],
)
print(result.stdout)

session_manager.write_file(
    user_id="user",
    session_id="session",
    container_path=Path("/workspace/file.txt"),
    content="Hello from podkit",
)

# Configuration without auto-shutdown (explicit entrypoint disables it)
# When entrypoint=[] is set, container uses "sleep infinity" and runs until manually closed
# Note: container_lifetime_seconds is IGNORED when explicit entrypoint is set
no_timeout_config = ContainerConfig(
    image="python:3.11-alpine",
    entrypoint=[],  # Explicit empty entrypoint = sleep infinity, no auto-shutdown
)

session2 = session_manager.create_session(
    user_id="user2",
    session_id="session2",
    config=no_timeout_config,
)
# This container runs indefinitely until manually closed (below)
# Session may still expire due to inactivity (controlled by session_inactivity_timeout_seconds)

# Cleanup
session_manager.close_session("user", "session")
session_manager.close_session("user2", "session2")

Container Image Requirements

When using ProcessManager (podkit/processes/manager.py) to manage background processes, your container images must include specific system utilities:

Required Dependencies

  • procps - Provides full-featured ps command with process state inspection
    • Used for checking process status and detecting zombie processes
    • Note: Busybox ps (default in Alpine) doesn't support required flags
  • coreutils - Standard Unix utilities (mkdir, cat, tail, etc.)
  • sh - Shell for executing commands (typically pre-installed)
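The reason full procps ps matters is the process state (STAT) column, which busybox ps does not report. A sketch of how zombie detection from ps output might look (illustrative only, not ProcessManager's code), run here against a hypothetical sample:

```python
SAMPLE_PS = """\
  PID STAT COMMAND
    1 S    sh
   42 Z    [worker] <defunct>
   57 R    python
"""

def zombie_pids(ps_output: str) -> list[int]:
    """Return PIDs whose state column starts with 'Z' (zombie)."""
    pids = []
    for line in ps_output.splitlines()[1:]:  # skip header row
        fields = line.split()
        if len(fields) >= 2 and fields[1].startswith("Z"):
            pids.append(int(fields[0]))
    return pids

print(zombie_pids(SAMPLE_PS))  # [42]
```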

Optional Dependencies

  • lsof - Enables automatic port detection for running processes
    • If missing, port detection is gracefully skipped

Installation Examples

Alpine Linux:

RUN apk add --no-cache procps coreutils lsof

Debian/Ubuntu:

RUN apt-get update && apt-get install -y procps coreutils lsof && rm -rf /var/lib/apt/lists/*

Note: If you're only using basic container operations (command execution, file I/O) without ProcessManager, these dependencies are not required.

Development Setup

Prerequisites

  • Docker
  • uv

Installation

./scripts/install.sh

Running Tests

Integration Tests (Recommended)

Run the tests in a Docker container (most realistic):

./scripts/test.sh

This will:

  1. Build the test runner container with all dependencies
  2. Mount the Docker socket and test workspace
  3. Run pytest with the integration tests
  4. Clean up automatically



Download files

Download the file for your platform.

Source Distribution

podkit-0.4.0.tar.gz (86.8 kB)


Built Distribution


podkit-0.4.0-py3-none-any.whl (37.9 kB)


File details

Details for the file podkit-0.4.0.tar.gz.

File metadata

  • Download URL: podkit-0.4.0.tar.gz
  • Upload date:
  • Size: 86.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for podkit-0.4.0.tar.gz
Algorithm Hash digest
SHA256 58e076a519af77a90d9e9356e372d194489f6fca8d5c6fb259dc38ec88ddcb49
MD5 a28cdae1c5e19703911e478d55942f40
BLAKE2b-256 aa868e8b160e64f6f3b275b9ae6e297b0e6b603fc80e95c70344fc6b992e948c


File details

Details for the file podkit-0.4.0-py3-none-any.whl.

File metadata

  • Download URL: podkit-0.4.0-py3-none-any.whl
  • Upload date:
  • Size: 37.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for podkit-0.4.0-py3-none-any.whl
Algorithm Hash digest
SHA256 5c7d1e539102018a5b1f8f76b4edbde58d246356cde6a042cd32258b1c5a182f
MD5 235daf4268f0640526f61cc2f9e925af
BLAKE2b-256 688aafca6bc7e8ff04e650acbd868cc523a2c64c25848b97399044001ea82ed1

