
Temporal Worker SDK

Production-grade Python SDK that wraps the official temporalio SDK to eliminate boilerplate when running Temporal workers.

Problem

Every team running a Temporal worker has to write the same boilerplate:

  • Connect to Temporal server
  • Read config from environment
  • Handle graceful shutdown
  • Set up structured logging
  • Expose health probes for Kubernetes
  • Export Prometheus metrics

This SDK centralizes all of that. Developers only write their business logic.

Installation

pip install temporal-worker-sdk

Quick Start

Define your workflows and activities:

# workflows.py
from temporalio import workflow, activity
from datetime import timedelta

@activity.defn
async def process_payment(order_id: str) -> None:
    """Process a payment for an order."""
    activity.logger.info("Processing payment for order %s", order_id)

@workflow.defn
class PaymentWorkflow:
    @workflow.run
    async def run(self, order_id: str) -> None:
        await workflow.execute_activity(
            process_payment,
            order_id,
            start_to_close_timeout=timedelta(seconds=60),
        )

Start the worker:

# main.py
import asyncio
from dotenv import load_dotenv  # pip install python-dotenv
from temporal_worker_sdk import TemporalSDK
from workflows import process_payment, PaymentWorkflow

async def main():
    load_dotenv()  # Load .env for local dev
    sdk = TemporalSDK()
    sdk.register_activities(process_payment)
    sdk.register_workflows(PaymentWorkflow)
    await sdk.start()

if __name__ == "__main__":
    asyncio.run(main())

That's it. The SDK handles:

  • ✅ Connecting to Temporal
  • ✅ Reading config from environment variables
  • ✅ Graceful shutdown on signals
  • ✅ Structured JSON logging
  • ✅ Kubernetes health probes (liveness + readiness)
  • ✅ Prometheus metrics at /metrics
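Signal-driven graceful shutdown follows the standard asyncio pattern. A minimal standalone sketch for illustration (this is not the SDK's internal code):

```python
import asyncio
import signal

async def run_until_signal() -> None:
    """Block until SIGINT or SIGTERM arrives, then return."""
    stop = asyncio.Event()
    loop = asyncio.get_running_loop()
    for sig in (signal.SIGINT, signal.SIGTERM):
        # On signal, set the event instead of killing the process,
        # giving in-flight work a chance to drain before exit.
        loop.add_signal_handler(sig, stop.set)
    await stop.wait()
```

The SDK runs an equivalent loop for you and additionally bounds the drain time with WORKER_GRACEFUL_SHUTDOWN_TIMEOUT (see Configuration below).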

Configuration

All configuration comes from environment variables. No code changes needed for different environments.

Required

  • WORKER_TASK_QUEUE — Task queue name for this worker

Optional

  • TEMPORAL_HOST — Temporal server host (default: localhost)
  • TEMPORAL_PORT — Temporal server port (default: 7233)
  • TEMPORAL_NAMESPACE — Temporal namespace (default: default)
  • WORKER_MAX_CONCURRENT_ACTIVITIES — Max concurrent activities (default: 100)
  • WORKER_MAX_CONCURRENT_WORKFLOW_TASKS — Max concurrent workflow tasks (default: 40)
  • WORKER_GRACEFUL_SHUTDOWN_TIMEOUT — Graceful shutdown timeout in seconds (default: 30)
  • HEALTH_PROBE_HOST — Health probe server host (default: 0.0.0.0)
  • HEALTH_PROBE_PORT — Health probe server port (default: 8080)
  • HEALTH_PROBE_ENABLED — Enable health probe server (default: true)

Example .env

TEMPORAL_HOST=localhost
TEMPORAL_PORT=7233
TEMPORAL_NAMESPACE=temporal
WORKER_TASK_QUEUE=default
WORKER_MAX_CONCURRENT_ACTIVITIES=100
WORKER_MAX_CONCURRENT_WORKFLOW_TASKS=40
WORKER_GRACEFUL_SHUTDOWN_TIMEOUT=30
HEALTH_PROBE_HOST=0.0.0.0
HEALTH_PROBE_PORT=8080
HEALTH_PROBE_ENABLED=true
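Since WORKER_TASK_QUEUE is the only required variable, a fail-fast pre-flight check in your entrypoint can be a few lines. The helper below is hypothetical (the SDK raises its own error on startup if the variable is missing):

```python
import os

def require_env(name: str) -> str:
    """Return the value of an environment variable or fail loudly."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} must be set before starting the worker")
    return value
```

Call `require_env("WORKER_TASK_QUEUE")` before constructing `TemporalSDK()` to get a clear error message instead of a startup failure deeper in the stack.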

Health Probes

The SDK automatically exposes HTTP endpoints for Kubernetes:

  • GET /health/live — Liveness probe (process is alive)
  • GET /health/ready — Readiness probe (worker is connected and ready)
  • GET /metrics — Prometheus metrics
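From a script or a deploy hook you can poll readiness with stdlib code alone. The endpoint path comes from the list above; the helper itself is a sketch:

```python
from urllib.request import urlopen

def is_ready(host: str = "localhost", port: int = 8080) -> bool:
    """Return True when the worker's readiness endpoint answers HTTP 200."""
    try:
        with urlopen(f"http://{host}:{port}/health/ready", timeout=2) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused / timed out: the worker is not ready.
        return False
```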

Use in your Kubernetes deployment:

livenessProbe:
  httpGet:
    path: /health/live
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 10

readinessProbe:
  httpGet:
    path: /health/ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5

Metrics

The SDK exports Prometheus metrics about worker activity:

  • temporal_tasks_started_total — Total tasks started
  • temporal_tasks_completed_total — Total tasks completed
  • temporal_tasks_failed_total — Total tasks failed
  • temporal_task_duration_seconds — Task execution duration
  • temporal_worker_connected — Worker connection status (1=connected, 0=disconnected)
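The counters and the duration histogram combine into the usual alerting queries. Sketches in PromQL, using the metric names above (the `_bucket` suffix assumes the histogram is exposed in standard Prometheus form):

```promql
# Failure ratio over the last 5 minutes
rate(temporal_tasks_failed_total[5m]) / rate(temporal_tasks_started_total[5m])

# p95 task duration
histogram_quantile(0.95, rate(temporal_task_duration_seconds_bucket[5m]))

# Alert when any worker loses its connection
temporal_worker_connected == 0
```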

Scrape with a Kubernetes ServiceMonitor:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: temporal-worker
spec:
  selector:
    matchLabels:
      app: temporal-worker
  endpoints:
    - port: metrics
      interval: 30s
      path: /metrics

Logging

The SDK uses structured JSON logging. All logs include:

  • timestamp — ISO 8601 timestamp
  • level — Log level (INFO, WARNING, ERROR, etc.)
  • logger — Logger name (module)
  • message — Log message
  • extra — Additional context fields

Example log:

{
  "timestamp": "2026-04-27T18:10:37.235Z",
  "level": "INFO",
  "logger": "workflows",
  "message": "Processing task",
  "extra": {"order_id": "12345"}
}
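Fields passed through the standard-library `logging` `extra=` parameter end up in the `extra` object of the JSON line (an assumption based on the example log above), so emitting that line from your own code is plain `logging` usage:

```python
import logging

logger = logging.getLogger("workflows")

def log_task(order_id: str) -> None:
    # Context fields go in extra=; the SDK's JSON formatter
    # serializes them into the "extra" object of the log line.
    logger.info("Processing task", extra={"order_id": order_id})
```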

Distributed Workers

For multiple worker types, use separate deployments with different task queues:

# email-worker/main.py  (WORKER_TASK_QUEUE=email-tasks)
async def main():
    sdk = TemporalSDK()
    sdk.register_activities(send_email, send_sms)
    await sdk.start()

# payment-worker/main.py  (WORKER_TASK_QUEUE=payment-tasks)
async def main():
    sdk = TemporalSDK()
    sdk.register_activities(process_payment, refund_payment)
    await sdk.start()

In your workflow, route activities to specific task queues:

@workflow.defn
class OrderWorkflow:
    @workflow.run
    async def run(self, order_id: str) -> None:
        # Route to the email worker's task queue
        await workflow.execute_activity(
            send_email,
            order_id,
            task_queue="email-tasks",
            start_to_close_timeout=timedelta(seconds=60),
        )

        # Route to the payment worker's task queue
        await workflow.execute_activity(
            process_payment,
            order_id,
            task_queue="payment-tasks",
            start_to_close_timeout=timedelta(seconds=60),
        )

Kubernetes Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: temporal-worker
spec:
  replicas: 3
  selector:
    matchLabels:
      app: temporal-worker
  template:
    metadata:
      labels:
        app: temporal-worker
    spec:
      containers:
      - name: worker
        image: your-worker:latest
        env:
        - name: TEMPORAL_HOST
          value: temporal-frontend.temporal.svc.cluster.local
        - name: TEMPORAL_NAMESPACE
          value: temporal
        - name: WORKER_TASK_QUEUE
          value: default
        ports:
        - name: metrics
          containerPort: 8080
        livenessProbe:
          httpGet:
            path: /health/live
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /health/ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5

License

MIT
