
Temporal Worker SDK

Production-grade Python SDK that wraps the official temporalio SDK to eliminate boilerplate when running Temporal workers.

Problem

Every team running a Temporal worker has to write the same boilerplate:

  • Connect to Temporal server
  • Read config from environment
  • Handle graceful shutdown
  • Set up structured logging
  • Expose health probes for Kubernetes
  • Export Prometheus metrics

This SDK centralizes all of that. Developers only write their business logic.

Installation

pip install temporal-worker-sdk

Quick Start

Define your workflows and activities:

# workflows.py
from temporalio import workflow, activity
from datetime import timedelta

@activity.defn
async def process_payment(order_id: str) -> None:
    """Process a payment for an order."""
    print(f"Processing payment for order {order_id}")

@workflow.defn
class PaymentWorkflow:
    @workflow.run
    async def run(self, order_id: str) -> None:
        await workflow.execute_activity(
            process_payment,
            order_id,
            start_to_close_timeout=timedelta(seconds=60),
        )

Start the worker:

# main.py
import asyncio
from dotenv import load_dotenv
from temporal_worker_sdk import TemporalSDK
from workflows import process_payment, PaymentWorkflow

async def main():
    load_dotenv()  # Load .env for local dev
    sdk = TemporalSDK()
    sdk.register_activities(process_payment)
    sdk.register_workflows(PaymentWorkflow)
    await sdk.start()

if __name__ == "__main__":
    asyncio.run(main())

That's it. The SDK handles:

  • ✅ Connecting to Temporal
  • ✅ Reading config from environment variables
  • ✅ Graceful shutdown on signals
  • ✅ Structured JSON logging
  • ✅ Kubernetes health probes (liveness + readiness)
  • ✅ Prometheus metrics at /metrics
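The SDK's internal shutdown handling isn't shown here, but the signal-driven graceful shutdown it provides can be sketched with plain asyncio. This is an illustrative pattern, not the SDK's actual code; `run_until_signal` and its `shutdown_timeout` parameter (mirroring `WORKER_GRACEFUL_SHUTDOWN_TIMEOUT`) are hypothetical names:

```python
import asyncio
import signal

# Illustrative sketch: run a worker coroutine until SIGINT/SIGTERM,
# then cancel it and wait up to shutdown_timeout seconds for it to drain.
async def run_until_signal(worker_factory, shutdown_timeout=30.0):
    stop = asyncio.Event()
    loop = asyncio.get_running_loop()
    for sig in (signal.SIGINT, signal.SIGTERM):
        loop.add_signal_handler(sig, stop.set)

    task = asyncio.create_task(worker_factory())
    await stop.wait()          # block until a shutdown signal arrives
    task.cancel()              # then ask the worker to stop
    try:
        await asyncio.wait_for(task, timeout=shutdown_timeout)
    except (asyncio.CancelledError, asyncio.TimeoutError):
        pass                   # drained, or gave up after the grace period
```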

Configuration

All configuration comes from environment variables. No code changes needed for different environments.

Required

  • WORKER_TASK_QUEUE — Task queue name for this worker

Optional

  • TEMPORAL_HOST — Temporal server host (default: localhost)
  • TEMPORAL_PORT — Temporal server port (default: 7233)
  • TEMPORAL_NAMESPACE — Temporal namespace (default: default)
  • WORKER_MAX_CONCURRENT_ACTIVITIES — Max concurrent activities (default: 100)
  • WORKER_MAX_CONCURRENT_WORKFLOW_TASKS — Max concurrent workflow tasks (default: 40)
  • WORKER_GRACEFUL_SHUTDOWN_TIMEOUT — Graceful shutdown timeout in seconds (default: 30)
  • HEALTH_PROBE_HOST — Health probe server host (default: 0.0.0.0)
  • HEALTH_PROBE_PORT — Health probe server port (default: 8080)
  • HEALTH_PROBE_ENABLED — Enable health probe server (default: true)
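As an illustration of how these variables and defaults fit together (this is not the SDK's actual code; `load_config` is a hypothetical helper), the table above maps to env-var reads like this:

```python
import os

# Hypothetical sketch of env-driven config loading; variable names and
# defaults match the tables above.
def load_config(env=os.environ):
    task_queue = env.get("WORKER_TASK_QUEUE")
    if not task_queue:
        raise RuntimeError("WORKER_TASK_QUEUE is required")
    return {
        "host": env.get("TEMPORAL_HOST", "localhost"),
        "port": int(env.get("TEMPORAL_PORT", "7233")),
        "namespace": env.get("TEMPORAL_NAMESPACE", "default"),
        "task_queue": task_queue,
        "max_concurrent_activities": int(env.get("WORKER_MAX_CONCURRENT_ACTIVITIES", "100")),
        "max_concurrent_workflow_tasks": int(env.get("WORKER_MAX_CONCURRENT_WORKFLOW_TASKS", "40")),
        "graceful_shutdown_timeout": int(env.get("WORKER_GRACEFUL_SHUTDOWN_TIMEOUT", "30")),
        "health_probe_enabled": env.get("HEALTH_PROBE_ENABLED", "true").lower() == "true",
    }
```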

Example .env

TEMPORAL_HOST=localhost
TEMPORAL_PORT=7233
TEMPORAL_NAMESPACE=temporal
WORKER_TASK_QUEUE=default
WORKER_MAX_CONCURRENT_ACTIVITIES=100
WORKER_MAX_CONCURRENT_WORKFLOW_TASKS=40
WORKER_GRACEFUL_SHUTDOWN_TIMEOUT=30
HEALTH_PROBE_HOST=0.0.0.0
HEALTH_PROBE_PORT=8080
HEALTH_PROBE_ENABLED=true

Health Probes

The SDK automatically exposes HTTP endpoints for Kubernetes:

  • GET /health/live — Liveness probe (process is alive)
  • GET /health/ready — Readiness probe (worker is connected and ready)
  • GET /metrics — Prometheus metrics

Use in your Kubernetes deployment:

livenessProbe:
  httpGet:
    path: /health/live
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 10

readinessProbe:
  httpGet:
    path: /health/ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5

Metrics

The SDK exports Prometheus metrics about worker activity:

  • temporal_tasks_started_total — Total tasks started
  • temporal_tasks_completed_total — Total tasks completed
  • temporal_tasks_failed_total — Total tasks failed
  • temporal_task_duration_seconds — Task execution duration
  • temporal_worker_connected — Worker connection status (1=connected, 0=disconnected)
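What a scrape of /metrics returns is the Prometheus text exposition format. As a dependency-free sketch (the SDK or prometheus_client does this for you; `render_metrics` is purely illustrative), the metric names above render like so:

```python
# Illustrative, dependency-free rendering of the Prometheus text
# exposition format served at /metrics.
def render_metrics(metrics):
    """metrics: iterable of (name, type, value) tuples."""
    lines = []
    for name, mtype, value in metrics:
        lines.append(f"# TYPE {name} {mtype}")
        lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"

sample = [
    ("temporal_tasks_started_total", "counter", 5),
    ("temporal_tasks_completed_total", "counter", 4),
    ("temporal_tasks_failed_total", "counter", 1),
    ("temporal_worker_connected", "gauge", 1),
]
```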

Scrape with a Kubernetes ServiceMonitor:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: temporal-worker
spec:
  selector:
    matchLabels:
      app: temporal-worker
  endpoints:
    - port: metrics
      interval: 30s
      path: /metrics

Logging

The SDK uses structured JSON logging. All logs include:

  • timestamp — ISO 8601 timestamp
  • level — Log level (INFO, WARNING, ERROR, etc.)
  • logger — Logger name (module)
  • message — Log message
  • extra — Additional context fields

Example log:

{
  "timestamp": "2026-04-27 18:10:37,235",
  "level": "INFO",
  "logger": "workflows",
  "message": "Processing task",
  "extra": {"order_id": "12345"}
}
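For reference, records of this shape can be produced with the standard logging module. This is a rough sketch, not the SDK's actual formatter; it collects anything passed via `extra=` into the `extra` field:

```python
import json
import logging
from datetime import datetime, timezone

# Rough sketch of a JSON formatter (not the SDK's implementation).
class JsonFormatter(logging.Formatter):
    # Attributes present on every LogRecord; anything else arrived via extra=
    RESERVED = set(vars(logging.LogRecord("", 0, "", 0, "", (), None))) | {"message", "asctime"}

    def format(self, record):
        extra = {k: v for k, v in vars(record).items() if k not in self.RESERVED}
        return json.dumps({
            "timestamp": datetime.fromtimestamp(record.created, tz=timezone.utc).isoformat(),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            "extra": extra,
        })
```

Attaching it to a handler and calling `logger.info("Processing task", extra={"order_id": "12345"})` yields a record like the example above.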

Distributed Workers

For multiple worker types, use separate deployments with different task queues:

# email-worker/main.py — inside its async main(), as in Quick Start,
# deployed with WORKER_TASK_QUEUE=email-tasks
sdk = TemporalSDK()
sdk.register_activities(send_email, send_sms)
await sdk.start()

# payment-worker/main.py — inside its async main(),
# deployed with WORKER_TASK_QUEUE=payment-tasks
sdk = TemporalSDK()
sdk.register_activities(process_payment, refund_payment)
await sdk.start()

In your workflow, route activities to specific task queues:

@workflow.defn
class OrderWorkflow:
    @workflow.run
    async def run(self, order_id: str) -> None:
        # Route to the email task queue
        await workflow.execute_activity(
            send_email,
            order_id,
            task_queue="email-tasks",
            start_to_close_timeout=timedelta(seconds=60),
        )

        # Route to the payment task queue
        await workflow.execute_activity(
            process_payment,
            order_id,
            task_queue="payment-tasks",
            start_to_close_timeout=timedelta(seconds=60),
        )

Kubernetes Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: temporal-worker
spec:
  replicas: 3
  selector:
    matchLabels:
      app: temporal-worker
  template:
    metadata:
      labels:
        app: temporal-worker
    spec:
      containers:
      - name: worker
        image: your-worker:latest
        env:
        - name: TEMPORAL_HOST
          value: temporal-frontend.temporal.svc.cluster.local
        - name: TEMPORAL_NAMESPACE
          value: temporal
        - name: WORKER_TASK_QUEUE
          value: default
        ports:
        - name: metrics
          containerPort: 8080
        livenessProbe:
          httpGet:
            path: /health/live
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /health/ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5

License

MIT
