
🐍 | Python library for Runpod API and serverless worker SDK.

Project description

Runpod | Python Library


Welcome to the official Python library for Runpod API & SDK.


💻 | Installation

Install from PyPI (Stable Release)

# Install with pip
pip install runpod

# Install with uv (faster alternative)
uv add runpod

Install from GitHub (Latest Changes)

To get the latest changes that haven't been released to PyPI yet:

# Install latest development version from main branch with pip
pip install git+https://github.com/runpod/runpod-python.git

# Install with uv
uv add git+https://github.com/runpod/runpod-python.git

# Install a specific branch
pip install git+https://github.com/runpod/runpod-python.git@branch-name

# Install a specific tag/release
pip install git+https://github.com/runpod/runpod-python.git@v1.0.0

# Install in editable mode for development
git clone https://github.com/runpod/runpod-python.git
cd runpod-python
pip install -e .

Python 3.8 or higher is required to use the latest version of this package.

⚡ | Serverless Worker (SDK)

This Python package can also be used to create a serverless worker that can be deployed to Runpod as a custom API endpoint.

Quick Start

Create a Python script in your project that contains your model definition and the Runpod worker start code, and run it as your container's default start command:

# my_worker.py

import runpod

def is_even(job):

    job_input = job["input"]
    the_number = job_input["number"]

    if not isinstance(the_number, int):
        return {"error": "Silly human, you need to pass an integer."}

    if the_number % 2 == 0:
        return True

    return False

runpod.serverless.start({"handler": is_even})

Make sure that this file is run when your container starts. You can do this by calling it in the Docker command when you set up a template at console.runpod.io/serverless/user/templates, or by setting it as the default command in your Dockerfile.

See our blog post for creating a basic Serverless API, or view the detailed docs for more information.

Local Test Worker

You can also test your worker locally before deploying it to Runpod. This is useful for debugging and testing.

python my_worker.py --rp_serve_api
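You can also exercise the handler directly in plain Python, with no server at all. The sketch below mirrors the `is_even` handler from the quick start and feeds it mock job dicts shaped like the payloads the serverless runtime delivers:

```python
# Call the handler directly with mock job payloads (no Runpod runtime needed).
# The handler mirrors the is_even example from the quick start above.

def is_even(job):
    job_input = job["input"]
    the_number = job_input["number"]

    if not isinstance(the_number, int):
        return {"error": "Silly human, you need to pass an integer."}

    return the_number % 2 == 0

# Job dicts shaped like what the serverless runtime delivers
assert is_even({"input": {"number": 4}}) is True
assert is_even({"input": {"number": 7}}) is False
assert "error" in is_even({"input": {"number": "4"}})
print("handler checks passed")
```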

Worker Fitness Checks

Fitness checks allow you to validate your worker environment at startup before processing jobs. If any check fails, the worker exits immediately, allowing your orchestrator to restart it.

# my_worker.py

import runpod
import torch

# Register fitness checks using the decorator
@runpod.serverless.register_fitness_check
def check_gpu_available():
    """Verify GPU is available."""
    if not torch.cuda.is_available():
        raise RuntimeError("GPU not available")

@runpod.serverless.register_fitness_check
def check_disk_space():
    """Verify sufficient disk space."""
    import shutil
    stat = shutil.disk_usage("/")
    free_gb = stat.free / (1024**3)
    if free_gb < 10:
        raise RuntimeError(f"Insufficient disk space: {free_gb:.2f}GB free")

def handler(job):
    job_input = job["input"]
    # Your handler code here
    return {"output": "success"}

# Fitness checks run before handler initialization (production only)
runpod.serverless.start({"handler": handler})

Key Features:

  • Supports both synchronous and asynchronous check functions
  • Checks run only once at worker startup (production mode)
  • Runs before handler initialization and job processing begins
  • Any check failure exits with code 1 (worker marked unhealthy)

See Worker Fitness Checks documentation for more examples and best practices.

📚 | API Language Library (GraphQL Wrapper)

You can use this library to make requests to the Runpod API.

import runpod

runpod.api_key = "your_runpod_api_key_found_under_settings"

Endpoints

You can interact with Runpod endpoints via the run or run_sync methods.

Basic Usage

endpoint = runpod.Endpoint("ENDPOINT_ID")

run_request = endpoint.run(
    {"your_model_input_key": "your_model_input_value"}
)

# Check the status of the endpoint run request
print(run_request.status())

# Get the output of the endpoint run request, blocking until the run is complete.
print(run_request.output())

# Alternatively, run_sync submits the job and waits for the result in one call:
endpoint = runpod.Endpoint("ENDPOINT_ID")

run_request = endpoint.run_sync(
    {"your_model_input_key": "your_model_input_value"}
)

# Returns the job results if completed within 90 seconds; otherwise returns the job status.
print(run_request)
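When a job may outrun run_sync's 90-second window, a common pattern is to submit with run and poll status() until the job settles. The sketch below stubs out the request object so the control flow is visible without network calls; `StubRequest` is hypothetical, while `status()` and `output()` mirror the methods shown above:

```python
import time

class StubRequest:
    """Hypothetical stand-in for the object endpoint.run() returns."""
    def __init__(self):
        self._polls = 0

    def status(self):
        # Report IN_PROGRESS for the first two polls, then COMPLETED
        self._polls += 1
        return "COMPLETED" if self._polls >= 3 else "IN_PROGRESS"

    def output(self):
        return {"result": "done"}

def wait_for_job(run_request, poll_interval=0.01, timeout=5.0):
    """Poll status() until the job reaches a terminal state."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = run_request.status()
        if status in ("COMPLETED", "FAILED", "CANCELLED"):
            return status, run_request.output() if status == "COMPLETED" else None
        time.sleep(poll_interval)
    raise TimeoutError("job did not finish in time")

status, output = wait_for_job(StubRequest())
print(status, output)  # COMPLETED {'result': 'done'}
```

With the real SDK you would pass `endpoint.run({...})` instead of `StubRequest()`; the loop itself is unchanged.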

API Key Management

The SDK supports multiple ways to set API keys:

1. Global API Key (Default)

import runpod

# Set global API key
runpod.api_key = "your_runpod_api_key"

# All endpoints will use this key by default
endpoint = runpod.Endpoint("ENDPOINT_ID")
result = endpoint.run_sync({"input": "data"})

2. Endpoint-Specific API Key

# Create endpoint with its own API key
endpoint = runpod.Endpoint("ENDPOINT_ID", api_key="specific_api_key")

# This endpoint will always use the provided API key
result = endpoint.run_sync({"input": "data"})

API Key Precedence

The SDK uses this precedence order (highest to lowest):

  1. Endpoint instance API key (if provided to Endpoint())
  2. Global API key (set via runpod.api_key)

import runpod

# Example showing precedence
runpod.api_key = "GLOBAL_KEY"

# This endpoint uses GLOBAL_KEY
endpoint1 = runpod.Endpoint("ENDPOINT_ID")

# This endpoint uses ENDPOINT_KEY (overrides global)
endpoint2 = runpod.Endpoint("ENDPOINT_ID", api_key="ENDPOINT_KEY")

# All requests from endpoint2 will use ENDPOINT_KEY
result = endpoint2.run_sync({"input": "data"})
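The two-level precedence can be expressed as a tiny resolver. This is an illustrative sketch of the rule, not the SDK's internals; `GLOBAL_API_KEY` stands in for `runpod.api_key`:

```python
# Sketch of the API-key precedence rule (illustrative, not SDK internals)
GLOBAL_API_KEY = "GLOBAL_KEY"  # stands in for runpod.api_key

def resolve_api_key(endpoint_api_key=None):
    """Endpoint-level key wins; otherwise fall back to the global key."""
    if endpoint_api_key is not None:
        return endpoint_api_key
    if GLOBAL_API_KEY is not None:
        return GLOBAL_API_KEY
    raise RuntimeError("No API key configured")

print(resolve_api_key())                # GLOBAL_KEY
print(resolve_api_key("ENDPOINT_KEY"))  # ENDPOINT_KEY
```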

Thread-Safe Operations

Each Endpoint instance maintains its own API key, making concurrent operations safe:

import threading
import runpod

def process_request(api_key, endpoint_id, input_data, results):
    # Each thread gets its own Endpoint instance
    endpoint = runpod.Endpoint(endpoint_id, api_key=api_key)
    results.append(endpoint.run_sync(input_data))

# Safe concurrent usage with different API keys
results = []
threads = []
for customer in customers:
    t = threading.Thread(
        target=process_request,
        args=(customer["api_key"], customer["endpoint_id"], customer["input"], results),
    )
    threads.append(t)
    t.start()

# Wait for all requests to finish; results holds each run_sync return value
for t in threads:
    t.join()

GPU Cloud (Pods)

import runpod

runpod.api_key = "your_runpod_api_key_found_under_settings"

# Get all my pods
pods = runpod.get_pods()

# Get a specific pod
pod = runpod.get_pod("POD_ID")

# Create a pod with GPU
pod = runpod.create_pod("test", "runpod/stack", "NVIDIA GeForce RTX 3070")

# Create a pod with CPU
pod = runpod.create_pod("test", "runpod/stack", instance_id="cpu3c-2-4")

# Stop the pod
runpod.stop_pod(pod["id"])

# Resume the pod
runpod.resume_pod(pod["id"])

# Terminate the pod
runpod.terminate_pod(pod["id"])
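Since pod operations take plain ids, batch management reduces to filtering the list that get_pods returns. The sketch below uses stubbed dicts rather than live API calls, and the `desiredStatus` field name is an assumption about the pod shape, not confirmed by this README:

```python
# Sketch: select running pods to stop, using stubbed pod dicts
# (with the SDK you would fetch these via runpod.get_pods() and then
# call runpod.stop_pod(pod_id) for each selected id)
pods = [
    {"id": "pod_a", "desiredStatus": "RUNNING"},
    {"id": "pod_b", "desiredStatus": "EXITED"},
    {"id": "pod_c", "desiredStatus": "RUNNING"},
]

def ids_to_stop(pods):
    """Return the ids of pods that are still running."""
    return [p["id"] for p in pods if p["desiredStatus"] == "RUNNING"]

print(ids_to_stop(pods))  # ['pod_a', 'pod_c']
```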

📁 | Directory

.
├── docs               # Documentation
├── examples           # Examples
├── runpod             # Package source code
│   ├── api_wrapper    # Language library - API (GraphQL)
│   ├── cli            # Command Line Interface Functions
│   ├── endpoint       # Language library - Endpoints
│   └── serverless     # SDK - Serverless Worker
└── tests              # Package tests

🤝 | Community and Contributing

We welcome both pull requests and issues on GitHub. Bug fixes and new features are encouraged, but please read our contributing guide first.


Project details


Release history

This version

1.9.0

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

runpod-1.9.0.tar.gz (570.3 kB)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

runpod-1.9.0-py3-none-any.whl (328.0 kB)

Uploaded Python 3

File details

Details for the file runpod-1.9.0.tar.gz.

File metadata

  • Download URL: runpod-1.9.0.tar.gz
  • Upload date:
  • Size: 570.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for runpod-1.9.0.tar.gz
Algorithm Hash digest
SHA256 77ba9b45c4b926d88f80988e804d3866b66979f33d07605d616e3f5539028764
MD5 3e5c2a8961557c0d8771544af9d57c16
BLAKE2b-256 4cff03855d2b9de4466719f8e5ddebf27aba6812dd4337be34dbf9d67ffe87b6

See more details on using hashes here.

Provenance

The following attestation bundles were made for runpod-1.9.0.tar.gz:

Publisher: CD-publish_to_pypi.yml on runpod/runpod-python

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file runpod-1.9.0-py3-none-any.whl.

File metadata

  • Download URL: runpod-1.9.0-py3-none-any.whl
  • Upload date:
  • Size: 328.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for runpod-1.9.0-py3-none-any.whl
Algorithm Hash digest
SHA256 941b4a54611f9ad7b71609d0d4d10aa735422305f4c8a1574b79e8fe8e27b0f9
MD5 7b242fa2743588b7cb5c801f03b04d78
BLAKE2b-256 f5f9cecaedaef9124e05d3b0e0c16ea717f2eae1df95297fa874d32c937087d5

See more details on using hashes here.

Provenance

The following attestation bundles were made for runpod-1.9.0-py3-none-any.whl:

Publisher: CD-publish_to_pypi.yml on runpod/runpod-python

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
