
OpenReward Python SDK

The official Python SDK for OpenReward — a platform for building, hosting, and training on RL environments for language models.

The SDK has two complementary roles:

  • Build environments — define evaluation tasks, expose tools, and serve them via a standards-compliant API that can be deployed on the OpenReward platform.
  • Train agents — connect to any environment (local or hosted), run agent loops, and log rollouts with rewards back to OpenReward.

Installation

pip install openreward

For environments that process documents (PDF, DOCX, Excel, PowerPoint):

pip install "openreward[tools]"

Requires Python 3.11+.

Core concepts

Environment

An Environment subclass defines a benchmark or task distribution. Implement three required methods:

| Method | Purpose |
| --- | --- |
| list_splits() | Return split names, e.g. ["train", "test"] |
| list_tasks(split) | Return a deterministically ordered list of task dicts |
| get_prompt() | Return the task instructions as a list of TextBlock / ImageBlock |

Actions are defined as async methods decorated with @tool. Each tool receives a Pydantic model as input and returns a ToolOutput.
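
As a quick sketch of that shape (the import path, constructor keywords, and whether these methods are async are assumptions here, not the documented API):

from openreward import Environment, TextBlock  # assumed import path

class ArithmeticEnv(Environment):
    def list_splits(self):
        return ["train", "test"]

    def list_tasks(self, split):
        # Deterministically ordered task dicts for the given split.
        return [{"id": i, "question": f"{i} + {i}", "answer": str(i + i)} for i in range(10)]

    def get_prompt(self):
        # Instructions for the current task, as content blocks.
        return [TextBlock(text="Solve the arithmetic problem, then call submit_answer.")]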

ToolOutput

Every tool returns a ToolOutput containing:

  • blocks — a list of TextBlock or ImageBlock results
  • reward — optional float reward signal
  • finished — whether the episode is complete
  • metadata — optional arbitrary metadata
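
Continuing the sketch above, a tool on that environment might look roughly like this (the ToolOutput field names come from the list above; the import path, decorator usage, and constructor keywords are assumptions):

from pydantic import BaseModel
from openreward import Environment, TextBlock, ToolOutput, tool  # assumed import path

class SubmitAnswer(BaseModel):
    answer: str

class ArithmeticEnv(Environment):
    ...  # list_splits / list_tasks / get_prompt as in the Environment sketch

    @tool
    async def submit_answer(self, params: SubmitAnswer) -> ToolOutput:
        # In practice you would compare against the current task's answer.
        correct = params.answer.strip() == "4"
        return ToolOutput(
            blocks=[TextBlock(text="Correct!" if correct else "Incorrect.")],
            reward=1.0 if correct else 0.0,
            finished=True,
            metadata={"submitted": params.answer},
        )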

Server

Server wraps one or more Environment classes in a FastAPI app and exposes the Open Reward Standard API over HTTP with SSE streaming.

Key endpoints:

| Endpoint | Description |
| --- | --- |
| POST /create | Spawn a new environment session |
| POST /{env}/call | Execute a tool (streamed via SSE) |
| GET /{env}/prompt | Get the current task prompt |
| GET /{env}/tools | List available tools |
| POST /{env}/tasks | List all tasks for a split |
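
A minimal serving sketch (the Server constructor arguments and the app attribute are assumptions about the interface rather than its documented signature):

from openreward import Server  # assumed import path

# Hypothetical constructor signature; the real Server may accept environments differently.
server = Server(environments=[ArithmeticEnv])
app = server.app  # assumed to be the underlying FastAPI application

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)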

Sandboxes

Environments that need isolated compute (e.g. code execution) can spin up Docker containers via the sandbox API using SandboxSettings. Containers are managed automatically — started in setup() and torn down in teardown().
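
A rough sketch of how that could look; SandboxSettings is named above, but the class attribute and option names here are placeholders:

from openreward import Environment, SandboxSettings  # assumed import path

class CodeExecEnv(Environment):
    # Placeholder attribute and option names; consult the docs for the real settings.
    sandbox = SandboxSettings(image="python:3.11-slim")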

Toolsets

Group reusable tools into Toolset classes and compose them across environments via the toolsets class attribute.
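
For instance (the toolsets attribute name comes from the description above; the Toolset import and method shapes are assumptions):

from pydantic import BaseModel
from openreward import Environment, TextBlock, ToolOutput, Toolset, tool  # assumed imports

class EchoArgs(BaseModel):
    message: str

class EchoTools(Toolset):
    @tool
    async def echo(self, params: EchoArgs) -> ToolOutput:
        # A reusable tool that echoes the input back to the agent.
        return ToolOutput(blocks=[TextBlock(text=params.message)], finished=False)

class MyEnv(Environment):
    toolsets = [EchoTools]  # compose reusable tools into this environment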

Rollout logging

Log agent trajectories with reward signals back to OpenReward for analysis and training. The client's rollout API supports normalized message types as well as raw outputs from Anthropic, OpenAI, and Google GenAI SDKs.
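
A purely illustrative sketch; the client class and method names below are placeholders rather than the documented rollout API:

from openreward import OpenReward  # placeholder client name

client = OpenReward()  # assumed to pick up OPENREWARD_API_KEY from the environment
client.rollouts.log(  # placeholder method path
    environment="username/my-env",
    messages=[
        {"role": "user", "content": "Solve 2 + 2."},
        {"role": "assistant", "content": "4"},
    ],
    reward=1.0,
)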

CLI

The orwd CLI helps you scaffold environments locally and create them on the OpenReward platform.

Scaffold a new environment locally

# Minimal environment
orwd init my-env

# Environment with a Docker sandbox for code execution
orwd init my-env --template sandbox

Create an environment on OpenReward

Registers a new environment under your account (requires OPENREWARD_API_KEY):

orwd create my-env --description "A short description of my environment"

By default the environment is created under your personal namespace. To create it under an organisation you are a member of, pass --namespace:

orwd create my-env --description "A short description" --namespace my-org

Pass --private to make the environment private:

orwd create my-env --description "A short description" --private

Deploying to OpenReward

  1. Push your environment to a GitHub repository.
  2. Connect the repository in the OpenReward dashboard.
  3. Configure compute resources (CPU, memory, scaling).
  4. Every push to the connected branch triggers an automatic build and deployment.

Your environment is then accessible to any agent via the OpenReward API using the username/environment-name namespace.

Environment variables

| Variable | Description |
| --- | --- |
| OPENREWARD_API_KEY | API key for authentication |
| OPENREWARD_URL | Override base URL (default: https://openreward.ai) |
| OPENREWARD_USE_STRUCTURED_LOGS | Set to 1 for JSON logging (recommended in production) |
| OPENREWARD_ROLLOUT_LOGGING_FORMAT | pretty or structured for rollout log output |

Documentation

Full documentation, guides, and examples are at docs.openreward.ai.

License

Apache 2.0
