
# OpenReward Python SDK


The official Python SDK for OpenReward — a platform for building, hosting, and training on RL environments for language models.

The SDK has two complementary roles:

- Build environments — define evaluation tasks, expose tools, and serve them via a standards-compliant API that can be deployed on the OpenReward platform.
- Train agents — connect to any environment (local or hosted), run agent loops, and log rollouts with rewards back to OpenReward.

## Installation

```shell
pip install openreward
```

For environments that process documents (PDF, DOCX, Excel, PowerPoint):

```shell
pip install "openreward[tools]"
```

Requires Python 3.11+.

## Core concepts

### Environment

An `Environment` subclass defines a benchmark or task distribution. Implement three required methods:

| Method | Purpose |
| --- | --- |
| `list_splits()` | Return split names, e.g. `["train", "test"]` |
| `list_tasks(split)` | Return a deterministically ordered list of task dicts |
| `get_prompt()` | Return the task instructions as a list of `TextBlock` / `ImageBlock` |

Actions are defined as `async` methods decorated with `@tool`. Each tool receives a Pydantic model as input and returns a `ToolOutput`.

### ToolOutput

Every tool returns a `ToolOutput` containing:

- `blocks` — a list of `TextBlock` or `ImageBlock` results
- `reward` — optional float reward signal
- `finished` — whether the episode is complete
- `metadata` — optional arbitrary metadata

### Server

`Server` wraps one or more `Environment` classes in a FastAPI app and exposes the Open Reward Standard API over HTTP with SSE streaming.

Key endpoints:

| Endpoint | Description |
| --- | --- |
| `POST /create` | Spawn a new environment session |
| `POST /{env}/call` | Execute a tool (streamed via SSE) |
| `GET /{env}/prompt` | Get the current task prompt |
| `GET /{env}/tools` | List available tools |
| `POST /{env}/tasks` | List all tasks for a split |
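Because `/call` streams over SSE, a client consumes a sequence of `data:` events. The SSE framing below is the standard wire format (blank-line-separated events, `data:` fields); the JSON payload shapes are made-up illustrations, not the SDK's actual event schema.

```python
import json

def parse_sse(body: str) -> list[dict]:
    """Collect the JSON payload of every `data:` field in a raw SSE body."""
    payloads = []
    for event in body.split("\n\n"):       # events are blank-line separated
        for line in event.splitlines():
            if line.startswith("data:"):
                payloads.append(json.loads(line[len("data:"):].strip()))
    return payloads

# Hypothetical stream from a tool call: one intermediate block, one result.
stream = (
    'data: {"type": "block", "text": "step 1 done"}\n\n'
    'data: {"type": "result", "reward": 1.0, "finished": true}\n\n'
)
events = parse_sse(stream)
```

Real clients would read the body incrementally rather than splitting a complete string, but the field parsing is the same.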

### Sandboxes

Environments that need isolated compute (e.g. code execution) can spin up Docker containers via the sandbox API using `SandboxSettings`. Containers are managed automatically — started in `setup()` and torn down in `teardown()`.

### Toolsets

Group reusable tools into `Toolset` classes and compose them across environments via the `toolsets` class attribute.
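A sketch of what that composition might look like, using stand-in definitions: beyond the `Toolset` and `toolsets` names mentioned above, everything here (the marker decorator, the discovery method) is an assumption, not the SDK's actual mechanics.

```python
def tool(fn):
    # Stand-in for the SDK's @tool decorator: just marks the method.
    fn.is_tool = True
    return fn

class Toolset:
    """Stub base: exposes the names of decorated tool methods."""
    @classmethod
    def tool_names(cls):
        return [name for name, attr in vars(cls).items()
                if getattr(attr, "is_tool", False)]

class FileTools(Toolset):
    """A reusable group of file-access tools."""
    @tool
    async def read_file(self, path: str):
        ...

    @tool
    async def write_file(self, path: str, contents: str):
        ...

class MyEnv:
    # Shared across environments via the class attribute described above.
    toolsets = [FileTools]
```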

### Rollout logging

Log agent trajectories with reward signals back to OpenReward for analysis and training. The client's rollout API supports normalized message types as well as raw outputs from Anthropic, OpenAI, and Google GenAI SDKs.
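As an illustration of what "normalized" means here, the sketch below flattens an OpenAI-style chat-completion dict into plain role/content messages. The role/content shape and the function name are assumptions chosen for illustration; the SDK defines its own normalized message types.

```python
def normalize_chat_completion(response: dict) -> list[dict]:
    """Reduce a raw chat-completion response to role/content messages."""
    messages = []
    for choice in response.get("choices", []):
        msg = choice["message"]
        messages.append({"role": msg["role"], "content": msg.get("content") or ""})
    return messages

raw = {"choices": [{"message": {"role": "assistant", "content": "The answer is 3."}}]}
normalized = normalize_chat_completion(raw)
```

Logging either the raw provider output or such a normalized form lets the platform reconstruct the trajectory alongside its reward signal.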

## CLI

The `orwd` CLI helps you scaffold and create environments.

### Scaffold a new environment locally

```shell
# Minimal environment
orwd init my-env

# Environment with a Docker sandbox for code execution
orwd init my-env --template sandbox
```

### Create an environment on OpenReward

Registers a new environment under your account (requires `OPENREWARD_API_KEY`):

```shell
orwd create my-env --description "A short description of my environment"
```

By default the environment is created under your personal namespace. To create it under an organisation you are a member of, pass `--namespace`:

```shell
orwd create my-env --description "A short description" --namespace my-org
```

Pass `--private` to make the environment private:

```shell
orwd create my-env --description "A short description" --private
```

## Deploying to OpenReward

1. Push your environment to a GitHub repository.
2. Connect the repository in the OpenReward dashboard.
3. Configure compute resources (CPU, memory, scaling).
4. Every push to the connected branch triggers an automatic build and deployment.

Your environment is then accessible to any agent via the OpenReward API using the `username/environment-name` namespace.

## Environment variables

| Variable | Description |
| --- | --- |
| `OPENREWARD_API_KEY` | API key for authentication |
| `OPENREWARD_URL` | Override base URL (default: `https://openreward.ai`) |
| `OPENREWARD_USE_STRUCTURED_LOGS` | Set to `1` for JSON logging (recommended in production) |
| `OPENREWARD_ROLLOUT_LOGGING_FORMAT` | `pretty` or `structured` for rollout log output |

## Documentation

Full documentation, guides, and examples are at docs.openreward.ai.

## License

Apache 2.0
