Runway Robotics Python SDK
A Python SDK for interacting with Runway's General World Model (GWM-1) and video generation APIs, featuring action-conditioned world simulation and video-to-video transformation for robotics applications.
Features
- World Model — Generate predicted video sequences from robot actions and camera frames
- Image-to-Video — Generate video from a single reference image using text prompts
- Video-to-Video — Transform footage using text prompts and style references
Requirements
- Python 3.10+
- A Runway API key
Installation
```bash
pip install runway-robotics-sdk
```
Then set your API key as an environment variable:
```bash
export RUNWAYML_API_SECRET=your_api_key_here
```
Quick Start
All SDK functionality is accessed through the RunwayRobotics client:
```python
from runway_robotics_sdk import RunwayRobotics

client = RunwayRobotics()
```
The client reads your API key from RUNWAYML_API_SECRET by default. You can also pass it explicitly:
```python
client = RunwayRobotics(api_key="your_api_key_here")
```
| Parameter | Type | Description |
|---|---|---|
| `api_key` | `str \| None` | Runway API secret key. Defaults to the RUNWAYML_API_SECRET environment variable. |
| `base_url` | `str \| None` | Override the API base URL. Defaults to the RUNWAYML_BASE_URL environment variable or the production endpoint. Use this when connecting to a custom environment. |
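For example, to point the client at a non-default environment (the URL below is a placeholder for illustration, not an actual Runway endpoint):

```python
from runway_robotics_sdk import RunwayRobotics

# Placeholder URL; substitute the base URL of your own environment.
client = RunwayRobotics(
    api_key="your_api_key_here",
    base_url="https://api.example.com",
)
```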
Robotics World Model
Generate predicted video sequences from initial camera frames and robot action trajectories using client.world_model.create().
Single-View Mode
```python
from PIL import Image

world = client.world_model.create(
    base_view=Image.open("data/base_view.jpg"),
)

actions = load_your_action_sequence()  # placeholder: your own (N, 7) action trajectory

for i in range(0, len(actions), world.chunk_size):
    obs = world.step(actions[i : i + world.chunk_size])

world.save("outputs/trajectory.mp4")
```
Multi-View Mode
```python
from PIL import Image

world = client.world_model.create(
    base_view=Image.open("data/base_view.jpg"),
    wrist_view=Image.open("data/wrist_view.jpg"),
)

actions = load_your_action_sequence()  # placeholder: your own (N, 7) action trajectory

for i in range(0, len(actions), world.chunk_size):
    obs = world.step(actions[i : i + world.chunk_size])
    # obs.base_image — latest base camera frame
    # obs.wrist_image — latest wrist camera frame

world.save("outputs/trajectory.mp4")
```
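If you also want to archive individual wrist-camera frames between chunks, one option is to collect them yourself during the rollout. A minimal sketch, assuming the observation images convert cleanly to uint8 numpy arrays (file names are illustrative):

```python
import numpy as np
from PIL import Image

wrist_frames = []
for i in range(0, len(actions), world.chunk_size):
    obs = world.step(actions[i : i + world.chunk_size])
    # Keep a copy of the latest wrist view after each chunk.
    wrist_frames.append(np.asarray(obs.wrist_image))

# Save the final wrist frame as a still image for quick inspection.
Image.fromarray(wrist_frames[-1]).save("outputs/wrist_last.png")
```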
Action Format
Each action is a 7-element list:
| Index | Value |
|---|---|
| 0–5 | End-effector pose (x, y, z, roll, pitch, yaw) |
| 6 | Gripper state (continuous) |
Actions are processed in chunks of `world.chunk_size` (17) actions per API call. Shorter sequences are automatically padded to the chunk size.
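As a concrete illustration, here is one way an action chunk could be assembled in this format. The pose and gripper values are made up for the example; only the 7-element layout and the chunk length come from this section, and `world` is the environment created above:

```python
import numpy as np

# One action: [x, y, z, roll, pitch, yaw, gripper]
reach_forward = [0.02, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0]

# Repeat the action to fill one chunk; shorter sequences would be padded anyway.
actions = np.array([reach_forward] * world.chunk_size)  # shape (17, 7)

obs = world.step(actions)
```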
Supported Robot Arms
The current model works well with the following robot arms:
| Robot Arm | Training Data |
|---|---|
| Franka Emika Panda | DROID dataset |
| WidowX 250 S | BridgeData V2 |
API Reference
client.world_model.create()
| Parameter | Type | Default | Description |
|---|---|---|---|
| `base_view` | `PIL.Image \| np.ndarray` | required | Starting base camera view |
| `wrist_view` | `PIL.Image \| np.ndarray \| None` | `None` | Starting wrist camera view. If `None`, single-view mode is used. |
| `timeout` | `float` | `60` | Maximum seconds to wait for each API call |
| `show_progress` | `bool` | `False` | Display a progress bar |
Returns: A WorldModelEnv instance.
WorldModelEnv Methods:
| Method | Description |
|---|---|
| `step(action)` | Execute actions and return a `StepOutput` with `.base_image` and `.wrist_image` |
| `save(path)` | Save all generated frames as a video file (15 FPS) |
WorldModelEnv Properties:
| Property | Description |
|---|---|
| `frames` | List of all generated frames |
| `chunk_size` | Number of actions per API call (17) |
| `last_base_view` | Most recent base camera observation |
| `last_wrist_view` | Most recent wrist camera observation |
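For instance, after a rollout you might stack the accumulated frames for downstream processing. A minimal sketch, assuming each entry in `world.frames` converts cleanly to a numpy array:

```python
import numpy as np

# Stack every generated frame into a single (T, H, W, C) array.
video = np.stack([np.asarray(frame) for frame in world.frames])
print(video.shape, "chunk size:", world.chunk_size)
```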
Image-to-Video
Generate video from a single reference image via client.image_to_video.generate(). Provide a starting frame and an optional text prompt describing the desired motion or scene.
Basic Usage
```python
result = client.image_to_video.generate(
    image="robot_workspace.png",
    prompt="The robot arm reaches forward and grasps the red cup",
    show_progress=True,
)

# Save the raw video to disk
result.save("outputs/generated.mp4")

# Or access decoded frames directly
frames = result.frames  # np.ndarray of shape (T, H, W, C)
```
Input Formats
The image parameter accepts multiple formats:
| Format | Example |
|---|---|
| URI | "https://example.com/image.png" |
| Local file | "image.png" or Path("image.png") |
| PIL Image | Image.open("image.png") |
| Numpy array | (H, W, C) array |
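For example, a frame already held in memory as a numpy array can be passed directly. A sketch, assuming an (H, W, C) uint8 array such as one loaded from disk:

```python
import numpy as np
from PIL import Image

# Any (H, W, C) uint8 array works; here we just load a file and convert it.
frame = np.asarray(Image.open("robot_workspace.png").convert("RGB"))

result = client.image_to_video.generate(
    image=frame,
    prompt="The robot arm reaches forward and grasps the red cup",
)
```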
API Reference
client.image_to_video.generate()
| Parameter | Type | Default | Description |
|---|---|---|---|
| `image` | `str \| Path \| PIL.Image \| np.ndarray` | required | Input image |
| `prompt` | `str \| None` | `None` | Text description of the desired output (1–1000 chars) |
| `ratio` | `str \| None` | `None` | Output aspect ratio. Auto-selected from the input dimensions if `None`. Options: `"1280:720"`, `"720:1280"`, `"1104:832"`, `"832:1104"`, `"960:960"`, `"1584:672"` |
| `duration` | `int` | `10` | Output video duration in seconds |
| `seed` | `int \| None` | `None` | Random seed for reproducibility |
| `timeout` | `float` | `300` | Maximum seconds to wait for the API response |
| `show_progress` | `bool` | `False` | Display a progress bar |
Returns: A GenerationResult with .bytes (raw video), .frames (lazy-decoded np.ndarray of shape (T, H, W, C)), and .save(path).
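Putting the parameters together, a sketch of a fully specified call; the prompt, ratio, and seed values here are arbitrary choices, not recommendations:

```python
result = client.image_to_video.generate(
    image="robot_workspace.png",
    prompt="The robot arm slowly pans across the cluttered workspace",
    ratio="1280:720",
    duration=10,
    seed=42,
    timeout=300,
    show_progress=True,
)
result.save("outputs/workspace_pan.mp4")
```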
Video-to-Video
Transform videos using text prompts via client.video_to_video.generate(). Common use cases include:
- Scene modification — Add or change objects on a workspace surface
- Lighting conditions — Simulate different lighting environments
- Material appearance — Change object textures, colors, or materials
Basic Usage
```python
result = client.video_to_video.generate(
    video="rollout.mp4",
    prompt="Make the lighting in the scene dark as if the power was out in the warehouse.",
    show_progress=True,
)

result.save("outputs/transformed.mp4")
frames = result.frames  # np.ndarray of shape (T, H, W, C)
```
Input Formats
The video parameter accepts multiple formats:
| Format | Example |
|---|---|
| URI | "https://example.com/video.mp4" |
| Local file | "video.mp4" or Path("video.mp4") |
| Numpy array | Single (T, H, W, C) array |
| Frame list | List of (H, W, C) arrays |
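Because the video parameter also accepts a list of frames, world model output can be fed straight into a transformation. A sketch, assuming `world.frames` holds (H, W, C) images convertible to numpy arrays; the prompt is illustrative:

```python
import numpy as np

# Convert the world model rollout into a list of (H, W, C) frames.
frames = [np.asarray(f) for f in world.frames]

result = client.video_to_video.generate(
    video=frames,
    prompt="Change the tabletop to a dark wooden surface",
)
result.save("outputs/augmented_rollout.mp4")
```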
Style References
Guide the transformation with a reference image:
```python
from PIL import Image

result = client.video_to_video.generate(
    video="rollout.mp4",
    prompt="Transform the scene to match the reference lighting",
    references=[Image.open("reference.jpg")],
)

result.save("outputs/transformed.mp4")
```
API Reference
client.video_to_video.generate()
| Parameter | Type | Default | Description |
|---|---|---|---|
| `video` | `str \| Path \| np.ndarray \| list[np.ndarray]` | required | Input video |
| `prompt` | `str` | required | Text description of the desired transformation (1–1000 chars) |
| `ratio` | `str \| None` | `None` | Output aspect ratio. Auto-selected from the input dimensions if `None`. Options: `"1280:720"`, `"720:1280"`, `"1104:832"`, `"960:960"`, `"832:1104"`, `"1584:672"`, `"848:480"`, `"640:480"` |
| `seed` | `int \| None` | `None` | Random seed for reproducibility |
| `references` | `list \| None` | `None` | Up to 1 reference image for style guidance |
| `timeout` | `float` | `300` | Maximum seconds to wait for the API response |
| `show_progress` | `bool` | `False` | Display a progress bar |
Returns: A GenerationResult with .bytes (raw video), .frames (lazy-decoded np.ndarray of shape (T, H, W, C)), and .save(path).
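A common pattern is to generate several visual variations of one rollout for data augmentation. A minimal sketch using only the documented parameters; the prompts and seeds are illustrative:

```python
variations = [
    "Make the lighting warm and dim, as if at sunset",
    "Replace the tabletop objects with metal tools",
    "Add harsh overhead fluorescent lighting",
]

for i, prompt in enumerate(variations):
    result = client.video_to_video.generate(
        video="rollout.mp4",
        prompt=prompt,
        seed=i,
    )
    result.save(f"outputs/augmented_{i}.mp4")
```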
Error Handling
The SDK raises the following exceptions:
| Exception | When |
|---|---|
| `ValueError` | Invalid inputs (bad aspect ratio, too many references, empty actions) |
| `GenerationError` | A generation task failed, timed out, or the output could not be downloaded |
| `AuthenticationError` | Invalid API key (HTTP 401) |
| `RateLimitError` | Too many requests (HTTP 429) |
| `APIStatusError` | Any other API HTTP error (4xx/5xx); has `.status_code` and `.body` |
| `APIConnectionError` | Network connectivity failure |
All exceptions can be imported directly from runway_robotics_sdk:
```python
from runway_robotics_sdk import GenerationError, AuthenticationError

try:
    obs = world.step(actions)
except GenerationError as e:
    print(f"Simulation failed: {e}")
except AuthenticationError:
    print("Check your API key")
```
Examples
```bash
uv run python examples/world_model.py     # World model action rollout (multi-view)
uv run python examples/image_to_video.py  # Generate rollout video from an image
uv run python examples/video_to_video.py  # Transform a rollout video with a text prompt
```
Interactive Web Demo
Launch the Gradio-based demo for interactive world model simulation:
```bash
pip install runway-robotics-sdk[demo]
uv run python demo/app.py
```
Open http://localhost:7880 to upload camera frames, provide action sequences, and generate predicted trajectories.
Contributing
Clone the repository and install development dependencies:
```bash
git clone https://github.com/runwayml/robotics-sdk-python.git
cd robotics-sdk-python
uv sync --all-extras
uv run lefthook install
```
Git hooks (via Lefthook) run automatically:
- Pre-commit — `ruff format` and `ruff check --fix` on staged files (auto-fixes and re-stages)
- Pre-push — `mypy` type checking and `pytest` with coverage
You can also run checks manually:
```bash
uv run ruff format .                              # Format code
uv run ruff check --fix .                         # Lint (with auto-fix)
uv run mypy runway_robotics_sdk/ demo/ examples/  # Type check
uv run pytest                                     # Run tests
```
All checks are also enforced via GitHub Actions CI.
Data Attribution
Example data in this repository uses the DROID (Distributed Robot Interaction Dataset), licensed under CC BY 4.0. See CITATION.md for full citation details.
License
This project is licensed under the Apache License 2.0. See LICENSE for details.