

RoboReason

RoboReason is a Python package that makes it easy to apply reward models and video-language reasoning models (local checkpoints as well as OpenAI and Gemini API models) to your robot videos.

Supported Models

  • Robometer (4B)
  • SOLE-R1 (8B)
  • TOPReward (based on Qwen3-VL-8B)
  • RoboReward (8B)
  • API models: OpenAI (e.g., GPT-5) and Google (e.g., Gemini-3-Pro)

ToDos

  • Enable fine-tuning of reward models on custom datasets

📦 File Structure

roboreason/
├── roboreason/         # Main package
│   ├── robometer/         # Robometer code
│   ├── sole.py            # SOLE-R1 code
│   ├── roboreward.py      # RoboReward code
│   ├── topreward.py       # TOPReward code
│   └── api_models.py      # OpenAI and Gemini APIs
├── test_videos/        # Example videos to test
├── model_outputs/      # Videos showing model outputs
├── lerobot_examples/   # Examples showing integration with lerobot datasets
└── pyproject.toml      # Dependencies (uv)

Install

Option 1: quick pip install

pip install -U roboreason

Option 2: use uv for dependency management

1. Clone the repository:

git clone https://github.com/philipmit/roboreason

2. Install uv:

pip install uv

3. Sync the environment:

uv sync

4. Activate the environment:

source .venv/bin/activate

Download model checkpoints

# SOLE-R1 (8B)
python -c "from roboreason.utils.model_utils import get_model_dir; get_model_dir('sole')"

# Robometer (4B)
python -c "from roboreason.utils.model_utils import get_model_dir; get_model_dir('robometer')"

# TOPReward (based on Qwen3-VL-8B)
python -c "from roboreason.utils.model_utils import get_model_dir; get_model_dir('topreward')"

# RoboReward (8B)
python -c "from roboreason.utils.model_utils import get_model_dir; get_model_dir('roboreward')"

Quick start: Example reward generation and plotting

# pip install -U roboreason
import roboreason as rr

video_paths = ['test_videos/robosuite/robosuite_lift_example_00.mp4']
task_description = "Pick up the cube from the table."

# Robometer
rewards, success_probs = rr.generate(
    model="robometer",
    task_description=task_description,
    video_paths=video_paths,
    view_type_per_video=['external'],
)
output_robometer = {"model": "robometer", "rewards": rewards[0]}

# SOLE-R1
rewards, reasoning_traces = rr.generate(
    model="sole-r1",
    task_description=task_description,
    video_paths=video_paths,
    view_type_per_video=['external and wrist'],
)
output_sole = {"model": "sole-r1", "rewards": rewards[0], "reasoning_traces": reasoning_traces[0]}

rr.video_plot(
    outputs=[output_sole, output_robometer],
    plot_save_path='model_outputs/combined/robosuite_lift_example_00.mp4',
    video_path=video_paths[0],
)

Examples: generating rewards with each model

Robometer

import roboreason as rr

rewards, success_probs = rr.generate(
    model="robometer",  
    task_description="Pick up the cube from the table.", 
    video_paths=['test_videos/robosuite/robosuite_lift_example_00.mp4'], 
    view_type_per_video=['external']
)

SOLE-R1

import roboreason as rr

rewards, reasoning_traces = rr.generate(
    model="sole-r1",  
    task_description="Pick up the cube from the table.", 
    video_paths=['test_videos/robosuite/robosuite_lift_example_00.mp4'], 
    view_type_per_video=['external and wrist']
)

TOPReward

import roboreason as rr

rewards = rr.generate(
    model="topreward",  
    task_description="Pick up the cube from the table.", 
    video_paths=['test_videos/robosuite/robosuite_lift_example_00.mp4'], 
    view_type_per_video=['external']
)

RoboReward

import roboreason as rr

rewards = rr.generate(
    model="roboreward",  
    task_description="Pick up the cube from the table.", 
    video_paths=['test_videos/robosuite/robosuite_lift_example_00.mp4'], 
    view_type_per_video=['external']
)

GPT-5 (and other OpenAI models)

import roboreason as rr

# requires OpenAI API key: https://developers.openai.com/api/docs/quickstart
API_KEY = "..."

rewards, reasoning_traces = rr.generate(
    model="gpt-5",  
    task_description="Pick up the cube from the table.", 
    video_paths=['test_videos/robosuite/robosuite_lift_example_00.mp4'], 
    view_type_per_video=['external'], 
    key=API_KEY
)

Gemini-3-Pro (and other Google models)

import roboreason as rr

# requires Gemini API key: https://ai.google.dev/gemini-api/docs/api-key
API_KEY = "..."

rewards, reasoning_traces = rr.generate(
    model="gemini-3-pro-preview",  
    task_description="Pick up the cube from the table.", 
    video_paths=['test_videos/robosuite/robosuite_lift_example_00.mp4'], 
    view_type_per_video=['external'], 
    key=API_KEY
)

Video plotting

import roboreason as rr

task_description = "Pick up the cube from the table."
video_paths = ['test_videos/robosuite/robosuite_lift_example_00.mp4']

# Robometer
rewards, success_probs = rr.generate(
    model="robometer",
    task_description=task_description,
    video_paths=video_paths,
    view_type_per_video=['external'],
)
output_robometer = {"model": "robometer", "rewards": rewards[0]}

# SOLE-R1
rewards, reasoning_traces = rr.generate(
    model="sole-r1",
    task_description=task_description,
    video_paths=video_paths,
    view_type_per_video=['external and wrist'],
)
output_sole = {"model": "sole-r1", "rewards": rewards[0], "reasoning_traces": reasoning_traces[0]}

rr.video_plot(
    outputs=[output_sole, output_robometer],
    plot_save_path='model_outputs/combined/robosuite_lift_example_00.mp4',
    video_path='test_videos/robosuite/robosuite_lift_example_00.mp4'
)

rr.generate

Arguments:

  • model (str): Name of the model to use. Options: "robometer", "sole-r1", "topreward", "roboreward", OpenAI models (e.g., "gpt-5"), and Google models (e.g., "gemini-3-pro-preview").
  • task_description (str): Natural language description of the task the robot is performing.
  • video_paths (List[str]): List of paths to input video files.
  • view_type_per_video (List[str]): Camera view(s) used for reward reasoning for each video (e.g., "external", "wrist", or "external and wrist").
  • key (str): API key required for external models (e.g., OpenAI or Gemini). Not needed for local models.
Return values by model:

  • SOLE-R1 / GPT / Gemini: rewards, reasoning_traces
  • Robometer: rewards, success_probs
  • TOPReward / RoboReward: rewards
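Because the return arity varies by model, a small dispatcher (an illustration, not part of the roboreason API) can normalize rr.generate results into a single dict; the model groupings below follow the return values listed above:

```python
# Models whose generate() call returns (rewards, reasoning_traces).
TWO_VALUE_MODELS = {"sole-r1", "gpt-5", "gemini-3-pro-preview"}

def unpack_generate_output(model, result):
    """Normalize a model-dependent rr.generate result into one dict."""
    if model in TWO_VALUE_MODELS:
        rewards, reasoning_traces = result
        return {"model": model, "rewards": rewards,
                "reasoning_traces": reasoning_traces}
    if model == "robometer":
        rewards, success_probs = result
        return {"model": model, "rewards": rewards,
                "success_probs": success_probs}
    # topreward / roboreward return rewards only
    return {"model": model, "rewards": result}
```
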

rr.video_plot

Arguments:

  • outputs (List[dict], optional*): List of model outputs (e.g., from rr.generate) to visualize together.
  • plot_save_path (str): Path where the output video with overlays will be saved.
  • video_path (str): Path to the original video file being visualized.
  • view_type (str): View type used for visualization (e.g., "external", "wrist", "external and wrist").
  • show_reasoning_traces (bool): Whether to overlay reasoning traces on the video. Default: False.
  • show_all_frames (bool): Whether to render all frames instead of sampled frames. Default: False.
  • model (str, optional**): Model name (used when calling video_plot directly instead of passing outputs).
  • task_description (str, optional**): Task description (used in direct-call mode).
  • video_paths (List[str], optional**): Input videos (used in direct-call mode).
  • view_type_per_video (List[str], optional**): View types per video (used in direct-call mode).
  • key (str, optional**): API key (if required for the model).
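Each entry in outputs follows the dict shape used in the quick start ({"model": ..., "rewards": rewards[0], ...}). A small helper for building one entry per video (hypothetical, not a roboreason function) might look like:

```python
def make_output(model, rewards, reasoning_traces=None, video_index=0):
    """Build one `outputs` entry for rr.video_plot from rr.generate results.

    rewards (and reasoning_traces, if given) are per-video lists, as
    returned by rr.generate; video_index selects which video to plot.
    """
    entry = {"model": model, "rewards": rewards[video_index]}
    if reasoning_traces is not None:
        entry["reasoning_traces"] = reasoning_traces[video_index]
    return entry
```
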



Download files

Source Distribution

  • roboreason-0.1.3.5.tar.gz (667.0 kB)

Built Distribution

  • roboreason-0.1.3.5-py3-none-any.whl (747.5 kB)

File details

Details for the file roboreason-0.1.3.5.tar.gz.

  • Size: 667.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.10.20

File hashes

  • SHA256: 5f11790d9e05c25e364495a798b4a0f64a081b497459cb55b4985a9689e8e4d8
  • MD5: 19fc533176e628691c333340405b67c7
  • BLAKE2b-256: 6be93cdb735b591cd9ab2d89cc4532a00cd8f923473dc3ffeae56084d9b0a7b8

File details

Details for the file roboreason-0.1.3.5-py3-none-any.whl.

  • Size: 747.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.10.20

File hashes

  • SHA256: 952f07e008bc891b96fa41b320a192ea3b0392d197e833ef24f628060da0a35f
  • MD5: 4680afcfcb25808febd117ce4c608fa2
  • BLAKE2b-256: d0406965c9a593b26f38a9541286e7c37f7b731326b13e4a911a5d3233bbd963
