
RoboReason

RoboReason is a Python package that makes it easy to apply reward models and video-language reasoning models to your robot videos.

Supported Models

  • Robometer (4B)
  • SOLE-R1 (8B)
  • TOPReward (based on Qwen3-VL-8B)
  • RoboReward (8B)
  • API models: OpenAI (e.g., GPT-5) and Google (e.g., Gemini-3-Pro)

ToDos

  • Enable fine-tuning of reward models on custom datasets

📦 File Structure

roboreason/
├── roboreason/         # Main package
│   ├── robometer/         # Robometer code
│   ├── sole.py            # SOLE-R1 code
│   ├── roboreward.py      # RoboReward code
│   ├── topreward.py       # TOPReward code
│   └── api_models.py      # OpenAI and Gemini APIs
├── test_videos/        # Example videos to test
├── model_outputs/      # Videos showing model outputs
├── lerobot_examples/   # Examples showing integration with lerobot datasets
└── pyproject.toml      # Dependencies (uv)

Install

Option 1: quick pip install

pip install -U roboreason

Option 2: use uv for dependency management

1. Clone the repository:

git clone https://github.com/philipmit/roboreason

2. Install uv

pip install uv

3. Sync environment

uv sync

4. Activate environment

source .venv/bin/activate

Download model checkpoints

# SOLE-R1 (8B)
python -c "from roboreason.utils.model_utils import get_model_dir; get_model_dir('sole')"

# Robometer (4B)
python -c "from roboreason.utils.model_utils import get_model_dir; get_model_dir('robometer')"

# TOPReward (based on Qwen3-VL-8B)
python -c "from roboreason.utils.model_utils import get_model_dir; get_model_dir('topreward')"

# RoboReward (8B)
python -c "from roboreason.utils.model_utils import get_model_dir; get_model_dir('roboreward')"

Quick start: Example reward generation and plotting

# pip install -U roboreason
import roboreason as rr

video_paths = ['test_videos/robosuite/robosuite_lift_example_00.mp4']
task_description = "Pick up the cube from the table."

# Robometer
rewards, success_probs = rr.generate(model="robometer", task_description=task_description, video_paths=video_paths, view_type_per_video=['external'])
output_robometer = {"model": "robometer", "rewards": rewards[0]}

# SOLE-R1
rewards, reasoning_traces = rr.generate(model="sole-r1", task_description=task_description, video_paths=video_paths, view_type_per_video=['external and wrist'])
output_sole = {"model": "sole-r1", "rewards": rewards[0], "reasoning_traces": reasoning_traces[0]}

rr.video_plot(outputs=[output_sole, output_robometer], plot_save_path='model_outputs/combined/robosuite_lift_example_00.mp4', video_path=video_paths[0])
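Different reward models may score on different numeric scales, so before overlaying several models in one plot you may want to min-max scale each reward sequence into a common range. The helper below is a hypothetical sketch (not part of the roboreason API), assuming each `rewards[i]` is a flat list of floats as used in the example above:

```python
def normalize_rewards(rewards, lo=0.0, hi=1.0):
    """Min-max scale one reward sequence into [lo, hi] for side-by-side plotting."""
    mn, mx = min(rewards), max(rewards)
    if mx == mn:
        # Constant sequence: map everything to the lower bound.
        return [lo for _ in rewards]
    scale = (hi - lo) / (mx - mn)
    return [lo + (r - mn) * scale for r in rewards]
```

For example, `normalize_rewards(rewards[0])` could be stored in `output_robometer["rewards"]` in place of the raw values before calling `rr.video_plot`.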

Example usage for each supported model

Robometer

import roboreason as rr

rewards, success_probs = rr.generate(
    model="robometer",  
    task_description="Pick up the cube from the table.", 
    video_paths=['test_videos/robosuite/robosuite_lift_example_00.mp4'], 
    view_type_per_video=['external']
)
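Because Robometer also returns `success_probs` (here assumed to be one probability per input video), a small hypothetical helper can rank a batch of rollouts by predicted success, e.g. to surface the best episodes for review:

```python
def rank_episodes(video_paths, success_probs):
    """Pair each video with its success probability and sort best-first."""
    return sorted(zip(video_paths, success_probs), key=lambda pair: pair[1], reverse=True)
```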

SOLE-R1

import roboreason as rr

rewards, reasoning_traces = rr.generate(
    model="sole-r1",  
    task_description="Pick up the cube from the table.", 
    video_paths=['test_videos/robosuite/robosuite_lift_example_00.mp4'], 
    view_type_per_video=['external and wrist']
)

TOPReward

import roboreason as rr

rewards = rr.generate(
    model="topreward",  
    task_description="Pick up the cube from the table.", 
    video_paths=['test_videos/robosuite/robosuite_lift_example_00.mp4'], 
    view_type_per_video=['external']
)

RoboReward

import roboreason as rr

rewards = rr.generate(
    model="roboreward",  
    task_description="Pick up the cube from the table.", 
    video_paths=['test_videos/robosuite/robosuite_lift_example_00.mp4'], 
    view_type_per_video=['external']
)

GPT-5 (and other OpenAI models)

import roboreason as rr

# requires OpenAI API key: https://developers.openai.com/api/docs/quickstart
API_KEY = "..."

rewards, reasoning_traces = rr.generate(
    model="gpt-5",  
    task_description="Pick up the cube from the table.", 
    video_paths=['test_videos/robosuite/robosuite_lift_example_00.mp4'], 
    view_type_per_video=['external'], 
    key=API_KEY
)

Gemini-3-Pro (and other Google models)

import roboreason as rr

# requires Gemini API key: https://ai.google.dev/gemini-api/docs/api-key
API_KEY = "..."

rewards, reasoning_traces = rr.generate(
    model="gemini-3-pro-preview",  
    task_description="Pick up the cube from the table.", 
    video_paths=['test_videos/robosuite/robosuite_lift_example_00.mp4'], 
    view_type_per_video=['external'], 
    key=API_KEY
)

Video plotting

import roboreason as rr

video_paths = ['test_videos/robosuite/robosuite_lift_example_00.mp4']
task_description = "Pick up the cube from the table."

# Robometer
rewards, success_probs = rr.generate(model="robometer", task_description=task_description, video_paths=video_paths, view_type_per_video=['external'])
output_robometer = {"model": "robometer", "rewards": rewards[0]}

# SOLE-R1
rewards, reasoning_traces = rr.generate(model="sole-r1", task_description=task_description, video_paths=video_paths, view_type_per_video=['external and wrist'])
output_sole = {"model": "sole-r1", "rewards": rewards[0], "reasoning_traces": reasoning_traces[0]}

rr.video_plot(
    outputs=[output_sole, output_robometer],
    plot_save_path='model_outputs/combined/robosuite_lift_example_00.mp4',
    video_path='test_videos/robosuite/robosuite_lift_example_00.mp4'
)

rr.generate

Argument            | Type      | Description
--------------------|-----------|------------------------------------------------------------
model               | str       | Model to use: "robometer", "sole-r1", "topreward", "roboreward", an OpenAI model (e.g., "gpt-5"), or a Google model (e.g., "gemini-3-pro-preview").
task_description    | str       | Natural-language description of the task the robot is performing.
video_paths         | List[str] | Paths to the input video files.
view_type_per_video | List[str] | Camera view(s) used for reward reasoning for each video: "external", "wrist", or "external and wrist".
key                 | str       | API key for external models (e.g., OpenAI or Gemini). Not needed for local models.

Return values depend on the model:

Model type             | Return values
-----------------------|--------------------------
SOLE-R1 / GPT / Gemini | rewards, reasoning_traces
Robometer              | rewards, success_probs
TOPReward / RoboReward | rewards
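Since the return signature of rr.generate varies by model, a small dispatch table mirroring the mapping above (a hypothetical convenience, not part of the roboreason API) can normalize any result into a dict keyed by field name:

```python
# Field names per model, following the return-values table above.
RETURN_FIELDS = {
    "sole-r1": ("rewards", "reasoning_traces"),
    "gpt-5": ("rewards", "reasoning_traces"),
    "gemini-3-pro-preview": ("rewards", "reasoning_traces"),
    "robometer": ("rewards", "success_probs"),
    "topreward": ("rewards",),
    "roboreward": ("rewards",),
}

def unpack(model, result):
    """Return a dict keyed by field name, whatever shape rr.generate returned."""
    fields = RETURN_FIELDS[model]
    values = result if isinstance(result, tuple) else (result,)
    return dict(zip(fields, values))
```

For example, `unpack("robometer", rr.generate(model="robometer", ...))` yields a dict with "rewards" and "success_probs" keys.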

rr.video_plot

Argument              | Type       | Required | Description
----------------------|------------|----------|------------------------------------------------------------
outputs               | List[dict] | ❌*      | List of model outputs (e.g., from rr.generate) to visualize together.
plot_save_path        | str        |          | Path where the output video with overlays will be saved.
video_path            | str        |          | Path to the original video file being visualized.
view_type             | str        |          | View type used for visualization (e.g., "external", "wrist", "external and wrist").
show_reasoning_traces | bool       |          | Whether to overlay reasoning traces on the video. Default: False.
show_all_frames       | bool       |          | Whether to render all frames instead of sampled frames. Default: False.
model                 | str        | ❌**     | Model name (used when calling video_plot directly instead of passing outputs).
task_description      | str        | ❌**     | Task description (used in direct-call mode).
video_paths           | List[str]  | ❌**     | Input videos (used in direct-call mode).
view_type_per_video   | List[str]  | ❌**     | View types per video (used in direct-call mode).
key                   | str        | ❌**     | API key (if required for the model).
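Each entry of the outputs list is a plain dict, as shown in the quick-start example. A small hypothetical builder (not part of the roboreason API) makes the expected shape explicit:

```python
def make_output(model, rewards, reasoning_traces=None):
    """Build one entry of the `outputs` list that rr.video_plot accepts."""
    out = {"model": model, "rewards": rewards}
    if reasoning_traces is not None:
        # Only reasoning models (e.g., SOLE-R1, GPT, Gemini) supply traces.
        out["reasoning_traces"] = reasoning_traces
    return out
```

With this, the quick-start call becomes `rr.video_plot(outputs=[make_output("sole-r1", rewards[0], reasoning_traces[0]), make_output("robometer", rewards[0])], ...)`.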
