
RoboReason

RoboReason is a Python package that makes it easy to apply any reward model or video-language reasoning model to your robot videos.

Supported Models

  • Robometer (4B)
  • SOLE-R1 (8B)
  • TOPReward (based on Qwen3-VL-8B)
  • RoboReward (8B)
  • API models: OpenAI (e.g., GPT-5) and Google (e.g., Gemini-3-Pro)

ToDos

  • Enable fine-tuning of reward models on custom datasets

📦 File Structure

roboreason/
├── roboreason/         # Main package
│   ├── robometer/         # Robometer code
│   ├── sole.py            # SOLE-R1 code
│   ├── roboreward.py      # RoboReward code
│   ├── topreward.py       # TOPReward code
│   └── api_models.py      # OpenAI and Gemini APIs
├── test_videos/        # Example videos to test
├── model_outputs/      # Videos showing model outputs
├── lerobot_examples/   # Examples showing integration with lerobot datasets
└── pyproject.toml      # Dependencies (uv)

Install

Option 1: quick pip install

pip install -U roboreason

Option 2: use uv for dependency management

1. Clone the repository:

git clone https://github.com/philipmit/roboreason

2. Install uv:

pip install uv

3. Sync the environment:

uv sync

4. Activate the environment:

source .venv/bin/activate

Download model checkpoints

# SOLE-R1 (8B)
python -c "from roboreason.utils.model_utils import get_model_dir; get_model_dir('sole')"

# Robometer (4B)
python -c "from roboreason.utils.model_utils import get_model_dir; get_model_dir('robometer')"

# TOPReward (based on Qwen3-VL-8B)
python -c "from roboreason.utils.model_utils import get_model_dir; get_model_dir('topreward')"

# RoboReward (8B)
python -c "from roboreason.utils.model_utils import get_model_dir; get_model_dir('roboreward')"

Quick start: Example reward generation and plotting

# pip install -U roboreason
import roboreason as rr

video_paths = ['test_videos/robosuite/robosuite_lift_example_00.mp4']
task_description = "Pick up the cube from the table."

# Robometer
rewards, success_probs = rr.generate(model="robometer", task_description=task_description, video_paths=video_paths, view_type_per_video=['external'])
output_robometer = {"model": "robometer", "rewards": rewards[0]}

# SOLE-R1
rewards, reasoning_traces = rr.generate(model="sole-r1", task_description=task_description, video_paths=video_paths, view_type_per_video=['external and wrist'])
output_sole = {"model": "sole-r1", "rewards": rewards[0], "reasoning_traces": reasoning_traces[0]}

rr.video_plot(outputs=[output_sole, output_robometer], plot_save_path='model_outputs/combined/robosuite_lift_example_00.mp4', video_path=video_paths[0])

Generation examples for each model

Robometer

import roboreason as rr

rewards, success_probs = rr.generate(
    model="robometer",  
    task_description="Pick up the cube from the table.", 
    video_paths=['test_videos/robosuite/robosuite_lift_example_00.mp4'], 
    view_type_per_video=['external']
)

SOLE-R1

import roboreason as rr

rewards, reasoning_traces = rr.generate(
    model="sole-r1",  
    task_description="Pick up the cube from the table.", 
    video_paths=['test_videos/robosuite/robosuite_lift_example_00.mp4'], 
    view_type_per_video=['external and wrist']
)

TOPReward

import roboreason as rr

rewards = rr.generate(
    model="topreward",  
    task_description="Pick up the cube from the table.", 
    video_paths=['test_videos/robosuite/robosuite_lift_example_00.mp4'], 
    view_type_per_video=['external']
)

RoboReward

import roboreason as rr

rewards = rr.generate(
    model="roboreward",  
    task_description="Pick up the cube from the table.", 
    video_paths=['test_videos/robosuite/robosuite_lift_example_00.mp4'], 
    view_type_per_video=['external']
)

GPT-5 (and other OpenAI models)

import roboreason as rr

# requires OpenAI API key: https://developers.openai.com/api/docs/quickstart
API_KEY = "..."

rewards, reasoning_traces = rr.generate(
    model="gpt-5",  
    task_description="Pick up the cube from the table.", 
    video_paths=['test_videos/robosuite/robosuite_lift_example_00.mp4'], 
    view_type_per_video=['external'], 
    key=API_KEY
)

Gemini-3-Pro (and other Google models)

import roboreason as rr

# requires Gemini API key: https://ai.google.dev/gemini-api/docs/api-key
API_KEY = "..."

rewards, reasoning_traces = rr.generate(
    model="gemini-3-pro-preview",  
    task_description="Pick up the cube from the table.", 
    video_paths=['test_videos/robosuite/robosuite_lift_example_00.mp4'], 
    view_type_per_video=['external'], 
    key=API_KEY
)
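Rather than hardcoding API keys in source as in the snippets above, they can be read from the environment. This is a general sketch, not part of the roboreason API; the load_api_key helper and the GEMINI_API_KEY variable name are illustrative conventions:

```python
import os

def load_api_key(var_name):
    """Fetch an API key from the environment; fail loudly if it is unset."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"Set {var_name} before calling API models.")
    return key

# API_KEY = load_api_key("GEMINI_API_KEY")  # then pass key=API_KEY to rr.generate
```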

Video plotting

import roboreason as rr

task_description = "Pick up the cube from the table."
video_paths = ['test_videos/robosuite/robosuite_lift_example_00.mp4']

# Robometer
rewards, success_probs = rr.generate(model="robometer", task_description=task_description, video_paths=video_paths, view_type_per_video=['external'])
output_robometer = {"model": "robometer", "rewards": rewards[0]}

# SOLE-R1
rewards, reasoning_traces = rr.generate(model="sole-r1", task_description=task_description, video_paths=video_paths, view_type_per_video=['external and wrist'])
output_sole = {"model": "sole-r1", "rewards": rewards[0], "reasoning_traces": reasoning_traces[0]}

rr.video_plot(
    outputs=[output_sole, output_robometer],
    plot_save_path='model_outputs/combined/robosuite_lift_example_00.mp4',
    video_path='test_videos/robosuite/robosuite_lift_example_00.mp4'
)

rr.generate

| Argument | Type | Required | Description |
|----------|------|----------|-------------|
| model | str | ✅ | Name of the model to use. Options: "robometer", "sole-r1", "topreward", "roboreward", OpenAI models (e.g., "gpt-5"), Google models (e.g., "gemini-3-pro-preview") |
| task_description | str | ✅ | Natural language description of the task the robot is performing. |
| video_paths | List[str] | ✅ | List of paths to input video files. |
| view_type_per_video | List[str] | ✅ | Camera view(s) used for reward reasoning for each video (e.g., "external", "wrist", or "external and wrist"). |
| key | str | ❌ | API key required for external models (e.g., OpenAI or Gemini). Not needed for local models. |
| Model Type | Return Values |
|------------|---------------|
| SOLE-R1 / GPT / Gemini | rewards, reasoning_traces |
| Robometer | rewards, success_probs |
| TOPReward / RoboReward | rewards |
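Because the return arity differs by model, a small helper can normalize results into the per-video dicts that rr.video_plot consumes. This is an illustrative sketch, not part of the roboreason API; RETURN_FIELDS simply mirrors the return-value table above:

```python
# Map each model name to the field names rr.generate returns for it.
RETURN_FIELDS = {
    "sole-r1": ("rewards", "reasoning_traces"),
    "gpt-5": ("rewards", "reasoning_traces"),
    "gemini-3-pro-preview": ("rewards", "reasoning_traces"),
    "robometer": ("rewards", "success_probs"),
    "topreward": ("rewards",),
    "roboreward": ("rewards",),
}

def to_output_dict(model, result, video_index=0):
    """Pair each returned value with its field name for one video."""
    fields = RETURN_FIELDS[model]
    if len(fields) == 1:
        # Models returning a single value hand back the list directly,
        # not a tuple of lists; wrap it so the loop below is uniform.
        result = (result,)
    out = {"model": model}
    for name, values in zip(fields, result):
        out[name] = values[video_index]
    return out
```

For example, to_output_dict("robometer", rr.generate(...)) would produce the same shape as the output_robometer dict built by hand in the quick start.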

rr.video_plot

| Argument | Type | Required | Description |
|----------|------|----------|-------------|
| outputs | List[dict] | ❌* | List of model outputs (e.g., from rr.generate) to visualize together. |
| plot_save_path | str | ✅ | Path where the output video with overlays will be saved. |
| video_path | str | ✅ | Path to the original video file being visualized. |
| view_type | str | ❌ | View type used for visualization (e.g., "external", "wrist", "external and wrist"). |
| show_reasoning_traces | bool | ❌ | Whether to overlay reasoning traces on the video. Default: False. |
| show_all_frames | bool | ❌ | Whether to render all frames instead of sampled frames. Default: False. |
| model | str | ❌** | Model name (used when calling video_plot directly instead of passing outputs). |
| task_description | str | ❌** | Task description (used in direct-call mode). |
| video_paths | List[str] | ❌** | Input videos (used in direct-call mode). |
| view_type_per_video | List[str] | ❌** | View types per video (used in direct-call mode). |
| key | str | ❌** | API key (if required for the model). |

*Optional when the direct-call arguments (**) are provided instead.
**Used only in direct-call mode, where video_plot runs the model itself rather than taking precomputed outputs.
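The two calling modes implied by the table can be sketched by assembling the arguments for each. The dicts below only build keyword arguments; the actual rr.video_plot calls are commented out because they require model checkpoints and video files:

```python
# Mode 1: pass precomputed outputs (e.g., built from rr.generate results).
outputs_mode = dict(
    outputs=[{"model": "robometer", "rewards": [0.1, 0.4, 0.9]}],
    plot_save_path="model_outputs/combined/robosuite_lift_example_00.mp4",
    video_path="test_videos/robosuite/robosuite_lift_example_00.mp4",
)
# rr.video_plot(**outputs_mode)

# Mode 2 (direct-call): video_plot runs the model itself, so the
# rr.generate-style arguments replace the outputs list.
direct_mode = dict(
    model="sole-r1",
    task_description="Pick up the cube from the table.",
    video_paths=["test_videos/robosuite/robosuite_lift_example_00.mp4"],
    view_type_per_video=["external and wrist"],
    plot_save_path="model_outputs/combined/robosuite_lift_example_00.mp4",
    video_path="test_videos/robosuite/robosuite_lift_example_00.mp4",
    show_reasoning_traces=True,
)
# rr.video_plot(**direct_mode)
```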
