RewardHub

A unified hub for reward models in AI alignment.

RewardHub is an end-to-end library for annotating data with state-of-the-art (SoTA) reward models, critic functions, and related scoring processes. It is designed to facilitate generating preference training data and to define acceptance criteria for agentic or inference-scaling systems such as best-of-N sampling and beam search.

Getting Started

Installation

Clone the repository and install the necessary dependencies:

git clone https://github.com/Red-Hat-AI-Innovation-Team/reward_hub.git
cd reward_hub
pip install -e .
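
To confirm the editable install worked, a quick smoke test that exercises the same import used in all the examples below:

python -c "from reward_hub import AutoRM"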

Usage Examples

RewardHub supports multiple types of reward models and serving methods. Here are the main ways to use the library:

Process Reward Models (PRM)

PRMs evaluate a response by scoring the intermediate steps of its reasoning process:

from reward_hub import AutoRM

# Load a math-focused PRM using HuggingFace backend
model = AutoRM.load("Qwen/Qwen2.5-Math-PRM-7B", load_method="hf", device=0)

# Example conversation
messages = [
    [
        {"role": "user", "content": "What is 2+2?"},
        {"role": "assistant", "content": "Let me solve this step by step:\n1) 2 + 2 = 4\nTherefore, 4"}
    ]
]

# Get scores with full PRM results
results = model.score(messages, return_full_prm_result=True)
# Or just get the scores
scores = model.score(messages, return_full_prm_result=False)
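
In the example above, the assistant's reasoning steps are newline-delimited, which is how PRM backends typically segment a response into steps. Presumably, return_full_prm_result=True returns per-step results in addition to an aggregate score, while return_full_prm_result=False yields a single score per conversation; treat the exact return structure as an assumption rather than documented behavior.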

Outcome Reward Models (ORM)

ORMs evaluate the quality of the final response as a whole:

from reward_hub import AutoRM

# Load an ORM using HuggingFace backend
model = AutoRM.load("internlm/internlm2-7b-reward", load_method="hf", device=0)

scores = model.score([
    [
        {"role": "user", "content": "What is 2+2?"},
        {"role": "assistant", "content": "The answer is 4."}
    ]
])
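
As noted in the introduction, ORM scores can act as the acceptance criterion in best-of-N sampling. A minimal sketch, assuming model.score returns one float per conversation, with candidate generation stubbed out:

from reward_hub import AutoRM

model = AutoRM.load("internlm/internlm2-7b-reward", load_method="hf", device=0)

prompt = "What is 2+2?"
# In practice these candidates would come from a generator model.
candidates = ["The answer is 4.", "2+2 equals 5.", "Adding 2 and 2 gives 4."]

conversations = [
    [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": c},
    ]
    for c in candidates
]

scores = model.score(conversations)  # assumed: one float per conversation
best = max(zip(scores, candidates))[1]  # keep the highest-scoring candidate
print(best)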

DrSow Reward Model

DrSow uses density ratios between strong and weak models to evaluate responses:

First, launch the strong and weak models:

bash scripts/launch_drsow.sh Qwen/Qwen2.5-32B-Instruct Qwen/Qwen2.5-32B
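
The script is expected to expose both models behind OpenAI-compatible endpoints; the ports it serves on must match the DrSowConfig passed to the client below.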

Then, create a client to access the DrSow reward model:

from reward_hub import AutoRM
from reward_hub.drsow import DrSowConfig

drsow_config = DrSowConfig(
    strong_model_name="Qwen/Qwen2.5-32B-Instruct",
    strong_port=8305,
    weak_model_name="Qwen/Qwen2.5-32B",
    weak_port=8306
)

model = AutoRM.load("drsow", load_method="openai", drsow_config=drsow_config)

# Get scores for responses
scores = model.score([
    [
        {"role": "user", "content": "What is 2+2?"},
        {"role": "assistant", "content": "The answer is 4."}
    ]
])
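
Conceptually, the reward DrSow assigns is the log density ratio of the response under the strong versus the weak model. A toy illustration of that quantity (a paraphrase of the idea, not RewardHub's internal code; the per-token log-probabilities are stand-ins for what the two served models would report):

def drsow_reward(strong_logprobs, weak_logprobs):
    """Sum of log p_strong(token) - log p_weak(token) over the response tokens.
    A positive value means the strong model favors the response more than the
    weak model does."""
    return sum(s - w for s, w in zip(strong_logprobs, weak_logprobs))

# Stand-in per-token log-probabilities for one response:
print(drsow_reward([-0.2, -0.1, -0.3], [-0.9, -0.8, -1.1]))  # ~2.2, favored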

Supported Backends

RewardHub supports multiple serving backends:

  • HuggingFace (load_method="hf"): Direct local model loading
  • vLLM (load_method="vllm"): Optimized local serving
  • OpenAI API (load_method="openai"): Remote API access
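
For instance, a hedged sketch of loading the same PRM through the vLLM backend (the assumption here is that load_method="vllm" accepts the same model-ID-first call signature as "hf"):

from reward_hub import AutoRM

# Assumed to mirror the "hf" signature shown above; any vLLM-specific
# options are not documented here.
model = AutoRM.load("Qwen/Qwen2.5-Math-PRM-7B", load_method="vllm")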

Supported Models

We support various reward models including:

Model                                   Type
Qwen/Qwen2.5-Math-PRM-7B                PRM
internlm/internlm2-7b-reward            ORM
RLHFlow/Llama3.1-8B-PRM-Deepseek-Data   PRM
RLHFlow/ArmoRM-Llama3-8B-v0.1           ORM
drsow                                   ORM

Research

RewardHub serves as the official implementation of the paper:
Dr. SoW: Density Ratio of Strong-over-weak LLMs for Reducing the Cost of Human Annotation in Preference Tuning

The paper introduces CDR, an approach to generating high-quality preference annotations using density ratios tailored to domain-specific needs; this is the method exposed as the drsow reward model above.

