
Catch reward traps before training. Named after Goodhart's Law.


goodhart


Paper: Catching Goodhart's Law Before Training: Static Reward Analysis with Formal Guarantees (Sheridan, 2026)

"When a measure becomes a target, it ceases to be a good measure." -- Charles Goodhart (1975), generalized by Marilyn Strathern (1997)

Catch reward traps before training. Goodhart runs 44 composable analysis rules on your RL reward configuration and reports degenerate equilibria, perverse incentives, and exploitable reward structures -- before you spend compute. 24 rules are backed by machine-verified LEAN 4 proofs (zero sorry), including formalizations of Ng 1999 and Skalse 2022.

Installation

pip install goodhart

# Or install from source
pip install git+https://github.com/audieleon/goodhart.git

# Optional: visualization and Gymnasium auto-detection
pip install goodhart[all]

Quick Start

# Check a sparse reward config
goodhart --goal 1.0 --penalty -0.01 --steps 500
# -> CRITICAL: death beats survival by 9.6x

# Try an example from a published paper
goodhart --example coast_runners
# -> CRITICAL: loop EV (+800) beats goal (+100)

# List all available examples
goodhart --examples

# Interactive mode (asks questions)
goodhart
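The first verdict above comes down to simple return arithmetic. A back-of-envelope sketch (my simplification: undiscounted returns, goal only reachable at the last step; goodhart's own model accounts for discounting and discovery probability, which is why its reported ratio differs):

```python
# Back-of-envelope return comparison behind the "death beats survival" verdict.
# Assumptions (mine, not goodhart's exact model): undiscounted returns,
# goal reached only at the final step.
goal, penalty, steps = 1.0, -0.01, 500

survive_and_win = steps * penalty + goal   # -5.0 in penalties + 1.0 goal = -4.0
die_immediately = 1 * penalty              # -0.01

assert die_immediately > survive_and_win   # dying is the higher-return policy
```

The step penalty accumulated over a full episode swamps the terminal goal, so the reward-maximizing policy is to end the episode as fast as possible.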

Usage

CLI

# Quick check with training params
goodhart --goal 1.0 --penalty -0.001 --steps 400 --gamma 0.999 \
  --actors 64 --budget 10000000 --lr 1e-4 --specialists 3 --floor 0.10

# From a config file (YAML, JSON, or TOML)
goodhart --config my_experiment.yaml

# From an annotated Python reward function
goodhart --check my_env.py:compute_reward

# With educational explanations
goodhart --example humanoid_idle --verbose

# Deep-dive on a specific rule
goodhart --explain idle_exploit

# Diagnose and suggest fixes
goodhart --doctor --goal 1.0 --penalty -0.01 --steps 500

# Machine-readable doctor output
goodhart --doctor --goal 1.0 --penalty -0.01 --steps 500 -j

# Field reference
goodhart --fields                    # list all fields
goodhart --field intentional         # explain one field

# CI integration
goodhart -q --config experiment.yaml            # exit 1 on criticals
goodhart -sq --config experiment.yaml           # exit 1 on warnings too
goodhart -sq --ignore idle_exploit --config e.yaml  # suppress known-OK warnings

# Grep-friendly output
goodhart --format compact --config experiment.yaml | grep CRITICAL

# Read config from stdin
cat reward.yaml | goodhart --config - -j | jq '.criticals'

Python API

# Quick check (prints report, returns bool)
from goodhart import check
passed = check(goal=1.0, penalty=-0.01, max_steps=500)  # False if criticals

# Programmatic analysis (no printing, returns typed Result)
from goodhart import analyze
result = analyze(goal=1.0, penalty=-0.01, max_steps=500, gamma=0.999)
print(result.passed)       # True/False
print(result.criticals)    # list of Verdict objects
print(result.to_dict())    # JSON-serializable dict

Decorator (annotate a Python reward function)

from goodhart import reward_function, RewardSource, RewardType

ALIVE_BONUS = 1.0
VELOCITY_SCALE = 0.5
CTRL_COST = -0.001

@reward_function(
    max_steps=1000, gamma=0.99, n_actions=8, action_type="continuous",
    sources=[
        RewardSource("alive", RewardType.PER_STEP, ALIVE_BONUS,
                     requires_action=False, intentional=True),
        RewardSource("velocity", RewardType.PER_STEP, VELOCITY_SCALE,
                     intentional=True, state_dependent=True),
        RewardSource("ctrl", RewardType.PER_STEP, CTRL_COST,
                     requires_action=True),
    ],
)
def compute_reward(obs, action, info):
    return ALIVE_BONUS + obs["velocity"] * VELOCITY_SCALE + CTRL_COST * sum(a**2 for a in action)

# The function works normally AND carries analysis metadata
compute_reward(obs, action, info)        # returns reward
compute_reward.goodhart_check()          # prints full report
assert compute_reward.goodhart_passed()  # CI gate

Constants are defined once and shared between the decorator and the function body -- no duplication, no drift.

AI Assistant (Claude Code, Cursor)

If you use an AI coding assistant, goodhart can run automatically when you discuss reward design. Add to your MCP config (one-time setup):

{
  "mcpServers": {
    "goodhart": {
      "command": "python",
      "args": ["-m", "goodhart.mcp_server"]
    }
  }
}

Claude Code: add to ~/.claude/settings.json
Cursor: add to .cursor/mcp.json

Then just describe your reward in conversation — the assistant calls goodhart_check automatically and explains the findings. Tools available: check, doctor, explain rules, and browse examples.

YAML Configuration

# my_experiment.yaml
environment:
  name: "MiniHack-Navigation"
  max_steps: 500
  gamma: 0.999
  reward_sources:
    - name: goal
      type: terminal
      value: 1.0
      discovery_probability: 0.05
    - name: step penalty
      type: per_step
      value: -0.001

training:
  algorithm: APPO
  lr: 0.0002
  entropy_coeff: 0.0001
  num_envs: 256
  total_steps: 10000000
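For intuition, a budget sufficiency check on a config like this reduces to counting expected goal discoveries within the training budget. A rough sketch (my arithmetic, not goodhart's implementation):

```python
# Rough "budget sufficiency" arithmetic for the YAML config above.
# Illustrative simplification -- not goodhart's actual rule or thresholds.
total_steps = 10_000_000
max_steps = 500
discovery_probability = 0.05   # per-episode chance of stumbling onto the goal

episodes = total_steps // max_steps                      # episodes in the budget
expected_discoveries = episodes * discovery_probability  # expected goal hits

assert expected_discoveries >= 1  # the budget at least nominally suffices
```

With 20,000 episodes at a 5% per-episode discovery rate, the agent can expect on the order of a thousand goal hits, enough signal to learn from; drop discovery_probability a couple of orders of magnitude and the same budget yields almost none.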


Rules

44 composable rules in four categories:

goodhart --rules      # list all with descriptions
goodhart --explain X  # deep-dive on rule X
  • 19 reward rules: penalty dominance, death incentive, idle exploit, exploration threshold, respawning exploit, death reset, shaping loops, shaping safety (Ng 1999), proxy hackability (Skalse 2022), intrinsic sufficiency, budget sufficiency, compound traps, staged plateaus, reward dominance, exponential saturation, intrinsic dominance, discount horizon mismatch, negative-only reward, reward delay horizon
  • 13 training rules: learning rate regime (all algorithms), critic LR ratio, entropy regime, clip fraction risk (PPO), expert collapse, batch size interaction, parallelism effect, memory capacity, replay buffer ratio (off-policy), target network update (DQN), epsilon schedule (DQN), soft update rate (SAC/DDPG/TD3), SAC alpha
  • 4 architecture rules: embedding capacity, routing floor necessity, recurrence type, actor count effect
  • 8 blind-spot advisories: pattern-based hints about failure modes static analysis cannot detect (physics exploits, goal misgeneralization, credit assignment depth, constrained RL, non-stationarity, learned rewards, missing constraints, aggregation traps)

Reward structure rules (19) are algorithm-agnostic — they analyze the MDP reward regardless of training algorithm. Training rules (13) cover PPO, APPO, DQN, SAC, DDPG, TD3, IMPALA, and A2C with algorithm-specific thresholds and checks.
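One of the reward-structure rules above, discount horizon mismatch, can be approximated in a few lines. A standalone sketch (my simplification, not goodhart's implementation): a discount factor gamma gives an effective planning horizon of roughly 1/(1 - gamma), and a terminal reward far beyond that horizon is discounted nearly to zero.

```python
# Effective-horizon sketch for the "discount horizon mismatch" rule.
# Illustrative simplification -- not goodhart's actual check or thresholds.
def effective_horizon(gamma: float) -> float:
    # A discount factor gamma weighs rewards roughly 1/(1 - gamma) steps ahead.
    return 1.0 / (1.0 - gamma)

def horizon_mismatch(gamma: float, max_steps: int) -> bool:
    # Flag configs whose episodes run far past the discount horizon,
    # where a terminal reward contributes almost nothing to early-step values.
    return max_steps > effective_horizon(gamma)

print(horizon_mismatch(gamma=0.99, max_steps=1000))   # episode of 1000 vs horizon ~100
print(horizon_mismatch(gamma=0.999, max_steps=500))   # horizon ~1000 covers the episode
```

This is why the Quick Start examples pair 500-step episodes with gamma = 0.999 rather than the more common 0.99.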

What it catches vs. what it can't

Catches (from configuration alone):

  • Degenerate equilibria (standing still, dying fast)
  • Respawning reward loops (CoastRunners, YouTube watch time)
  • Death-as-reset exploits (Road Runner level replay)
  • Shaping reward cycles vs. potential-based shaping (Ng 1999)
  • Reward deserts (no gradient signal, e.g., Mountain Car)
  • Proxy reward hackability (Skalse 2022)
  • Expert collapse, entropy issues, budget insufficiency

Cannot catch (emits advisory hints when config patterns match):

  • Physics engine exploits (box surfing, leg hooking)
  • Goal misgeneralization (CoinRun "go right")
  • Learned reward model gaming (RLHF overoptimization)
  • Missing reward terms (tokamak coil balance)
  • Non-stationarity in self-play
  • Episode-level aggregation traps (Sharpe ratio)

Examples

66 built-in examples from published papers (1983-2025), plus the Reward Failure Dataset with 212 entries from 133 papers:

goodhart --examples              # list all
goodhart --example coast_runners # run one

Examples include documented failures (CoastRunners, Humanoid, Mountain Car), positive design patterns (Pendulum, CartPole, Breakout), industrial applications (YouTube, data center cooling, tokamak plasma, sepsis treatment), and honest limitation cases showing what static analysis cannot detect.

Formal Proofs

24 rules link to machine-verified LEAN 4 theorems (103 theorems, zero sorry). Each link has a strength level:

  • VERIFIED (13 rules): The Python check is a direct instance of the theorem.
  • GROUNDED (7 rules): The theorem proves the core. Python extends with discounting and thresholds.
  • MOTIVATED (4 rules): The theorem proves WHY the issue matters. Python checks a structural heuristic.

Key formalizations:

  • Ng 1999 Theorem 1: Potential-based reward shaping preserves V* (sufficiency, necessity, general policy version, undiscounted extension). Full MDP with Bellman contraction via Banach fixed point theorem.
  • Skalse 2022 Theorems 1-3: Hackability impossibility on open sets, existence of unhackable pairs, simplification characterization. Includes a machine-verified proof that Theorem 2's non-trivial witness construction requires |Pi| >= 3; for |Pi| = 2 only trivial witnesses exist (documented edge case, see proofs/GoodhartProofs/Skalse.lean).
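The backbone of the shaping-safety rule, in the standard statement of Ng et al. 1999, Theorem 1: adding a shaping reward is safe exactly when it is potential-based, i.e. derived from some potential function Phi over states:

```latex
F(s, a, s') = \gamma \, \Phi(s') - \Phi(s)
\quad\Longrightarrow\quad
Q^{*}_{M'}(s, a) = Q^{*}_{M}(s, a) - \Phi(s)
```

Since the shift by Phi(s) is independent of the action, every optimal policy of the shaped MDP M' is optimal in the original M (sufficiency); conversely, for any non-potential-based F there exists an MDP where shaping changes the optimal policy (necessity).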
To verify the proofs locally:

cd proofs
lake build  # requires LEAN 4 + Mathlib
# Should complete with zero sorry, zero errors

Auto-Detection

Automatically detect reward structure from a Gymnasium environment:

pip install goodhart[detect]
goodhart --detect CartPole-v1
goodhart --detect MountainCar-v0

License

Apache 2.0
