
War Games measures whether frontier models can sustain long-horizon planning and adaptation in complex, non-stationary real-time games, with human performance as the benchmark.


WarGames

WarGames turns OpenRA Red Alert into a computer-use environment for agentic AI. An agent receives pixels and a small CUA (computer-use agent) tool set, then sends mouse/keyboard/wait actions back to the simulator.

The runtime never calls an LLM and never trains a model. It does three things: capture frames, apply tool calls, and compute rewards from private simulator state. Your agent or external harness owns model calls. Prime/prime-rl owns gradient updates.
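That control loop can be sketched in a few lines. This is an illustrative sketch only; the function and class names (`Action`, `capture_frame`, `apply_action`, `compute_reward`) are hypothetical stand-ins, not the WarGames API:

```python
# Illustrative episode loop: capture pixels, ask the agent for a CUA action,
# apply it to the simulator, and score the step from private simulator state.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str      # "mouse", "keyboard", or "wait"
    payload: dict  # e.g. coordinates, keys, or wait duration

def run_episode(agent_decide, capture_frame, apply_action, compute_reward,
                max_steps=100):
    total_reward = 0.0
    for _ in range(max_steps):
        frame = capture_frame()           # pixels in; no hidden state exposed
        action = agent_decide(frame)      # the agent/harness owns model calls
        apply_action(action)              # mouse/keyboard/wait back to the sim
        total_reward += compute_reward()  # scored from private simulator state
    return total_reward
```

The runtime stays model-free: `agent_decide` is the only place an LLM could be consulted, and it lives outside the simulator.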

Example Output

This is a short Kimi K2.5 smoke run. The agent receives screenshots, chooses CUA actions, and WarGames applies them to the live OpenRA window.

(Video: Kimi K2.5 Red Alert smoke run)

Install

python -m venv venv
source venv/bin/activate
pip install -r requirements.txt

Red Alert needs a working OpenRA checkout:

export LAYERBRAIN_WARGAMES_REDALERT_OPENRA_ROOT=/path/to/openra-source
export LAYERBRAIN_WARGAMES_REDALERT_OPENRA_BINARY=/path/to/openra-source/launch-game.sh

Local Secrets

Create local.env from the template. local.env is gitignored.

cp local.env.example local.env

Use provider-standard names for model keys:

OPENAI_API_KEY=
OPENAI_BASE_URL=
OPENAI_MODEL=
ANTHROPIC_API_KEY=
ANTHROPIC_MODEL=
GOOGLE_API_KEY=
GOOGLE_MODEL=

LAYERBRAIN_PRIME is a publish/admin key only. WarGames does not use it for model inference.

Tasks

Tasks are mission + seed + split + reward profile.

wargames tasks --game redalert --split debug

Splits:

  • debug: tiny smoke tasks
  • train: tasks agents may learn from
  • validation: tune prompts/profile weights/max steps
  • test: held-out reported benchmark tasks
  • curriculum: ordered train tasks

The catalog rejects the same (mission_id, seed) appearing in multiple splits. It also rejects train_only reward profiles on test.
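Those two catalog rules can be sketched as a single validation pass. The dict shape and field names here are invented for illustration and do not mirror the WarGames internals:

```python
def validate_catalog(tasks):
    """Reject a (mission_id, seed) pair appearing in more than one split,
    and reject train-only reward profiles on the test split."""
    seen = {}  # (mission_id, seed) -> split
    for t in tasks:
        key = (t["mission_id"], t["seed"])
        if key in seen and seen[key] != t["split"]:
            raise ValueError(f"{key} appears in both {seen[key]} and {t['split']}")
        seen[key] = t["split"]
        if t["split"] == "test" and t.get("train_only_profile"):
            raise ValueError(f"train-only profile on test task {key}")
```

Keeping the check in the catalog (rather than at run time) means split leakage is caught before any rollout happens.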

Agents

Agents are named YAML configs under agents/ or your own --agent-dir.

wargames agents list
wargames agents validate agents/scripted-wait.yaml

Example:

id: my-agent
driver: python
factory: my_project.agent:create_agent
provider: openai
model: ${OPENAI_MODEL}
api_key_env: OPENAI_API_KEY
base_url: ${OPENAI_BASE_URL}
config:
  temperature: 0.2
  top_p: 0.9
  max_tokens: 256
  timeout_seconds: 20
  disable_reasoning: false
  reject_reasoning_models: false
  reasoning_effort: medium
  extra_body:
    enable_thinking: true
    chat_template_kwargs:
      enable_thinking: true

The Python factory receives the AgentSpec and returns an object implementing:

async def start(self, task): ...
async def decide(self, obs): ...
async def close(self): ...

For OpenAI-compatible providers, config is passed through to the local agent wrapper. Use it to choose model behavior per run. For fast non-thinking smoke runs, set disable_reasoning: true and keep max_tokens small. For models that need internal thinking, set disable_reasoning: false and pass the provider-specific extra_body they require. WarGames does not own those keys or settings; the agent config does.

Run Locally

wargames run \
  --task redalert.debug.smoke.seed-000000 \
  --agent scripted-wait \
  --watch none \
  --record summary_only

For demo/debug runs, record frames and export video later:

wargames run \
  --task redalert.debug.smoke.seed-000000 \
  --agent scripted-wait \
  --watch window \
  --record full \
  --video frames

wargames export <run_id> --out exports --video mp4

MP4 is export-only. Runs write frames; export turns frames into a shareable video.
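Frame-to-MP4 export of this kind typically shells out to ffmpeg. A sketch of the command construction, where the frame naming pattern and flags are assumptions, not the actual export implementation:

```python
from pathlib import Path

def frames_to_mp4(frames_dir: str, out_path: str, fps: int = 15) -> list[str]:
    """Build an ffmpeg command that stitches numbered PNG frames into an MP4.
    The caller would pass the returned list to subprocess.run()."""
    return [
        "ffmpeg", "-y",
        "-framerate", str(fps),
        "-i", str(Path(frames_dir) / "frame_%06d.png"),  # assumed frame pattern
        "-c:v", "libx264", "-pix_fmt", "yuv420p",        # widely playable output
        out_path,
    ]
```

Separating recording (frames on disk) from encoding (export-time ffmpeg) keeps the hot loop cheap and makes re-encoding at different quality settings free.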

Reward Profiles

List profiles:

wargames profile list --game redalert

Built-ins:

  • terminal: win/loss only
  • standard: terminal + mild dense shaping
  • dense: training-only dense profile
  • protective: defense-aligned profile that rewards friendly-force preservation
  • aggressive_stress_test: training-only contrast profile, blocked from test

Validate a profile YAML:

wargames profile validate scenarios/redalert/profiles/protective.yaml

Profiles are the behavior dial. The same model can be evaluated under different profiles to measure whether reward design changes behavior.
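Conceptually a profile is a weighted sum over per-step reward fields. A hedged sketch, with field names and weights invented for illustration (the real schema is in docs/reward_profiles.md):

```python
def profile_reward(events: dict, weights: dict) -> float:
    """Combine per-step reward fields under a profile's weights.
    Fields without a weight in the profile contribute nothing."""
    return sum(value * weights.get(name, 0.0) for name, value in events.items())

# A "protective"-style profile upweights friendly-force preservation and
# penalizes losses; an aggressive contrast profile would flip these signs.
protective = {"win": 10.0, "units_preserved": 0.5, "units_lost": -1.0}
```

Because the simulator events are fixed and only the weights change, swapping profiles changes the optimization target without touching the environment.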

The full profile schema, every Red Alert reward field, built-in primitives, and Prime RL examples are documented in docs/reward_profiles.md.

Watching

Local:

wargames run --task ... --agent ... --watch window

Replay public events from disk:

wargames watch <run_id>

Public event files never include hidden state. Private traces are only written when explicitly requested.

Prime Intellect

The Prime implementation lives in wargames.environments.prime; the environments/prime directory is only the thin publish wrapper. The public Prime environment is layerbrain/wargames.

uv pip install -e ./environments/prime
prime eval run wargames --config environments/prime/configs/eval-debug.toml -n 1 -r 1

Prime RL uses the shipped TOML configs. WarGames supplies the environment and reward signal; Prime/prime-rl owns rollouts, batching, GPUs, and gradient updates.

RL training changes behavior by changing reward_profile in the Prime config:

split = "train"
reward_profile = "protective"
recorder_mode = "none"
max_steps = 500
rollouts_per_example = 8

Use dense or protective on train/curriculum, then report against terminal or standard on test.

Tests

source venv/bin/activate
python -m unittest tests.evaluation tests.harness
python -m unittest discover -s environments/prime/tests/conformance

Project details

Distributions on PyPI (both uploaded via twine/6.2.0 on CPython/3.12.4, no Trusted Publishing):

  • wargames-0.1.0.tar.gz (source, 276.6 kB)
    SHA256: 6c92c334867384a6245344f6e9ffc4fbdd5c50cb60fab5e0dd738eb3f3c940a4
  • wargames-0.1.0-py3-none-any.whl (Python 3 wheel, 247.8 kB)
    SHA256: f02bc9d9b6a979704ea39778bcbd1993796ad96f8898ec23ab96fed9b873828f
