
OpenRA-RL — Command AI To Play Red Alert

Play Red Alert with AI agents. LLMs, scripted bots, or RL — your agent commands armies in the classic RTS through a Python API.


Website · Leaderboard · HuggingFace · Docs · Issues


Quick Start

pip install openra-rl
openra-rl play

On first run, an interactive wizard helps you configure your LLM provider (OpenRouter, Ollama, or LM Studio). The CLI pulls the game server Docker image and starts everything automatically.

Skip the wizard

# Cloud (OpenRouter)
openra-rl play --provider openrouter --api-key sk-or-... --model anthropic/claude-sonnet-4-20250514

# Local (Ollama — free, no API key)
openra-rl play --provider ollama --model qwen3:32b

# Developer mode (skip Docker, run server locally)
openra-rl play --local --provider ollama --model qwen3:32b

# Reconfigure later
openra-rl config

Prerequisites

  • Docker — the game server runs in a container
  • Python 3.10+
  • An LLM endpoint (cloud API key or local model server)

CLI Reference

openra-rl play       Run the LLM agent (wizard on first use)
openra-rl config     Re-run the setup wizard
openra-rl server     start | stop | status | logs
openra-rl replay     watch | list | copy | stop
openra-rl bench      submit   Upload results to the leaderboard
openra-rl mcp-server Start MCP stdio server (for OpenClaw / Claude Desktop)
openra-rl doctor     Check system prerequisites
openra-rl version    Print version

MCP Server (OpenClaw / Claude Desktop)

OpenRA-RL exposes all 48 game tools as a standard MCP server:

openra-rl mcp-server

Add to your MCP client config (e.g. ~/.openclaw/openclaw.json):

{
  "mcpServers": {
    "openra-rl": {
      "command": "openra-rl",
      "args": ["mcp-server"]
    }
  }
}

Then chat: "Start a game of Red Alert on easy difficulty, build a base, and defeat the enemy."

Architecture

| Component | Language | Role |
|---|---|---|
| OpenRA-RL | Python | Environment wrapper, agents, HTTP/WebSocket API |
| OpenRA (submodule) | C# | Modified game engine with embedded gRPC server |
| OpenEnv (pip dep) | Python | Standardized Gymnasium-style environment interface |

Data flow: Agent <-> FastAPI (port 8000) <-> gRPC bridge (port 9999) <-> OpenRA game engine

The game runs at ~25 ticks/sec independent of agent speed. Observations use a DropOldest channel so the agent always sees the latest game state, even if it's slower than real time.
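The DropOldest behavior described above can be sketched as a one-slot buffer where each new observation silently replaces any unread one, so a slow agent always reads the freshest state. This is an illustrative model, not the server's actual implementation; the class name is invented.

```python
# Minimal sketch of a DropOldest channel: a single-slot buffer where
# writes overwrite and reads always return the latest observation.
from collections import deque

class DropOldestChannel:
    """One-slot channel: new observations replace unread ones."""
    def __init__(self):
        self._buf = deque(maxlen=1)  # maxlen=1 drops the oldest on overflow

    def put(self, obs):
        self._buf.append(obs)        # silently replaces any unread observation

    def get(self):
        return self._buf[-1] if self._buf else None

ch = DropOldestChannel()
for tick in range(5):                # game produces ticks 0..4 faster than the agent reads
    ch.put({"tick": tick})
print(ch.get())                      # agent sees only the latest: {'tick': 4}
```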

A full architecture diagram (OpenRA-RL System Architecture) is included in the repository README.

Example Agents

Scripted Bot

A hardcoded state-machine bot that demonstrates all action types. Deploys MCV, builds a base, trains infantry, and attacks.

python examples/scripted_bot.py --url http://localhost:8000 --verbose --max-steps 2000

MCP Bot

A planning-aware bot that uses game knowledge tools (tech tree lookups, faction briefings, map analysis) to formulate strategy before playing.

python examples/mcp_bot.py --url http://localhost:8000 --verbose --max-turns 3000

LLM Agent

An AI agent powered by any OpenAI-compatible model. Supports cloud APIs (OpenRouter, OpenAI) and local model servers (Ollama, LM Studio).

python examples/llm_agent.py \
  --config examples/config-openrouter.yaml \
  --api-key sk-or-... \
  --verbose \
  --log-file game.log

CLI flags override config file values. See python examples/llm_agent.py --help for all options.

Configuration

OpenRA-RL uses a unified YAML config system. Settings are resolved with this precedence:

CLI flags > Environment variables > Config file > Built-in defaults
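The precedence chain above can be sketched as a first-non-None lookup. The setting names below are illustrative; the actual resolver in openra_env.config may differ.

```python
# Sketch of the resolution order: CLI flag > env var > config file > default.
import os

def resolve(cli_value, env_var, file_config, key, default):
    """Return the first value found, in precedence order."""
    if cli_value is not None:
        return cli_value
    if os.environ.get(env_var) is not None:
        return os.environ[env_var]
    if file_config.get(key) is not None:
        return file_config[key]
    return default

file_cfg = {"model": "qwen3:32b"}
os.environ.pop("LLM_MODEL", None)   # ensure no env override is set
print(resolve(None, "LLM_MODEL", file_cfg, "model", "default-model"))
# config file wins when no CLI flag or env var is present: qwen3:32b
```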

Config file

Copy and edit the default config:

cp config.yaml my-config.yaml
# Edit my-config.yaml, then:
python examples/llm_agent.py --config my-config.yaml

Key sections:

game:
  openra_path: "/opt/openra"      # Path to OpenRA installation
  map_name: "singles.oramap"      # Map to play
  headless: true                  # No GPU rendering
  record_replays: false           # Save .orarep replay files

opponent:
  bot_type: "normal"              # AI difficulty: easy, normal, hard
  ai_slot: "Multi0"               # AI player slot

planning:
  enabled: true                   # Pre-game planning phase
  max_turns: 10                   # Max planning turns
  max_time_s: 60.0                # Planning time limit

llm:
  base_url: "https://openrouter.ai/api/v1/chat/completions"
  model: "qwen/qwen3-coder-next"
  max_tokens: 1500
  temperature: null               # null = provider default

tools:
  categories:                     # Toggle tool groups on/off
    read: true
    knowledge: true
    movement: true
    production: true
    # ... see config.yaml for all categories
  disabled: []                    # Disable specific tools by name

alerts:
  under_attack: true
  low_power: true
  idle_production: true
  no_scouting: true
  # ... see config.yaml for all alerts

Example configs

| File | Use case |
|---|---|
| examples/config-openrouter.yaml | Cloud LLM via OpenRouter (Claude, GPT, etc.) |
| examples/config-ollama.yaml | Local LLM via Ollama |
| examples/config-lmstudio.yaml | Local LLM via LM Studio |
| examples/config-minimal.yaml | Reduced tool set for limited-context models |

Environment variables

| Variable | Config path | Description |
|---|---|---|
| OPENROUTER_API_KEY | llm.api_key | API key for OpenRouter |
| LLM_API_KEY | llm.api_key | Generic LLM API key (overrides the OpenRouter key) |
| LLM_BASE_URL | llm.base_url | LLM endpoint URL |
| LLM_MODEL | llm.model | Model identifier |
| BOT_TYPE | opponent.bot_type | AI difficulty: easy, normal, hard |
| OPENRA_PATH | game.openra_path | Path to OpenRA installation |
| RECORD_REPLAYS | game.record_replays | Save replay files (true/false) |
| PLANNING_ENABLED | planning.enabled | Enable planning phase (true/false) |

Using Local Models

Ollama

# Pull a model with tool-calling support
ollama pull qwen3:32b

# For models that need more context (default is often 2048-4096 tokens):
cat > /tmp/Modelfile <<EOF
FROM qwen3:32b
PARAMETER num_ctx 32768
EOF
ollama create qwen3-32k -f /tmp/Modelfile

# Run
openra-rl play --provider ollama --model qwen3-32k

Note: Not all Ollama models support tool calling. Check with ollama show <model> — the template must include a tools block. Models known to work: qwen3:32b, qwen3:4b.

LM Studio

  1. Load a model in LM Studio and start the local server (default port 1234)
  2. Run:
openra-rl play --provider lmstudio --model <model-name>

Docker

Server management

openra-rl server start              # Start game server container
openra-rl server start --port 9000  # Custom port
openra-rl server status             # Check if running
openra-rl server logs --follow      # Tail logs
openra-rl server stop               # Stop container

Docker Compose (development)

| Service | Command | Description |
|---|---|---|
| openra-rl | docker compose up openra-rl | Headless game server (ports 8000, 9999) |
| agent | docker compose up agent | LLM agent (requires OPENROUTER_API_KEY) |
| mcp-bot | docker compose run mcp-bot | MCP bot |

# LLM agent via Docker Compose
OPENROUTER_API_KEY=sk-or-... docker compose up agent

Replays

After each game, replays are automatically copied to ~/.openra-rl/replays/. Watch them in your browser:

openra-rl replay watch              # Watch the latest replay (opens browser via VNC)
openra-rl replay watch <file>       # Watch a specific .orarep file
openra-rl replay list               # List replays (Docker + local)
openra-rl replay copy               # Copy replays from Docker to local
openra-rl replay stop               # Stop the replay viewer

The replay viewer runs inside Docker using the same engine that recorded the game, so replays always play back correctly. The browser connects via noVNC — no local game install needed.

Version tracking: Each replay records which Docker image version was used. When you upgrade, old replays are still viewable using their original engine version.

Local Development (without Docker)

To run the game server natively (macOS/Linux):

Install dependencies

# Python
pip install -e ".[dev]"

# .NET 8.0 SDK
# macOS: brew install dotnet@8
# Ubuntu: sudo apt install dotnet-sdk-8.0

# Native libraries (macOS arm64)
brew install sdl2 openal-soft freetype luajit
cp $(brew --prefix sdl2)/lib/libSDL2.dylib OpenRA/bin/SDL2.dylib
cp $(brew --prefix openal-soft)/lib/libopenal.dylib OpenRA/bin/soft_oal.dylib
cp $(brew --prefix freetype)/lib/libfreetype.dylib OpenRA/bin/freetype6.dylib
cp $(brew --prefix luajit)/lib/libluajit-5.1.dylib OpenRA/bin/lua51.dylib

Build OpenRA

cd OpenRA && make && cd ..

Start the server

python openra_env/server/app.py

Run tests

pytest

Observation Space

Each tick, the agent receives structured game state:

| Field | Description |
|---|---|
| tick | Current game tick |
| cash, ore, power_provided, power_drained | Economy |
| units | Own units with position, health, type, facing, stance, speed, attack range |
| buildings | Own buildings with production queues, power, rally points |
| visible_enemies, visible_enemy_buildings | Fog-of-war limited enemy intel |
| spatial_map | 9-channel spatial tensor (terrain, height, resources, passability, fog, own buildings, own units, enemy buildings, enemy units) |
| military | Kill/death costs, asset value, experience, order count |
| available_production | What can currently be built |
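As a worked example, a power check from the economy fields above: a base browns out when power_drained exceeds power_provided. The field names come from the table; the exact payload shape is an assumption for illustration.

```python
# Illustrative observation dict using the fields listed above.
obs = {
    "tick": 1500,
    "cash": 2300,
    "ore": 450,
    "power_provided": 200,
    "power_drained": 150,
    "units": [{"type": "e1", "health": 50}],
    "available_production": ["e1", "powr"],
}

# Net power: positive means surplus, negative means a brownout.
net_power = obs["power_provided"] - obs["power_drained"]
low_power = net_power < 0
print(net_power, low_power)   # 50 False
```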

Action Space

18 action types available through the command API:

| Category | Actions |
|---|---|
| Movement | move, attack_move, attack, stop |
| Production | produce, cancel_production |
| Building | place_building, sell, repair, power_down, set_rally_point, set_primary |
| Unit control | deploy, guard, set_stance, enter_transport, unload, harvest |
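A small sketch of building a command payload from the action types above. The JSON shape (type plus keyword parameters) is illustrative, not the documented wire format of the command API.

```python
# Build a command dict for one of the 18 action types listed above.
import json

ACTION_TYPES = {
    "move", "attack_move", "attack", "stop",
    "produce", "cancel_production",
    "place_building", "sell", "repair", "power_down",
    "set_rally_point", "set_primary",
    "deploy", "guard", "set_stance",
    "enter_transport", "unload", "harvest",
}

def make_action(action_type, **params):
    """Return a command payload; rejects unknown action types."""
    if action_type not in ACTION_TYPES:
        raise ValueError(f"unknown action type: {action_type}")
    return {"type": action_type, **params}

cmd = make_action("attack_move", unit_ids=[12, 14], x=40, y=22)
print(json.dumps(cmd))
```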

MCP Tools

The LLM agent interacts through 48 MCP (Model Context Protocol) tools organized into categories:

| Category | Tools | Purpose |
|---|---|---|
| Read | get_game_state, get_economy, get_units, get_buildings, get_enemies, get_production, get_map_info, get_exploration_status | Query current game state |
| Knowledge | lookup_unit, lookup_building, lookup_tech_tree, lookup_faction | Static game data reference |
| Bulk Knowledge | get_faction_briefing, get_map_analysis, batch_lookup | Efficient batch queries |
| Planning | start_planning_phase, end_planning_phase, get_opponent_intel, get_planning_status | Pre-game strategy planning |
| Game Control | advance | Advance game ticks |
| Movement | move_units, attack_move, attack_target, stop_units | Unit movement commands |
| Production | build_unit, build_structure, build_and_place | Build units and structures |
| Building Actions | place_building, cancel_production, deploy_unit, sell_building, repair_building, set_rally_point, guard_target, set_stance, harvest, power_down, set_primary | Building and unit management |
| Placement | get_valid_placements | Query valid building locations |
| Unit Groups | assign_group, add_to_group, get_groups, command_group | Group management |
| Compound | batch, plan | Multi-action sequences |
| Utility | get_replay_path, surrender | Misc |
| Terrain | get_terrain_at | Terrain queries |

Tools can be toggled per-category or individually via config.yaml.
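The per-category toggle can be sketched as a filter over a registry of tools tagged by category: categories switched off drop their whole group, and individually disabled names are removed on top of that. The registry contents below are illustrative, not the real tool registry.

```python
# Sketch of filtering tools by the config's category toggles and
# per-tool disabled list.
TOOLS = {
    "get_game_state": "read",
    "lookup_unit": "knowledge",
    "move_units": "movement",
    "build_unit": "production",
}

config = {
    "categories": {"read": True, "knowledge": True,
                   "movement": False, "production": True},
    "disabled": ["build_unit"],       # disable one tool by name
}

enabled = [
    name for name, cat in TOOLS.items()
    if config["categories"].get(cat, True)   # category toggle
    and name not in config["disabled"]       # per-tool override
]
print(enabled)   # ['get_game_state', 'lookup_unit']
```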

Benchmark & Leaderboard

Game results are automatically submitted to the OpenRA-Bench leaderboard after each game. Disable with BENCH_UPLOAD=false or bench_upload: false in config.

Agent identity

Customize how your agent appears on the leaderboard:

# Environment variables
AGENT_NAME="DeathBot-9000" AGENT_TYPE="RL" openra-rl play

# Or in config.yaml
agent:
  agent_name: "DeathBot-9000"
  agent_type: "RL"
  agent_url: "https://github.com/user/deathbot"  # shown as link on leaderboard

| Variable | Config path | Description |
|---|---|---|
| AGENT_NAME | agent.agent_name | Display name (default: model name) |
| AGENT_TYPE | agent.agent_type | Scripted / LLM / RL (default: auto-detect) |
| AGENT_URL | agent.agent_url | GitHub/project URL shown on leaderboard |
| BENCH_UPLOAD | agent.bench_upload | Auto-upload after each game (default: true) |
| BENCH_URL | agent.bench_url | Leaderboard URL |

Manual submission

Upload a saved result (with optional replay file):

openra-rl bench submit result.json
openra-rl bench submit result.json --replay game.orarep --agent-name "MyBot"

Custom agents

If you're building your own agent (RL, CNN, multi-agent, etc.) that doesn't use the built-in LLM agent, use build_bench_export() to create a leaderboard submission from a final observation:

from openra_env.bench_export import build_bench_export

# obs = final observation from env.step()
export = build_bench_export(
    obs,
    agent_name="DeathBot-9000",
    agent_type="RL",
    opponent="Normal",
    agent_url="https://github.com/user/deathbot",
    replay_path="/path/to/replay.orarep",
)
# Saves JSON to ~/.openra-rl/bench-exports/ and returns dict with "path" key

Then submit:

openra-rl bench submit ~/.openra-rl/bench-exports/bench-DeathBot-9000-*.json --replay game.orarep

Project Structure

OpenRA-RL/
├── OpenRA/                     # Game engine (git submodule, C#)
├── openra_env/                 # Python package
│   ├── cli/                    #   CLI entry point (openra-rl command)
│   ├── mcp_server.py           #   Standard MCP server (stdio transport)
│   ├── client.py               #   WebSocket client
│   ├── config.py               #   Unified YAML configuration
│   ├── models.py               #   Pydantic data models
│   ├── game_data.py            #   Unit/building stats, tech tree
│   ├── reward.py               #   Multi-component reward function
│   ├── bench_export.py         #   Build leaderboard submissions from observations
│   ├── bench_submit.py         #   Upload results to OpenRA-Bench leaderboard
│   ├── opponent_intel.py       #   AI opponent profiles
│   ├── mcp_ws_client.py        #   MCP WebSocket client
│   ├── server/
│   │   ├── app.py              #     FastAPI application
│   │   ├── openra_environment.py  #  OpenEnv environment (reset/step/state)
│   │   ├── bridge_client.py    #     Async gRPC client
│   │   └── openra_process.py   #     OpenRA subprocess manager
│   └── generated/              #   Auto-generated protobuf stubs
├── examples/
│   ├── scripted_bot.py         #   Hardcoded strategy bot
│   ├── mcp_bot.py              #   MCP tool-based bot
│   ├── llm_agent.py            #   LLM-powered agent
│   └── config-*.yaml           #   Example configs (ollama, lmstudio, openrouter, minimal)
├── skill/                      # OpenClaw skill definition
├── proto/                      # Protobuf definitions (rl_bridge.proto)
├── tests/                      # Test suite
├── .github/workflows/          # CI, Docker publish, PyPI publish
├── config.yaml                 # Default configuration
├── docker-compose.yaml         # Service orchestration
├── Dockerfile                  # Game server image
└── Dockerfile.agent            # Lightweight agent image

Ecosystem

| Repository | Description |
|---|---|
| OpenRA-RL | Python environment, agents, MCP server (this repo) |
| OpenRA | Modified C# game engine with gRPC bridge |
| OpenRA-Bench | Leaderboard & benchmark (live) |
| OpenRA-RL-Util | Shared utilities: reward vectors, damage matrices, rubrics |
| OpenRA-RL-Training | Scenario system, curriculum, GRPO training engine |
| OpenRA-RL-Website | Documentation site (openra-rl.dev) |
| OpenEnv | Gymnasium-style environment framework |

License

GPL-3.0

Download files

Download the file for your platform.

Source Distribution

openra_rl-0.4.1.tar.gz (938.9 kB)


Built Distribution


openra_rl-0.4.1-py3-none-any.whl (139.8 kB)


File details

Details for the file openra_rl-0.4.1.tar.gz.

File metadata

  • Download URL: openra_rl-0.4.1.tar.gz
  • Size: 938.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for openra_rl-0.4.1.tar.gz
| Algorithm | Hash digest |
|---|---|
| SHA256 | 57cf6dfbac3b398fe4a2c9afe334c49662847f91a7992f51913f847dfefaa302 |
| MD5 | 96fc8c8317ccf02d5c31110e7793dab3 |
| BLAKE2b-256 | b37133937836cab912ffe630f19ff42547344894083548dc530a2bdded9984ed |


Provenance

The following attestation bundles were made for openra_rl-0.4.1.tar.gz:

Publisher: pypi-publish.yml on yxc20089/OpenRA-RL

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file openra_rl-0.4.1-py3-none-any.whl.

File metadata

  • Download URL: openra_rl-0.4.1-py3-none-any.whl
  • Size: 139.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for openra_rl-0.4.1-py3-none-any.whl
Algorithm Hash digest
SHA256 acfeb9aa9c6a0861e6814c8c9472d887820e26b2e53e53f307ad139d113c86e0
MD5 d4e7f8a07ea5c32b13f478fc9e53650d
BLAKE2b-256 f90e06be2096d443e912924c827847657dd3ad0aa1148e9833c9777e6a5bd3a9


Provenance

The following attestation bundles were made for openra_rl-0.4.1-py3-none-any.whl:

Publisher: pypi-publish.yml on yxc20089/OpenRA-RL

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
