
A multi-game LLM benchmark for compact deterministic board games.


BoardGameBench

BoardGameBench is a benchmark for testing LLM move quality across a curriculum of compact deterministic board games. It extends GomokuBench, generalizing the same search-vs-LLM idea from a single game to a multi-game score.

Instead of scoring a model on one game, BoardGameBench runs the same model through several games and reports a normalized aggregate score:

  • fast win = up to 1 point
  • slower win = at least 0.75 points
  • draw = 0.5 points
  • loss = up to 0.35 points based on how long the LLM survives
  • illegal-move forfeit = 0 points

Each game has its own move horizon, so survival and speed are scored relative to that game's expected length. This means an LLM still earns credit for making a losing game last longer, while a winning LLM is rewarded for closing the game quickly.
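One way to realize that scoring scheme is a single function over the outcome, the number of moves played, and the game's move horizon. This is an illustrative sketch that matches the point brackets listed above; the package's actual implementation may differ.

```python
# Hypothetical per-round scorer; brackets follow the list above, but the
# exact curve used by boardgamebench is an assumption here.

def round_score(outcome: str, moves_played: int, horizon: int) -> float:
    """Score one round on a 0..1 scale relative to the game's move horizon."""
    progress = min(moves_played / horizon, 1.0)
    if outcome == "win":
        # Faster wins score higher: 1.0 for an immediate win,
        # tapering toward the 0.75 floor at the full horizon.
        return 1.0 - 0.25 * progress
    if outcome == "draw":
        return 0.5
    if outcome == "loss":
        # Longer survival earns more credit, capped at 0.35.
        return 0.35 * progress
    return 0.0  # illegal-move forfeit
```

Because `progress` is normalized by each game's own horizon, surviving 30 moves in a short game counts for more than surviving 30 moves in a long one.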

The current default curriculum follows the first strong multi-game set:

  1. Connect Four
  2. Gomoku 19x19
  3. Breakthrough 6x6
  4. Dots and Boxes 3x3
  5. Othello 6x6
  6. Othello 8x8
  7. Hex 7x7

Each game has exact legal move generation, terminal-state detection, deterministic state updates, and a built-in alpha-beta style search opponent with game-specific evaluation. The engine is intentionally simple and auditable, so every result can be replayed from the saved JSON.
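The engine opponent described above can be sketched as a generic fail-hard negamax alpha-beta over that per-game interface. The interface names used here (`legal_moves`, `apply`, `is_terminal`, `evaluate`) are illustrative assumptions, not the package's actual API.

```python
# Minimal negamax alpha-beta in the spirit of the built-in search opponent.
# `game.evaluate` scores a state from the perspective of the side to move.

def alphabeta(state, depth, alpha, beta, game):
    """Return (score, best_move) for the side to move."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state), None
    best_move = None
    for move in game.legal_moves(state):
        score, _ = alphabeta(game.apply(state, move), depth - 1,
                             -beta, -alpha, game)
        score = -score  # negamax: opponent's score, negated
        if score > alpha:
            alpha, best_move = score, move
        if alpha >= beta:
            break  # beta cutoff: opponent won't allow this line
    return alpha, best_move
```

A deterministic search like this is what makes results replayable: given the same state and depth, the engine always picks the same move.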

Quick Start

From this folder:

pip install .
boardgamebench list-games
boardgamebench play --game connect_four
boardgamebench benchmark --model-file ./models/example-openai-compatible.json -r 2
boardgamebench report

Or, installing from PyPI:

pip install boardgamebench
boardgamebench benchmark --model my-model -r 4

Model Configs

Model configs use the same OpenAI-compatible shape as GomokuBench:

{
  "provider": {
    "openrouter": {
      "name": "OpenRouter",
      "options": {
        "baseURL": "https://openrouter.ai/api/v1",
        "apiKeyEnv": "OPENROUTER_API_KEY"
      },
      "models": {
        "my-model": {
          "name": "My Model",
          "model": "provider/model-id",
          "rate_limit_rpm": 30,
          "timeout_seconds": 120
        }
      }
    }
  }
}

Put configs in models/<name>.json and run:

boardgamebench benchmark --model <name>

or pass a file directly:

boardgamebench benchmark --model-file /path/to/model.json
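A harness consuming a config of this shape might resolve a model entry as follows. The field names match the JSON example above; the loader itself is a sketch, not the package's actual code.

```python
# Illustrative config resolver: finds a model id across providers and
# pulls the API key from the environment variable named in the config.
import json
import os

def resolve_model(path: str, model_id: str) -> dict:
    with open(path) as f:
        config = json.load(f)
    for provider in config["provider"].values():
        if model_id in provider.get("models", {}):
            entry = provider["models"][model_id]
            return {
                "base_url": provider["options"]["baseURL"],
                "api_key": os.environ.get(provider["options"]["apiKeyEnv"], ""),
                "model": entry["model"],
                "timeout": entry.get("timeout_seconds", 60),
            }
    raise KeyError(f"model {model_id!r} not found in {path}")
```

Keeping the API key out of the config file and behind `apiKeyEnv` means configs can be committed to the repo safely.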

Choosing Games

Run the default curriculum:

boardgamebench benchmark --model my-model

By default, this runs 10 rounds per game. Use -r or --rounds to choose a different number:

boardgamebench benchmark --model my-model -r 20

Run a subset:

boardgamebench benchmark --model my-model --games connect_four,breakthrough_6x6,othello_6x6 -r 4

Available game ids:

  • connect_four
  • gomoku_19x19
  • breakthrough_6x6
  • dots_and_boxes_3x3
  • othello_6x6
  • othello_8x8
  • hex_7x7

See GAMES.md for the implemented curriculum and the planned stronger-oracle roadmap for the larger game list.

Outputs

Reports are saved in benchmarks/<model>.json and include:

  • model and provider metadata
  • aggregate score and per-game scores
  • per-round speed/survival scoring details
  • every move by the LLM and engine
  • raw LLM responses
  • final board states
  • a reasoning/API log path under /tmp/boardgamebench

To print a leaderboard table from saved benchmark files:

boardgamebench report
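Since reports are plain JSON, they can also be summarized programmatically. The field names assumed here ("model", "aggregate_score", "games") are illustrative; check a saved report for the actual schema.

```python
# Hypothetical reader for a saved benchmark report in benchmarks/<model>.json.
# Field names are assumptions, not the package's documented schema.
import json
from pathlib import Path

def summarize(report_path: str) -> str:
    data = json.loads(Path(report_path).read_text())
    lines = [f"{data['model']}: {data['aggregate_score']:.3f}"]
    for game, score in sorted(data["games"].items()):
        lines.append(f"  {game}: {score:.3f}")
    return "\n".join(lines)
```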

Notes

This repo is a benchmark harness, not a claim that the bundled engines are perfect solvers for every game. The design keeps the oracle interface pluggable so stronger sources can be dropped in later, such as Pascal Pons's Connect Four solver, Edax for Othello, MoHex/Benzene for Hex, or retrograde/proof databases for solved small variants.


