
agent-readiness

A benchmark for AI agent readiness of a code repository.

You bought the seats. Your team is using Claude Code, Cursor, Copilot, Cline. And the agents keep going off the rails on your codebase.

The model is the variable you can't change. The repo is what you can.

agent-readiness scans a repository and scores how ready it is for AI coding agents — across cognitive load, feedback loops, and flow — then hands you a prioritised punchlist of fixes. Like Lighthouse, but for AI agent readiness instead of page load.

$ agent-readiness scan .

AI Readiness  62 / 100

  Cognitive load     70 / 100
  Feedback loops     40 / 100   ← biggest drag
  Flow & reliability 75 / 100
  Safety             OK

Top friction (fix these first):
  1. test_command.discoverable — no test invocation found in Makefile,
     package.json, or pyproject.toml
  2. agent_docs.present — no AGENTS.md / CLAUDE.md / .cursorrules at root
  3. headless.no_setup_prompts — README mentions "log in to the dashboard"
     during setup; agents can't traverse this

Design principles

Agents are headless. We assume the agent has stdin / stdout / files / git / HTTP and nothing else. No browser, no dashboard, no clickable button. If important state is reachable only through a UI, it's invisible to the agent — and the repo loses points wherever that's true.

This applies to our own tool, too. agent-readiness is fully headless: no required interactive prompts, stable JSON via --json, exit codes that mean things, machine-readable findings.

Code quality counts only where it predicts agent success. Mega-files, ambiguous names, dead code, missing types — those have direct lines to agent failure modes and get measured. We don't reproduce the full SonarQube taxonomy. Other tools do that well.
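To make the "direct line to failure modes" concrete, here is a hypothetical sketch of what a mega-file check might look like. The function, its name, and the 800-line threshold are all illustrative assumptions, not the tool's actual implementation; the real checks and thresholds are defined in docs/RUBRIC.md.

```python
def find_mega_files(sources: dict[str, str], max_lines: int = 800) -> list[str]:
    """Return paths whose contents exceed max_lines lines.

    `sources` maps file path -> file contents. The 800-line threshold
    is illustrative, not the tool's real cutoff.
    """
    return [
        path for path, text in sources.items()
        if text.count("\n") + 1 > max_lines
    ]
```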

Run untrusted code in Docker, always. Any check that executes code from the target repo runs inside a sandboxed container. See docs/SANDBOX.md.
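The isolation idea can be sketched as building a locked-down docker invocation. Everything here — the base image, the mount layout, the helper name — is an illustrative assumption; the actual sandbox configuration is described in docs/SANDBOX.md.

```python
def sandbox_argv(repo_path: str, inner_cmd: str) -> list[str]:
    """Build a docker argv that runs inner_cmd against the target repo
    with networking disabled and the repo mounted read-only.
    Illustrative sketch only; see docs/SANDBOX.md for the real setup.
    """
    return [
        "docker", "run", "--rm",
        "--network=none",               # untrusted code gets no network
        "-v", f"{repo_path}:/repo:ro",  # repo mounted read-only
        "-w", "/repo",
        "python:3.12-slim",             # illustrative base image
        "sh", "-c", inner_cmd,
    ]
```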

What gets measured

See docs/RUBRIC.md for the full definition. Short version:

Pillar              What it captures
Cognitive load      What the agent must absorb to make a correct change.
Feedback loops      How fast and clear the signal is after a change.
Flow / reliability  Headless walkability + how often friction outside the task blocks the agent.
Safety & trust      Secrets, destructive scripts, gitignore hygiene. (Cap, not weight.)
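The "cap, not weight" distinction can be sketched like this. The pillar weights below are made up for illustration; docs/RUBRIC.md defines the actual formula.

```python
def overall_score(cognitive: float, feedback: float, flow: float,
                  safety_cap: float) -> float:
    """Weighted average of the three scored pillars, then capped by safety.

    Weights are illustrative, not the rubric's real values. The point is
    that a safety problem lowers the ceiling on the whole score instead
    of being averaged away against strong pillars.
    """
    weighted = 0.35 * cognitive + 0.35 * feedback + 0.30 * flow
    return min(weighted, safety_cap)
```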

This repo's score

Dogfooding: agent-readiness scan . run against this repository itself.

╭─────────────────────────────╮
│  AI Readiness  100.0 / 100  │
╰─────────────────────────────╯
 Cognitive load      100.0  ████████████████████
 Feedback loops      100.0  ████████████████████
 Flow & reliability  100.0  ████████████████████
 Safety              100.0  ████████████████████

No findings. Looking good.

Score updated after each iteration as part of the development workflow.

Usage

# Static scan (no Docker needed)
agent-readiness scan .
agent-readiness scan . --json
agent-readiness scan . --fail-below 70        # exit 1 if score < 70 (CI gate)
agent-readiness scan . --only feedback        # filter to one pillar
agent-readiness scan . --baseline prev.json   # diff against a previous run
agent-readiness scan . --report report.html   # HTML report (requires jinja2)
agent-readiness scan . --badge badge.svg      # score badge SVG
agent-readiness scan . --sarif findings.sarif # SARIF for GitHub code scanning

# Runtime scan (executes tests inside Docker)
agent-readiness scan . --run

# Other commands
agent-readiness list-checks
agent-readiness explain manifest.detected
agent-readiness init                          # write .agent-readiness.toml
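For CI, the --json output can also be consumed programmatically. A hedged sketch — the top-level "score" field is an assumption about the JSON schema, not something documented here; check the actual --json output before relying on it.

```python
import json

def passes_gate(report_json: str, threshold: float = 70.0) -> bool:
    """Decide pass/fail from a scan's JSON output.

    Assumes (hypothetically) the report carries a top-level "score"
    field; verify against the real --json schema first.
    """
    report = json.loads(report_json)
    return report["score"] >= threshold
```

In most pipelines --fail-below plus the exit code is simpler; parsing the JSON yourself is mainly useful when you also want the individual findings.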

Status

All phases implemented (v0.1–v0.9). 22 checks across 4 pillars, Docker sandbox, HTML + SARIF renderers, CLI surface, plugin API. See docs/PLAN.md for the full roadmap and CHANGELOG.md for per-phase release notes.

License

MIT for the code; see LICENSE. The project name and logo are governed separately by TRADEMARK.md: forks are welcome, but the "agent-readiness" name itself is reserved for the canonical project.
