
agent-readiness-cli

Score any URL for AI-agent readiness — llms.txt, JSON-LD, AI-bot robots.txt, canonical, MCP, meta, sitemap. One command, one number, no telemetry.


What it does

Audits a URL for how well it talks to ChatGPT, Claude, Perplexity, and other AI agents — and gives you a single 0-100 score with a per-section breakdown.

$ agent-ready https://example.com
✓ llms.txt               10/15  present, 4.2 KB, 12 URLs
✓ json-ld                23/25  3 block(s), types: Article, Organization, BreadcrumbList
✗ ai-bots-robots.txt      0/20  ClaudeBot, GPTBot disallowed at root
✓ canonical+hreflang     12/15  canonical=set, hreflang langs=['en','ru']
✗ mcp-card                0/10  no /.well-known/mcp.json (optional)
✓ meta                   10/10  10/10 of common signals
✓ sitemap                 5/5   valid, 1250 URLs

  Score: 60 / 100
  Tier: C  (middling — focus on ai-bots-robots.txt, mcp-card)

  Full report:    agent-ready --full https://example.com
  Remediation:    https://guardlabs.online/whiteglove/  (paid, $99-2499)

Why this exists

Every blog post about "AI SEO" tells you to "add llms.txt and JSON-LD." Nobody hands you a CLI that opens your site and tells you what's actually missing. This is that CLI.

It is intentionally:

  • Single file, ~500 LoC. Read it. Audit the audit.
  • No telemetry. It hits your URL only. No phone-home.
  • Deterministic. Same site → same score (modulo the site changing).
  • Transparent scoring. Every weight is in agent_ready/cli.py. Disagree? Open an issue or fork.

Install

pip install agent-readiness-cli

Or run from source (no install):

git clone https://github.com/sspoisk/agent-readiness-cli
cd agent-readiness-cli
python3 -m agent_ready.cli https://your-site.example

Requires Python 3.10+. Standard library only — no third-party deps.

Usage

agent-ready https://example.com              # human summary (default)
agent-ready --full https://example.com       # human summary + every finding
agent-ready --json https://example.com       # machine-readable JSON
agent-ready --csv https://example.com        # one CSV row (for monitoring)
agent-ready --quiet https://example.com      # just the integer score
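If you consume `--json` in scripts, here is a minimal sketch. The field names below (`score`, `tier`, `sections`, `passed`) are assumptions for illustration; inspect your installed version's actual output before relying on them:

```python
import json

# Hypothetical --json payload; the real schema may differ.
payload = """
{
  "score": 60,
  "tier": "C",
  "sections": [
    {"name": "llms.txt", "points": 10, "max": 15, "passed": true},
    {"name": "ai-bots-robots.txt", "points": 0, "max": 20, "passed": false},
    {"name": "mcp-card", "points": 0, "max": 10, "passed": false}
  ]
}
"""

report = json.loads(payload)
# Collect the sections that failed, e.g. to post in a CI comment.
failing = [s["name"] for s in report["sections"] if not s["passed"]]
print(report["score"], failing)
```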

Exit codes:

  • 0 — audit ran (regardless of score)
  • 2 — could not fetch (DNS, timeout, TLS, 4xx/5xx on the URL itself)
  • with --quiet — exit code is the band index: A=0, B=1, C=2, D=3, F=4
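The score-to-band mapping can be sketched from the thresholds in the scoring table (A ≥ 90, B ≥ 75, C ≥ 55, D ≥ 35, F < 35); this is an illustrative re-derivation, not the code shipped in agent_ready/cli.py:

```python
def tier(score: int) -> str:
    # Thresholds from the scoring table: A >= 90, B >= 75, C >= 55, D >= 35, else F.
    for letter, floor in (("A", 90), ("B", 75), ("C", 55), ("D", 35)):
        if score >= floor:
            return letter
    return "F"

# Band indices match the --quiet exit codes: A=0, B=1, C=2, D=3, F=4.
BAND_INDEX = {"A": 0, "B": 1, "C": 2, "D": 3, "F": 4}
```

So a score of 60 is tier C, and `agent-ready --quiet` would exit with code 2 for that site.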

What gets checked

Section             Weight  What
llms.txt            15      presence, valid format (leading H1), at least 3 canonical URLs listed
json-ld             25      parseable, recognised @type from a curated list, at least two distinct types
ai-bots-robots.txt  20      rules for GPTBot / ClaudeBot / Claude-Web / PerplexityBot / Google-Extended / CCBot / Applebot-Extended / Bytespider
canonical+hreflang  15      self-canonical present, hreflang reciprocity, x-default for multi-lang
mcp-card            10      optional — /.well-known/mcp.json is valid JSON with name, description, endpoint
meta                10      description, og:title, og:description, twitter:card, <html lang=>
sitemap             5       /sitemap.xml exists, valid <urlset> or <sitemapindex>, ≥5 URLs
Total               100     A ≥ 90 · B ≥ 75 · C ≥ 55 · D ≥ 35 · F < 35
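A minimal /.well-known/mcp.json that would satisfy the mcp-card field check might look like this. The values are illustrative placeholders; only the three required keys (name, description, endpoint) come from the check itself:

```json
{
  "name": "example-site",
  "description": "Docs and product pages for example.com, exposed for AI agents.",
  "endpoint": "https://example.com/mcp"
}
```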

Full scoring math is in agent_ready/cli.py. One file, no ceremony.
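As an illustration of what a single check reduces to, here is a sketch of the llms.txt criteria from the table (leading H1, at least 3 URLs). This is not the code in agent_ready/cli.py; the real scoring may differ:

```python
import re

def check_llms_txt(text: str) -> tuple[bool, str]:
    """Illustrative llms.txt check: leading H1 and at least 3 listed URLs."""
    lines = [line for line in text.strip().splitlines() if line.strip()]
    if not lines or not lines[0].startswith("# "):
        return False, "missing leading H1"
    urls = re.findall(r"https?://\S+", text)
    if len(urls) < 3:
        return False, f"only {len(urls)} URL(s) listed, need 3"
    return True, f"ok, {len(urls)} URLs"

sample = """# Example Site
> Docs for example.com

- [Home](https://example.com/)
- [Docs](https://example.com/docs)
- [API](https://example.com/api)
"""
print(check_llms_txt(sample))
```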

CI usage

Drop it into a workflow to track your score over time:

- name: Audit AI-agent readiness
  run: |
    pip install agent-readiness-cli
    agent-ready --csv https://your-site.example >> readiness.csv
    agent-ready --quiet https://your-site.example

If you want the build to fail below a threshold, gate on the score:

SCORE=$(agent-ready --quiet https://your-site.example)
[ "$SCORE" -ge 75 ] || { echo "AI-readiness below 75"; exit 1; }

What it does NOT do

  • Crawl the whole site (it audits one URL — the homepage by default)
  • Fix anything for you (it tells you what to fix)
  • Check vulnerabilities (use OWASP ZAP for that)
  • Validate JSON-LD against full Schema.org grammar (it checks that types are recognised)
  • Score Core Web Vitals or accessibility (different concerns)

If you need any of those, this isn't the right tool.

Comparison with adjacent tools

  • firecrawl/llmstxt-generator — generates an llms.txt for you. We audit yours; we don't generate.
  • langchain-ai/mcpdoc — exposes llms-txt to IDEs as MCP. Different audience (developers wanting LLM context).
  • Google Rich Results Test — validates JSON-LD for Google specifically. Web UI only, no CLI.
  • NSHipster/sosumi.ai — converts Apple developer docs into AI-readable text; narrow scope.

agent-readiness-cli is the gap: a single CLI that audits the agent-readiness surface and gives you a number.

Need someone to fix the findings?

If your score is low and you don't want to fix it yourself:

  • DIY — read the report, follow the linked specs (we cite them in --full output).
  • Self-service audit: GuardLabs Web-Audit Guardian (from $99) runs continuously every 30 min, watches multi-language drift, security headers, and structure.
  • Hands-on white-glove audit: GuardLabs White-Glove Web Audit · $2,499. Async-only, no calls. Custom report + 30-day async support + quarterly re-audit. We are the engineers behind this CLI.

This CLI is free and MIT-licensed forever, regardless of whether you ever buy anything.

Contributing

Bug reports and PRs welcome. The repo is one Python file plus tests; barriers to contribution are low. See CONTRIBUTING.md for details.

If you want to add or re-weight a check, propose the rationale in an issue first — we want every weight to be defensible.

License

MIT. See LICENSE.


Maintained by GuardLabs. The CLI is an open-source byproduct of running Web-Audit Guardian on real sites — multi-language e-commerce, agency client portfolios, AI-native SaaS. If your readiness matters and you want serious eyes on it, White-Glove is where we put them.
