api-key-scanner
Verify LLM API gateway authenticity with a local MCP server: your API key stays on your machine.
Verify whether an LLM API gateway (a 中转站, i.e. a third-party relay/proxy) is actually serving the model it claims — without ever handing your API key to anyone else.
How it works
You point the tool at an endpoint, tell it what model the gateway claims to serve, and name the environment variable where your gateway API key lives. It runs a small probe set against the endpoint, compares the responses to publicly-signed reference fingerprints of real vendor models, and returns a trust score.
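In pseudocode, the probe-and-compare step looks roughly like this. `ProbeResult`, the fingerprint layout, and the substring match are illustrative assumptions, not the tool's actual detectors (which are statistical):

```python
from dataclasses import dataclass

@dataclass
class ProbeResult:
    prompt: str
    response: str

def trust_score(results, fingerprint):
    """Toy score: fraction of probe responses consistent with a reference
    fingerprint (here modeled as a dict mapping probe prompt -> expected
    marker). The real detectors compare response distributions, not
    substrings."""
    if not results:
        return 0.0
    hits = sum(
        1 for r in results
        if r.prompt in fingerprint and fingerprint[r.prompt] in r.response
    )
    return hits / len(results)
```

A gateway that answers most probes the way the reference model does scores near 1.0; systematic divergence pulls the score down.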
Everything runs on your machine. Your API key is read from a local env
var (never accepted in chat, never sent anywhere except the gateway you
named). Reference fingerprints are downloaded from our GitHub Releases
and Sigstore-verified before use — if the signature doesn't match the
identity of the GitHub Actions workflow that produced them, the tool
refuses the data and returns inconclusive.
The tool ships two pieces: an MCP server (on PyPI) and a skill that
teaches the agent when and how to call the server's verify_gateway
tool. Both pieces are platform-neutral; installation just differs per
host.
Prerequisites
The MCP server runs through uvx, which
fetches and isolates the Python package on every launch. If you don't
already have uv, install it once:
```
# macOS (Homebrew)
brew install uv

# macOS / Linux (no Homebrew)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Windows (PowerShell)
irm https://astral.sh/uv/install.ps1 | iex
```
After install, uvx --version should print a version. Restart your
terminal (or your MCP client) if it doesn't show up yet.
Installation
Claude Code
In Claude Code:
```
/plugin marketplace add zhonghp/api-key-scanner
/plugin install api-key-scanner@zhonghp-api-key-scanner
```
That's everything — the plugin bundles the MCP server config and the skill. The server auto-downloads signed reference fingerprints on first use; no manual bootstrap.
OpenClaw
Two steps.
1. Add the MCP server to your OpenClaw config:
```json
{
  "mcp": {
    "servers": {
      "api-key-scanner": {
        "command": "uvx",
        "args": [
          "--refresh-package", "api-key-scanner-mcp",
          "api-key-scanner-mcp@latest"
        ]
      }
    }
  }
}
```
2. Install the skill so OpenClaw knows when to invoke it:
```
mkdir -p ~/.openclaw/skills/api-verify
curl -fsSL \
  https://raw.githubusercontent.com/zhonghp/api-key-scanner/main/skills/api-verify/SKILL.md \
  -o ~/.openclaw/skills/api-verify/SKILL.md
```
Restart OpenClaw. That's it.
Other MCP clients (Cursor, Continue, Zed, Claude Desktop, …)
Any MCP client works — the server is just a standard stdio JSON-RPC
server. Point your client's MCP config at `uvx --refresh-package api-key-scanner-mcp api-key-scanner-mcp@latest` and you're done. Skills
are Claude Code / OpenClaw-specific; on other clients the user just
mentions the verify_gateway tool directly.
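For reference, here is a sketch of the JSON-RPC 2.0 messages such a client writes to the server's stdin. The initialize handshake is standard MCP; the `verify_gateway` argument names below are assumptions, not the published tool schema:

```python
import json

# An MCP client performs an initialize handshake, then invokes tools via
# tools/call. Messages are JSON-RPC 2.0 objects written to the server's
# stdin (framing varies by transport).
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-06-18",
        "capabilities": {},
        "clientInfo": {"name": "demo-client", "version": "0.1"},
    },
}
call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "verify_gateway",
        "arguments": {  # hypothetical argument names for illustration
            "base_url": "https://api.example.com/v1",
            "claimed_model": "gpt-4o",
            "api_key_env": "MY_GATEWAY_KEY",
        },
    },
}
wire = "\n".join(json.dumps(msg) for msg in (initialize, call_tool))
```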
Supplying your gateway key
The server reads your gateway's API key from the local environment.
Put it in ~/.api-key-scanner/.env:
```
mkdir -p ~/.api-key-scanner
cat > ~/.api-key-scanner/.env <<'EOF'
MY_GATEWAY_KEY=sk-your-gateway-key
EOF
```
The server auto-loads this file at startup. A shell export works too,
but only if you set it before launching the MCP client — most clients
(Claude Code, OpenClaw, etc.) snapshot env at launch.
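The auto-load step can be pictured with a minimal .env loader. This is a sketch, not the server's implementation; in particular, whether the file or an already-set variable wins is this sketch's assumption (it leaves existing variables untouched):

```python
import os
import tempfile

def load_env_file(path):
    """Minimal .env loader: KEY=VALUE lines; blank lines and '#'
    comments are skipped. Variables already present in the environment
    are not overwritten (os.environ.setdefault)."""
    loaded = {}
    with open(path) as fh:
        for raw in fh:
            line = raw.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            key, value = key.strip(), value.strip()
            loaded[key] = value
            os.environ.setdefault(key, value)
    return loaded

# Demo: write a throwaway .env file and load it.
demo = tempfile.NamedTemporaryFile("w", suffix=".env", delete=False)
demo.write("# local secrets\nMY_GATEWAY_KEY=sk-demo-not-real\n")
demo.close()
values = load_env_file(demo.name)
```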
Using it
Ask in natural language:
> Please verify whether the gpt-4o model served at https://api.example.com/v1 is genuine. My key is in the MY_GATEWAY_KEY environment variable.
(Phrase the question as "is the model at this URL genuine?", not "is this URL gpt-4o?" — the former makes the authenticity question clear.)
You get a verdict with:
- `trust_score` — 0.0 to 1.0
- `verdict` — `ok` / `suspicious` / `likely_substituted` / `inconclusive`
- `confidence` — `low` / `medium` / `high`
- Detector breakdown (`d1_banner_match`, `d2_met`, `d4_metadata`)
Cutoffs:
- `>= 0.90` — responses are consistent with the claimed model
- `0.70 – 0.90` — drift worth a deeper probe
- `< 0.70` — the model behind the endpoint likely isn't what's claimed
- `inconclusive` — the verdict explains which step failed
Budget: `cheap` (~18 calls, default — a smoke test) or `standard`
(~258 calls, the full MET paper protocol). The bigger budget buys higher
confidence at the cost of more calls against your gateway. Start with
`cheap` to sanity-check the endpoint; escalate to `standard` when a
suspicious result warrants the extra statistical power.
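The thresholds above translate to roughly this mapping (a paraphrase of the documented bands, not the server's code):

```python
def verdict_for(trust_score, conclusive=True):
    """Map a trust score to the documented verdict bands."""
    if not conclusive:
        return "inconclusive"    # e.g. no signed fingerprint available
    if trust_score >= 0.90:
        return "ok"              # consistent with the claimed model
    if trust_score >= 0.70:
        return "suspicious"      # drift worth a deeper (standard) probe
    return "likely_substituted"  # probably not the claimed model
```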
Which models can it verify?
Only models for which we've published a signed reference fingerprint.
The current coverage — with canonical IDs, vendor model names, and the
source endpoint each fingerprint was collected from — is tracked in
SUPPORTED_MODELS.md. If the model you care about isn't
listed there, verify_gateway will return inconclusive — the tool
doesn't guess.
For the live list at runtime, just ask in chat:
> What models does api-key-scanner currently support?
The agent calls list_supported_models, which reads the latest
release's MANIFEST.json directly — this will always be accurate even
if SUPPORTED_MODELS.md hasn't been updated yet.
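As an illustration, a `list_supported_models`-style lookup might read the manifest like this; the `{"models": [{"id": ...}]}` layout is an assumed schema for the sketch, not the published format:

```python
import json

def supported_models(manifest_text):
    """Extract canonical model IDs from a fingerprint-release
    MANIFEST.json. The 'models' key and 'id' field are assumptions for
    illustration; the real manifest schema may differ."""
    manifest = json.loads(manifest_text)
    return sorted(entry["id"] for entry in manifest.get("models", []))

example = (
    '{"snapshot": "2025-01-01",'
    ' "models": [{"id": "gpt-4o"}, {"id": "claude-3-5-sonnet"}]}'
)
```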
What it catches
- Cross-family substitution — endpoint claims gpt-4o but serves Llama / Qwen / Claude / Gemini.
- Cached replay — endpoint returns canned vendor-style answers instead of a real model call.
- System-prompt tampering — endpoint silently injects a hidden system prompt that changes behavior.
What it doesn't catch (yet)
- Same-family downgrade — Opus → Sonnet → Haiku, gpt-4o → gpt-4o-mini. We sometimes flag `suspicious` but don't commit to a verdict.
- Quantization — same model at a lower precision. Academic results show black-box detection is ≈ random.
- Adaptive routing — the gateway plays real-model to "identify yourself" probes and cheap-model to everything else. Needs an out-of-band trust anchor to solve.
Treat any likely_substituted as a signal to dig deeper, not as proof
of fraud. Trust scores are statistical, not legal.
Updates
| Something updates | What you need to do |
|---|---|
| New weekly fingerprint snapshot | Nothing — the server checks GitHub on each startup and auto-upgrades. |
| New MCP server version | Restart your MCP client; uvx pulls @latest. |
| Plugin manifest or skill changes | Claude Code: /plugin marketplace update zhonghp-api-key-scanner, then restart. OpenClaw: re-copy SKILL.md (see install). |
| Cache problems — old version won't let go | uv cache clean api-key-scanner-mcp, then restart. |
Privacy
- Your API key is read locally from your `.env` or environment. It is never logged and never sent anywhere except the target gateway you specified. Key substrings are scrubbed from any error that surfaces in verdicts.
- Gateway responses are analyzed in-process. Nothing is uploaded.
- Outbound network from this tool is limited to (1) the gateway URL you provided and (2) github.com/zhonghp/api-key-scanner releases for the signed reference data.
- No backend, by design. There's nowhere for us to log your traffic.
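The scrubbing step can be sketched like this (hypothetical; the real redaction logic may differ):

```python
def scrub_secrets(text, secrets):
    """Replace any occurrence of a known secret, or a long prefix of it,
    with a redaction marker before the text can surface in a verdict.
    The 12-char prefix pass is a heuristic to catch truncated leaks."""
    for secret in secrets:
        if not secret:
            continue
        for needle in (secret, secret[:12]):
            if len(needle) >= 8 and needle in text:
                text = text.replace(needle, "[REDACTED]")
    return text
```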
Reference-data integrity
Fingerprint snapshots are signed with Sigstore
keyless OIDC. The signing identity is bound to our weekly GitHub Actions
workflow. The server refuses to load a release whose signature identity
doesn't match — it degrades to inconclusive rather than return a
verdict based on unverified reference data.
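The fail-closed behavior amounts to a wrapper like the following, where `verify_fn` stands in for a real Sigstore identity check (e.g. via sigstore-python), which is not reproduced here:

```python
def load_reference(bundle, verify_fn):
    """Fail-closed loader: return (data, None) only when the signature
    identity check passes; otherwise (None, "inconclusive"). Any error
    during verification counts as failure -- never score against
    unverified reference data."""
    try:
        if not verify_fn(bundle):
            return None, "inconclusive"
    except Exception:
        return None, "inconclusive"
    return bundle["data"], None
```

Degrading to `inconclusive` (rather than raising) keeps the tool's contract simple: a verdict is only ever issued against verified reference data.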
Air-gapped use
Download a fingerprint release on another machine, copy it over, then point the server at the local directory:
```
export APIGUARD_FINGERPRINT_DIR=/path/to/fingerprint-YYYY-MM-DD
```
(Set this in ~/.api-key-scanner/.env so the MCP client's subprocess
picks it up.) The Sigstore check is still enforced; you'll need the
MANIFEST.json.sigstore.json in the directory.
Acknowledgments
The detection methodology in this project builds directly on two open-source research efforts, both MIT-licensed. Enormous thanks to their authors for making everything — code, weights, datasets — publicly available.
- LLMmap — Dario Pasquini et al., "LLMmap: Fingerprinting for Large Language Models" (USENIX Security 2025, arXiv:2407.15847). Our D1 detector uses LLMmap's default 8-query strategy (from `confs/queries/default.json`). The current D1 implementation is a lightweight banner-match approximation; the full trained-classifier version is planned as a future upgrade.
- Model Equality Testing — Irena Gao et al., "Model Equality Testing: Which Model Is This API Serving?" (arXiv:2410.20247). Our D2 detector vendors this project's MMD² Hamming kernel and permutation p-value procedure. The prompt style (Wikipedia continuation tasks) and the N=250 sample protocol both come from the paper's main experimental configuration.
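To give a flavor of what the MMD²-based test computes, here is a toy MMD² with a Hamming-style kernel and a permutation p-value over raw strings. The vendored procedure operates on tokenized completions with N=250 samples per side; treat this only as a shape sketch:

```python
import random

def hamming_kernel(a, b):
    """1 - normalized Hamming distance; zip truncates to the shorter string."""
    n = min(len(a), len(b))
    if n == 0:
        return 0.0
    return sum(x == y for x, y in zip(a, b)) / n

def mmd2(xs, ys):
    """Biased MMD^2 estimate between two samples under the kernel above."""
    k = hamming_kernel
    kxx = sum(k(a, b) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(k(a, b) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(k(a, b) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2 * kxy

def permutation_pvalue(xs, ys, n_perm=200, seed=0):
    """P(MMD^2 of a random relabeling >= observed) under the null that
    both samples come from the same model."""
    rng = random.Random(seed)
    observed = mmd2(xs, ys)
    pooled = list(xs) + list(ys)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if mmd2(pooled[:len(xs)], pooled[len(xs):]) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one to avoid p = 0
```

A small p-value means the gateway's completions are measurably different from the reference model's completions on the same prompts.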
License
- Code: Apache-2.0
- Fingerprint data (GitHub Release assets): CC-BY-4.0
Download files
- Source distribution: `api_key_scanner_mcp-0.2.0.tar.gz`
- Built distribution: `api_key_scanner_mcp-0.2.0-py3-none-any.whl`
File details
Details for the file api_key_scanner_mcp-0.2.0.tar.gz.
File metadata
- Download URL: api_key_scanner_mcp-0.2.0.tar.gz
- Upload date:
- Size: 217.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `efbef111e4fc54a76e9bcac9236a0cb5d6ac8d1d5914618a650243be1bf5f022` |
| MD5 | `09bde2b6506eeb3fc93d3de255572dde` |
| BLAKE2b-256 | `e1de1de86ed34cbb4886c3a283023cd74de7fd0a3c53f9265dffac73cd2f1a62` |
Provenance
The following attestation bundles were made for api_key_scanner_mcp-0.2.0.tar.gz:

Publisher: release.yml on zhonghp/api-key-scanner

Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: api_key_scanner_mcp-0.2.0.tar.gz
- Subject digest: efbef111e4fc54a76e9bcac9236a0cb5d6ac8d1d5914618a650243be1bf5f022
- Sigstore transparency entry: 1361143376
- Sigstore integration time:
- Permalink: zhonghp/api-key-scanner@48d3ed0dd38fb8a63f776253df9b86eb01ed8561
- Branch / Tag: refs/tags/v0.2.0
- Owner: https://github.com/zhonghp
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@48d3ed0dd38fb8a63f776253df9b86eb01ed8561
- Trigger Event: push
File details
Details for the file api_key_scanner_mcp-0.2.0-py3-none-any.whl.
File metadata
- Download URL: api_key_scanner_mcp-0.2.0-py3-none-any.whl
- Upload date:
- Size: 52.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `7f43927d9bd3d165300f9901ca2e7dca8a94d85c0730afa8e3786be4e279c806` |
| MD5 | `29e4b9eb97ccc3a79973434350cba703` |
| BLAKE2b-256 | `d42b5ccd1974d8d3a7748fc7d8ea62203744a0b38d4808afc836b31ce8122644` |
Provenance
The following attestation bundles were made for api_key_scanner_mcp-0.2.0-py3-none-any.whl:

Publisher: release.yml on zhonghp/api-key-scanner

Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: api_key_scanner_mcp-0.2.0-py3-none-any.whl
- Subject digest: 7f43927d9bd3d165300f9901ca2e7dca8a94d85c0730afa8e3786be4e279c806
- Sigstore transparency entry: 1361143381
- Sigstore integration time:
- Permalink: zhonghp/api-key-scanner@48d3ed0dd38fb8a63f776253df9b86eb01ed8561
- Branch / Tag: refs/tags/v0.2.0
- Owner: https://github.com/zhonghp
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@48d3ed0dd38fb8a63f776253df9b86eb01ed8561
- Trigger Event: push