model-radar
MCP server that pings 130+ free coding LLM models across 17 providers in real time, ranks them by latency, and helps AI agents pick the fastest available model.
Inspired by free-coding-models.
Install
pip install model-radar
Quick Start
1. Configure an API key
# Option A: Save to ~/.model-radar/config.json
model-radar configure nvidia nvapi-xxx
# Option B: Environment variable
export NVIDIA_API_KEY=nvapi-xxx
Or copy the template: cp config.example.json ~/.model-radar/config.json and edit it.
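One plausible shape for that file, shown as a sketch only (the provider-to-key map is an assumption inferred from the `configure` command; the authoritative schema is in config.example.json):

```json
{
  "nvidia": "nvapi-xxx",
  "groq": "gsk-xxx"
}
```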
2. Add to your MCP client
Claude Code (~/.claude/settings.json):
{
"mcpServers": {
"model-radar": {
"command": "model-radar",
"args": ["serve"]
}
}
}
Cursor (.cursor/mcp.json in project root or ~/.cursor/mcp.json):
Stdio (default — Cursor starts the server):
{
"mcpServers": {
"model-radar": {
"command": "/path/to/your/.venv/bin/model-radar",
"args": ["serve"]
}
}
}
SSE (you run the server; Cursor connects by URL):
The server listens on one port and serves both Streamable HTTP (/mcp) and SSE (/sse, /messages/). Cursor tries Streamable HTTP first, then SSE, so it can connect as soon as the server is up.
# Terminal: start the server (leave it running)
model-radar serve --transport sse --port 8765
Then in Cursor MCP config use the URL http://127.0.0.1:8765 (or http://127.0.0.1:8765/mcp / http://127.0.0.1:8765/sse as your client expects). Start the server before opening the project so Cursor finds it immediately.
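An SSE entry in .cursor/mcp.json might then look like the sketch below; the `url` field name is an assumption here, so check your Cursor version's MCP docs for the exact key it expects:

```json
{
  "mcpServers": {
    "model-radar": {
      "url": "http://127.0.0.1:8765/sse"
    }
  }
}
```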
Web dashboard: with --web, the same server also serves a localhost UI at http://127.0.0.1:8765/ for status, config, discovery, and running prompts (REST API at /api/*); MCP remains at /sse. Privacy: the server binds to 127.0.0.1 only, so your API keys and data never leave your machine. Keys are stored only in ~/.model-radar/config.json (mode 0o600).
model-radar serve --transport sse --port 8765 --web
Restarting the SSE server: After updating model-radar, restart the server so new tools appear. You can either restart the process manually, or run with a restart wrapper and use the restart_server() MCP tool:
# Allow the MCP tool to request exit; a loop restarts the server
export MODEL_RADAR_ALLOW_RESTART=1
while true; do model-radar serve --transport sse --port 8765; sleep 1; done
Then call the restart_server() tool (e.g. from an agent); the process exits, the loop starts a new one with updated code, and you reconnect.
OpenClaw (~/.openclaw/openclaw.json):
{
"mcpServers": {
"model-radar": {
"command": "model-radar",
"args": ["serve"]
}
}
}
3. CLI usage
# Scan models
model-radar scan --min-tier S --limit 10
# List providers
model-radar providers
# Save a key
model-radar configure nvidia nvapi-xxx
Providers (17)
| Provider | Env Var | Free Tier |
|---|---|---|
| NVIDIA NIM | NVIDIA_API_KEY | Rate-limited, no expiry |
| Groq | GROQ_API_KEY | Free tier |
| Cerebras | CEREBRAS_API_KEY | Free tier |
| SambaNova | SAMBANOVA_API_KEY | $5 credits / 3 months |
| OpenRouter | OPENROUTER_API_KEY | 50 req/day on :free models |
| Hugging Face | HF_TOKEN | Free monthly credits |
| Replicate | REPLICATE_API_TOKEN | Dev quota |
| DeepInfra | DEEPINFRA_API_KEY | Free dev tier |
| Fireworks | FIREWORKS_API_KEY | $1 free credits |
| Codestral | CODESTRAL_API_KEY | 30 req/min, 2000/day |
| Hyperbolic | HYPERBOLIC_API_KEY | $1 free trial |
| Scaleway | SCALEWAY_API_KEY | 1M free tokens |
| Google AI | GOOGLE_API_KEY | 14.4K req/day |
| SiliconFlow | SILICONFLOW_API_KEY | Free model quotas |
| Together AI | TOGETHER_API_KEY | Credits vary |
| Cloudflare | CLOUDFLARE_API_TOKEN | 10K neurons/day |
| Perplexity | PERPLEXITY_API_KEY | Tiered limits |
MCP Tools
- list_providers() — see all 17 providers with config status
- list_models(tier?, provider?, min_tier?) — browse the model catalog
- scan(tier?, provider?, min_tier?, configured_only?, limit?) — ping models in parallel, ranked by latency
- get_fastest(min_tier?, provider?, count?) — quick: best N models right now
- provider_status() — per-provider health check
- configure_key(provider, api_key) — save an API key
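Conceptually, scan pings every candidate concurrently and ranks the results fastest-first. The idea can be sketched with stubbed latencies (the model names and numbers below are made up; the real tool measures live provider round-trips):

```python
import asyncio

# Stubbed round-trip latencies in ms, standing in for real provider calls.
FAKE_LATENCY_MS = {
    "cerebras/llama-3.1-8b": 95,
    "groq/llama-3.3-70b": 180,
    "nvidia/qwen-2.5-coder-32b": 320,
}

async def ping(model: str) -> tuple[str, int]:
    # Simulate a network round-trip (scaled down so the demo runs fast).
    await asyncio.sleep(FAKE_LATENCY_MS[model] / 10_000)
    return model, FAKE_LATENCY_MS[model]

async def scan(models: list[str]) -> list[tuple[str, int]]:
    # Ping all models in parallel, then sort by measured latency.
    results = await asyncio.gather(*(ping(m) for m in models))
    return sorted(results, key=lambda r: r[1])

ranking = asyncio.run(scan(list(FAKE_LATENCY_MS)))
print(ranking[0])  # ('cerebras/llama-3.1-8b', 95) — fastest first
```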
Tier Scale (SWE-bench Verified)
| Tier | Score | Meaning |
|---|---|---|
| S+ | 70%+ | Elite frontier coders |
| S | 60-70% | Excellent |
| A+ | 50-60% | Great |
| A | 40-50% | Good |
| A- | 35-40% | Decent |
| B+ | 30-35% | Average |
| B | 20-30% | Below average |
| C | <20% | Lightweight/edge |
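The score-to-tier mapping above can be expressed as a small helper. This function is hypothetical (not part of model-radar's public API); it simply mirrors the table:

```python
def tier(swe_bench_score: float) -> str:
    """Map a SWE-bench Verified score (%) to the tier scale above."""
    bands = [(70, "S+"), (60, "S"), (50, "A+"), (40, "A"),
             (35, "A-"), (30, "B+"), (20, "B")]
    for cutoff, name in bands:
        if swe_bench_score >= cutoff:
            return name
    return "C"

print(tier(72.5), tier(42.0), tier(12.0))  # S+ A C
```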
License
MIT
Project details
File details
Details for the file model_radar_mcp-0.4.1.tar.gz.
File metadata
- Download URL: model_radar_mcp-0.4.1.tar.gz
- Upload date:
- Size: 48.2 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 45918788eb24250f203e085b7ec430ea187fc62b85b8322efc6c92dbedd4b3ae |
| MD5 | 6cf1160ccba26afacb325ad1fc01b19c |
| BLAKE2b-256 | 3a23f92d0e59c68b98965d9dc2d731f66bbd4b2a2ba593e0105db0913246196a |
Provenance
The following attestation bundles were made for model_radar_mcp-0.4.1.tar.gz:

Publisher: publish.yml on srclight/model-radar

Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: model_radar_mcp-0.4.1.tar.gz
- Subject digest: 45918788eb24250f203e085b7ec430ea187fc62b85b8322efc6c92dbedd4b3ae
- Sigstore transparency entry: 1002819694
- Permalink: srclight/model-radar@215aa639e3e2df07829cdc1fced34fb0ab8daadd
- Branch / Tag: refs/tags/v0.4.1
- Owner: https://github.com/srclight
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@215aa639e3e2df07829cdc1fced34fb0ab8daadd
- Trigger Event: release
File details
Details for the file model_radar_mcp-0.4.1-py3-none-any.whl.
File metadata
- Download URL: model_radar_mcp-0.4.1-py3-none-any.whl
- Upload date:
- Size: 59.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | ac039fcb9f683e5ef926f8a5501090d28cbc8016022844087e0e55a3f7b7fedc |
| MD5 | 73359bae0f555b9d9e300b827a65da21 |
| BLAKE2b-256 | 0d2d99b8d8e2393dbc60735435c90c9513bc4360690b3ef1e7e72bf58122b69b |
Provenance
The following attestation bundles were made for model_radar_mcp-0.4.1-py3-none-any.whl:

Publisher: publish.yml on srclight/model-radar

Statement:
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: model_radar_mcp-0.4.1-py3-none-any.whl
- Subject digest: ac039fcb9f683e5ef926f8a5501090d28cbc8016022844087e0e55a3f7b7fedc
- Sigstore transparency entry: 1002819697
- Permalink: srclight/model-radar@215aa639e3e2df07829cdc1fced34fb0ab8daadd
- Branch / Tag: refs/tags/v0.4.1
- Owner: https://github.com/srclight
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@215aa639e3e2df07829cdc1fced34fb0ab8daadd
- Trigger Event: release