
model-radar

MCP server that pings 130+ free coding LLM models across 17 providers in real-time, ranks them by latency, and helps AI agents pick the fastest available model.

Inspired by free-coding-models.

Install

pip install model-radar

Quick Start

1. Configure an API key

# Option A: Save to ~/.model-radar/config.json
model-radar configure nvidia nvapi-xxx

# Option B: Environment variable
export NVIDIA_API_KEY=nvapi-xxx

Or copy the template: cp config.example.json ~/.model-radar/config.json and edit it.
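The saved file is a small JSON document. The exact schema is defined by `config.example.json` in the repo; as a rough sketch (the key names here are illustrative, not authoritative), it maps provider names to API keys:

```json
{
  "nvidia": "nvapi-xxx",
  "groq": "gsk-xxx"
}
```

Whatever the exact shape, keys stay local: the file lives at `~/.model-radar/config.json` with mode `0o600`.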

2. Add to your MCP client

Claude Code (~/.claude/settings.json):

{
  "mcpServers": {
    "model-radar": {
      "command": "model-radar",
      "args": ["serve"]
    }
  }
}

Cursor (.cursor/mcp.json in project root or ~/.cursor/mcp.json):

Stdio (default — Cursor starts the server):

{
  "mcpServers": {
    "model-radar": {
      "command": "/path/to/your/.venv/bin/model-radar",
      "args": ["serve"]
    }
  }
}

SSE (you run the server; Cursor connects by URL):

The server listens on one port and serves both Streamable HTTP (/mcp) and SSE (/sse, /messages/). Cursor tries Streamable HTTP first, then SSE, so it can connect as soon as the server is up.

# Terminal: start the server (leave it running)
model-radar serve --transport sse --port 8765

Then in Cursor MCP config use the URL http://127.0.0.1:8765 (or http://127.0.0.1:8765/mcp / http://127.0.0.1:8765/sse as your client expects). Start the server before opening the project so Cursor finds it immediately.
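A URL-based Cursor entry looks like the following (assuming Cursor's `url` field convention; adjust the path suffix to whichever endpoint your Cursor version negotiates):

```json
{
  "mcpServers": {
    "model-radar": {
      "url": "http://127.0.0.1:8765/sse"
    }
  }
}
```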

Web dashboard: With --web, the same server also serves a localhost UI at http://127.0.0.1:8765/ for status, config, discovery, and running prompts (REST API at /api/*). MCP remains at /sse.

Privacy: The server binds to 127.0.0.1 only; your API keys and data never leave your machine. Keys are stored only in ~/.model-radar/config.json (mode 0o600).

model-radar serve --transport sse --port 8765 --web

Restarting the SSE server: After updating model-radar, restart the server so new tools appear. You can either restart the process manually, or run with a restart wrapper and use the restart_server() MCP tool:

# Allow the MCP tool to request exit; a loop restarts the server
export MODEL_RADAR_ALLOW_RESTART=1
while true; do model-radar serve --transport sse --port 8765; sleep 1; done

Then call the restart_server() tool (e.g. from an agent); the process exits, the loop starts a new one with updated code, and you reconnect.

OpenClaw (~/.openclaw/openclaw.json):

{
  "mcpServers": {
    "model-radar": {
      "command": "model-radar",
      "args": ["serve"]
    }
  }
}

3. CLI usage

# Scan models
model-radar scan --min-tier S --limit 10

# List providers
model-radar providers

# Save a key
model-radar configure nvidia nvapi-xxx

Providers (17)

| Provider | Env Var | Free Tier |
|---|---|---|
| NVIDIA NIM | NVIDIA_API_KEY | Rate-limited, no expiry |
| Groq | GROQ_API_KEY | Free tier |
| Cerebras | CEREBRAS_API_KEY | Free tier |
| SambaNova | SAMBANOVA_API_KEY | $5 credits / 3 months |
| OpenRouter | OPENROUTER_API_KEY | 50 req/day on :free models |
| Hugging Face | HF_TOKEN | Free monthly credits |
| Replicate | REPLICATE_API_TOKEN | Dev quota |
| DeepInfra | DEEPINFRA_API_KEY | Free dev tier |
| Fireworks | FIREWORKS_API_KEY | $1 free credits |
| Codestral | CODESTRAL_API_KEY | 30 req/min, 2000/day |
| Hyperbolic | HYPERBOLIC_API_KEY | $1 free trial |
| Scaleway | SCALEWAY_API_KEY | 1M free tokens |
| Google AI | GOOGLE_API_KEY | 14.4K req/day |
| SiliconFlow | SILICONFLOW_API_KEY | Free model quotas |
| Together AI | TOGETHER_API_KEY | Credits vary |
| Cloudflare | CLOUDFLARE_API_TOKEN | 10K neurons/day |
| Perplexity | PERPLEXITY_API_KEY | Tiered limits |

MCP Tools

  • list_providers() — See all 17 providers with config status
  • list_models(tier?, provider?, min_tier?) — Browse the model catalog
  • scan(tier?, provider?, min_tier?, configured_only?, limit?) — Ping models in parallel, ranked by latency
  • get_fastest(min_tier?, provider?, count?) — Quick: best N models right now
  • provider_status() — Per-provider health check
  • configure_key(provider, api_key) — Save an API key
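Under the hood these tools are invoked via MCP's JSON-RPC 2.0 `tools/call` method, regardless of transport. A minimal sketch of the request a client sends (the tool name and arguments match the list above; the request `id` is arbitrary):

```python
import json

def tool_call_request(name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request as an MCP client would."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# Ask for the 3 fastest models at tier S or better.
req = tool_call_request("get_fastest", {"min_tier": "S", "count": 3})
print(req)
```

In practice you would send this over stdio or the SSE endpoint via an MCP client SDK rather than constructing it by hand; the sketch just shows what travels on the wire.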

Tier Scale (SWE-bench Verified)

| Tier | Score | Meaning |
|---|---|---|
| S+ | 70%+ | Elite frontier coders |
| S | 60-70% | Excellent |
| A+ | 50-60% | Great |
| A | 40-50% | Good |
| A- | 35-40% | Decent |
| B+ | 30-35% | Average |
| B | 20-30% | Below average |
| C | <20% | Lightweight/edge |
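The tier scale is a straightforward bucketing of SWE-bench Verified scores. A minimal sketch (boundary handling is an assumption: each band is taken as inclusive of its lower bound):

```python
# Ordered (lower_bound, tier) bands for SWE-bench Verified scores, in percent.
TIER_BANDS = [
    (70.0, "S+"),  # Elite frontier coders
    (60.0, "S"),   # Excellent
    (50.0, "A+"),  # Great
    (40.0, "A"),   # Good
    (35.0, "A-"),  # Decent
    (30.0, "B+"),  # Average
    (20.0, "B"),   # Below average
]

def score_to_tier(score: float) -> str:
    """Return the first tier whose lower bound the score meets."""
    for floor, tier in TIER_BANDS:
        if score >= floor:
            return tier
    return "C"  # Lightweight/edge

print(score_to_tier(72.1))  # S+
```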

License

MIT
