
model-radar

MCP server that pings 130+ free coding LLM models across 17 providers in real-time, ranks them by latency, and helps AI agents pick the fastest available model.

Inspired by free-coding-models.

Install

pip install model-radar

Quick Start

1. Configure an API key

# Option A: Save to ~/.model-radar/config.json
model-radar configure nvidia nvapi-xxx

# Option B: Environment variable
export NVIDIA_API_KEY=nvapi-xxx

Or copy the template: cp config.example.json ~/.model-radar/config.json and edit it.
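For orientation, a saved config might look like the following. This is an assumed shape (a flat provider-name-to-key map); the authoritative schema is whatever config.example.json ships with, so check that template rather than relying on this sketch:

```json
{
  "nvidia": "nvapi-xxx",
  "groq": "gsk-xxx"
}
```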

2. Add to your MCP client

Claude Code (~/.claude/settings.json):

{
  "mcpServers": {
    "model-radar": {
      "command": "model-radar",
      "args": ["serve"]
    }
  }
}

Cursor (.cursor/mcp.json in project root or ~/.cursor/mcp.json):

Stdio (default — Cursor starts the server):

{
  "mcpServers": {
    "model-radar": {
      "command": "/path/to/your/.venv/bin/model-radar",
      "args": ["serve"]
    }
  }
}

SSE (you run the server; Cursor connects by URL):

The server listens on one port and serves both Streamable HTTP (/mcp) and SSE (/sse, /messages/). Cursor tries Streamable HTTP first, then SSE, so it can connect as soon as the server is up.

# Terminal: start the server (leave it running)
model-radar serve --transport sse --port 8765

Then in Cursor MCP config use the URL http://127.0.0.1:8765 (or http://127.0.0.1:8765/mcp / http://127.0.0.1:8765/sse as your client expects). Start the server before opening the project so Cursor finds it immediately.
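To confirm the server is up before opening the project, you can check that the port accepts TCP connections. This is a small standalone helper using only the standard library, not part of model-radar:

```python
import socket

def is_listening(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP server accepts connections on host:port."""
    try:
        # create_connection performs the full TCP handshake, then we close it.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable.
        return False

if __name__ == "__main__":
    print("model-radar SSE server up:", is_listening("127.0.0.1", 8765))
```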

Web dashboard: with --web, the same server also serves a localhost UI at http://127.0.0.1:8765/ for status, config, discovery, and running prompts (REST API at /api/*). MCP remains available at /sse.

Privacy: the server binds to 127.0.0.1 only, so your API keys and data never leave your machine. Keys are stored only in ~/.model-radar/config.json (file mode 0o600).

model-radar serve --transport sse --port 8765 --web

Restarting the SSE server: After updating model-radar, restart the server so new tools appear. You can either restart the process manually, or run with a restart wrapper and use the restart_server() MCP tool:

# Allow the MCP tool to request exit; a loop restarts the server
export MODEL_RADAR_ALLOW_RESTART=1
while true; do model-radar serve --transport sse --port 8765; sleep 1; done

Then call the restart_server() tool (e.g. from an agent); the process exits, the loop starts a new one with updated code, and you reconnect.

OpenClaw (~/.openclaw/openclaw.json):

{
  "mcpServers": {
    "model-radar": {
      "command": "model-radar",
      "args": ["serve"]
    }
  }
}

3. CLI usage

# Scan models
model-radar scan --min-tier S --limit 10

# List providers
model-radar providers

# Save a key
model-radar configure nvidia nvapi-xxx

Providers (17)

| Provider | Env Var | Free Tier |
| --- | --- | --- |
| NVIDIA NIM | NVIDIA_API_KEY | Rate-limited, no expiry |
| Groq | GROQ_API_KEY | Free tier |
| Cerebras | CEREBRAS_API_KEY | Free tier |
| SambaNova | SAMBANOVA_API_KEY | $5 credits / 3 months |
| OpenRouter | OPENROUTER_API_KEY | 50 req/day on :free models |
| Hugging Face | HF_TOKEN | Free monthly credits |
| Replicate | REPLICATE_API_TOKEN | Dev quota |
| DeepInfra | DEEPINFRA_API_KEY | Free dev tier |
| Fireworks | FIREWORKS_API_KEY | $1 free credits |
| Codestral | CODESTRAL_API_KEY | 30 req/min, 2000/day |
| Hyperbolic | HYPERBOLIC_API_KEY | $1 free trial |
| Scaleway | SCALEWAY_API_KEY | 1M free tokens |
| Google AI | GOOGLE_API_KEY | 14.4K req/day |
| SiliconFlow | SILICONFLOW_API_KEY | Free model quotas |
| Together AI | TOGETHER_API_KEY | Credits vary |
| Cloudflare | CLOUDFLARE_API_TOKEN | 10K neurons/day |
| Perplexity | PERPLEXITY_API_KEY | Tiered limits |

MCP Tools

  • list_providers() — See all 17 providers with config status
  • list_models(tier?, provider?, min_tier?) — Browse the model catalog
  • scan(tier?, provider?, min_tier?, configured_only?, limit?) — Ping models in parallel, ranked by latency
  • get_fastest(min_tier?, provider?, count?) — Quick: best N models right now
  • provider_status() — Per-provider health check
  • configure_key(provider, api_key) — Save an API key

Tier Scale (SWE-bench Verified)

| Tier | Score | Meaning |
| --- | --- | --- |
| S+ | 70%+ | Elite frontier coders |
| S | 60-70% | Excellent |
| A+ | 50-60% | Great |
| A | 40-50% | Good |
| A- | 35-40% | Decent |
| B+ | 30-35% | Average |
| B | 20-30% | Below average |
| C | <20% | Lightweight/edge |
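The tier boundaries above reduce to a simple threshold lookup. This is illustrative only; the actual catalog ships with tiers precomputed:

```python
def tier_for_score(score: float) -> str:
    """Map a SWE-bench Verified score (in percent) to a model-radar tier."""
    boundaries = [
        (70, "S+"), (60, "S"), (50, "A+"), (40, "A"),
        (35, "A-"), (30, "B+"), (20, "B"),
    ]
    for cutoff, tier in boundaries:
        if score >= cutoff:
            return tier
    return "C"  # anything under 20%

print(tier_for_score(72.5))  # S+
print(tier_for_score(18.0))  # C
```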

License

MIT
