# LLM Radar

**Real-time AI Model Intelligence via MCP**

*Skip the search. Your AI already has current model info.*
## What is LLM Radar?
LLM Radar is an MCP server that gives your AI assistant current information about AI models from OpenAI, Anthropic, and Google.
**The problem:** AI assistants have training cutoffs. Ask about models and you get outdated recommendations, deprecated APIs, or hallucinated pricing.

**The solution:** Connect LLM Radar and your assistant knows what's available today. It stays current by:

- Fetching fresh data from provider APIs daily
- Enriching it with Claude for better descriptions
- Exposing it via MCP for any compatible client
## MCP Server Setup

### Option 1: Remote Server (Recommended)
Connect directly to the hosted MCP server - no installation needed:
Claude Desktop config (`~/Library/Application Support/Claude/claude_desktop_config.json`):

```json
{
  "mcpServers": {
    "llm-radar": {
      "url": "https://ajents.company/llm-radar/mcp"
    }
  }
}
```
### Option 2: Local via pip

```shell
# Install
pip install llm-radar-mcp

# Or install and run in one step
pip install llm-radar-mcp && llm-radar-mcp
```
Claude Desktop config (local stdio):

```json
{
  "mcpServers": {
    "llm-radar": {
      "command": "llm-radar-mcp"
    }
  }
}
```
### Option 3: Docker

```shell
docker run -p 8000:8000 ghcr.io/ajentsor/llm-radar:latest
```

Then connect to `http://localhost:8000/sse`.
## Available MCP Tools
Once connected, you can use these tools:
| Tool | Description |
|---|---|
| `query_models` | Search and filter models by provider, type, or modality support |
| `compare_models` | Side-by-side comparison of specific models |
| `get_model` | Detailed info about a specific model by API ID |
| `list_model_ids` | All available model IDs for a provider |
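To make the tool semantics concrete, here is an illustrative sketch of the kind of filtering `query_models` performs over the model records. The sample data and function body are assumptions for illustration, not the server's actual implementation:

```python
# Hypothetical sketch of query_models-style filtering. Field names follow
# the Data Format section below; the sample records are made up.
from typing import Any

MODELS: list[dict[str, Any]] = [
    {"id": "gpt-4o", "provider": "openai",
     "capabilities": ["vision", "function_calling"]},
    {"id": "claude-sonnet", "provider": "anthropic",
     "capabilities": ["vision", "reasoning"]},
]

def query_models(models, provider=None, capability=None):
    """Filter model records by provider and/or capability."""
    results = models
    if provider is not None:
        results = [m for m in results if m["provider"] == provider]
    if capability is not None:
        results = [m for m in results if capability in m["capabilities"]]
    return results

# Models that support vision input:
vision = query_models(MODELS, capability="vision")
print([m["id"] for m in vision])
```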
### Example Queries

- "What models support vision input?" → uses `query_models` with `input_modality="image"`
- "Compare GPT-4o, Claude Sonnet, and Gemini 2.5 Pro" → uses `compare_models` with those model IDs
- "List all OpenAI model IDs" → uses `list_model_ids` with `provider="openai"`
## Available Resources
The MCP server also exposes resources you can read directly:
| Resource URI | Description |
|---|---|
| `llm-radar://models/all` | Complete JSON data |
| `llm-radar://models/openai` | OpenAI models only |
| `llm-radar://models/anthropic` | Anthropic models only |
| `llm-radar://models/google` | Google models only |
| `llm-radar://highlights` | Curated recommendations |
## How It Works

```
┌─────────────────────────────────────────────────────────────────┐
│ Daily GitHub Action (8am UTC)                                   │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│   ┌──────────┐   ┌──────────┐   ┌──────────┐                    │
│   │  OpenAI  │   │Anthropic │   │  Google  │  ← Fetch APIs      │
│   │   API    │   │   API    │   │   API    │                    │
│   └────┬─────┘   └────┬─────┘   └────┬─────┘                    │
│        │              │              │                          │
│        └──────────────┼──────────────┘                          │
│                       ▼                                         │
│               ┌──────────────┐                                  │
│               │    Claude    │  ← Enrich & Format               │
│               │   (Sonnet)   │                                  │
│               └──────┬───────┘                                  │
│                      │                                          │
│        ┌─────────────┼─────────────┐                            │
│        ▼             ▼             ▼                            │
│   ┌─────────┐   ┌──────────┐   ┌───────────┐                    │
│   │ models. │   │   MCP    │   │  GitHub   │  ← Deploy          │
│   │  json   │   │  Server  │   │   Pages   │                    │
│   └─────────┘   └──────────┘   └───────────┘                    │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```
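The pipeline above can be sketched in a few lines. Everything here is illustrative: the stub fetchers stand in for the real provider API calls, and `enrich` stands in for the Claude call:

```python
# Illustrative sketch of the daily pipeline: fetch -> enrich -> emit JSON.
# The stubs below replace real network calls; data is made up.
import json

def fetch_openai() -> list[dict]:
    return [{"id": "gpt-4o", "provider": "openai"}]

def fetch_anthropic() -> list[dict]:
    return [{"id": "claude-sonnet", "provider": "anthropic"}]

def enrich(model: dict) -> dict:
    # The real pipeline asks Claude for a description; stubbed here.
    return {**model, "description": f"Model {model['id']}"}

def run_pipeline() -> dict:
    raw = fetch_openai() + fetch_anthropic()
    return {"models": [enrich(m) for m in raw]}

data = run_pipeline()
print(json.dumps(data, indent=2))
```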
## Data Format
Each model includes:
| Field | Description |
|---|---|
| `id` | API model identifier |
| `name` | Human-friendly name |
| `provider` | `openai`, `anthropic`, or `google` |
| `description` | What the model is best for |
| `context_window` | Max input tokens |
| `pricing` | Input/output cost per 1M tokens |
| `capabilities` | `vision`, `function_calling`, `reasoning`, etc. |
| `status` | `active`, `preview`, or `deprecated` |
| `released` | Release date |
| `recommended_for` | Use-case suggestions |
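Because `pricing` is expressed per 1M tokens, estimating a request's cost is a simple scaling. A minimal sketch; the prices and the `pricing` dict shape used here are assumptions for illustration:

```python
# Estimate request cost from per-1M-token pricing, as described by the
# `pricing` field above. The numbers below are made up for illustration.
def estimate_cost(pricing: dict, input_tokens: int, output_tokens: int) -> float:
    """pricing maps "input"/"output" to cost per 1M tokens."""
    return (input_tokens * pricing["input"]
            + output_tokens * pricing["output"]) / 1_000_000

pricing = {"input": 3.00, "output": 15.00}  # hypothetical prices
cost = estimate_cost(pricing, input_tokens=10_000, output_tokens=2_000)
print(f"${cost:.4f}")  # $0.0600
```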
## Local Development

```shell
# Clone
git clone https://github.com/ajentsor/llm-radar.git
cd llm-radar

# Install
python3 -m venv venv
source venv/bin/activate
pip install -e ".[dev]"

# Run MCP server (stdio mode)
llm-radar-mcp

# Run MCP server (HTTP mode for testing)
llm-radar-mcp --http --port 8000

# Fetch fresh data (requires API keys)
cp .env.example .env
# Edit .env with your API keys
python3 -m llm_radar.fetch_models
python3 -m llm_radar.aggregate_with_claude
```
## Project Structure

```
llm-radar/
├── src/llm_radar/               # Main package
│   ├── __init__.py
│   ├── mcp_server.py            # MCP server implementation
│   ├── fetch_models.py          # API fetchers
│   └── aggregate_with_claude.py # Claude enrichment
├── data/
│   ├── models.json              # Structured model data
│   ├── MODELS.md                # Human-readable reference
│   └── raw/                     # Raw API responses
├── docs/                        # GitHub Pages site
├── Dockerfile                   # Container build
├── docker-compose.yml           # Local container setup
├── pyproject.toml               # Python package config
└── .github/workflows/
    └── update-models.yml        # Daily cron job
```
## Configuration

To run the data fetcher yourself:

```shell
# .env file
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=AI...
```
For GitHub Actions, add these as repository secrets.
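When running locally, it helps to fail fast if a key is missing. A minimal sketch; the key names match the `.env` example above, while the helper itself is hypothetical:

```python
# Sanity check for the fetcher's required keys; the key names match the
# .env example above, but this helper is illustrative, not project code.
import os

REQUIRED_KEYS = ("OPENAI_API_KEY", "ANTHROPIC_API_KEY", "GOOGLE_API_KEY")

def missing_keys(env=os.environ) -> list[str]:
    """Return the required API keys that are unset or empty."""
    return [k for k in REQUIRED_KEYS if not env.get(k)]

# Example with only one key set:
print(missing_keys({"OPENAI_API_KEY": "sk-..."}))
```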
## Self-Hosting

### Docker Compose

```yaml
version: '3.8'
services:
  llm-radar:
    image: ghcr.io/ajentsor/llm-radar:latest
    ports:
      - "8000:8000"
    restart: unless-stopped
```
### Cloudflare Workers / Fly.io / Railway
The MCP server supports HTTP/SSE transport, making it deployable to any platform that supports long-running HTTP connections.
## Contributing
See CONTRIBUTING.md for guidelines.
Key areas for contribution:
- Additional providers (Cohere, Mistral, etc.)
- More MCP tools
- Better data enrichment prompts
- Documentation improvements
## License
MIT License - see LICENSE
*Built for developers who want accurate AI model info*