# 📡 LLM Radar

Real-time observability dashboard for LLM applications. Track every prompt, token count, cost, and latency across OpenAI and Anthropic — with one line of code.

```python
from llm_radar import LLMRadar

radar = LLMRadar(app)  # that's it
```

Dashboard → http://localhost:8000/__llm_radar
## Installation

```bash
pip install llm-radar
```

With provider SDKs:

```bash
pip install "llm-radar[openai]"     # + openai
pip install "llm-radar[anthropic]"  # + anthropic
pip install "llm-radar[all]"        # + both
```
## Quick Start

### OpenAI

```python
from fastapi import FastAPI
from llm_radar import LLMRadar
import openai

app = FastAPI()
radar = LLMRadar(app)  # intercepts all openai calls automatically
client = openai.OpenAI()

@app.get("/chat")
async def chat(message: str):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": message}],
    )
    return {"reply": response.choices[0].message.content}
```
### Anthropic

```python
from fastapi import FastAPI
from llm_radar import LLMRadar
import anthropic

app = FastAPI()
radar = LLMRadar(app)
client = anthropic.Anthropic()

@app.get("/summarize")
async def summarize(text: str):
    response = client.messages.create(
        model="claude-haiku-4-5-20251001",
        max_tokens=256,
        messages=[{"role": "user", "content": text}],
    )
    return {"summary": response.content[0].text}
```
## Use with fastapi-radar

llm-radar works alongside fastapi-radar — one app, two dashboards.

```python
from fastapi import FastAPI
from fastapi_radar import Radar
from llm_radar import LLMRadarPlugin

app = FastAPI()
radar = Radar(app)          # HTTP + SQL monitoring → /__radar/
llm = LLMRadarPlugin(app)   # LLM monitoring → /__llm_radar
```
## What Gets Tracked
| Signal | Captured |
|---|---|
| Prompt preview | ✅ First 500 chars of last user message |
| Response preview | ✅ First 500 chars of response |
| Input tokens | ✅ |
| Output tokens | ✅ |
| Cost (USD) | ✅ Auto-calculated from current pricing |
| Latency (ms) | ✅ End-to-end wall time |
| Model name | ✅ |
| Provider | ✅ openai / anthropic |
| Errors | ✅ With message |
| Async calls | ✅ |
## Configuration

```python
radar = LLMRadar(
    app,
    dashboard_path="/__llm_radar",  # Custom path
    max_calls=1000,                 # Max records to keep
    retention_hours=24,             # Data retention window
    db_path="/var/data/llm",        # Custom DB location
    auth_dependency=my_auth_fn,     # Optional FastAPI dependency
    track_openai=True,
    track_anthropic=True,
)
```
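The `max_calls` and `retention_hours` settings bound storage by both record count and age. As an illustrative sketch only (llm-radar's actual storage is a database, and these names are its public config, not its internals), the pruning rule works like this:

```python
from collections import deque
from datetime import datetime, timedelta, timezone

def prune(calls: deque, max_calls: int, retention_hours: float) -> None:
    """Drop records older than the retention window, then enforce the size cap.

    Assumes each record is a dict with a tz-aware 'timestamp' key and that
    records are appended in chronological order (oldest at the left).
    """
    cutoff = datetime.now(timezone.utc) - timedelta(hours=retention_hours)
    while calls and calls[0]["timestamp"] < cutoff:
        calls.popleft()          # outside the retention window
    while len(calls) > max_calls:
        calls.popleft()          # over the record cap; oldest goes first
```

Either limit alone is enough to evict a record, which keeps worst-case storage predictable.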
## Securing the Dashboard

```python
from fastapi import Depends, HTTPException
from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials

security = HTTPBearer()

def verify_token(creds: HTTPAuthorizationCredentials = Depends(security)):
    if creds.credentials != "your-secret-token":
        raise HTTPException(status_code=401, detail="Unauthorized")

radar = LLMRadar(app, auth_dependency=verify_token)
```
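Hardcoding the token is fine for a demo; in practice you would typically read it from the environment and compare it in constant time. A minimal sketch (`RADAR_TOKEN` is an assumed variable name, not one the library defines):

```python
import hmac
import os

def token_is_valid(presented: str) -> bool:
    """Check a presented bearer token against the RADAR_TOKEN env var.

    hmac.compare_digest avoids timing side channels when comparing secrets;
    an unset or empty RADAR_TOKEN rejects everything rather than allowing all.
    """
    expected = os.environ.get("RADAR_TOKEN", "")
    return bool(expected) and hmac.compare_digest(presented, expected)

# Wired into the dependency above it would look like:
# def verify_token(creds: HTTPAuthorizationCredentials = Depends(security)):
#     if not token_is_valid(creds.credentials):
#         raise HTTPException(status_code=401, detail="Unauthorized")
```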
## Supported Models & Pricing
Auto-detects cost for:
- OpenAI: gpt-4o, gpt-4o-mini, gpt-4-turbo, gpt-3.5-turbo, o1, o3-mini, o4-mini
- Anthropic: claude-opus-4, claude-sonnet-4, claude-haiku-4, claude-3.5-sonnet, claude-3-opus
Unrecognized models record 0 cost (no crash).
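Cost tracking of this kind generally multiplies token counts by per-million-token rates. A sketch with placeholder prices (illustrative numbers, not llm-radar's actual pricing table), including the zero-cost fallback for unrecognized models:

```python
# (input, output) USD per 1M tokens. Placeholder figures for illustration.
PRICING = {
    "gpt-4o-mini": (0.15, 0.60),
    "claude-haiku-4": (1.00, 5.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost; unknown models cost 0, mirroring the no-crash rule."""
    rate_in, rate_out = PRICING.get(model, (0.0, 0.0))
    return (input_tokens * rate_in + output_tokens * rate_out) / 1_000_000
```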
## Contributing

```bash
git clone https://github.com/ganeshmandakapu/llm-radar
cd llm-radar
pip install -e ".[dev]"
```
## License
MIT — Ganesh Mandakapu
## File details: llm_radar-0.2.0.tar.gz

- Size: 19.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12

| Algorithm | Hash digest |
|---|---|
| SHA256 | `a288ae817c39c6eb4f75a8ce7276d92ea5b09929b9b7d5a2f53b2938b7ef421a` |
| MD5 | `76730f49af9abbe2fdec0d0cf397667e` |
| BLAKE2b-256 | `c36fef0861fb5608ea2cba924af3b930dffe6fde552cc217f9b1a65f8c855891` |
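If you download the sdist manually, you can check it against the SHA256 digest above with Python's standard `hashlib`:

```python
import hashlib

# SHA256 for llm_radar-0.2.0.tar.gz, from the table above.
EXPECTED_SHA256 = "a288ae817c39c6eb4f75a8ce7276d92ea5b09929b9b7d5a2f53b2938b7ef421a"

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large archives never load fully into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# After downloading:
# assert sha256_of("llm_radar-0.2.0.tar.gz") == EXPECTED_SHA256
```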
### Provenance

The following attestation bundle was made for llm_radar-0.2.0.tar.gz:

Publisher: publish.yml on GaneshMandakapu/llm-radar

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: llm_radar-0.2.0.tar.gz
- Subject digest: `a288ae817c39c6eb4f75a8ce7276d92ea5b09929b9b7d5a2f53b2938b7ef421a`
- Sigstore transparency entry: 1409300051
- Permalink: GaneshMandakapu/llm-radar@5adcf4521d2f318c309d1291dbb6940e7e724d8e
- Branch / Tag: refs/tags/v0.2.0
- Owner: https://github.com/GaneshMandakapu
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@5adcf4521d2f318c309d1291dbb6940e7e724d8e
- Trigger Event: push
## File details: llm_radar-0.2.0-py3-none-any.whl

- Size: 27.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12

| Algorithm | Hash digest |
|---|---|
| SHA256 | `dce5c148b048b93abee0beea61806fe603f50fa00e78197c8369b066a4582592` |
| MD5 | `a8a287a5c23224d62fcfe5ee27b000a7` |
| BLAKE2b-256 | `10537901b19f68c43d5115b44b87534aad76352ec9f957237ec6f2cf49814d46` |
### Provenance

The following attestation bundle was made for llm_radar-0.2.0-py3-none-any.whl:

Publisher: publish.yml on GaneshMandakapu/llm-radar

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: llm_radar-0.2.0-py3-none-any.whl
- Subject digest: `dce5c148b048b93abee0beea61806fe603f50fa00e78197c8369b066a4582592`
- Sigstore transparency entry: 1409300058
- Permalink: GaneshMandakapu/llm-radar@5adcf4521d2f318c309d1291dbb6940e7e724d8e
- Branch / Tag: refs/tags/v0.2.0
- Owner: https://github.com/GaneshMandakapu
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@5adcf4521d2f318c309d1291dbb6940e7e724d8e
- Trigger Event: push