# ModelPreflight

Preflight checks for LLM prototypes.
Find out which cheap or free-ish LLM endpoints can carry your prototype before you wire them into your app.
ModelPreflight turns scattered provider keys into stable local groups like `free_reasoning`
and `free_fast`, then lets you smoke-test prompts, fan out one-off questions, and fail over
between providers without hard-coding model IDs everywhere.
| If you want to... | Start here |
|---|---|
| See the free/dev endpoint menu | Free endpoint map |
| Try the payoff after setup | Run one check, then ask the pool |
| Get one green check quickly | 60-second start |
| Try it without keys | No-key demo path |
| Run project smoke cases | Smoke tests |
| Use it as a Python helper | Library usage |
ModelPreflight keeps provider setup machine-local and smoke cases project-local. It is not a hosted gateway, model leaderboard, or pricing oracle. It is the fast local preflight layer between "I found a promising free/dev endpoint" and "this provider is now wired into my product."
## Free endpoint map

The high-value path is simple: collect provider keys once, let ModelPreflight group them, then
ask `free_reasoning` or `free_fast` instead of memorizing every provider's model slug and quota
page.
| Provider | What it gives a prototype | Default group | Key env var | Setup |
|---|---|---|---|---|
| OpenRouter | Lowest-friction first run; one API key can route to free-tagged and paid models | `free_reasoning` | `OPENROUTER_API_KEY` | Auth docs |
| NVIDIA Build / NIM | High-capability open/open-weight hosted endpoints while the current dev access fits | `free_reasoning` | `NVIDIA_NIM_API_KEY` | API keys |
| Groq | Very fast repeated calls for fanout and smoke checks when free-plan limits fit | `free_fast` | `GROQ_API_KEY` | Console keys |
| Cerebras | Fast inference experiments for short prototype loops | `free_fast` | `CEREBRAS_API_KEY` | Inference docs |
| Mistral | First-party checks against Mistral model families | `free_reasoning` | `MISTRAL_API_KEY` | Account setup |
The bundled presets are intentionally conservative starter data, not a claim that a provider will
remain free, available, or quota-identical for every account. Provider catalogs, free tiers, and
rate limits move; `mpf doctor --live` is the truth test for your machine today.
Secondary routes worth adding once the first pool works: Google Gemini/Gemma, Cloudflare Workers AI,
GitHub Models, Hugging Face Inference Providers, and SambaNova. See
`docs/PROVIDER_PRESETS.md` for the broader preset notes.
## Run one check, then ask the pool
After one provider key is configured, first prove that the route works:
```bash
mpf demo
```

Shape of the output:

```json
[
  {
    "id": "demo-ok",
    "passed": true,
    "failures": [],
    "text": "ok"
  }
]
```
Then ask a real one-off prompt with a single routed model call. Text output streams by default:
mpf ask "Write a poem about how ModelPreflight is the easiest way to try free LLM endpoints."
Shape of the output:
ModelPreflight finds the route,
checks the key, and sends it out...
Use `pro` when the prompt is worth asking several times. It fans out cheap samples, synthesizes the
best answer through the reasoning group, and keeps an audit trail of which routes handled the
calls.
mpf pro "Write the strongest short pitch for ModelPreflight Pro Mode: explain why fanout across cheap or free endpoints plus a judge pass is better than trusting one brittle LLM call for a prototype decision. Include one caveat." --n 8
Shape of the output:
{
"final": "Pro Mode is useful when a prototype decision deserves more than one sample: fan out across cheap or free routes, compare independent answers, then synthesize the strongest result through a reasoning group...",
"candidates": [
{"index": 0, "ok": true, "text": "Fanout reduces single-sample luck..."},
{"index": 1, "ok": true, "text": "The value is cheap parallel exploration..."}
],
"group_winners": []
}
For structured-output work:
mpf pro "Design three robust JSON schemas for extracting vendor name, renewal date, total contract value, and termination notice from messy SaaS contracts. Include failure modes." --n 8
For repeatable project checks, write JSONL smoke cases once and run:
```bash
mpf init-project
mpf run
```
`mpf demo` proves the configured route works. `mpf ask` is for a single one-off prompt. `mpf pro`
is for fanout plus synthesis. `mpf run` is for project-owned smoke files that should keep passing
as prompts, providers, and model slugs drift.
For the snappiest CLI startup, install once with `uv tool install model-preflight` or
`pipx install model-preflight`, then run `mpf ...` directly. `uv run mpf ...` may print package
sync messages before ModelPreflight starts.
## Why this repo exists
Early LLM prototypes often need a quick answer to a practical question: "Can this prompt, model group, or provider route work well enough to keep building?"
ModelPreflight gives you a lightweight layer for that stage:
- one global config for provider credentials and routing
- project-local JSONL smoke cases
- stable aliases such as `free_reasoning` and `free_fast`
- best-effort failover through LiteLLM
- audit records for live calls
## When to use it
Use ModelPreflight when:
- a prototype needs cheap LLM smoke checks before deeper eval work
- several projects should share the same local provider setup
- you want logical groups instead of hard-coding provider/model IDs everywhere
- provider quotas, model slugs, or dev-tier availability may drift
- you need enough provenance to debug "which model answered this?"
## What it is not
ModelPreflight is not:
- a model leaderboard
- a formal benchmark framework
- a hosted inference gateway
- a provider catalog authority
- proof that an endpoint is free, fast, or available today
Bundled provider presets are starter data. Check each provider's current catalog and terms before relying on a route.
## 60-second start

```bash
uvx model-preflight --help

# In a persistent tool or project environment:
uv tool install model-preflight
# or:
pipx install model-preflight
```
Set one supported provider key, initialize, and run one live check:
```bash
export OPENROUTER_API_KEY=...
# or: export NVIDIA_NIM_API_KEY=...
# or: export GROQ_API_KEY=...
# or: export CEREBRAS_API_KEY=...
# or: export MISTRAL_API_KEY=...

mpf init
mpf doctor --live
mpf demo
```
Expected signal:
- `mpf init` writes your machine-local config for the first visible provider key. If no supported key is visible, it writes the OpenRouter starter config and tells you to export `OPENROUTER_API_KEY`.
- `mpf doctor --live` prints a deployments table, then `live check ok: group=...`.
- `mpf demo` prints JSON with `"passed": true` and an empty `"failures": []` list.
Add checks to a project:
```bash
cd my-project
mpf init-project
mpf run
```
Expected signal:
- `mpf init-project` writes `evals/smoke.jsonl`, writes `.model-preflight/README.md`, and updates `.gitignore`.
- `mpf run` prints JSON results for the starter cases. Every passing case has `"passed": true`.
- A failing case exits non-zero and includes strings under `"failures"` so you know what drifted.
Both `mpf` and `model-preflight` are installed as console scripts.
ModelPreflight catches missing keys, broken provider routes, prompt formatting regressions, output-shape drift, accidental model/provider changes, and "this worked yesterday" prototype failures before you wire the LLM call into something larger.
## No-key demo path
Use the minimal offline preset when you want to test the CLI and project workflow without a provider account:
```bash
mpf init --preset minimal
mpf doctor --live
mpf demo
mpf init-project
mpf run
```
What this proves:
- Config loading works without secrets.
- The CLI can run a live-style check through the offline echo provider.
- Project bootstrap works by creating `evals/smoke.jsonl`.
- Smoke scoring works when `mpf run` returns JSON where every case has `"passed": true`.
What it does not prove: remote provider auth, quota, latency, or model quality. Use the OpenRouter path below for that.
## Install options

### PyPI or isolated tool install

```bash
uv tool install model-preflight
# or:
pipx install model-preflight

mpf --help
```
### Project dependency

```bash
uv add --dev model-preflight
# or:
pip install model-preflight
```
### Editable checkout

```bash
git clone https://github.com/pylit-ai/model-preflight.git
cd model-preflight
uv pip install -e .

# or from another repo:
uv add --dev --editable /absolute/path/to/model-preflight
```
ModelPreflight requires Python 3.11+.
## Machine-local config
ModelPreflight reads provider routes and secret-source references from your OS-specific user config directory by default.
Use `mpf paths` to print the exact path. Override the path with either `--config` or
`MODEL_PREFLIGHT_CONFIG`.
```bash
mpf paths
mpf init
mpf doctor
mpf models
```
With no `--provider` or `--preset`, `mpf init` checks visible environment variables in this order:
`OPENROUTER_API_KEY`, `NVIDIA_NIM_API_KEY`, `GROQ_API_KEY`, `CEREBRAS_API_KEY`,
`MISTRAL_API_KEY`. OpenRouter is only the fallback starter when none of those keys are visible.
Explicit `--provider` and `--preset` always override auto-detection.
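As code, the detection rule is a first-match scan over that ordered list. A minimal sketch (the env var order is from this section; the helper name is illustrative, not ModelPreflight's internal API):

```python
import os

# Documented detection order for `mpf init` with no --provider/--preset.
DETECTION_ORDER = [
    "OPENROUTER_API_KEY",
    "NVIDIA_NIM_API_KEY",
    "GROQ_API_KEY",
    "CEREBRAS_API_KEY",
    "MISTRAL_API_KEY",
]

def first_visible_key() -> str | None:
    """Return the first supported provider key visible to this process."""
    for name in DETECTION_ORDER:
        if os.environ.get(name):
            return name
    return None  # mpf init then falls back to the OpenRouter starter config

print(first_visible_key() or "no supported key visible")
```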
Provider keys are not stored in the config. For local cross-project use, link a machine-local dotenv file that stays outside this public package:
```bash
mpf setup --env-file /path/to/private/.env
```
Process env vars still win over linked dotenv values, which keeps CI and production behavior compatible with standard secret injection.
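The precedence rule is worth seeing concretely. A sketch of the merge order, assuming a plain `KEY=value` dotenv format; the parsing here is illustrative, not how ModelPreflight reads the linked file:

```python
import os
from pathlib import Path

def resolve_env(dotenv_path: str) -> dict[str, str]:
    """Linked dotenv values fill gaps; process env vars always win."""
    merged: dict[str, str] = {}
    for raw in Path(dotenv_path).read_text().splitlines():
        line = raw.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            merged[key.strip()] = value.strip()
    merged.update(os.environ)  # process environment overrides dotenv values
    return merged
```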
Provider setup is discoverable from the CLI:
```bash
mpf providers list
mpf providers guide nvidia
mpf providers guide openrouter
mpf providers test nvidia
mpf providers test openrouter
```
NVIDIA Build / NIM is the primary high-capability open/open-weight endpoint option. OpenRouter is still the lowest-friction discovery option because one API key can route to many model providers through an OpenAI-compatible API.
Use either primary path:

```bash
mpf setup --env-file /path/to/private/.env
mpf doctor --group free_reasoning --live
```

```bash
mpf init --provider openrouter
export OPENROUTER_API_KEY=...
mpf doctor --provider openrouter --live
```
For agent and CI readiness checks, make sure provider keys are visible in the agent process environment or through a linked machine-local secret source, then use JSON diagnostics:
```bash
mpf doctor --group free_reasoning --json
```
status: "ok" means config and required keys are present. error_code distinguishes
MISSING_REQUIRED_ENV, GROUP_NOT_FOUND, and disabled matching provider/group cases.
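A readiness gate can branch on exactly those fields. A minimal sketch, assuming the JSON includes the `status` and `error_code` keys named above (the real payload may carry more detail):

```python
import json
import subprocess

# Ask for machine-readable diagnostics and branch on the documented fields.
proc = subprocess.run(
    ["mpf", "doctor", "--group", "free_reasoning", "--json"],
    capture_output=True, text=True,
)
report = json.loads(proc.stdout)

if report.get("status") == "ok":
    print("config and required keys present")
elif report.get("error_code") == "MISSING_REQUIRED_ENV":
    print("export the missing provider key before live checks")
elif report.get("error_code") == "GROUP_NOT_FOUND":
    print("check the group name against `mpf models`")
else:
    print(f"not ready: {report}")
```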
| Provider | Best for | Env var | Setup |
|---|---|---|---|
| NVIDIA Build / NIM | Primary high-capability open/open-weight endpoint pool | `NVIDIA_NIM_API_KEY` | API keys |
| OpenRouter | One-key first run with broad model access | `OPENROUTER_API_KEY` | Authentication docs |
| Groq | Fast repeated calls after first-run setup works | `GROQ_API_KEY` | Groq console |
| Cerebras | Fast inference experiments when current dev-tier limits fit | `CEREBRAS_API_KEY` | Cerebras inference docs |
| Mistral | First-party Mistral model-family smoke checks | `MISTRAL_API_KEY` | Mistral API keys |
Secondary/overflow pool to add manually once the primary pool works: Google Gemini/Gemma,
Cloudflare Workers AI, GitHub Models, Hugging Face Inference Providers, and SambaNova. These are
documented in `docs/PROVIDER_PRESETS.md`, but not packaged as
first-run presets yet because auth shape, model IDs, and free/dev limits are more account-specific.
The default config creates logical groups, then maps each group to one or more LiteLLM deployments:
```yaml
router:
  num_retries: 1
  timeout_seconds: 60
  default_group: free_reasoning
  audit_jsonl: null
  artifacts_dir: ~/.cache/model-preflight/artifacts

deployments:
  - name: nvidia_nim_nemotron_3_super
    provider: nvidia
    group: free_reasoning
    model: nvidia_nim/nvidia/nemotron-3-super-120b-a12b
    api_key_env: NVIDIA_NIM_API_KEY
    enabled: true
    required: true
    status: best_effort
    setup_url: https://build.nvidia.com/settings/api-keys
    rpm: 10
    tier: reasoning
```
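If you want to see which deployments a group resolves to without going through the CLI, the YAML is plain enough to read directly. A sketch using PyYAML (an assumption, not a ModelPreflight dependency claim; the supported loader is `load_config()` in the library API below):

```python
from collections import defaultdict

import yaml  # PyYAML, assumed installed for this sketch

with open("model-preflight.yaml") as f:
    config = yaml.safe_load(f)

# Mirror the config shape above: each deployment carries a `group` alias.
groups: defaultdict[str, list[str]] = defaultdict(list)
for dep in config.get("deployments", []):
    if dep.get("enabled", True):
        groups[dep["group"]].append(dep["model"])

for group, models in sorted(groups.items()):
    print(f"{group}: {models}")
```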
## Provider preset discipline
Provider presets are best-effort starter data, not authoritative claims about free availability.
- user-local config wins over bundled defaults
- `mpf doctor` fails fast when required keys are missing
- optional/disabled providers do not block first-run checks
- live checks should be opt-in in CI
- endpoint names, quotas, pricing, and behavior can change without this repo knowing
See `docs/PROVIDER_PRESETS.md` for the preset rules.
## Custom config path

```bash
mpf init --config ./model-preflight.yaml
mpf doctor --config ./model-preflight.yaml
mpf doctor --config ./model-preflight.yaml --live

export MODEL_PREFLIGHT_CONFIG="$PWD/model-preflight.yaml"
mpf models
```
Use environment variables for secrets. Do not commit provider keys.
If you use 1Password, see `docs/secrets/1password.md`
for linked dotenv and `op run` examples. Run `mpf init --provider <provider>` once to create
the machine-local provider config.
## Smoke tests
Smoke cases are JSONL files owned by the project that is doing the prototype work.
{"id":"basic-ok","prompt":"Return only: ok","expected_substrings":["ok"]}
{"id":"avoid-word","prompt":"Answer yes without using the word nope","forbidden_substrings":["nope"]}
Run them with:
```bash
mpf run
# or:
mpf run path/to/smoke_cases.jsonl
```
`mpf run` prints JSON results and exits non-zero if any case fails.
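That exit-code contract makes gating a script on smoke results trivial. A sketch that relies only on the documented non-zero exit, not on any ModelPreflight API:

```python
import subprocess
import sys

# Non-zero exit from `mpf run` means at least one smoke case failed.
proc = subprocess.run(["mpf", "run"], capture_output=True, text=True)
if proc.returncode != 0:
    print("smoke cases drifted:", proc.stdout, file=sys.stderr)
    sys.exit(1)
print("all smoke cases passing")
```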
### Case fields
Each smoke case supports:
- `id`: stable case identifier
- `prompt`: user prompt sent to the configured model group
- `group`: optional model group override
- `expected_substrings`: strings that must appear in the answer
- `forbidden_substrings`: strings that must not appear in the answer
These checks are intentionally simple. They are meant to catch obvious routing, prompt, and regression problems before you spend time on heavier evals.
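The scoring intent fits in a few lines. An illustrative sketch of the substring checks described above, not ModelPreflight's actual scorer:

```python
def score_case(answer: str, case: dict) -> list[str]:
    """Return failure strings for one smoke case; empty means passed."""
    failures = []
    for s in case.get("expected_substrings", []):
        if s not in answer:
            failures.append(f"missing expected substring: {s!r}")
    for s in case.get("forbidden_substrings", []):
        if s in answer:
            failures.append(f"found forbidden substring: {s!r}")
    return failures

# Against the basic-ok case above:
print(score_case("ok", {"expected_substrings": ["ok"]}))  # []
```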
## Ask
`mpf ask` sends one prompt through one configured model group and prints the model text to stdout.
Plain text streams as tokens arrive. Progress and route metadata go to stderr by default, so stdout
stays clean for pipes and command substitution. In an interactive terminal, stderr status lines are
styled and separated from the answer by a blank line. Use `--quiet` to suppress all stderr status
lines, or `--hide-route` to hide provider/model route metadata while keeping progress visible. JSON
output is buffered so it remains valid JSON and includes route metadata unless `--hide-route` is set.
mpf ask "Write a poem about how ModelPreflight is the easiest way to use free LLM endpoints."
mpf ask "Write a shell-safe tagline" --quiet
mpf ask "Which model route is this using?"
mpf ask "Keep route metadata hidden, but show progress" --hide-route
mpf ask "Summarize why free endpoint preflight matters" --no-stream
mpf ask "Return JSON only: {\"ok\": true}" --group free_reasoning --json
Use `ask` for quick manual checks, demos, and shell snippets. Use `run` when the same prompt should
become a repeatable smoke case.
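Because the answer is all that lands on stdout, capturing it from another program needs no filtering. A sketch using only the flags documented above:

```python
import subprocess

# --quiet suppresses stderr status lines; stdout carries only the answer.
tagline = subprocess.run(
    ["mpf", "ask", "Write a shell-safe tagline", "--quiet"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(f"captured: {tagline}")
```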
## Pro Mode

`mpf pro` fans out a one-off prompt, then synthesizes a final answer through a judge group.
mpf pro "Use fanout plus synthesis to choose a robust JSON schema strategy for extracting renewal terms from messy SaaS contracts. Return the final schema, validation rules, and the main failure mode." --n 8
Defaults:
| Option | Default | Role |
|---|---|---|
| `--n` | `8` | number of sampled answers |
| `--sample-group` | `free_fast` | fanout group |
| `--judge-group` | `free_reasoning` | synthesis group |
### Cost and quota note

Fanout multiplies live provider calls. Keep `--n` low while testing, use restricted provider keys where available, and review provider dashboards when running against paid endpoints.
ModelPreflight records audit rows for live calls, but it does not enforce provider billing limits beyond your configured routing and provider-side controls.
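A rough call budget helps before pointing fanout at paid endpoints. The one-synthesis-call figure below is an assumption; the actual judge-pass cost depends on routing:

```python
def pro_call_budget(n: int, judge_calls: int = 1) -> int:
    """Estimate live provider calls for one `mpf pro` run: samples + synthesis."""
    return n + judge_calls

for n in (2, 4, 8):
    print(f"--n {n}: ~{pro_call_budget(n)} live calls")
```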
## Library usage

```python
from model_preflight import ModelGateway, load_config, pro_mode

gateway = ModelGateway(load_config())
print(gateway.text("Return only: ok", group="free_reasoning"))

result = pro_mode(gateway, "Solve this toy puzzle", n=8)
print(result["final"])
```
The library API is intentionally thin:

- `load_config()` reads the same machine-local config as the CLI
- `ModelGateway` wraps LiteLLM Router with stable group aliases and audit logging
- `pro_mode()` runs fanout plus synthesis for one-off prototype prompts
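Building on the Pro Mode output shape shown earlier, a short sketch that filters to successful candidates before trusting the synthesized answer (the `candidates`, `ok`, and `final` keys are the ones in that output shape):

```python
from model_preflight import ModelGateway, load_config, pro_mode

gateway = ModelGateway(load_config())
result = pro_mode(gateway, "Solve this toy puzzle", n=4)

# Candidates carry per-sample results; `ok` marks routes that answered.
usable = [c for c in result["candidates"] if c["ok"]]
print(f"{len(usable)}/{len(result['candidates'])} samples usable")
print(result["final"])
```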
## Audit artifacts

By default, ModelPreflight writes audit logs under:

```text
~/.cache/model-preflight/artifacts/audit.jsonl
```
Each live call should be traceable enough to debug provider drift:
- timestamp
- logical group
- resolved provider/model when returned by the provider
- prompt or case metadata
- latency
- token usage when available
- response id when available
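When you need to answer "which model answered this?" after the fact, the JSONL log can be scanned directly. A sketch assuming the default path above; the per-row key names are an assumption based on this field list:

```python
import json
from pathlib import Path

audit = Path("~/.cache/model-preflight/artifacts/audit.jsonl").expanduser()

# One JSON object per live call; tolerate keys that may be absent per row.
for line in audit.read_text().splitlines():
    row = json.loads(line)
    print(row.get("timestamp"), row.get("group"), row.get("model"), row.get("latency"))
```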
See `docs/EVAL_PROVENANCE.md` for provenance expectations.
## Repo adapters

| Path | Purpose |
|---|---|
| `examples/autoharness_provider.py` | Drop-in provider wrapper for AutoHarness-style experiments |
| `examples/gpt_pro_mode_refactor.py` | Example refactor from single-provider Pro Mode to shared routing |
| `examples/node_hook_example.mjs` | CLI bridge for JS or agent-hook projects |
| `skills/model-preflight/SKILL.md` | Optional coding-agent skill for consistent usage |
## Command reference

```bash
mpf init --provider openrouter
mpf doctor --live
mpf demo
mpf ask "write a tiny launch blurb for ModelPreflight"
mpf init-project
mpf run
mpf providers list
mpf providers guide openrouter
mpf models
mpf pro "solve this toy task" --n 8
```
## Contributor workflow

```bash
uv sync
uv run pytest
uv run ruff check .
uv run mypy src
```
Package metadata lives in `pyproject.toml`. Tests live under `tests/`.
## Design principles

- Global provider routing lives in the path printed by `mpf paths`.
- Project-local checks define cases, scoring, fixtures, and artifacts.
- LiteLLM handles provider-specific API quirks.
- ModelPreflight adds stable aliases, lightweight failover, and audit logs.
- Deterministic tests should run before live provider checks.
For the product scope and non-goals, see `docs/NORTHSTAR.md`.