LLM cost optimization gateway — route prompts locally, compact context for the rest, track real savings.
Project description
CodeContext
Reduce your LLM costs by 40–70% automatically.
CodeContext is a drop-in gateway that sits between your code and the LLM API. It routes trivial prompts to a local model (or skips the call entirely), compacts the context it does send, and tracks exactly how much money it saved you — per request, per day, per model. Typical savings of 40–70% depend on workload mix; the built-in benchmark harness reports the number for your workload specifically.
your app ─────► CodeContext gateway ─────► OpenAI / Anthropic / Ollama / any OpenAI-compatible endpoint
│
├── routes trivial prompts locally (0 external tokens)
├── strips irrelevant files from context (−40–80% input tokens)
├── caps per-request cost (hard spend rail)
└── logs real tokens + real $ to SQLite (auditable savings)
Why it exists
If you're paying OpenAI or Anthropic for code-related work, three things are usually true:
- You're sending too much context. Most requests don't need the whole repo — they need 3–5 files. CodeContext ranks chunks with BM25 + embeddings and packs only what matters into a token budget you set.
- Many prompts don't need a frontier model. "Format this dict" doesn't need GPT-4. The gateway classifies every request and routes cheap ones to a local model or skips the external call entirely.
- You don't actually know what you're spending. Provider dashboards are lagged and aggregate. CodeContext writes every call to a local SQLite ledger with real token counts and real dollar cost.
It's built for engineers shipping AI-assisted tools, not for dashboard people.
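The routing idea in the second bullet can be sketched as a tiny heuristic classifier. This is an illustration only, not CodeContext's actual classifier (whose internals aren't shown here); the marker list and length cutoff are invented for the example:

```python
# Hypothetical sketch of prompt routing, NOT CodeContext's real classifier.
# Mechanical, low-stakes prompts go local; everything else escalates.
TRIVIAL_MARKERS = ("format", "rename", "sort imports", "add type hints")

def choose_route(prompt: str) -> str:
    """Return 'local' for mechanical edits, 'external' for real reasoning."""
    p = prompt.lower()
    if any(marker in p for marker in TRIVIAL_MARKERS) and len(p) < 200:
        return "local"    # 0 external tokens
    return "external"     # pack context, call the frontier model

print(choose_route("Format this dict"))                       # local
print(choose_route("Why does /login deadlock under load?"))   # external
```

The real classifier also weighs cost caps and retrieval confidence, but the shape of the decision is the same: a cheap check in front of every expensive call.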
Install
pip install promptrouter
Optional extras:
pip install promptrouter[embeddings] # semantic retrieval (sentence-transformers)
pip install promptrouter[tokens] # precise token counts (tiktoken)
pip install promptrouter[all] # both of the above
CodeContext itself has one required dependency (pathspec). Everything else is lazy-loaded and optional. The gateway speaks to OpenAI and Anthropic over plain HTTP — no SDK install needed.
Requires Python 3.10+.
Quickstart (60 seconds)
# 1. From the root of the project you want to index:
cd ~/code/my-project
# 2. Drop in a config (see example at .codecontext.toml in this repo):
cat > .codecontext.toml <<'EOF'
[llm_client]
enabled = true
provider = "openai"
model = "gpt-4o-mini"
api_key_env = "OPENAI_API_KEY"
EOF
# 3. Set your API key:
export OPENAI_API_KEY="sk-..."
# 4. Index the project:
codecontext index-project --root .
# 5. Ask a question. CodeContext builds a minimal context pack, routes the
# request, calls the model, and logs the real cost:
codecontext auto-start-request --root . \
--goal "where do we validate user email addresses?" \
--top-k 6 --token-budget 1500
# 6. See what you actually spent and saved:
codecontext sales-summary --window 7d
You'll get something like:
7-day window: 128 requests, 41 routed local, 87 to external.
External tokens sent: 94,312 (vs 312,447 naive baseline, −70%).
Spend: $0.72. Estimated savings vs full-context baseline: $2.31.
How it works (the short version)
Every request passes through five stages:
- Scan & index — Walks your project (respecting .gitignore), extracts symbols, builds a BM25 index and optional semantic embeddings.
- Classify — Decides whether the prompt needs external reasoning at all, or can be answered by a local model / canned route. This is where the biggest savings come from.
- Pack context — For prompts that do go external, assembles the top-K most relevant chunks into a token budget you control. A novelty penalty avoids duplicate content; a symbol body bonus keeps full function definitions together.
- Call the model — Speaks OpenAI, Anthropic, Ollama, or any OpenAI-compatible endpoint over HTTP. Caps per-request cost at max_escalation_cost_per_request USD (default $0.08).
- Log & learn — Writes real token counts and dollar cost to .codecontext/data/codecontext.db. Every report in the CLI is backed by this ledger — not estimates, not dashboards.
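The packing stage amounts to a greedy fill under the token budget. The sketch below is illustrative only: the word-count token approximation and the hard-coded relevance scores are stand-ins for the real tokenizer and the BM25 + embedding scorer.

```python
# Simplified sketch of budgeted context packing. In the real pipeline the
# scores come from BM25 + embeddings; here they are supplied directly.
def pack_context(chunks: list[tuple[str, float]], token_budget: int) -> list[str]:
    """Greedily pack the highest-scoring chunks until the budget is spent.

    chunks: (text, relevance_score) pairs. Token cost is approximated as
    word count, a stand-in for a real tokenizer such as tiktoken.
    """
    packed, used = [], 0
    for text, _score in sorted(chunks, key=lambda c: c[1], reverse=True):
        cost = len(text.split())
        if used + cost > token_budget:
            continue  # skip chunks that would overflow the budget
        packed.append(text)
        used += cost
    return packed

chunks = [
    ("def validate_email(addr): ...", 0.92),
    ("# unrelated logging helper", 0.15),
    ("EMAIL_RE = re.compile(r'[^@]+@[^@]+')", 0.81),
]
print(pack_context(chunks, token_budget=8))  # the two email-related chunks
```

Lowering the budget trades context for cost, which is why default_context_budget_tokens is the first knob to tune.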
Configuration
CodeContext looks for .codecontext.toml in the project root. A minimal file:
[llm_client]
enabled = true
provider = "openai" # openai | anthropic | ollama | openai_compatible
model = "gpt-4o-mini"
api_key_env = "OPENAI_API_KEY"
See .codecontext.toml in this repo for a fully commented example covering OpenAI, Anthropic, Ollama, and self-hosted / OpenAI-compatible endpoints (vLLM, LM Studio, Groq, Together, DeepSeek, RunPod).
Secrets rule: API keys live in environment variables, not in config files. api_key_env names the variable; CodeContext reads it at call time. Keys never touch the config file, git history, or crash dumps.
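The api_key_env indirection is plain environment lookup at call time. A minimal sketch of the pattern (illustrative, not CodeContext's source; the resolve_api_key helper is invented for this example):

```python
import os

# Illustrative sketch of the api_key_env pattern: the config stores only
# the NAME of the environment variable, never the key itself.
def resolve_api_key(llm_client_config: dict) -> str:
    env_name = llm_client_config["api_key_env"]
    key = os.environ.get(env_name)
    if not key:
        raise RuntimeError(f"environment variable {env_name} is not set")
    return key

os.environ["OPENAI_API_KEY"] = "sk-demo"   # demo only; set this in your shell
print(resolve_api_key({"api_key_env": "OPENAI_API_KEY"}))  # sk-demo
```

Because only the variable name is serialized, rotating a key never requires touching (or re-committing) the config file.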
Key config knobs
| Field | Default | What it does |
|---|---|---|
| default_context_budget_tokens | 4000 | Max tokens CodeContext packs into context for external calls. Lower = cheaper, less context. |
| max_escalation_cost_per_request | 0.08 | Hard per-request spend cap in USD. Above this, the request is routed locally or blocked. |
| enable_embeddings | true | Use semantic retrieval on top of BM25. Requires [embeddings] extra. Falls back gracefully if missing. |
| llm_client.enabled | false | Gateway returns the outbound payload without calling the model. Useful for dry runs / CI. |
| llm_client.max_tokens | 1024 | Max output tokens. |
| llm_client.temperature | 0.2 | Low default — CodeContext is typically used for code reasoning, not creative generation. |
CLI reference
All commands take --root <path> (defaults to .) and emit JSON to stdout.
Indexing
codecontext scan-project # walk the tree, report what's indexable
codecontext index-project # build the full index
codecontext refresh-changed-files # fast re-index of modified files
Retrieval
codecontext search-project --query "rate limiter" --top-k 8
codecontext prepare-context-pack --goal "add retries to the HTTP client" --top-k 6 --token-budget 1500
Request execution (the main flow)
# End-to-end: classify, pack context, call model, log cost.
codecontext auto-start-request --goal "why is /login slow?" --top-k 6 --token-budget 1500
# Manual flow: pack context, get outbound payload, call your own client,
# then feed the response back in.
codecontext route-request --goal "..."
codecontext handle-remote-response --response-file reply.txt
Reporting
codecontext sales-summary --window 7d # human-readable savings summary
codecontext metrics-report --window 7d --top-n 10 # machine-readable metrics
codecontext usage-ledger-report --window 30d # every call, every token, every dollar
codecontext benchmark-report # run the built-in savings benchmark
Patching (experimental)
codecontext apply-patch --path src/foo.py \
--old-text "def broken(): pass" \
--new-text "def fixed(): return 1" \
--dry-run
codecontext rollback-patch --patch-id 42
HTTP API
For integration into apps, run the gateway as a local HTTP server:
codecontext serve-api --host 127.0.0.1 --port 8787
Then:
# Health
curl http://127.0.0.1:8787/health
# Route or run a prompt
curl -X POST http://127.0.0.1:8787/route-or-run \
-H "Content-Type: application/json" \
-d '{
"prompt": "check logs for the local-first runtime",
"options": { "top_k": 4, "token_budget": 1200 }
}'
# 7-day savings summary
curl "http://127.0.0.1:8787/sales-summary?window=7d"
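From Python, /route-or-run is a plain JSON POST; no client library is needed. A standard-library sketch (the endpoint and fields match the curl example above; build_payload and route_or_run are helper names invented here, and the POST assumes a gateway started with serve-api is running):

```python
import json
import urllib.request

def build_payload(prompt: str, top_k: int = 4, token_budget: int = 1200) -> dict:
    """Request body for /route-or-run, matching the curl example above."""
    return {"prompt": prompt,
            "options": {"top_k": top_k, "token_budget": token_budget}}

def route_or_run(prompt: str, base_url: str = "http://127.0.0.1:8787") -> dict:
    """POST to a locally running gateway (codecontext serve-api)."""
    req = urllib.request.Request(
        f"{base_url}/route-or-run",
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

print(build_payload("check logs for the local-first runtime"))
# route_or_run(...) needs the server up, so it is not called here.
```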
See codecontext/PRODUCT_API.md for the full endpoint reference, request/response fields, and production-readiness checklist (auth, rate limits, and tenant isolation are currently the user's responsibility — the server is a local-demo surface, not a multi-tenant SaaS).
Python API
For finer-grained use inside your own code:
from pathlib import Path
from codecontext.config import AppConfig
from codecontext.gateway import CodeContextGateway
from codecontext.executor import AutoExecutor
config = AppConfig(Path("."))
gateway = CodeContextGateway(config)
executor = AutoExecutor(gateway)
# Index once:
from codecontext.summaries import SummaryManager
from codecontext.metrics import Metrics
SummaryManager(config).index_project(metrics=Metrics())
# Then route requests through the gateway:
result = executor.start(
goal="where is the rate limiter implemented?",
top_k=6,
token_budget=1500,
)
print(result["chosen_route"], result["cost_estimate"])
The gateway returns a structured dict: chosen_route, class_reason, cost_estimate, estimated_savings_vs_external, run_id, and more. Every field is documented in codecontext/PRODUCT_API.md.
What "40–70% savings" actually means
CodeContext ships with a benchmark harness that compares its routing + packing against a naive baseline ("send the whole file / whole prompt to the frontier model") on a fixed dataset of code-reasoning tasks. On that dataset, the reduction in external tokens is 40–70% depending on task mix — heavily local-intent workloads score higher, heavy reasoning workloads score lower.
Your number will vary. To measure it on your own workload:
codecontext benchmark-run --dataset default --runs 3
codecontext benchmark-sales-summary
The ledger behind every report is real: real tokens reported by the provider, real dollar cost from the published price tables. No extrapolation, no estimates pretending to be measurements.
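The headline percentage is simple arithmetic over that ledger. Using the sample numbers from the quickstart summary above (the function is illustrative, not CodeContext's reporting code):

```python
def token_savings_pct(sent: int, naive_baseline: int) -> float:
    """Percent reduction in external tokens vs the naive full-context baseline."""
    return 100.0 * (1 - sent / naive_baseline)

# Numbers from the sample 7-day summary: 94,312 tokens sent vs 312,447 naive.
pct = token_savings_pct(94_312, 312_447)
print(f"-{pct:.0f}%")  # -70%
```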
Licence
MIT — see LICENSE. You can use CodeContext freely in commercial and private projects. Paid tiers bundle support, priority fixes, and proprietary benchmark datasets; the code itself is and will remain MIT.
Support & links
- Issues: github.com/batish52/codecontext/issues
- Changelog: CHANGELOG.md
- Sister project: llm-costlog — the lightweight cost-logging library that funnels into CodeContext.
File details
Details for the file promptrouter-0.1.1.tar.gz.
File metadata
- Download URL: promptrouter-0.1.1.tar.gz
- Upload date:
- Size: 111.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.8
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 59a76ab356e1c4fb87c03ab66d42a0dc92960d92ad65d1b3590e4c67455cad2f |
| MD5 | a68013195f31522a2d256519fb7bb2f7 |
| BLAKE2b-256 | 5f94f6ca62a50288b79bf9361108ebad88e287f10e8d79d5a6597bc9d88e83d8 |
File details
Details for the file promptrouter-0.1.1-py3-none-any.whl.
File metadata
- Download URL: promptrouter-0.1.1-py3-none-any.whl
- Upload date:
- Size: 121.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.8
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 72bfd9bedd482138b1d9792cad3dbdecf1ab7624b82454c31310132ae30b0faa |
| MD5 | 173e4d73d4b8dd99d09d3f0bff6f9219 |
| BLAKE2b-256 | 2925cb143e28f9a08d9aa51f61831a653d9d06a265376e044bf617e35e03a320 |