
llm-devproxy

Requires Python 3.10+ · MIT License

LLM development debug layer — every API call recorded, nothing lost.

A local debug layer that solves the common pain points of LLM app development.

  • Auto-records every API call — nothing is ever lost
  • Cache eliminates redundant costs — same requests return from DB
  • Prevents cost explosions — mock responses when limit is reached
  • Rewind like Git — "go back to step 3 and try again" in seconds

Install

pip install llm-devproxy                  # minimal
pip install "llm-devproxy[openai]"        # with OpenAI
pip install "llm-devproxy[anthropic]"     # with Anthropic
pip install "llm-devproxy[gemini]"        # with Gemini
pip install "llm-devproxy[proxy]"         # with proxy server
pip install "llm-devproxy[all]"           # everything

Usage — Library

OpenAI

import openai
from llm_devproxy import DevProxy

proxy = DevProxy(daily_limit_usd=1.0)
proxy.start_session("my_agent")

# Just wrap your existing client
client = proxy.wrap_openai(openai.OpenAI(api_key="sk-..."))

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}]
)
print(response.choices[0].message.content)
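
With cache_enabled=True (see Config below), repeating an identical request is served from the local SQLite store instead of the API. A minimal sketch, reusing the client from above:

# Identical request again: answered from .llm_devproxy.db, no API charge
cached = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}]
)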

Anthropic

import anthropic
from llm_devproxy import DevProxy

proxy = DevProxy(daily_limit_usd=1.0)
proxy.start_session("my_agent")

client = proxy.wrap_anthropic(anthropic.Anthropic(api_key="sk-ant-..."))

response = client.messages.create(
    model="claude-sonnet-4-5",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello"}]
)
print(response.content[0].text)

Gemini

import google.generativeai as genai
from llm_devproxy import DevProxy

genai.configure(api_key="AI...")
proxy = DevProxy(daily_limit_usd=1.0)
proxy.start_session("my_agent")

model = proxy.wrap_gemini(genai.GenerativeModel("gemini-1.5-flash"))
response = model.generate_content("Hello")
print(response.text)

Usage — Proxy Server

llm-devproxy start --port 8080 --limit 1.0

Just change base_url in your app — nothing else:

# OpenAI
client = openai.OpenAI(
    api_key="sk-...",
    base_url="http://localhost:8080/openai/v1",
)

# Anthropic
client = anthropic.Anthropic(
    api_key="sk-ant-...",
    base_url="http://localhost:8080/anthropic/v1",
)
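
For apps you cannot edit, the official openai Python client also reads the OPENAI_BASE_URL environment variable, and the anthropic client reads ANTHROPIC_BASE_URL, so exporting those (e.g. OPENAI_BASE_URL=http://localhost:8080/openai/v1) points existing code at the proxy without touching it.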

CLI

# List recent sessions
llm-devproxy history

# Show all steps in a session
llm-devproxy show my_agent

# Search through recorded prompts
llm-devproxy search "keyword"

# Rewind to step 3 (original history preserved)
llm-devproxy rewind my_agent --step 3

# Rewind and start a new branch
llm-devproxy rewind my_agent --step 3 --branch new_idea

# Show cost stats
llm-devproxy stats

Time Travel Use Cases

Resume an agent from the middle

proxy = DevProxy()

# Rewind yesterday's run to step 8
proxy.rewind("my_agent", step=8)

# Tweak the prompt and re-run → recorded as a new branch
client = proxy.wrap_openai(openai.OpenAI())
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Improved prompt"}]
)
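
The CLI equivalent of the rewind call above is llm-devproxy rewind my_agent --step 8 (see CLI above).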

Find something from days ago

llm-devproxy search "approach A"
# → session=my_agent, step=5

llm-devproxy rewind my_agent --step 5 --branch "revisit"

Zero API cost in CI/CD

# Same requests return from SQLite cache
# No API charges in GitHub Actions
proxy = DevProxy(cache_enabled=True)
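
A sketch of what this can look like in a test suite. The test itself is illustrative and assumes the recorded .llm_devproxy.db is available in CI (checked into the repo or restored by a cache step); only DevProxy, start_session, and wrap_openai come from the API shown above:

import openai
from llm_devproxy import DevProxy

proxy = DevProxy(cache_enabled=True)
proxy.start_session("ci_smoke_test")
client = proxy.wrap_openai(openai.OpenAI())

def test_summary_prompt_returns_text():
    # Recorded once during a local run; CI replays the identical request
    # from the cache, so the assertion runs with zero API spend.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Summarize: hello world"}],
    )
    assert response.choices[0].message.content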

Config

proxy = DevProxy(
    db_path=".llm_devproxy.db",   # SQLite path
    daily_limit_usd=1.0,          # daily cost limit
    session_limit_usd=None,       # per-session limit (optional)
    on_exceed="mock",             # "mock" or "block"
    cache_enabled=True,           # serve repeated requests from the DB
    compress_after_days=30,       # compress records older than N days
)
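
For example, a hard per-session budget for a one-off experiment, using only the options above (a sketch; per the option name, "block" is assumed to fail the call instead of mocking it):

# Stop outright, rather than mock, once this session has spent $0.25
proxy = DevProxy(
    daily_limit_usd=5.0,
    session_limit_usd=0.25,
    on_exceed="block",
)
proxy.start_session("risky_experiment")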

All data stays local

Everything is stored in .llm_devproxy.db (SQLite) on your machine. Nothing is sent to any external server.
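
Because it is plain SQLite, you can inspect the recordings yourself with the standard library. A sketch that makes no assumptions about the schema; it simply lists whatever tables the package created:

import sqlite3

con = sqlite3.connect(".llm_devproxy.db")
tables = con.execute(
    "SELECT name FROM sqlite_master WHERE type='table'"
).fetchall()
print(tables)  # the package's own tables; schema intentionally not assumed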


Roadmap

  • Phase 1: Cache, cost guard, auto-record everything
  • Phase 2: Proxy server (OpenAI/Anthropic/Gemini compatible), CLI
  • Phase 3: Rewind, branches, tags, memos
  • Phase 4: Semantic cache
  • Phase 5: Web UI (history browser, cost dashboard)
  • Phase 6: Team sharing (cloud edition)

Japanese README

llm-devproxy (Japanese)

A local debug layer that solves all the common pain points of LLM app development.

  • Auto-records every API call — nothing is ever lost
  • Cache eliminates redundant costs — same requests return from DB
  • Prevents cost explosions — mock responses when limit is reached
  • Rewind like Git — "go back to step 3 and try again" in seconds

For detailed usage, see the English section above (the content is identical).


License

MIT
