Local debugger for LLM API calls — waterfall breakdown of TTFT, cost, and token speed

Project description

llm-scope 🔭

A local-first LLM proxy that shows you exactly where your calls are slow. No Node.js. No Docker. No account.

Like RelayPlane... but for Python devs. Zero data leaves your machine.

Are your API calls feeling sluggish, but you don't know if it's the DNS, TTFT, or just the generation? llm-scope intercepts your API requests and gives you a sub-millisecond accurate waterfall breakdown right in your browser.

Dashboard Preview

Why llm-scope?

  • Python Native & Zero Config: pip install llm-scope-cli, change one line (base_url), and you're done.
  • Prompt Cache Analytics: Wondering if you should use deepseek-v4-flash or deepseek-v4-pro? llm-scope visualizes the TTFT difference and accurately calculates your Prompt Cache Hit Savings 💰. No more guesswork on your API bill.
  • Microsecond-Precision Waterfall: Pinpoint whether latency comes from the TCP handshake (connect), prompt processing (TTFT), or decoding (generation); a client-side sketch follows this list.
  • Physical Isolation: We never upload your prompts to a cloud service. Unlike cloud tools (Helicone is now in maintenance mode post-acquisition), all your data is stored in a plain SQLite file locally at ~/.local/share/llm-scope/calls.db.
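
As a rough client-side analogue of the breakdown the proxy records, here is a minimal sketch that times TTFT and generation with time.perf_counter (assuming the openai package is installed and the proxy is running; the connect phase can only be measured accurately inside the proxy itself):

import time
from openai import OpenAI

client = OpenAI(api_key="sk-...", base_url="http://localhost:7070/v1")

t_start = time.perf_counter()
stream = client.chat.completions.create(
    model="deepseek-v4-pro",
    messages=[{"role": "user", "content": "ping"}],
    stream=True,
)

t_first = None
for chunk in stream:
    if t_first is None:
        t_first = time.perf_counter()  # first chunk arrives: TTFT ends here

t_done = time.perf_counter()
t_first = t_first or t_done  # guard against an empty stream
print(f"TTFT:       {(t_first - t_start) * 1000:.2f} ms")
print(f"generation: {(t_done - t_first) * 1000:.2f} ms")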

Installation

pip install llm-scope-cli

Quick Start

  1. Start the proxy and local dashboard:
llm-scope start

The dashboard will automatically open at http://localhost:7070. Press Ctrl+C to stop.

  2. Route your Python code through the scope (a quick smoke test follows these commands):
export OPENAI_BASE_URL=http://localhost:7070/v1
export DEEPSEEK_API_KEY=sk-...
python your_script.py
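
The openai Python SDK (v1 and later) picks up OPENAI_BASE_URL from the environment automatically, so the script itself needs no edits. A minimal sketch of a routing smoke test, assuming the openai package is installed and the variables above are exported:

import os
from openai import OpenAI

# No base_url argument: the SDK reads OPENAI_BASE_URL from the environment.
client = OpenAI(api_key=os.environ["DEEPSEEK_API_KEY"])
print(client.base_url)  # expect: http://localhost:7070/v1/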

Popular Target Workflows

🏎️ DeepSeek V4 Prompt Cache Savings Tracking

DeepSeek V4 introduces massive price cuts for cached contexts, but it's hard to know exactly how much you're saving. llm-scope automatically parses prompt_cache_hit_tokens from DeepSeek's usage payload and shows you, in dollars, exactly how much your system prompt cache is saving.

The dashboard header displays your cumulative cache savings in real time — 💰 saved: $2.45 — the kind of number worth screenshotting.
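
The underlying arithmetic is simple: every cached input token is billed at the cheaper cached rate instead of the full input rate. A minimal sketch with hypothetical per-token prices (substitute the real rates from DeepSeek's pricing page):

# Hypothetical prices in $ per token; these are assumptions, not real rates.
PRICE_INPUT_MISS = 0.27 / 1_000_000   # uncached input token (assumed)
PRICE_INPUT_HIT = 0.027 / 1_000_000   # cached input token (assumed)

def cache_savings(usage: dict) -> float:
    """Dollars saved on one call: cached tokens billed at the hit rate
    instead of the miss rate."""
    hit_tokens = usage.get("prompt_cache_hit_tokens", 0)
    return hit_tokens * (PRICE_INPUT_MISS - PRICE_INPUT_HIT)

# Example: 50,000 cached system-prompt tokens on a single call
print(f"${cache_savings({'prompt_cache_hit_tokens': 50_000}):.4f}")  # $0.0122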

💻 Track your Background Cursor Spend

Cursor is incredibly fast, but it sends huge background contexts you might not be aware of. Track exactly which tokens are being consumed, and tag them so they stay separate from the rest of your project's traffic:

  1. Open Cursor Settings.
  2. Go to Models / Advanced.
  3. Set your OpenAI Base URL to http://localhost:7070/tag/cursor/v1.

That's it! Watch the dashboard light up with every autocomplete and Chat request, all neatly labeled with the "cursor" badge.
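
How the tag segment is consumed is an implementation detail, but the idea is that the proxy strips /tag/<name> from the path before forwarding upstream and records <name> alongside the call. A sketch of that parsing (an assumption about the internals, not documented behavior):

import re

# Hypothetical helper: split "/tag/<name>/v1/..." into (tag, upstream_path).
def extract_tag(path: str) -> tuple[str | None, str]:
    m = re.match(r"^/tag/([^/]+)(/v1/.*)$", path)
    if m:
        return m.group(1), m.group(2)
    return None, path

print(extract_tag("/tag/cursor/v1/chat/completions"))
# -> ('cursor', '/v1/chat/completions')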

🐍 OpenAI SDK Drop-in Replacement

Since llm-scope mirrors the OpenAI /v1/chat/completions spec, your existing code needs no changes beyond pointing base_url at the proxy:

from openai import OpenAI

client = OpenAI(
    api_key="your-api-key",
    base_url="http://localhost:7070/v1"  # Or add /tag/project-name/v1
)

# Call as usual – your request will be tracked locally in the dashboard!
response = client.chat.completions.create(
    model="deepseek-v4-pro",
    messages=[{"role": "user", "content": "Benchmark my latency."}],
    stream=True
)

# Consume the stream so the proxy can record the full generation phase.
for chunk in response:
    if chunk.choices:
        print(chunk.choices[0].delta.content or "", end="")

License

MIT License

Download files

Download the file for your platform.

Source Distribution

llm_scope_cli-0.1.0.tar.gz (254.3 kB)


Built Distribution


llm_scope_cli-0.1.0-py3-none-any.whl (32.6 kB)


File details

Details for the file llm_scope_cli-0.1.0.tar.gz.

File metadata

  • Download URL: llm_scope_cli-0.1.0.tar.gz
  • Size: 254.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.5

File hashes

Hashes for llm_scope_cli-0.1.0.tar.gz:

  • SHA256: 0b70e6e583a42b57c8dca2d9a58eb3a1ee3d8b605c45590e7be78f8133d01985
  • MD5: 4beabe12e382724fe0afad7c71fc2077
  • BLAKE2b-256: 2eadddd4053474b4e5c7c029814e4caa8695fb607ddb0128ef4cfc69f910f488


File details

Details for the file llm_scope_cli-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: llm_scope_cli-0.1.0-py3-none-any.whl
  • Size: 32.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.5

File hashes

Hashes for llm_scope_cli-0.1.0-py3-none-any.whl:

  • SHA256: 0a793c5975870e924862b44282cc2c806dbedf83c451679fac1b6eb441f9912c
  • MD5: ac42514e3a9f2a32d3d4479b8bbda23f
  • BLAKE2b-256: 18f98c227ecb046d9d87c2dc770db4d2ae0040f51282a2126a240cb75da22b0a

