
tokentoll

Catch LLM cost changes in code review. Infracost for LLM spend.

CI PyPI version License: MIT Python 3.10+

A CLI tool and GitHub Action that statically analyzes your code for LLM API calls, estimates their cost, and shows you the cost impact of every change -- in your terminal or as a PR comment. Zero runtime dependencies.

The Problem

A single model swap from gpt-4o-mini to gpt-4o increases costs 15x. A new API call in a hot path can add $10,000/month to your bill. These changes hide in normal code review.

tokentoll finds LLM API calls in your code, estimates their cost, and shows you the cost impact of every change -- before it hits production.

Quick Start

pip install tokentoll

# Scan current directory for LLM API calls and their costs
tokentoll scan .

# Show cost impact of your last commit
tokentoll diff HEAD~1

# Compare two branches
tokentoll diff main..feature-branch

GitHub Action

name: LLM Cost Diff
on:
  pull_request:
    paths:
      - "**.py"

permissions:
  pull-requests: write

jobs:
  cost-diff:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - uses: Jwrede/tokentoll@v1
        with:
          calls-per-month: "5000"

What It Detects

| SDK | Patterns | Status |
| --- | --- | --- |
| OpenAI | chat.completions.create, responses.create | Supported |
| Anthropic | messages.create, messages.stream | Supported |
| Google GenAI | models.generate_content | Supported |
| LiteLLM | completion, acompletion | Supported |
| LangChain | ChatOpenAI, ChatAnthropic, init_chat_model | Supported |
| JS/TS SDKs | -- | Planned |

Example Output

tokentoll scan

LLM API Calls Detected
============================================================

File: src/agents/summarizer.py
  Line 42: openai client.chat.completions.create
           Model: gpt-4o | Max tokens: 4096
           Est. cost/call: $0.03 | Monthly (1000 calls/month per call site): $26.50

  Line 78: openai client.chat.completions.create
           Model: gpt-4o-mini | Max tokens: 1000
           Est. cost/call: $0.000301 | Monthly (1000 calls/month per call site): $0.30

--
Total estimated monthly cost: $26.80
  1000 calls/month per call site

tokentoll diff

LLM Cost Diff: main..feature-branch
============================================================

+ ADDED    src/agents/rewriter.py:35
           openai | Model: gpt-4o
           Est. cost/call: $0.03 | Monthly: +$26.50

~ MODIFIED src/agents/summarizer.py:42
           openai | Model: gpt-4o -> gpt-4o-mini
           Est. cost/call: $0.03 -> $0.000301 | Monthly: -$26.20

--
Monthly cost impact: +$0.30
  Added: 1 | Changed: 1 | Removed: 0
  1000 calls/month per call site

How It Works

  Source Code (.py files)
         |
         v
  +-------------+     +------------------+
  | AST Scanner |---->| SDK Detectors    |
  | (ast.parse) |     | OpenAI, Anthropic|
  +-------------+     | Google, LiteLLM  |
                      | LangChain        |
                      +------------------+
                             |
                             v
                      +------------------+
                      | Pricing Engine   |
                      | 2200+ models     |
                      | Auto-cached      |
                      +------------------+
                             |
                 +-----------+-----------+
                 |                       |
                 v                       v
          +-------------+        +--------------+
          | Scan Report |        | Diff Engine  |
          | (costs)     |        | (old vs new) |
          +-------------+        +--------------+
                 |                       |
                 v                       v
          +-------------+        +--------------+
          | Table/JSON  |        | Table/JSON/  |
          |             |        | PR Comment   |
          +-------------+        +--------------+

  1. Parses Python files using the ast module to find LLM API calls
  2. Multi-pass constant propagation resolves model names through variables, os.getenv() fallbacks, class attributes, constructor args, dict contents, and **kwargs unpacking
  3. Looks up pricing from a local cache (sourced from LiteLLM, 2200+ models)
  4. For diff mode: compares calls between two git refs and computes the cost delta
  5. Outputs a cost report as a table, JSON, or GitHub PR comment
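Step 1 can be sketched with the standard library alone. The snippet below is an illustrative toy, not tokentoll's actual scanner (the function name find_llm_calls is made up): it walks the AST, rebuilds dotted attribute chains, and collects literal keyword arguments from matching calls.

```python
import ast

SOURCE = '''
client.chat.completions.create(model="gpt-4o", max_tokens=4096)
'''

def find_llm_calls(tree: ast.Module):
    """Yield (attribute path, literal keyword args) for OpenAI-style calls."""
    for node in ast.walk(tree):
        if not isinstance(node, ast.Call):
            continue
        # Rebuild the dotted chain, e.g. client.chat.completions.create
        parts, target = [], node.func
        while isinstance(target, ast.Attribute):
            parts.append(target.attr)
            target = target.value
        if isinstance(target, ast.Name):
            parts.append(target.id)
        path = ".".join(reversed(parts))
        if path.endswith("chat.completions.create"):
            kwargs = {
                kw.arg: kw.value.value
                for kw in node.keywords
                if kw.arg and isinstance(kw.value, ast.Constant)
            }
            yield path, kwargs

calls = list(find_llm_calls(ast.parse(SOURCE)))
print(calls)  # [('client.chat.completions.create', {'model': 'gpt-4o', 'max_tokens': 4096})]
```

Because detection is purely syntactic, nothing is imported or executed -- which is why the tool has zero runtime dependencies.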

CLI Reference

tokentoll scan [PATH...] [--format table|json|markdown] [--calls-per-month N]
tokentoll diff [REF] [--base REF] [--head REF] [--format table|json|markdown|github-comment]
tokentoll update    # Update bundled pricing data

Pricing Data

Pricing is bundled and works offline. To update to the latest prices:

tokentoll update

Pricing data is sourced from LiteLLM's model_prices_and_context_window.json and covers 2,200+ models across OpenAI, Anthropic, Google, AWS Bedrock, Azure, and more.
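The cost math itself is simple per-token arithmetic. Below is a sketch of how a per-call and monthly estimate could be derived from per-token rates; the PRICING dict and function names are illustrative placeholders, not tokentoll's API, and the rates shown are not current list prices.

```python
# Hypothetical per-token rates (USD), in the shape of LiteLLM's pricing keys.
PRICING = {
    "gpt-4o":      {"input_cost_per_token": 2.5e-06, "output_cost_per_token": 1.0e-05},
    "gpt-4o-mini": {"input_cost_per_token": 1.5e-07, "output_cost_per_token": 6.0e-07},
}

def est_cost_per_call(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimated USD cost of one API call."""
    p = PRICING[model]
    return (prompt_tokens * p["input_cost_per_token"]
            + completion_tokens * p["output_cost_per_token"])

def monthly(model: str, prompt_tokens: int, completion_tokens: int,
            calls_per_month: int = 1000) -> float:
    """Estimated monthly USD cost at a uniform call volume."""
    return calls_per_month * est_cost_per_call(model, prompt_tokens, completion_tokens)

print(est_cost_per_call("gpt-4o", 500, 1000))  # per-call estimate in USD
```

The gap between input and output rates is why max_tokens matters so much to the estimate: the output cap typically dominates the per-call cost.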

Smart Variable Resolution

Real codebases rarely pass model names as string literals. tokentoll's multi-pass constant propagation engine follows:

DEFAULT_MODEL = os.getenv("MODEL", "gpt-4o")

class Config:
    model: str = DEFAULT_MODEL

config = Config()
kwargs = {"model": config.model, "max_tokens": 2000}
client.chat.completions.create(**kwargs)
# tokentoll resolves: model="gpt-4o", max_tokens=2000
  • Variable assignments (MODEL = "gpt-4o")
  • os.getenv() / os.environ.get() fallback values
  • Function default parameters
  • Class attribute defaults
  • Constructor argument propagation
  • Dict literal and subscript contents
  • **kwargs unpacking
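A heavily simplified, single-pass version of the first two bullets can be sketched as follows. This is toy code, not tokentoll's implementation: it maps simple name assignments to literals, treats os.getenv("X", default) as its static fallback, and resolves Name references in call keywords.

```python
import ast

SRC = '''
import os
MODEL = os.getenv("MODEL", "gpt-4o-mini")
client.chat.completions.create(model=MODEL, max_tokens=2000)
'''

def resolve_constants(tree: ast.Module) -> dict:
    """One propagation pass: simple name -> literal value."""
    env = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.Assign) and len(node.targets) == 1 \
                and isinstance(node.targets[0], ast.Name):
            value = node.value
            # os.getenv("X", "fallback") -> take the static fallback value
            if (isinstance(value, ast.Call)
                    and isinstance(value.func, ast.Attribute)
                    and value.func.attr == "getenv"
                    and len(value.args) == 2
                    and isinstance(value.args[1], ast.Constant)):
                env[node.targets[0].id] = value.args[1].value
            elif isinstance(value, ast.Constant):
                env[node.targets[0].id] = value.value
    return env

def resolved_kwargs(tree: ast.Module, env: dict):
    """Yield keyword args of .create(...) calls, resolving names via env."""
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute) \
                and node.func.attr == "create":
            out = {}
            for kw in node.keywords:
                if isinstance(kw.value, ast.Constant):
                    out[kw.arg] = kw.value.value
                elif isinstance(kw.value, ast.Name) and kw.value.id in env:
                    out[kw.arg] = env[kw.value.id]
            yield out

tree = ast.parse(SRC)
print(list(resolved_kwargs(tree, resolve_constants(tree))))
# [{'model': 'gpt-4o-mini', 'max_tokens': 2000}]
```

The real engine runs multiple passes so that values can flow through class attributes, constructor arguments, and dict contents before a call site is resolved.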

Limitations

  • Cannot resolve models loaded from external config files or databases at runtime (these are flagged as dynamic but not priced)
  • Token estimates use a character/4 heuristic unless tiktoken is installed
  • Monthly estimates assume uniform call volume (configurable via --calls-per-month)
  • Python only for now (JS/TS support planned)
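The characters-divided-by-4 fallback from the second bullet is trivial to sketch (the function name is illustrative, and installing tiktoken replaces this heuristic with exact counts):

```python
def estimate_tokens(text: str) -> int:
    """Fallback token estimate: roughly 4 characters per token for English text."""
    return max(1, len(text) // 4)

print(estimate_tokens("a" * 400))  # → 100
```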

License

MIT

