
tokentoll

Catch LLM cost changes in code review. Infracost for LLM spend.

CI PyPI version License: MIT Python 3.10+

A CLI tool and GitHub Action that statically analyzes your code for LLM API calls, estimates their cost, and shows you the cost impact of every change -- in your terminal or as a PR comment. Zero runtime dependencies.

The Problem

A single model swap from gpt-4o-mini to gpt-4o increases costs 15x. A new API call in a hot path can add $10,000/month to your bill. These changes hide in normal code review.

tokentoll finds LLM API calls in your code, estimates their cost, and shows you the cost impact of every change -- before it hits production.

Quick Start

pip install tokentoll

# Scan current directory for LLM API calls and their costs
tokentoll scan .

# Show cost impact of your last commit
tokentoll diff HEAD~1

# Compare two branches
tokentoll diff main..feature-branch

GitHub Action

name: LLM Cost Diff
on:
  pull_request:
    paths:
      - "**.py"

permissions:
  pull-requests: write

jobs:
  cost-diff:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - uses: Jwrede/tokentoll@v1
        with:
          calls-per-month: "5000"

What It Detects

SDK           Patterns                                      Status
OpenAI        chat.completions.create, responses.create    Supported
Anthropic     messages.create, messages.stream             Supported
Google GenAI  models.generate_content                      Supported
LiteLLM       completion, acompletion                      Supported
LangChain     ChatOpenAI, ChatAnthropic, init_chat_model   Supported
JS/TS SDKs    --                                           Planned

Example Output

tokentoll scan

LLM API Calls Detected
============================================================

File: src/agents/summarizer.py
  Line 42: openai client.chat.completions.create
           Model: gpt-4o | Max tokens: 4096
           Est. cost/call: $0.03 | Monthly (1000 calls/month per call site): $26.50

  Line 78: openai client.chat.completions.create
           Model: gpt-4o-mini | Max tokens: 1000
           Est. cost/call: $0.000301 | Monthly (1000 calls/month per call site): $0.30

--
Total estimated monthly cost: $26.80
  1000 calls/month per call site

tokentoll diff

LLM Cost Diff: main..feature-branch
============================================================

+ ADDED    src/agents/rewriter.py:35
           openai | Model: gpt-4o
           Est. cost/call: $0.03 | Monthly: +$26.50

~ MODIFIED src/agents/summarizer.py:42
           openai | Model: gpt-4o -> gpt-4o-mini
           Est. cost/call: $0.03 -> $0.000301 | Monthly: -$26.20

--
Monthly cost impact: +$0.30
  Added: 1 | Changed: 1 | Removed: 0
  1000 calls/month per call site

How It Works

  Source Code (.py files)
         |
         v
  +-------------+     +------------------+
  | AST Scanner |---->| SDK Detectors    |
  | (ast.parse) |     | OpenAI, Anthropic|
  +-------------+     | Google, LiteLLM  |
                      | LangChain        |
                      +------------------+
                               |
                               v
                      +------------------+
                      | Pricing Engine   |
                      | 2200+ models     |
                      | Auto-cached      |
                      +------------------+
                               |
                   +-----------+-----------+
                   |                       |
                   v                       v
            +--------------+        +--------------+
            | Scan Report  |        | Diff Engine  |
            | (costs)      |        | (old vs new) |
            +--------------+        +--------------+
                   |                       |
                   v                       v
            +--------------+        +--------------+
            | Table/JSON   |        | Table/JSON/  |
            |              |        | PR Comment   |
            +--------------+        +--------------+
  1. Parses Python files using the ast module to find LLM API calls
  2. Multi-pass constant propagation resolves model names through variables, os.getenv() fallbacks, class attributes, constructor args, dict contents, and **kwargs unpacking
  3. Looks up pricing from a local cache (sourced from LiteLLM, 2200+ models)
  4. For diff mode: compares calls between two git refs and computes the cost delta
  5. Outputs a cost report as a table, JSON, or GitHub PR comment
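Step 1 can be sketched with nothing but the standard library. The helper below is an illustrative toy, not tokentoll's actual implementation: it walks the AST of a source string, matches OpenAI-style `chat.completions.create` call chains, and pulls out literal `model` and `max_tokens` keyword arguments.

```python
import ast

# Illustrative sketch (not tokentoll's real code): find OpenAI-style
# `*.chat.completions.create(...)` calls and extract literal keyword args.
def find_llm_calls(source: str):
    calls = []
    for node in ast.walk(ast.parse(source)):
        if not (isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute)):
            continue
        # Reconstruct the dotted attribute chain, e.g. client.chat.completions.create
        parts, cur = [], node.func
        while isinstance(cur, ast.Attribute):
            parts.append(cur.attr)
            cur = cur.value
        chain = ".".join(reversed(parts))
        if chain.endswith("chat.completions.create"):
            kwargs = {
                kw.arg: kw.value.value
                for kw in node.keywords
                if kw.arg and isinstance(kw.value, ast.Constant)
            }
            calls.append({"line": node.lineno, **kwargs})
    return calls

src = 'resp = client.chat.completions.create(model="gpt-4o", max_tokens=4096)'
print(find_llm_calls(src))
# [{'line': 1, 'model': 'gpt-4o', 'max_tokens': 4096}]
```

Non-literal arguments (a variable holding the model name, a `**kwargs` splat) are where the constant-propagation passes described below take over.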

CLI Reference

tokentoll scan [PATH...] [--format table|json|markdown] [--calls-per-month N]
tokentoll diff [REF] [--base REF] [--head REF] [--format table|json|markdown|github-comment]
tokentoll update    # Update bundled pricing data

Pricing Data

Pricing is bundled and works offline. To update to the latest prices:

tokentoll update

Pricing data is sourced from LiteLLM's model_prices_and_context_window.json and covers 2200+ models across OpenAI, Anthropic, Google, AWS Bedrock, Azure, and more.
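Entries in that file expose per-token rates (`input_cost_per_token`, `output_cost_per_token`), so a per-call estimate reduces to a multiply-and-add. A sketch, with rates hard-coded as illustrative snapshots -- the live LiteLLM file is the source of truth:

```python
# Sketch of a per-call cost estimate from LiteLLM-style pricing entries.
# The rates below are illustrative snapshots, not authoritative prices.
PRICING = {
    "gpt-4o":      {"input_cost_per_token": 2.5e-06, "output_cost_per_token": 1.0e-05},
    "gpt-4o-mini": {"input_cost_per_token": 1.5e-07, "output_cost_per_token": 6.0e-07},
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    entry = PRICING[model]
    return (prompt_tokens * entry["input_cost_per_token"]
            + completion_tokens * entry["output_cost_per_token"])

# 1,000 prompt tokens in, 500 completion tokens out:
print(f"{estimate_cost('gpt-4o', 1000, 500):.4f}")       # 0.0075
print(f"{estimate_cost('gpt-4o-mini', 1000, 500):.6f}")  # 0.000450
```

At these snapshot rates the two models differ by more than an order of magnitude per call, which is exactly the kind of delta a diff surfaces.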

Smart Variable Resolution

Real codebases rarely pass model names as string literals. tokentoll's multi-pass constant propagation engine follows:

DEFAULT_MODEL = os.getenv("MODEL", "gpt-4o")

class Config:
    model: str = DEFAULT_MODEL

config = Config()
kwargs = {"model": config.model, "max_tokens": 2000}
client.chat.completions.create(**kwargs)
# tokentoll resolves: model="gpt-4o", max_tokens=2000
  • Variable assignments (MODEL = "gpt-4o")
  • os.getenv() / os.environ.get() fallback values
  • Function default parameters
  • Class attribute defaults
  • Constructor argument propagation
  • Dict literal and subscript contents
  • **kwargs unpacking
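Two of the cases above -- plain string assignments and os.getenv() fallbacks -- can be sketched as a single propagation pass. This is a toy version for illustration, not tokentoll's multi-pass engine:

```python
import ast

# Toy single-pass constant propagation. tokentoll runs multiple passes and
# also handles classes, dicts and **kwargs; this sketch covers only plain
# string assignments and os.getenv()/os.environ.get() fallback values.
def build_constants(tree: ast.AST) -> dict:
    consts = {}
    for node in ast.walk(tree):
        if not (isinstance(node, ast.Assign) and len(node.targets) == 1
                and isinstance(node.targets[0], ast.Name)):
            continue
        name, value = node.targets[0].id, node.value
        if isinstance(value, ast.Constant) and isinstance(value.value, str):
            consts[name] = value.value                      # MODEL = "gpt-4o"
        elif (isinstance(value, ast.Call) and isinstance(value.func, ast.Attribute)
              and value.func.attr in ("getenv", "get") and len(value.args) == 2
              and isinstance(value.args[1], ast.Constant)):
            consts[name] = value.args[1].value              # os.getenv("MODEL", "gpt-4o")
    return consts

src = 'DEFAULT_MODEL = os.getenv("MODEL", "gpt-4o")\nALIAS = "gpt-4o-mini"\n'
print(build_constants(ast.parse(src)))
# {'DEFAULT_MODEL': 'gpt-4o', 'ALIAS': 'gpt-4o-mini'}
```

A later pass can then substitute these constants wherever a call site references the name instead of a literal.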

Limitations

  • Cannot resolve models loaded from external config files or databases at runtime (these are flagged as dynamic but not priced)
  • Token estimates fall back to a rough 4-characters-per-token heuristic unless tiktoken is installed
  • Monthly estimates assume uniform call volume (configurable via --calls-per-month)
  • Python only for now (JS/TS support planned)
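The character-count fallback mentioned above is easy to reproduce. The sketch below guesses roughly four characters per token and defers to tiktoken when available; the function names are ours, not tokentoll's:

```python
# Fallback token estimate: ~4 characters per token. tokentoll uses a
# heuristic like this when tiktoken is unavailable; details may differ.
def rough_token_estimate(text: str) -> int:
    return max(1, len(text) // 4)

def estimate_tokens(text: str, model: str = "gpt-4o") -> int:
    try:
        import tiktoken                      # optional dependency
        enc = tiktoken.encoding_for_model(model)
        return len(enc.encode(text))
    except Exception:                        # tiktoken missing or unknown model
        return rough_token_estimate(text)

print(rough_token_estimate("Summarize this document in one paragraph."))  # 10
```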

License

MIT

