# AI Cost Tracker

Zero-config AI cost calculator per commit/model with liteLLM.

## AI Cost Tracking

- 🤖 LLM usage: $0.8077 (47 commits)
- 👤 Human dev: ~$621 (6.2h @ $100/h, 30min dedup)

Generated on 2026-03-29 using openrouter/qwen/qwen3-coder-next
Track AI usage costs across your git commits with three flexible usage modes - no initial configuration required.
## Features

- liteLLM Integration - Support for 100+ AI providers via liteLLM
- Default: Qwen3 Coder Next - Pre-configured with openrouter/qwen/qwen3-coder-next
- Zero Config - Works out of the box, reads from the `.env` file
- Smart Token Estimation - Accurate cost calculation using tiktoken (OpenAI) and the Anthropic tokenizer
- ROI Calculation - Track value generated vs. AI costs
- Date Filtering - Analyze specific days, date ranges, or full history
- Auto Badges - Automatically generate and update cost badges in the README
- Rich Reports - Markdown and HTML reports with visualizations
- All Commits Support - Analyze all commits with the `--all` flag (not just AI-tagged)
## Tokenization

The tool uses accurate tokenization for precise cost estimation:
| Model | Tokenizer | Accuracy |
|---|---|---|
| OpenAI GPT-4/4o | tiktoken (cl100k_base) | ~99% |
| Anthropic Claude | Anthropic SDK (local) | 100% |
| OpenRouter/Other | tiktoken fallback | ~95% |
### Token Counting Examples

```python
from costs.tokenizers import count_tokens, Tokenizer

# Count tokens for any model
text = "def hello(): print('world')"
tokens = count_tokens(text, "claude-3.5-sonnet")  # 10 tokens
tokens = count_tokens(text, "gpt-4o")             # 10 tokens

# Use the tokenizer directly
tokenizer = Tokenizer()
input_tokens = tokenizer.count_tokens(prompt, model)
```
## Installation

```bash
pip install costs
```
## Quick Start

### 1. Initialize Configuration

```bash
costs init
# Edit the .env file to add your OpenRouter API key
echo "OPENROUTER_API_KEY=YOUR_KEY" >> .env
```

### 2. Run Analysis

```bash
# Uses defaults from .env (Qwen3 Coder Next)
costs analyze --repo .

# Or specify directly
costs analyze --repo . --model openrouter/qwen/qwen3-coder-next --api-key YOUR_KEY

# Analyze all commits (not just AI-tagged)
costs analyze --repo . --all
```
## Configuration

Create a `.env` file in your project root:

```bash
# Required: OpenRouter API key (https://openrouter.ai/keys)
OPENROUTER_API_KEY=YOUR_KEY
LLM_MODEL=openrouter/qwen/qwen3-coder-next
```

Or use the built-in init command:

```bash
costs init
```
## Three Usage Options (Zero Config Required)

### Option 1: BYOK (Bring Your Own Key) - Free

Use your own API key via OpenRouter. Costs are calculated locally with real provider pricing.

```bash
# With OpenRouter key (default from .env)
costs analyze --repo .

# Explicit key
costs analyze --repo . --api-key YOUR_KEY
```

Supported models via liteLLM:

- openrouter/qwen/qwen3-coder-next (default)
- anthropic/claude-4-sonnet
- anthropic/claude-3.5-sonnet
- anthropic/claude-3.5-haiku
- openai/gpt-4o
- openai/gpt-4o-mini
- 100+ more via liteLLM
### Option 2: Local/Ollama - Zero API Costs

No API key needed. Costs are estimated from the diff size using nominal local pricing.

```bash
costs --repo . --mode local
```

Estimation formula: tokens ≈ `diff_chars / 4`, priced at ~$0.0001 per million tokens.
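As a rough sketch of that heuristic (an illustration only, not the tool's actual implementation), the local-mode estimate can be written as:

```python
# Illustrative sketch of the local-mode heuristic described above:
# ~4 characters per token, priced at a nominal $0.0001 per million tokens.
LOCAL_PRICE_PER_M_TOKENS = 0.0001  # USD per million tokens (nominal)

def estimate_local_cost(diff_text: str) -> float:
    """Estimate the nominal cost of a diff from its character count."""
    tokens = len(diff_text) / 4
    return tokens / 1_000_000 * LOCAL_PRICE_PER_M_TOKENS

# A 40,000-character diff is ~10,000 tokens, i.e. a negligible nominal cost
cost = estimate_local_cost("x" * 40_000)
```

The point of the tiny nominal price is that local/Ollama runs still get a non-zero line item in reports without distorting totals.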
## Date Filtering

Analyze commits for specific time periods:

```bash
# Analyze a specific day
costs analyze --repo . --date 2024-03-15

# Analyze a date range
costs analyze --repo . --since 2024-01-01 --until 2024-03-31

# Analyze all commits since repository creation
costs analyze --repo . --full-history

# Analyze only AI-tagged commits (default)
costs analyze --repo . --ai-only

# Analyze all commits (not just AI-tagged)
costs analyze --repo . --all
```
## Badge Generation

Generate and update cost badges in your README:

```bash
# Generate a badge based on pyproject.toml configuration (AI commits only)
costs auto-badge --repo .

# Generate a badge for all commits (not just AI-tagged)
costs auto-badge --repo . --all

# Manual badge generation
costs badge --repo . --model openrouter/qwen/qwen3-coder-next

# Manual badge for all commits
costs badge --repo . --all
```

This adds a badge section to the README showing the total cost, AI commits, and model used.
## Report Generation

```bash
# Generate a markdown report with charts
costs report --repo . --format markdown

# Generate an HTML report
costs report --repo . --format html

# Generate both and update the README
costs report --repo . --format both --update-readme
```
## Python API

Use the calculator directly in your code:

```python
from costs.calculator import ai_cost, estimate_tokens, calculate_cost

# Calculate complete cost with ROI
result = ai_cost(commit_diff, model="claude-3.5-sonnet")
print(f"Cost: {result['cost_formatted']}")
print(f"Tokens: {result['tokens']['total']}")
print(f"ROI: {result['roi_formatted']}")

# Estimate tokens only
tokens = estimate_tokens(diff, model="gpt-4o")
print(f"Input: {tokens['input']}, Output: {tokens['output']}")

# Calculate cost from tokens
cost = calculate_cost(tokens, "openrouter/qwen/qwen3-coder-next")
```

See the examples/ directory for more usage patterns.
## How It Works

- Parse git history - Analyzes commits with optional `[ai:model]` tags
- Estimate tokens - Uses tiktoken for OpenAI models and the Anthropic tokenizer for Claude, with accurate diff parsing
- Calculate cost - Multiplies tokens × model price
- Generate ROI - Estimates time saved (100 LOC/h × $100/h)

By default, only commits with `[ai:]` tags are analyzed. Use `--all` to analyze all commits.
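The ROI heuristic above (100 LOC/h saved at $100/h) can be sketched as follows; function and variable names here are illustrative, not the tool's API:

```python
# Sketch of the stated ROI heuristic: 100 LOC/h saved, billed at $100/h.
LOC_PER_HOUR = 100
HOURLY_RATE_USD = 100.0

def roi_estimate(lines_changed: int, ai_cost_usd: float) -> dict:
    """Estimate hours saved, value generated, and ROI for a set of commits."""
    hours_saved = lines_changed / LOC_PER_HOUR
    value_usd = hours_saved * HOURLY_RATE_USD
    return {
        "hours_saved": hours_saved,
        "value_usd": value_usd,
        "roi": value_usd / ai_cost_usd,
    }

# 1530 changed lines at $12.34 of AI spend -> 15.3h saved, $1530 value, ~124x ROI
result = roi_estimate(1530, 12.34)
```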
## Why liteLLM?

- Universal API - one syntax for 100+ providers
- Automatic routing - fallback between providers
- Cost tracking - built-in token counting
- OpenRouter - access to the latest models without premium accounts
### Option 3: SaaS Subscription - Managed

Enterprise managed solution with dashboard and invoicing.

```bash
costs --repo . --saas-token PLACEHOLDER
```
## Usage Examples

```bash
# Initialize .env config
costs init

# Analyze the last 50 commits (uses .env defaults)
costs analyze --repo . -n 50

# Use a specific model via liteLLM
costs analyze --repo . --model anthropic/claude-3.5-sonnet

# Analyze all commits (not just AI-tagged)
costs analyze --repo . --all

# Analyze with date filtering
costs analyze --repo . --since 2024-01-01 --until 2024-03-31

# Export to a custom file
costs analyze --repo . --output my_costs.csv

# Show repository statistics
costs stats --repo .

# Generate reports
costs report --repo . --format both --update-readme

# Generate a badge for all commits
costs badge --repo . --all

# Auto-badge with pyproject.toml config
costs auto-badge --repo . --all

# Estimate a single diff
costs estimate my_changes.patch

# Read a diff from stdin
git diff HEAD~1 | costs estimate -
```
## Tagging AI Commits

Tag commits with `[ai:model]` for automatic tracking:

```bash
git commit -m "[ai:openrouter/qwen/qwen3-coder-next] Refactor authentication"
git commit -m "[ai:anthropic/claude-3.5-sonnet] Add payment integration"
```
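A minimal sketch of how such tags can be picked out of commit messages (an illustrative regex, not the tool's internal parser):

```python
import re

# Illustrative parser for the [ai:model] tag convention shown above;
# the tool's own parsing logic may differ.
AI_TAG = re.compile(r"^\[ai:([^\]]+)\]\s*(.*)$")

def parse_commit_message(message: str):
    """Return (model, subject); model is None for untagged commits."""
    match = AI_TAG.match(message)
    if match:
        return match.group(1), match.group(2)
    return None, message

model, subject = parse_commit_message(
    "[ai:anthropic/claude-3.5-sonnet] Add payment integration"
)
# model == "anthropic/claude-3.5-sonnet", subject == "Add payment integration"
```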
## Sample Output

```text
🔍 Analyzing 100 commits from my-project...
🤖 Model: openrouter/qwen/qwen3-coder-next | Mode: byok
==================================================
📊 AI COST ANALYSIS - openrouter/qwen/qwen3-coder-next
==================================================
Commits analyzed: 42
Total cost: $12.34
Hours saved: 15.3h
Value generated: $1530.00
ROI: 124x
==================================================
📁 Results saved to: ai_costs.csv

💡 Recent AI commits:
  a1b2c3d4 | $0.32 | [ai:qwen3-coder-next] Refactor...
  e5f6g7h8 | $0.45 | [ai:qwen3-coder-next] Add feature...
```
## CSV Export Format

| Column | Description |
|---|---|
| commit_hash | Short commit SHA |
| commit_message | Full commit message |
| author | Commit author name |
| date | ISO format datetime |
| cost | Calculated cost in USD |
| cost_formatted | Formatted cost string |
| model | AI model used |
| mode | Calculation mode (byok/local/saas) |
| tokens_input | Estimated input tokens |
| tokens_output | Estimated output tokens |
| hours_saved | Estimated hours saved |
| roi | ROI multiplier |
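With that layout, the export is easy to post-process. A small sketch (it assumes only the `cost` column name from the table above; the snippet is an illustration, not part of the tool):

```python
import csv
import io

# Sum the cost column of an ai_costs.csv export.
def total_cost(csvfile) -> float:
    return sum(float(row["cost"]) for row in csv.DictReader(csvfile))

sample = "commit_hash,cost\na1b2c3d4,0.32\ne5f6g7h8,0.45\n"
total = total_cost(io.StringIO(sample))  # ~0.77
```

The same function works on a real export: `total_cost(open("ai_costs.csv", newline=""))`.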
## Pricing Reference
| Model | Input | Output | Tokenizer |
|---|---|---|---|
| anthropic/claude-4-sonnet | $3/M | $15/M | Anthropic |
| anthropic/claude-3.5-sonnet | $3/M | $15/M | Anthropic |
| anthropic/claude-3.5-haiku | $0.8/M | $4/M | Anthropic |
| openai/gpt-4o | $5/M | $15/M | tiktoken |
| openai/gpt-4o-mini | $0.15/M | $0.6/M | tiktoken |
| openrouter/qwen/qwen3-coder-next | $0.50/M | $1.50/M | tiktoken |
| ollama/* | ~$0.0001/M | ~$0.0001/M | tiktoken |
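Prices are per million tokens, so a commit's cost works out as `input_tokens / 1e6 × input_price + output_tokens / 1e6 × output_price`. A worked example (the helper name is illustrative, not the tool's API):

```python
# Worked example of the per-million-token pricing above.
def token_cost(tokens_in: int, tokens_out: int,
               price_in_per_m: float, price_out_per_m: float) -> float:
    """Cost in USD given token counts and per-million-token prices."""
    return (tokens_in / 1_000_000 * price_in_per_m
            + tokens_out / 1_000_000 * price_out_per_m)

# 50k input + 10k output tokens on openrouter/qwen/qwen3-coder-next:
# 0.05 * $0.50 + 0.01 * $1.50 = $0.025 + $0.015 = $0.04
cost = token_cost(50_000, 10_000, 0.50, 1.50)
```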
## Business Model
| Tier | Price | Features |
|---|---|---|
| BYOK | Free | Use your own OpenRouter API key |
| SaaS | $9/month | Unlimited, managed keys, dashboard, EU invoicing |
## Development

```bash
# Install with poetry
poetry install

# Run the CLI
poetry run costs analyze --repo .

# Publish to PyPI
poetry publish --build
```
## PHP Badge Service

Standalone PHP service for generating badges:

```bash
cd services/badge-service
composer install
php -S localhost:8080
```

Generate badges via the API:

```bash
curl "http://localhost:8080/badge.php?cost=12.34&model=claude-4&commits=42"
```
## Automatic Cost Calculation

The tool can automatically calculate costs and update badges on every commit and during test runs.

### Pre-commit Hook

Install the pre-commit hook to automatically update the badge before each commit:

```bash
# Copy the hook into git hooks
cp hooks/pre-commit .git/hooks/pre-commit
chmod +x .git/hooks/pre-commit

# Or use project.sh (includes hook installation)
bash project.sh
```

The hook will:

- Detect `costs` in the global PATH or a virtualenv
- Run `costs auto-badge` if `[tool.costs]` is configured in `pyproject.toml`
- Stage the updated README.md (interactive prompt in the terminal)
### Pytest Integration

Tests automatically validate the cost calculation pipeline:

```bash
# Run all tests, including the auto-badge test
pytest tests/test_cost.py -v

# The test will:
# - Check if [tool.costs] is configured
# - Run costs auto-badge
# - Verify the badge was updated
```
### GitHub Actions

The repository includes a workflow that runs on push/PR:

```yaml
# .github/workflows/ai-cost-badge.yml
# Automatically updates the badge on the main branch
```
## CLI Commands

| Command | Description | Key Options |
|---|---|---|
| costs init | Initialize .env configuration | --force (overwrite existing) |
| costs analyze | Analyze repository commits | --repo, --model, --api-key, --all, --since, --until, --date, --full-history, --max-commits, --output |
| costs stats | Show repository statistics | --repo |
| costs report | Generate markdown/HTML reports | --repo, --model, --format, --output, --update-readme |
| costs badge | Generate cost badge | --repo, --model, --all |
| costs auto-badge | Auto-generate badge from pyproject.toml | --repo, --all |
| costs estimate | Estimate cost for a single diff | --model |
📖 Automatic Badge Generation: See docs/AUTO_BADGE.md for GitHub Actions, pre-commit hooks, and CI/CD integration.
## Environment Variables

| Variable | Description | Default |
|---|---|---|
| OPENROUTER_API_KEY | OpenRouter API key | (required for BYOK) |
| LLM_MODEL | Default model for calculations | openrouter/qwen/qwen3-coder-next |
## License
Licensed under Apache-2.0.
## File details

Details for the file costs-0.1.48.tar.gz.

File metadata:

- Download URL: costs-0.1.48.tar.gz
- Upload date:
- Size: 29.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.7

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | 95204bd46d927e2d82573959ff562b1af47c40bdeb4e7ae18901b81cdf579207 |
| MD5 | d2e407eab701a6a6065051b2f949012b |
| BLAKE2b-256 | 8d0593be4f71e01d3db5b18b4aabb3a9e826a8166c82cc98b6ff5d18bab8e15d |
## File details

Details for the file costs-0.1.48-py3-none-any.whl.

File metadata:

- Download URL: costs-0.1.48-py3-none-any.whl
- Upload date:
- Size: 33.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.7

File hashes:

| Algorithm | Hash digest |
|---|---|
| SHA256 | 2607ad71c01994013a835dcef641a835922fdd9bdcab185eb398d07406ac2181 |
| MD5 | 96c999e062d34957b13e7424ed68b93c |
| BLAKE2b-256 | 2685be4684cff92c2cf43b8a628da8741dc6cbb97d2bad9809ebba836fb37599 |