
llm-cost

Compare LLM API prices across providers from the command line.


Find the cheapest model for your prompt in seconds

$ llm-cost calc "Summarize this article for me" --output 500
╭── Cost estimate · 7 input + 500 output ──────────────────────────────────────╮
│  #  Provider      Model                Total cost    vs cheapest              │
│  1  Mistral AI    Mistral Small 3.2    $0.000090     cheapest                 │
│  2  DeepSeek      DeepSeek V4 Flash    $0.000141     1.6x                     │
│  3  Google        Gemini 2.5 Flash-L   $0.000200     2.2x                     │
│  4  xAI           Grok 4.1 Fast        $0.000251     2.8x                     │
│  5  OpenAI        GPT-5.4 Nano         $0.000626     7.0x                     │
│  6  Anthropic     Claude Haiku 4.5     $0.002507     27.9x                    │
│  7  Google        Gemini 3.1 Pro       $0.006014     66.8x                    │
│  8  OpenAI        GPT-5.5              $0.015035    167.1x                    │
╰──────────────────────────────────────────────────────────────────────────────╯

  Cheapest: Mistral Small 3.2 (Mistral AI) — $0.000090
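The totals above follow directly from per-token prices (USD per 1M tokens, as listed in the table below). A minimal sketch of the arithmetic; the helper name and price dict are illustrative, not llm-cost's actual API:

```python
# Sketch of the cost formula behind the estimate above.
# Prices are (input $/1M, output $/1M) from the "Supported models" table.

PRICES = {
    "mistral-small-3-2": (0.06, 0.18),
    "gpt-5-5": (5.00, 30.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

print(f"{estimate_cost('mistral-small-3-2', 7, 500):.6f}")  # → 0.000090
print(f"{estimate_cost('gpt-5-5', 7, 500):.6f}")            # → 0.015035
```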

Install

pip install llm-cost

For accurate token counting (uses tiktoken):

pip install "llm-cost[tiktoken]"

Usage

List all models

llm-cost list

Filter by provider:

llm-cost list --provider anthropic
llm-cost list --provider openai

Sort options (input, output, context, name):

llm-cost list --sort output

Search by name:

llm-cost list --search gpt-5
llm-cost list --search gemini

Calculate cost for a prompt

# Auto-estimate tokens from text
llm-cost calc "Write me a blog post about AI pricing" --output 800

# Specify tokens directly
llm-cost calc --input 4000 --output 1000

# Top 5 cheapest only
llm-cost calc --input 10000 --output 2000 --top 5

# Filter to one provider
llm-cost calc "My prompt" --output 500 --provider google

# One specific model
llm-cost calc "My prompt" --output 500 --model gpt-5-5

Compare specific models

# Latest flagships head-to-head
llm-cost compare gpt-5-5 claude-opus-4-7 gemini-3-1-pro

# Mid-tier sweet spot
llm-cost compare gpt-5-4 claude-sonnet-4-6 gemini-3-flash --input 5000 --output 1000

# Budget tier
llm-cost compare gpt-5-4-nano deepseek-v4-flash grok-4-1-fast mistral-small-3-2

# New agentic models
llm-cost compare deepseek-v4-pro glm-5-1 kimi-k2-6 minimax-m2-7 --input 5000 --output 1000

# From a real prompt
llm-cost compare gpt-5-5 claude-opus-4-7 --prompt "Explain how transformers work"
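The "vs cheapest" column in the comparison output is just each model's total expressed as a multiple of the cheapest total. A short illustrative sketch (not llm-cost's internals) using flagship prices from the table below, for 7 input + 500 output tokens:

```python
# Reproduce the ranking and "vs cheapest" multipliers from a cost dict.
# Function and variable names are illustrative, not part of llm-cost.

def rank(costs: dict[str, float]) -> list[tuple[str, float, float]]:
    """Return (model, total cost, multiplier vs cheapest), cheapest first."""
    ordered = sorted(costs.items(), key=lambda kv: kv[1])
    cheapest = ordered[0][1]
    return [(model, cost, cost / cheapest) for model, cost in ordered]

# 7 input + 500 output tokens, prices in $ per 1M tokens
costs = {
    "gpt-5-5": (7 * 5.00 + 500 * 30.00) / 1e6,
    "claude-opus-4-7": (7 * 5.00 + 500 * 25.00) / 1e6,
    "gemini-3-1-pro": (7 * 2.00 + 500 * 12.00) / 1e6,
}
for model, cost, mult in rank(costs):
    print(f"{model:16} ${cost:.6f}  {mult:.1f}x")
```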

List providers

llm-cost providers

Supported models (May 2026)

Prices in USD per 1M tokens.

Provider     Model                   Input    Output    Context
OpenAI       GPT-5.5                 $5.00    $30.00    1M
OpenAI       GPT-5.5 Pro             $30.00   $180.00   1M
OpenAI       GPT-5.4                 $2.50    $15.00    1.05M
OpenAI       GPT-5.4 Mini            $0.75    $4.50     400K
OpenAI       GPT-5.4 Nano            $0.20    $1.25     200K
OpenAI       GPT-5                   $1.25    $10.00    400K
OpenAI       o3                      $10.00   $40.00    200K
OpenAI       o4 Mini                 $1.10    $4.40     200K
Anthropic    Claude Opus 4.7         $5.00    $25.00    1M
Anthropic    Claude Opus 4.6         $5.00    $25.00    1M
Anthropic    Claude Sonnet 4.6       $3.00    $15.00    1M
Anthropic    Claude Haiku 4.5        $1.00    $5.00     200K
Google       Gemini 3.1 Pro          $2.00    $12.00    1M
Google       Gemini 3 Flash          $0.50    $3.00     1M
Google       Gemini 2.5 Pro          $1.25    $10.00    1M
Google       Gemini 2.5 Flash        $0.30    $2.50     1M
Google       Gemini 2.5 Flash-Lite   $0.10    $0.40     1M
xAI          Grok 4                  $3.00    $15.00    2M
xAI          Grok 4.1 Fast           $0.20    $0.50     2M
DeepSeek     DeepSeek V4 Flash       $0.14    $0.28     1M
DeepSeek     DeepSeek V4 Pro         $1.74    $3.48     1M
DeepSeek     DeepSeek R1             $0.55    $2.19     1M
Z.AI         GLM-5.1                 $1.40    $4.40     200K
Kimi         Kimi K2.6               $0.95    $4.00     256K
MiniMax      MiniMax M2.7            $0.30    $1.20     197K
Mistral AI   Mistral Large 3         $0.50    $1.50     256K
Mistral AI   Mistral Medium 3.5      $1.00    $3.00     256K
Mistral AI   Mistral Small 3.2       $0.06    $0.18     131K
Meta         Llama 4 Maverick        $0.27    $0.85     1M
Meta         Llama 3.3 70B           $0.59    $0.79     128K
Cohere       Command R+              $3.00    $15.00    128K
Cohere       Command R7B             $0.04    $0.15     128K

Notes:

  • DeepSeek V4 has two API variants: deepseek-v4-flash and deepseek-v4-pro.
  • Cached-input, batch, promotional, long-context, and subscription-plan discounts are not included in the main table.

The table covers 32 models across 11 providers. Prices are stored in llm_cost/data/prices.yaml; PRs to update them are always welcome!


Token counting

By default, tokens are estimated with a word-based heuristic, which needs zero extra dependencies. For accurate counts:

pip install "llm-cost[tiktoken]"
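The exact default heuristic is not documented here, but a common word-count rule of thumb (roughly 4 tokens per 3 words) reproduces the 7-token estimate shown in the example at the top. The formula below is an assumption, not llm-cost's actual implementation:

```python
# Hedged sketch of a word-based token heuristic (~4 tokens per 3 words is a
# common rule of thumb; llm-cost's real formula may differ).

def estimate_tokens(text: str) -> int:
    words = len(text.split())
    return max(1, round(words * 4 / 3))

print(estimate_tokens("Summarize this article for me"))  # 5 words → 7 tokens
```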

Contributing

The easiest contribution is updating llm_cost/data/prices.yaml when a provider changes their pricing. Each entry is just 4 fields:

my-new-model:
  name: My New Model
  input: 1.50      # $ per 1M input tokens
  output: 6.00     # $ per 1M output tokens
  context: 200000  # context window in tokens
To set up a development environment:

git clone https://github.com/madeburo/llmcost
cd llmcost
pip install -e ".[dev]"
pytest
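Before opening a pricing PR, it is easy to sanity-check a new entry against the 4-field schema shown above. The schema comes from this README; the validation helper itself is illustrative and not part of llm-cost:

```python
# Quick sanity check for a new prices.yaml entry (name/input/output/context).
# Schema from the README's contributing example; helper names are illustrative.

REQUIRED = {
    "name": str,
    "input": (int, float),    # $ per 1M input tokens
    "output": (int, float),   # $ per 1M output tokens
    "context": int,           # context window in tokens
}

def validate_entry(model_id: str, entry: dict) -> list[str]:
    """Return a list of problems; an empty list means the entry looks well-formed."""
    problems = []
    for field, types in REQUIRED.items():
        if field not in entry:
            problems.append(f"{model_id}: missing '{field}'")
        elif not isinstance(entry[field], types):
            problems.append(f"{model_id}: '{field}' has wrong type")
    for field in ("input", "output", "context"):
        if isinstance(entry.get(field), (int, float)) and entry[field] <= 0:
            problems.append(f"{model_id}: '{field}' must be positive")
    return problems

entry = {"name": "My New Model", "input": 1.50, "output": 6.00, "context": 200000}
print(validate_entry("my-new-model", entry))  # → []
```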

Download files

Source distribution: llmprices-1.1.0.tar.gz (13.2 kB)
Built distribution: llmprices-1.1.0-py3-none-any.whl (11.8 kB)

File details

Details for the file llmprices-1.1.0.tar.gz.

File metadata

  • Download URL: llmprices-1.1.0.tar.gz
  • Size: 13.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.3

File hashes

  Algorithm    Hash digest
  SHA256       2b4663079d8596b953f84b4f957e84edb1bccc031d3eb7af838d57f133d6316e
  MD5          3c53d4917b015d8b897bfaa20fbcc3ac
  BLAKE2b-256  487422ce5ad5bae44a6ddfa47fded760c43b6b3837d1d9882fe9e3d7f8e14626
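A downloaded archive can be checked against a published SHA256 digest with nothing but the standard library. A generic sketch, not PyPI-specific tooling (the expected digest is the tar.gz hash above):

```python
# Verify a downloaded file against a published SHA256 digest using stdlib hashlib.

import hashlib

def sha256_of(path: str) -> str:
    """Hash the file in chunks so large downloads don't load into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "2b4663079d8596b953f84b4f957e84edb1bccc031d3eb7af838d57f133d6316e"
# sha256_of("llmprices-1.1.0.tar.gz") == expected  → True for an intact download
```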

File details

Details for the file llmprices-1.1.0-py3-none-any.whl.

File metadata

  • Download URL: llmprices-1.1.0-py3-none-any.whl
  • Size: 11.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.14.3

File hashes

  Algorithm    Hash digest
  SHA256       9d74d617b69657ce0f4ab652c1d23027ff8e585b5aa43419bab48966bbdc9d91
  MD5          24270d10cd8ac755111ec8ed4919c0df
  BLAKE2b-256  f68c07366a806ea2defa392f19729e83aa9e193153d53d95eae0581c623aac29
