
A package to track LLM token usage in context

Project description

llm-token-tracker


A Python package to track token usage in LLM interactions.

Features

  • Track token usage in LLM conversations
  • Support for detailed token breakdowns (prompt, completion, reasoning, cached, etc.)
  • Cost estimation for input and output tokens
  • Configurable verbosity levels for logging
  • Integration with custom loggers
  • Compatibility with xAI and OpenAI models

Installation

pip install llm-token-tracker

Usage

from llm_token_tracker import wrap_llm
from xai_sdk import Client
from xai_sdk.chat import system, user

# Create and wrap a chat for token tracking
client = Client()
chat = client.chat.create(model="grok-3")
wrapped_chat = wrap_llm(chat)

response = wrapped_chat.sample("Hello, how are you?")
print(response.content)
# Console will log: Total tokens used in context: X

# For conversation context
wrapped_chat.append(system("You are Grok, a highly intelligent AI."))
wrapped_chat.append(user("What is the meaning of life?"))
response = wrapped_chat.sample()
print(response.content)
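Conceptually, wrap_llm acts as a proxy: it forwards each call to the underlying chat object, reads the usage reported on the response, and keeps a running total. A minimal, self-contained sketch of that pattern with stubbed objects (illustrative only, not the package's actual implementation):

```python
class StubResponse:
    """Stand-in for an SDK response carrying content and usage info."""
    def __init__(self, content, total_tokens):
        self.content = content
        self.usage = {"total_tokens": total_tokens}

class StubChat:
    """Stand-in for a chat client; always reports 42 tokens."""
    def sample(self, *args):
        return StubResponse("hello", total_tokens=42)

class TokenTrackingWrapper:
    """Proxy that forwards sample() and accumulates reported token usage."""
    def __init__(self, chat):
        self._chat = chat
        self.total_tokens = 0

    def sample(self, *args):
        response = self._chat.sample(*args)
        self.total_tokens += response.usage["total_tokens"]
        print(f"Total tokens used in context: {self.total_tokens}")
        return response

wrapped = TokenTrackingWrapper(StubChat())
wrapped.sample("Hello, how are you?")
wrapped.sample("And again?")
print(wrapped.total_tokens)  # 84
```

The real wrapper reads the provider-specific usage fields (prompt, completion, reasoning, cached tokens, and so on) rather than a single total, but the forwarding-and-accumulating structure is the same idea.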

OpenAI Compatibility

For OpenAI models using the Responses API:

from openai import OpenAI
from llm_token_tracker import wrap_llm

client = OpenAI()
wrapped_client = wrap_llm(client, provider="openai")

response = wrapped_client.responses.create(
    model="gpt-4.1",
    input="Tell me a three sentence bedtime story about a unicorn."
)
print(response.output_text)
# Console will log: Total tokens used in context: X

Configuration Options

wrap_llm accepts several parameters to customize logging:

  • provider: "xai" (default) or "openai". Selects the LLM provider.
  • verbosity: "minimum" (default), "detailed", or "max".
    • "minimum": logs only the total tokens used.
    • "detailed": logs a detailed usage summary.
    • "max": logs the full history of all token usage.
  • logger: optional logging.Logger instance. If provided, messages go to the logger instead of being printed to the console.
  • log_level: logging level (default logging.INFO).
  • quiet: if True, disables all logging.
  • max_tokens: maximum tokens allowed in context (default 132000).
  • input_pricing: price per 1 million input tokens (default 0.2).
  • output_pricing: price per 1 million output tokens (default 0.5).
  • calculate_pricing: if True, calculates and logs cost estimates (default False).
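To see how the per-million pricing parameters translate into dollar amounts, here is the arithmetic as a small standalone function (a hypothetical helper for illustration, not part of the package API):

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  input_pricing: float = 0.2, output_pricing: float = 0.5) -> float:
    """Estimate cost in USD, given prices per 1 million tokens (package defaults shown)."""
    return (prompt_tokens * input_pricing + completion_tokens * output_pricing) / 1_000_000

# 10,000 prompt tokens and 2,000 completion tokens at the default rates:
cost = estimate_cost(10_000, 2_000)
print(f"${cost:.4f}")  # $0.0030
```

With calculate_pricing=True, the wrapper logs an estimate of this form alongside the token counts.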

Example with custom logger:

import logging

logger = logging.getLogger("my_llm_logger")
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()
logger.addHandler(handler)

wrapped_chat = wrap_llm(chat, logger=logger, verbosity="detailed")
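Because the logger parameter accepts any standard logging.Logger, the token logs can be routed anywhere the stdlib supports, such as a file. The handler setup below is plain stdlib logging and independent of the package (the final logger.info call simulates what the wrapper would emit):

```python
import logging

logger = logging.getLogger("my_llm_logger")
logger.setLevel(logging.INFO)

# Write records to a file with timestamps instead of the console.
handler = logging.FileHandler("token_usage.log")
handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
logger.addHandler(handler)

# Anything the wrapper logs at INFO now lands in token_usage.log:
logger.info("Total tokens used in context: 42")
```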

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Project details


Download files

Download the file for your platform.

Source Distribution

llm_token_tracker-0.4.0.tar.gz (6.5 kB)


Built Distribution


llm_token_tracker-0.4.0-py3-none-any.whl (6.3 kB)


File details

Details for the file llm_token_tracker-0.4.0.tar.gz.

File metadata

  • Download URL: llm_token_tracker-0.4.0.tar.gz
  • Upload date:
  • Size: 6.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for llm_token_tracker-0.4.0.tar.gz:

  • SHA256: 8482ff780199d0c7bf468b618ee05e76373506ef247de922aba64dea2a03d779
  • MD5: f0289045acb5b8394d149477dfae3040
  • BLAKE2b-256: 012bc0e74da9f0ce96ce5d3d29b0ebe57061481eca0cbd640dcbdd35b7b957aa


Provenance

The following attestation bundles were made for llm_token_tracker-0.4.0.tar.gz:

Publisher: python-publish.yml on LaurinLW/llm-token-tracker

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file llm_token_tracker-0.4.0-py3-none-any.whl.

File metadata

File hashes

Hashes for llm_token_tracker-0.4.0-py3-none-any.whl:

  • SHA256: 56d8e34df36b232829d8cb120eee0541d413e2e5b9452ca7a0a9ce70727edf23
  • MD5: 9c572d163c2b5462dc87cc85fbcb2689
  • BLAKE2b-256: 5e554327a778ce1c47078f9ebf99532a32fa754c6f484765f4643464fef9c610


Provenance

The following attestation bundles were made for llm_token_tracker-0.4.0-py3-none-any.whl:

Publisher: python-publish.yml on LaurinLW/llm-token-tracker

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
