
ai-tokentrace

GenAI Cost Observability for Google's Generative AI.

ai-tokentrace provides a transparent and easy way to track token consumption in your GenAI applications. Whether you're using the standard google-genai SDK or building complex agents with the Google Agent Development Kit (ADK), this library helps you manage costs, optimize performance, and gain deep insights into your model usage.

Features

  • 🔍 Automatic Tracking: Seamlessly integrates with google-genai to capture token usage from every API call.
  • 🤖 ADK Support: Includes a plugin for the Google Agent Development Kit for effortless agent monitoring.
  • 🔌 Multiple Backends: Export data to where you need it:
    • Logging: Simple standard output for development.
    • JSONL: Structured local files for easy analysis.
    • Google Cloud Firestore: Scalable, queryable cloud storage.
    • Google Cloud Pub/Sub: Event-driven pipelines for real-time analytics.
  • ⚡ Async Native: Fully non-blocking to keep your applications fast.
  • 📊 Rich Metrics: Tracks input/output tokens, thinking tokens, cached content, tool usage, and more.
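Once records are exported, they are easy to analyze offline. The sketch below sums tokens per model from a JSONL export; the field names (`model_name`, `total_tokens`) follow the example log line in the Quick Start below, so treat the exact schema as an assumption rather than a documented contract.

```python
import json
from collections import defaultdict

# Hypothetical JSONL export: one usage record per line.
SAMPLE = """\
{"timestamp": "2025-01-01T00:00:00Z", "model_name": "gemini-2.5-flash", "total_tokens": 15}
{"timestamp": "2025-01-01T00:00:01Z", "model_name": "gemini-2.5-flash", "total_tokens": 42}
"""

def total_tokens_by_model(lines):
    """Sum total_tokens per model from JSONL usage records."""
    totals = defaultdict(int)
    for line in lines:
        if not line.strip():
            continue
        record = json.loads(line)
        totals[record["model_name"]] += record["total_tokens"]
    return dict(totals)

print(total_tokens_by_model(SAMPLE.splitlines()))
# {'gemini-2.5-flash': 57}
```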

Installation

Install with uv (recommended) or pip.

Basic Installation

For standard logging or JSONL export:

pip install ai-tokentrace
# or
uv pip install ai-tokentrace

With Extra Backends

Install with specific extras for Cloud integrations or ADK support:

# For Google Cloud Firestore
uv pip install "ai-tokentrace[firestore]"

# For Google Cloud Pub/Sub
uv pip install "ai-tokentrace[pubsub]"

# For Google ADK support
uv pip install "ai-tokentrace[adk]"

# Install everything
uv pip install "ai-tokentrace[firestore,pubsub,adk]"

Quick Start

1. Using with google-genai SDK

Simply wrap your client with TrackedGenaiClient. It works exactly like the standard client but logs all token usage.

import os
from google import genai
from ai_tokentrace import TrackedGenaiClient

# 1. Initialize standard client
client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

# 2. Wrap with tracking (uses logging by default)
tracked_client = TrackedGenaiClient(client=client)

# 3. Use as normal!
response = tracked_client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Explain quantum computing in 5 words."
)
print(response.text)
# Output: "Complex superposition processes information fast."
# Log: {"timestamp": "...", "model_name": "gemini-2.5-flash", "total_tokens": 15, ...}
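On top of the tracked counts you can estimate spend. The helper below is a hand-rolled sketch, not part of ai-tokentrace, and the per-million-token rates are placeholders; substitute your model's actual pricing.

```python
# Placeholder per-million-token rates (USD); substitute real pricing.
PRICING = {
    "gemini-2.5-flash": {"input": 0.30, "output": 2.50},
}

def estimate_cost(model_name, input_tokens, output_tokens):
    """Rough USD cost estimate from token counts and a pricing table."""
    rates = PRICING[model_name]
    return (input_tokens * rates["input"] + output_tokens * rates["output"]) / 1_000_000

cost = estimate_cost("gemini-2.5-flash", input_tokens=10, output_tokens=5)
print(f"${cost:.8f}")
```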

2. Using with Google ADK

Add the TokenTrackingPlugin to your ADK app.

from google.adk.agents import LlmAgent
from google.adk.apps.app import App
from ai_tokentrace.adk import TokenTrackingPlugin

agent = LlmAgent(model="gemini-2.5-flash", ...)

app = App(
    name="my_app",
    root_agent=agent,
    plugins=[TokenTrackingPlugin()]  # Tracks all agent interactions
)

Advanced Usage

Configuring Backends

You can configure different backends for storing your token usage data.

Firestore Example:

from ai_tokentrace import TrackedGenaiClient
from ai_tokentrace.services import FirestoreTokenUsageService

service = FirestoreTokenUsageService(collection_name="genai_usage_logs")
tracked_client = TrackedGenaiClient(client=client, service=service)

Pub/Sub Example:

from ai_tokentrace import TrackedGenaiClient
from ai_tokentrace.services import PubSubTokenUsageService

service = PubSubTokenUsageService(topic_id="my-usage-topic", project_id="my-project")
tracked_client = TrackedGenaiClient(client=client, service=service)
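Downstream, a Pub/Sub subscriber receives each usage record as a message payload. The decoding sketch below assumes the service publishes JSON-encoded records shaped like the example log line in the Quick Start; the actual wire schema is an assumption, so verify it against your exported messages.

```python
import json

def decode_usage_message(data: bytes) -> dict:
    """Decode a (hypothetical) JSON usage record from a Pub/Sub message payload."""
    return json.loads(data.decode("utf-8"))

# Simulated message payload, shaped like the example log line:
payload = b'{"model_name": "gemini-2.5-flash", "total_tokens": 15}'
record = decode_usage_message(payload)
print(record["total_tokens"])  # 15
```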

Self-Inspection for Agents

Give your agents the ability to see their own token usage!

from ai_tokentrace.services import FirestoreTokenUsageService

service = FirestoreTokenUsageService(...)

# Add the inspection tool to your agent
agent = LlmAgent(
    ...,
    tools=[service.get_inspection_tool()]
)

Examples

Check out the examples/ directory for complete, runnable projects:

  • google-genai/: Scripts demonstrating sync/async usage, streaming, and different backends.
  • adk/: Full ADK applications showing multi-agent tracking, multimodal capabilities, and self-inspection.

Contributing

Contributions are welcome! Please see CONTRIBUTING.md for guidelines.

License

Apache 2.0 - See LICENSE for more details.
