# a2a-llm-tracker
Track LLM usage and costs across providers (OpenAI, Gemini, Anthropic, etc.) from a single place.
## Installation

```bash
pip install a2a-llm-tracker
```
## Quick Start (Recommended Pattern)

For applications making multiple LLM calls, use a singleton pattern to initialize once and reuse everywhere.

### Step 1: Create a tracking module

Create `tracking.py` in your project:
```python
# tracking.py
import os
import asyncio
import concurrent.futures

from a2a_llm_tracker import init

_meter = None


def get_meter():
    """Get or initialize the global meter singleton."""
    global _meter
    if _meter is None:
        try:
            client_id = os.getenv("CLIENT_ID", "")
            client_secret = os.getenv("CLIENT_SECRET", "")
            # init() is a coroutine; run it on a worker thread so this
            # helper is safe to call even from inside a running event loop.
            with concurrent.futures.ThreadPoolExecutor() as executor:
                future = executor.submit(
                    asyncio.run,
                    init(client_id, client_secret, "my-app"),
                )
                _meter = future.result(timeout=5)
        except Exception as e:
            print(f"LLM tracking initialization failed: {e}")
            return None
    return _meter
```
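Why the thread pool? `asyncio.run()` raises `RuntimeError` when called from a thread whose event loop is already running, so the helper runs `init` on a worker thread instead. The same pattern, sketched self-contained with a hypothetical `fake_init` standing in for the real `init`:

```python
import asyncio
import concurrent.futures

_meter = None


async def fake_init():
    """Hypothetical stand-in for a2a_llm_tracker.init."""
    await asyncio.sleep(0)
    return "meter"


def get_meter():
    """Initialize once, then return the cached instance on every later call."""
    global _meter
    if _meter is None:
        # asyncio.run on a worker thread: safe whether or not the caller
        # already sits inside a running event loop.
        with concurrent.futures.ThreadPoolExecutor() as executor:
            _meter = executor.submit(asyncio.run, fake_init()).result(timeout=5)
    return _meter
```

Only the first call pays the initialization cost; every later call returns the cached instance.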
### Step 2: Use it anywhere

```python
from openai import OpenAI

from a2a_llm_tracker import analyze_response, ResponseType
from tracking import get_meter


def call_openai(prompt: str):
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    # Track usage
    meter = get_meter()
    if meter:
        analyze_response(response, ResponseType.OPENAI, meter)
    return response
```
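What gets tracked? For OpenAI chat completions, token counts live on `response.usage` (`prompt_tokens`, `completion_tokens`, `total_tokens`), which is presumably what `analyze_response` reads. A sketch with a stub object in place of the real SDK response (`extract_usage` is illustrative, not part of the package):

```python
from types import SimpleNamespace


def extract_usage(response):
    """Pull the token counts a tracker would record from a chat completion."""
    u = response.usage
    return {"input": u.prompt_tokens, "output": u.completion_tokens, "total": u.total_tokens}


# Stub with the same .usage shape as an OpenAI ChatCompletion object.
stub = SimpleNamespace(
    usage=SimpleNamespace(prompt_tokens=12, completion_tokens=34, total_tokens=46)
)
```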
## Environment Variables

Set your CCS credentials:

```bash
export CLIENT_ID=your_client_id
export CLIENT_SECRET=your_client_secret
export OPENAI_API_KEY=sk-xxxxx
```
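Because `get_meter()` silently falls back to `None` when credentials are missing, it can help to fail fast at startup. A small hypothetical helper (`missing_vars` is not part of the package):

```python
import os

REQUIRED = ("CLIENT_ID", "CLIENT_SECRET", "OPENAI_API_KEY")


def missing_vars(env=None):
    """Return the names of required variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED if not env.get(name)]
```

For example, `if missing_vars(): raise SystemExit(f"missing: {missing_vars()}")` at application startup surfaces configuration errors before the first LLM call.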
## Query Total Usage & Costs

Retrieve your accumulated costs and token usage from CCS:

```python
import os
import asyncio

from a2a_llm_tracker import init
from a2a_llm_tracker.sources import CCSSource


async def get_total_usage():
    client_id = os.getenv("CLIENT_ID")
    client_secret = os.getenv("CLIENT_SECRET")
    await init(
        client_id=client_id,
        client_secret=client_secret,
        application_name="my-app",
    )
    source = CCSSource(int(client_id))
    total_cost = await source.count_cost()
    total_tokens = await source.count_total_tokens()
    print(f"Total cost: ${total_cost:.4f}")
    print(f"Total tokens: {total_tokens}")


asyncio.run(get_total_usage())
```
## Supported Providers
| Provider | ResponseType |
|---|---|
| OpenAI | `ResponseType.OPENAI` |
| Google Gemini | `ResponseType.GEMINI` |
| Anthropic | `ResponseType.ANTHROPIC` |
| Cohere | `ResponseType.COHERE` |
| Mistral | `ResponseType.MISTRAL` |
| Groq | `ResponseType.GROQ` |
| Together AI | `ResponseType.TOGETHER` |
| AWS Bedrock | `ResponseType.BEDROCK` |
| Google Vertex AI | `ResponseType.VERTEX` |
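Whatever the provider, per-call cost accounting reduces to multiplying the reported token counts by per-model rates. A minimal self-contained sketch; the numbers below are illustrative placeholders, not the package's actual pricing tables:

```python
# Illustrative USD-per-million-token rates (placeholders, not real prices).
PRICING = {
    "gpt-4o-mini": {"input": 0.15, "output": 0.60},
    "claude-3-haiku": {"input": 0.25, "output": 1.25},
}


def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one call: token counts times per-million rates, summed."""
    rates = PRICING[model]
    return (input_tokens * rates["input"] + output_tokens * rates["output"]) / 1_000_000
```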
## Documentation
- LiteLLM Wrapper - Auto-tracking via LiteLLM
- CCS Integration - Centralized tracking setup
- Response Analysis - Direct SDK tracking
- Pricing - Custom pricing configuration
- Building - Development and publishing
## What This Package Does NOT Do
- Guess exact billing from raw text
- Replace provider SDKs
- Upload data anywhere automatically
- Require a backend or SaaS