This package helps you track your LLM costs

Project description

a2a-llm-tracker

Track LLM usage and costs across providers (OpenAI, Gemini, Anthropic, etc.) from a single place.

Installation

pip install a2a-llm-tracker

Quick Start (Recommended Pattern)

For applications making multiple LLM calls, use a singleton pattern to initialize once and reuse everywhere.

Step 1: Create a tracking module

Create tracking.py in your project:

# tracking.py
import os
import asyncio
import concurrent.futures
from a2a_llm_tracker import init

_meter = None

def get_meter():
    """Get or initialize the global meter singleton."""
    global _meter
    if _meter is None:
        try:
            client_id = os.getenv("CLIENT_ID", "")
            client_secret = os.getenv("CLIENT_SECRET", "")

            # init() is async; run it on a fresh event loop in a worker
            # thread so this also works from code that already has a
            # running event loop (where asyncio.run() raises RuntimeError).
            with concurrent.futures.ThreadPoolExecutor() as executor:
                future = executor.submit(
                    asyncio.run,
                    init(client_id, client_secret, "my-app")
                )
                _meter = future.result(timeout=5)

        except Exception as e:
            print(f"LLM tracking initialization failed: {e}")
            return None
    return _meter
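The thread-pool indirection above can be factored into a reusable helper if several modules need to call async setup code from synchronous contexts. A stdlib-only sketch, with init() stubbed by a fake coroutine so the snippet is self-contained:

```python
import asyncio
import concurrent.futures

def run_coroutine_blocking(coro, timeout=5):
    """Run a coroutine to completion from synchronous code by giving it
    a fresh event loop in a worker thread. Works even when the calling
    thread already has a running loop (where asyncio.run() would raise)."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
        future = executor.submit(asyncio.run, coro)
        return future.result(timeout=timeout)

async def fake_init(app_name):
    # Stand-in for a2a_llm_tracker.init(...) in this sketch
    await asyncio.sleep(0)
    return f"meter:{app_name}"

print(run_coroutine_blocking(fake_init("my-app")))  # meter:my-app
```

The timeout on future.result() keeps a hung network call during initialization from blocking application startup indefinitely.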

Step 2: Use it anywhere

from openai import OpenAI
from a2a_llm_tracker import analyze_response, ResponseType
from tracking import get_meter

def call_openai(prompt: str):
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )

    # Track usage
    meter = get_meter()
    if meter:
        analyze_response(response, ResponseType.OPENAI, meter)

    return response
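If you have many call sites, the get_meter()/analyze_response pair can be factored into a decorator so tracking is applied uniformly. A sketch under stated assumptions: the real ResponseType, get_meter, and analyze_response are stubbed here so the snippet runs on its own:

```python
import functools

def tracked(response_type, get_meter, analyze_response):
    """Decorator: run the wrapped LLM call, then record its usage
    if (and only if) the meter initialized successfully."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            response = fn(*args, **kwargs)
            meter = get_meter()
            if meter is not None:
                analyze_response(response, response_type, meter)
            return response
        return inner
    return wrap

# Stubs standing in for the real package, so the sketch is runnable:
calls = []
fake_meter = object()

@tracked("OPENAI", lambda: fake_meter, lambda r, t, m: calls.append((r, t)))
def fake_call(prompt):
    return {"prompt": prompt}

fake_call("hi")
print(calls)  # [({'prompt': 'hi'}, 'OPENAI')]
```

As in the example above, the LLM response is always returned to the caller; a tracking failure never breaks the call itself.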

Environment Variables

Set your CCS credentials:

export CLIENT_ID=your_client_id
export CLIENT_SECRET=your_client_secret
export OPENAI_API_KEY=sk-xxxxx
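Because get_meter() degrades silently when credentials are missing, it can help to verify the environment at startup rather than discover the gap later. A minimal stdlib check, using the variable names from the exports above:

```python
import os

REQUIRED_VARS = ["CLIENT_ID", "CLIENT_SECRET", "OPENAI_API_KEY"]

def missing_env_vars(env=os.environ):
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

# Fail fast at startup instead of at the first LLM call
missing = missing_env_vars({"CLIENT_ID": "abc"})
print(missing)  # ['CLIENT_SECRET', 'OPENAI_API_KEY']
```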

Query Total Usage & Costs

Retrieve your accumulated costs and token usage from CCS:

import os
import asyncio
from a2a_llm_tracker import init
from a2a_llm_tracker.sources import CCSSource

async def get_total_usage():
    client_id = os.getenv("CLIENT_ID")
    client_secret = os.getenv("CLIENT_SECRET")

    await init(
        client_id=client_id,
        client_secret=client_secret,
        application_name="my-app",
    )

    source = CCSSource(int(client_id))  # CCSSource takes the numeric client ID
    total_cost = await source.count_cost()
    total_tokens = await source.count_total_tokens()

    print(f"Total cost: ${total_cost:.4f}")
    print(f"Total tokens: {total_tokens}")

asyncio.run(get_total_usage())
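count_cost() returns a running total, so per-call or per-batch costs come from the difference between two snapshots. A sketch of that bookkeeping, with the CCS call stubbed by a plain callable and hypothetical numbers:

```python
class CostDelta:
    """Track the incremental cost accrued since this object was created."""

    def __init__(self, read_total):
        self._read_total = read_total   # callable returning the current total cost
        self._baseline = read_total()   # snapshot taken at construction time

    def delta(self):
        return self._read_total() - self._baseline

# Stub standing in for CCSSource.count_cost() in this sketch
total = {"cost": 1.25}
tracker = CostDelta(lambda: total["cost"])
total["cost"] += 0.0042           # some LLM call happens
print(f"{tracker.delta():.4f}")   # 0.0042
```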

Supported Providers

Provider          ResponseType
----------------  ----------------------
OpenAI            ResponseType.OPENAI
Google Gemini     ResponseType.GEMINI
Anthropic         ResponseType.ANTHROPIC
Cohere            ResponseType.COHERE
Mistral           ResponseType.MISTRAL
Groq              ResponseType.GROQ
Together AI       ResponseType.TOGETHER
AWS Bedrock       ResponseType.BEDROCK
Google Vertex AI  ResponseType.VERTEX
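In multi-provider apps you often need to pick the ResponseType from a provider name at runtime. A sketch of such a dispatch table, stubbed with the member names as strings so it runs without the package installed; in real code, import ResponseType from a2a_llm_tracker and map to the enum members from the table above:

```python
# Map lowercase provider names to the ResponseType member name.
PROVIDER_TO_RESPONSE_TYPE = {
    "openai": "OPENAI",
    "gemini": "GEMINI",
    "anthropic": "ANTHROPIC",
    "cohere": "COHERE",
    "mistral": "MISTRAL",
    "groq": "GROQ",
    "together": "TOGETHER",
    "bedrock": "BEDROCK",
    "vertex": "VERTEX",
}

def response_type_for(provider: str) -> str:
    """Look up the ResponseType name for a provider, case-insensitively."""
    try:
        return PROVIDER_TO_RESPONSE_TYPE[provider.lower()]
    except KeyError:
        raise ValueError(f"Unsupported provider: {provider}")

print(response_type_for("OpenAI"))  # OPENAI
```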

Documentation

Full documentation is available on GitHub.

What This Package Does NOT Do

  • Guess exact billing from raw text
  • Replace provider SDKs
  • Upload data anywhere automatically
  • Require a backend or SaaS

Project details


Download files

Download the file for your platform.

Source Distribution

a2a_llm_tracker-0.0.11.tar.gz (31.3 kB)

Uploaded Source

Built Distribution


a2a_llm_tracker-0.0.11-py3-none-any.whl (35.4 kB)

Uploaded Python 3

File details

Details for the file a2a_llm_tracker-0.0.11.tar.gz.

File metadata

  • Download URL: a2a_llm_tracker-0.0.11.tar.gz
  • Upload date:
  • Size: 31.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.2

File hashes

Hashes for a2a_llm_tracker-0.0.11.tar.gz
Algorithm Hash digest
SHA256 89249575215c58ed175fc25075b57215176260363d2fad3c09ba8e51af329357
MD5 2f67a4a432b4ebbdd8b0852194c246cf
BLAKE2b-256 1e18351b74adc618b61326aa8608d6f5ef7386cd34d553013cab707c522969cd


File details

Details for the file a2a_llm_tracker-0.0.11-py3-none-any.whl.

File hashes

Hashes for a2a_llm_tracker-0.0.11-py3-none-any.whl
Algorithm Hash digest
SHA256 931c4a00d1797b3d1b36ce27bdc2d138f73873b25d7dcaa4737ffe0ed301c554
MD5 e6f9870870a819ac2c3c1da996d674f6
BLAKE2b-256 734b1443dfbf4c71d8c017bb4ac37ae03effcc3cf540ded9d2758f4db58671cc

