
AgentBill LangChain Integration

Automatic usage tracking and billing for LangChain applications.


Installation

Install via pip:

pip install agentbill-langchain

With OpenAI support:

pip install agentbill-langchain[openai]

With Anthropic support:

pip install agentbill-langchain[anthropic]

Quick Start

from agentbill_langchain import AgentBillCallback
from langchain_openai import ChatOpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

# 1. Initialize AgentBill callback
callback = AgentBillCallback(
    api_key="agb_your_api_key_here",  # Get from AgentBill dashboard
    base_url="https://api.agentbill.io",
    customer_id="customer-123",
    debug=True
)

# 2. Create LangChain chain with callback
llm = ChatOpenAI(model="gpt-4o-mini")
prompt = PromptTemplate.from_template("Tell me a joke about {topic}")
chain = LLMChain(llm=llm, prompt=prompt)

# 3. Run - everything is auto-tracked!
result = chain.invoke(
    {"topic": "programming"},
    config={"callbacks": [callback]}
)

print(result["text"])

# ✅ Automatically captured:
# - Prompt text (hashed for privacy)
# - Model name (gpt-4o-mini)
# - Provider (openai)
# - Token usage (prompt + completion)
# - Latency (ms)
# - Costs (calculated automatically)

Features

  • Zero-config instrumentation - Just add the callback
  • Automatic token tracking - Captures all LLM calls
  • Multi-provider support - OpenAI, Anthropic, any LangChain LLM
  • Chain tracking - Tracks entire chain executions
  • Cost calculation - Auto-calculates costs per model
  • Prompt profitability - Compare costs vs revenue
  • OpenTelemetry compatible - Standard observability

Advanced Usage

Track Custom Revenue

# Track revenue for profitability analysis
callback.track_revenue(
    event_name="chat_completion",
    revenue=0.50,  # What you charged the customer
    metadata={"subscription_tier": "pro"}
)

Use with Agents

from langchain.agents import initialize_agent, load_tools

tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(
    tools,
    llm,
    agent="zero-shot-react-description",
    callbacks=[callback]  # Add callback here
)

# All agent steps auto-tracked!
response = agent.run("What is 25% of 300?")

Use with Sequential Chains

from langchain.chains import SimpleSequentialChain

# All chain steps tracked automatically
overall_chain = SimpleSequentialChain(
    chains=[chain1, chain2, chain3],
    callbacks=[callback]
)

result = overall_chain.run(input_text)

Configuration

callback = AgentBillCallback(
    api_key="agb_...",           # Required - get from dashboard
    base_url="https://...",      # Required - your AgentBill instance
    customer_id="customer-123",  # Optional - for multi-tenant apps
    account_id="account-456",    # Optional - for account-level tracking
    debug=True,                  # Optional - enable debug logging
    batch_size=10,               # Optional - batch signals before sending
    flush_interval=5.0           # Optional - flush interval in seconds
)

How It Works

The callback hooks into LangChain's lifecycle:

  1. on_llm_start - Captures prompt, model, provider
  2. on_llm_end - Captures tokens, latency, response
  3. on_llm_error - Captures errors and retries
  4. on_chain_start - Tracks chain execution start
  5. on_chain_end - Tracks chain completion

All data is sent to AgentBill via the unified OTEL pipeline (otel-collector endpoint) with proper authentication.
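To illustrate the lifecycle above, here is a stripped-down handler showing how such hooks can capture model, tokens, and latency (a simplified stand-in, not AgentBill's actual implementation):

```python
import time

class UsageTracker:
    """Minimal callback-style handler mirroring the LangChain lifecycle hooks."""

    def __init__(self):
        self.records = []
        self._start = None
        self._model = None

    def on_llm_start(self, model, prompts):
        # Capture model name and start the latency clock
        self._model = model
        self._start = time.monotonic()

    def on_llm_end(self, token_usage):
        # Record tokens and elapsed time for this call
        latency_ms = (time.monotonic() - self._start) * 1000
        self.records.append({
            "model": self._model,
            "prompt_tokens": token_usage.get("prompt_tokens", 0),
            "completion_tokens": token_usage.get("completion_tokens", 0),
            "latency_ms": latency_ms,
        })

    def on_llm_error(self, error):
        # Errors are recorded too, so failed calls are still visible
        self.records.append({"model": self._model, "error": str(error)})
```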

Supported Models

Auto-cost calculation for:

  • OpenAI: GPT-4, GPT-4o, GPT-3.5-turbo, etc.
  • Anthropic: Claude 3.5 Sonnet, Claude 3 Opus, etc.
  • Any LangChain-compatible LLM
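Per-model cost calculation generally multiplies token counts by per-million-token prices. A sketch of that arithmetic (the price figures below are placeholders for illustration, not AgentBill's actual price table; real provider prices change over time):

```python
# Illustrative per-1M-token USD prices; real prices vary by provider and date.
PRICES = {
    "gpt-4o-mini": {"input": 0.15, "output": 0.60},
    "claude-3-5-sonnet": {"input": 3.00, "output": 15.00},
}

def estimate_cost(model, prompt_tokens, completion_tokens):
    """Return the estimated USD cost of one call, or None for unknown models."""
    p = PRICES.get(model)
    if p is None:
        return None
    return (prompt_tokens * p["input"] + completion_tokens * p["output"]) / 1_000_000
```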

Troubleshooting

Not seeing data in the dashboard?

  1. Confirm your API key is correct
  2. Enable debug=True to see request logs
  3. Verify base_url points at your AgentBill instance
  4. Check network connectivity to AgentBill

Token counts are zero?

  • Some LLMs don't return token usage
  • The callback falls back to an estimate based on response length
  • OpenAI and Anthropic provide accurate counts
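A common fallback heuristic when a provider omits usage data is roughly four characters per token for English text. A sketch of that estimate (the 4-chars-per-token ratio is an assumption for illustration, not the callback's documented behavior):

```python
def estimate_tokens(text, chars_per_token=4):
    """Rough token estimate for providers that omit usage data (~4 chars/token)."""
    return max(1, len(text) // chars_per_token)
```

Estimates like this are fine for trend monitoring but should not be used for exact billing; prefer provider-reported counts whenever available.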

License

MIT
