
Unified LLM Observability & Multi-Model AI Integration Framework - Deploy to GPT, Claude, Gemini, Copilot with full telemetry.

Project description

Kalibr Python SDK

Production-grade observability and execution intelligence for LLM applications. Automatically instrument OpenAI, Anthropic, and Google AI SDKs with zero code changes.


Features

  • Zero-code instrumentation - Automatic tracing for OpenAI, Anthropic, and Google AI SDKs
  • Outcome-conditioned routing - Query for optimal models based on historical success rates
  • TraceCapsule - Cross-agent context propagation for multi-agent systems
  • Cost tracking - Real-time cost calculation for all LLM calls
  • Token monitoring - Track input/output tokens across providers
  • Framework integrations - LangChain, CrewAI, OpenAI Agents SDK

Installation

pip install kalibr

Quick Start

Auto-instrumentation (Recommended)

Simply import kalibr at the start of your application - all LLM calls are automatically traced:

import kalibr  # Must be FIRST import
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
# That's it. The call is automatically traced.

Manual Tracing with @trace Decorator

For more control, use the @trace decorator:

from kalibr import trace
from openai import OpenAI

@trace(operation="summarize", provider="openai", model="gpt-4o")
def summarize_text(text: str) -> str:
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Summarize the following text."},
            {"role": "user", "content": text}
        ]
    )
    return response.choices[0].message.content

Multi-Provider Example

import kalibr
from openai import OpenAI
from anthropic import Anthropic

# Both are automatically traced
openai_client = OpenAI()
anthropic_client = Anthropic()

gpt_response = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Explain quantum computing"}]
)

claude_response = anthropic_client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Explain machine learning"}]
)

Outcome-Conditioned Routing

Query Kalibr for optimal model recommendations based on real execution outcomes:

from kalibr import get_policy, report_outcome

# Before executing - get the best model for your goal
policy = get_policy(goal="book_meeting")
print(f"Use {policy['recommended_model']} - {policy['outcome_success_rate']:.0%} success rate")

# Execute with the recommended model
# ...

# After executing - report what happened
report_outcome(
    trace_id="abc123",
    goal="book_meeting",
    success=True
)

With Constraints

from kalibr import get_policy

policy = get_policy(
    goal="resolve_ticket",
    constraints={
        "max_cost_usd": 0.05,
        "max_latency_ms": 3000,
        "min_quality": 0.8
    }
)

Intelligent Routing with decide()

Register execution paths and let Kalibr decide the best strategy:

from kalibr import register_path, decide

# Register available paths
register_path(goal="book_meeting", model_id="gpt-4o", tool_id="calendar_api")
register_path(goal="book_meeting", model_id="claude-3-sonnet")

# Get intelligent routing decision
decision = decide(goal="book_meeting")
model = decision["model_id"]       # Selected based on outcomes
tool = decision.get("tool_id")     # If tool routing enabled
print(decision["exploration"])     # True if exploring new paths
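
The exploration flag reflects a standard explore/exploit trade-off: occasionally routing down a less-proven path keeps the outcome statistics fresh. A toy epsilon-greedy sketch of that idea (illustrative only, not Kalibr's actual routing algorithm):

```python
import random

def decide_toy(paths: dict[str, float], epsilon: float = 0.1, rng=random) -> dict:
    """paths maps model_id -> observed success rate for a goal."""
    if rng.random() < epsilon:
        # Explore: try a random path to gather fresh outcome data.
        return {"model_id": rng.choice(list(paths)), "exploration": True}
    # Exploit: pick the path with the best observed success rate.
    return {"model_id": max(paths, key=paths.get), "exploration": False}

decision = decide_toy({"gpt-4o": 0.92, "claude-3-sonnet": 0.88})
```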

Goal Context

Tag traces with goals for outcome tracking:

from kalibr import goal, set_goal, get_goal, clear_goal
from openai import OpenAI

client = OpenAI()

# Context manager (recommended)
with goal("book_meeting"):
    response = client.chat.completions.create(...)

# Or manual control
set_goal("book_meeting")
response = client.chat.completions.create(...)
clear_goal()

TraceCapsule - Cross-Agent Tracing

Propagate trace context across agent boundaries:

from kalibr import TraceCapsule, get_or_create_capsule

# Agent 1: Create capsule and add hop
capsule = get_or_create_capsule()
capsule.append_hop({
    "provider": "openai",
    "operation": "chat_completion",
    "model": "gpt-4o",
    "duration_ms": 150,
    "cost_usd": 0.002,
    "status": "success"
})

# Pass to Agent 2 via HTTP header
headers = {"X-Kalibr-Capsule": capsule.to_json()}

# Agent 2: Receive and continue
capsule = TraceCapsule.from_json(headers["X-Kalibr-Capsule"])
capsule.append_hop({
    "provider": "anthropic",
    "operation": "chat_completion",
    "model": "claude-3-5-sonnet-20241022",
    "duration_ms": 200,
    "cost_usd": 0.003,
    "status": "success"
})
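
Because the capsule serializes to plain JSON, any HTTP client or server can relay it. A rough sketch of the propagation pattern using a plain dict in place of TraceCapsule (the field names here are illustrative, not the SDK's exact schema):

```python
import json

# Agent 1: serialize the capsule into an HTTP header value.
capsule = {"trace_id": "abc123", "hops": [{"provider": "openai", "status": "success"}]}
headers = {"X-Kalibr-Capsule": json.dumps(capsule)}

# Agent 2: parse the header and append its own hop.
received = json.loads(headers["X-Kalibr-Capsule"])
received["hops"].append({"provider": "anthropic", "status": "success"})
```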

Framework Integrations

LangChain

pip install "kalibr[langchain]"

from kalibr_langchain import KalibrCallbackHandler
from langchain_openai import ChatOpenAI

handler = KalibrCallbackHandler()
llm = ChatOpenAI(model="gpt-4o", callbacks=[handler])
response = llm.invoke("What is the capital of France?")

See LangChain Integration Guide for full documentation.

CrewAI

pip install "kalibr[crewai]"

from kalibr_crewai import KalibrCrewAIInstrumentor
from crewai import Agent, Task, Crew

instrumentor = KalibrCrewAIInstrumentor()
instrumentor.instrument()

# Use CrewAI normally - all operations are traced

See CrewAI Integration Guide for full documentation.

OpenAI Agents SDK

pip install "kalibr[openai-agents]"

from kalibr_openai_agents import setup_kalibr_tracing
from agents import Agent, Runner

setup_kalibr_tracing()

agent = Agent(name="Assistant", instructions="You are helpful.")
result = Runner.run_sync(agent, "Hello!")

See OpenAI Agents Integration Guide for full documentation.

Configuration

Configure via environment variables:

Variable                  Description                      Default
KALIBR_API_KEY            API key for authentication       Required
KALIBR_TENANT_ID          Tenant identifier                default
KALIBR_COLLECTOR_URL      Collector endpoint URL           https://api.kalibr.systems/api/ingest
KALIBR_INTELLIGENCE_URL   Intelligence API URL             https://dashboard.kalibr.systems/intelligence
KALIBR_SERVICE_NAME       Service name for spans           kalibr-app
KALIBR_ENVIRONMENT        Environment (prod/staging/dev)   prod
KALIBR_WORKFLOW_ID        Workflow identifier              default
KALIBR_AUTO_INSTRUMENT    Enable auto-instrumentation      true
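
Only KALIBR_API_KEY has no default. The SDK reads these variables automatically; if you want to inspect the same settings from your own code, a sketch of resolving a few of them with the documented defaults (load_config is a hypothetical helper, not part of the SDK):

```python
import os

def load_config() -> dict:
    """Resolve Kalibr settings from the environment, applying the documented defaults."""
    api_key = os.environ.get("KALIBR_API_KEY")
    if not api_key:
        raise RuntimeError("KALIBR_API_KEY is required")
    return {
        "api_key": api_key,
        "tenant_id": os.environ.get("KALIBR_TENANT_ID", "default"),
        "environment": os.environ.get("KALIBR_ENVIRONMENT", "prod"),
        "service_name": os.environ.get("KALIBR_SERVICE_NAME", "kalibr-app"),
        "auto_instrument": os.environ.get("KALIBR_AUTO_INSTRUMENT", "true").lower() == "true",
    }
```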

CLI Commands

# Show version
kalibr version

# Validate configuration
kalibr validate

# Check connection status
kalibr status

# Package for deployment
kalibr package

# Update schemas
kalibr update_schemas

Supported Providers

Provider    Models                                          Auto-Instrumentation
OpenAI      GPT-4, GPT-4o, GPT-3.5                          Yes
Anthropic   Claude 3.5 Sonnet, Claude 3 Opus/Sonnet/Haiku   Yes
Google      Gemini Pro, Gemini Flash                        Yes

Development

git clone https://github.com/kalibr-ai/kalibr-sdk-python.git
cd kalibr-sdk-python

pip install -e ".[dev]"

# Run tests
pytest

# Format code
black kalibr/
ruff check kalibr/

Contributing

We welcome contributions! See CONTRIBUTING.md.

License

Apache 2.0 - see LICENSE.


Download files


Source Distribution

kalibr-1.2.2.tar.gz (84.1 kB)

Uploaded Source

Built Distribution


kalibr-1.2.2-py3-none-any.whl (96.4 kB)

Uploaded Python 3

File details

Details for the file kalibr-1.2.2.tar.gz.

File metadata

  • Download URL: kalibr-1.2.2.tar.gz
  • Upload date:
  • Size: 84.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.1

File hashes

Hashes for kalibr-1.2.2.tar.gz
Algorithm Hash digest
SHA256 a7e2e67e6895852066fec69cd8fd8561f5d8e5dcd980809172d0be76af519f48
MD5 07050297f37f91b50ec34385377a0fca
BLAKE2b-256 98b7ec462f691a38ed27f33e7a4cd1780f09b5739ca39e25ffd88efd57d0f01a


File details

Details for the file kalibr-1.2.2-py3-none-any.whl.

File metadata

  • Download URL: kalibr-1.2.2-py3-none-any.whl
  • Upload date:
  • Size: 96.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.1

File hashes

Hashes for kalibr-1.2.2-py3-none-any.whl
Algorithm Hash digest
SHA256 63d5066e8e564e85f00cf98166fd7ea3fa46d70329a6d0b58ec3a8f18431090e
MD5 4f148432d6824a1ac36843b5f3f5e664
BLAKE2b-256 dac495eb7c53a65e343493840b54d906c57809bdb9c54138d40fbaae90d12310

