
Kelet SDK - OpenTelemetry integration for AI observability


Kelet

Automated Root Cause Analysis for AI Agents

Agent failures take weeks to diagnose manually. Kelet runs 24/7 deep diagnosis and suggests targeted fixes.

(Diagram: the Kelet workflow)

Kelet analyzes production failures 24/7. Each trace takes 15-25 minutes to debug manually—finding patterns requires analyzing hundreds of traces. That's weeks of engineering time per root cause. Kelet does this automatically, surfacing issues like data imbalance, concept drift, prompt poisoning, and model laziness hidden in production noise.
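A back-of-envelope calculation makes the "weeks" claim concrete (the numbers below are illustrative midpoints, not measurements):

```python
# Rough cost of manual triage, assuming 20 minutes per trace and
# 300 traces reviewed to isolate one repeatable failure pattern.
minutes_per_trace = 20
traces_reviewed = 300

total_hours = minutes_per_trace * traces_reviewed / 60
work_weeks = total_hours / 40  # 40-hour work week

print(f"{total_hours:.0f} hours ~= {work_weeks:.1f} work-weeks per root cause")
```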


What Kelet Does

Kelet runs 24/7 analyzing every production trace:

  1. Captures every interaction, user signal, and failure context automatically
  2. Analyzes hundreds of failures in parallel to detect repeatable patterns
  3. Identifies root causes (data issues, prompt problems, model behavior)
  4. Delivers targeted fixes, not just dashboards

Unlike observability tools that show you data, Kelet analyzes it and tells you what to fix.

Not magic: Kelet is in alpha. It won't catch everything yet and sometimes needs your guidance. But it's already doing analysis that would take weeks manually.

Three lines of code to start.

Installation

Using uv (recommended):

uv add kelet

Or using pip:

pip install kelet

Set your API key:

export KELET_API_KEY=your_api_key
export KELET_PROJECT=production  # Required — create a project at console.kelet.ai

Or configure in code:

import kelet

kelet.configure(
    api_key="your_api_key",
    project="production"  # Groups traces by project/environment
)

Quick Start

import kelet

kelet.configure()  # Auto-instruments pydantic-ai, Anthropic, OpenAI, LangChain/LangGraph, and LiteLLM; captures Google ADK OTEL spans

# Your agent code works as-is - instrumentation is automatic
result = await agent.run("Book a flight to NYC")

# Optionally capture user feedback
await kelet.signal(
    kind=kelet.SignalKind.FEEDBACK,
    source=kelet.SignalSource.HUMAN,
    score=0.0,  # User unhappy? Kelet analyzes why.
    # Best-effort by default: request failures warn and return.
    # Pass raise_on_failure=True if you want to surface them.
)

That's it. Kelet now runs 24/7 analyzing every trace, clustering failure patterns, and identifying root causes—work that would take weeks manually.

Manual Session Grouping (Optional)

If your framework doesn't support session tracking, or you want custom session IDs:

with kelet.agentic_session(session_id="user-123-request-456"):
    result = await agent.run("Book a flight to NYC")

Also works as a decorator:

@kelet.agentic_session(session_id="user-123-request-456")
async def handle_request():
    return await agent.run("Book a flight to NYC")

But most users don't need this—instrumentation captures sessions automatically from pydantic-ai and other supported frameworks.

Using Different Projects Under the Same Application

If your application hosts multiple independent root agent systems that belong to different Kelet projects (for example customer_support_prod and billing_prod), you can override the global project on a per-session basis using the project parameter on agentic_session:

import kelet

kelet.configure(api_key="your_api_key", project="customer_support")

# Spans inside this session are attributed to "customer_support_prod"
async with kelet.agentic_session(session_id="sess-123", user_id="user-1", project="customer_support_prod"):
    result = await customer_support_agent.run("How do I return my order?")

# Spans inside this session are attributed to "billing_prod"
async with kelet.agentic_session(session_id="sess-456", user_id="user-2", project="billing_prod"):
    result = await billing_agent.run("Show my invoice for last month")

The project override is automatically propagated via W3C Baggage to any downstream services that use the Kelet SDK. Those services will stamp the correct kelet.project, session_id, and user_id on their spans without needing to call agentic_session themselves — the baggage carrier handles it transparently across process boundaries.
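W3C Baggage crosses process boundaries as a plain comma-separated `baggage` HTTP header. A sketch of what a carrier does on the wire (the key names here are illustrative, not Kelet's exact wire format):

```python
from urllib.parse import quote, unquote

def to_baggage_header(entries: dict) -> str:
    # W3C Baggage: comma-separated key=value pairs, values percent-encoded.
    return ",".join(f"{k}={quote(v)}" for k, v in entries.items())

def from_baggage_header(header: str) -> dict:
    pairs = (item.split("=", 1) for item in header.split(",") if item)
    return {k.strip(): unquote(v) for k, v in pairs}

# Upstream service serializes its context into the outgoing request...
header = to_baggage_header({
    "kelet.project": "billing_prod",  # per-session project override
    "session_id": "sess-456",
    "user_id": "user-2",
})

# ...and the downstream service parses it to stamp its own spans.
restored = from_baggage_header(header)
print(restored["kelet.project"])
```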

Agent Spans (Optional)

Use kelet.agent() to create an explicit OTEL span wrapping a named agent invocation. All LLM calls inside become children of that span, making your trace tree readable.

async with kelet.agentic_session(session_id="sess-123", user_id="user-1"):
    async with kelet.agent(name="support-bot"):
        result = await anthropic_client.messages.create(...)

Also works as a decorator:

@kelet.agentic_session(session_id="sess-123")
@kelet.agent(name="support-bot")
async def handle(request):
    return await anthropic_client.messages.create(...)

Multiple agents in one session are supported — each gets its own labeled span:

async with kelet.agentic_session(session_id="sess-123"):
    async with kelet.agent(name="classifier"):
        label = await openai_client.chat.completions.create(...)
    async with kelet.agent(name="responder"):
        reply = await anthropic_client.messages.create(...)

Auto-Instrumentation

kelet.configure() automatically detects supported libraries and configures available integrations — no extra code needed. This works whether Kelet creates the global TracerProvider or attaches to an existing one.

Library               | Install extra                             | How it works
----------------------|-------------------------------------------|-------------
Pydantic AI           | (included)                                | Instrumented automatically
Anthropic SDK         | pip install kelet[anthropic]              | OpenInference instrumentation
OpenAI SDK            | pip install kelet[openai]                 | OpenInference instrumentation
LangChain / LangGraph | pip install kelet[langchain]              | OpenInference instrumentation
LiteLLM               | pip install litellm                       | Registers LiteLLM's native OTEL callback automatically; prefers per-request spans
Google ADK            | pip install google-adk kelet[google-adk]  | Prefers OpenInference instrumentation; falls back to native ADK OTEL spans

Install all OpenInference extras at once: pip install kelet[all]

If a library isn't installed, Kelet silently skips it — no errors.
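The "skip if not installed" behavior is the standard optional-dependency pattern in Python. A sketch of how such detection typically works (not Kelet's actual code; the integration names are taken from the table above):

```python
import importlib.util

def has_module(name: str) -> bool:
    """True if the package is importable, without actually importing it."""
    return importlib.util.find_spec(name) is not None

# Only wire up integrations whose libraries are present; missing ones
# are skipped silently instead of raising ImportError at configure time.
INTEGRATIONS = ["pydantic_ai", "anthropic", "openai", "langchain", "litellm"]
available = [m for m in INTEGRATIONS if has_module(m)]
print(available)
```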

Note on bare LiteLLM: Due to a LiteLLM limitation, session and agent context are not propagated natively into LiteLLM spans. If you call LiteLLM directly (not through another instrumented framework), wrap your calls with kelet.agentic_session(...) and/or kelet.agent(...) to group them. When LiteLLM is used inside another supported framework — for example Google ADK, which commonly uses LiteLLM under the hood — the parent span provides session/agent context and no extra wrapping is needed.

Easy Feedback UI for React

Building a React frontend? Use the Kelet Feedback UI component for instant implicit and explicit feedback collection. See the live demo and documentation for full integration guide.

Works with Your Observability Stack

Already using Logfire or another OTEL provider? Kelet integrates seamlessly:

import logfire
import kelet

logfire.configure()
logfire.instrument_pydantic_ai()

kelet.configure()  # Adds Kelet's processor and auto-instrumentation to your existing OTEL setup

What Gets Captured

Kelet is built on OpenTelemetry and supports multiple semantic conventions for AI/LLM observability:

Semantic Convention        | Supported Frameworks
---------------------------|---------------------
GenAI Semantic Conventions | Pydantic AI, LiteLLM, Google ADK, Langfuse SDK
Vercel AI SDK              | Next.js, Vercel AI
OpenInference              | Arize Phoenix
OpenLLMetry / Traceloop    | LangChain, LangGraph, LlamaIndex, OpenAI SDK, Anthropic SDK

Any framework that exports OpenTelemetry traces using the GenAI semantic conventions will work automatically.
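For reference, a chat-completion span following the OpenTelemetry GenAI semantic conventions carries namespaced attributes like the ones below (the attribute keys are from the convention; the values and the detection helper are illustrative):

```python
# Example attribute set on a single LLM-call span.
span_attributes = {
    "gen_ai.operation.name": "chat",
    "gen_ai.system": "anthropic",          # provider
    "gen_ai.request.model": "example-model",
    "gen_ai.usage.input_tokens": 512,
    "gen_ai.usage.output_tokens": 128,
}

def is_genai_span(attrs: dict) -> bool:
    # GenAI spans are recognizable by their "gen_ai." attribute namespace.
    return any(key.startswith("gen_ai.") for key in attrs)

print(is_genai_span(span_attributes))
```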

Captured data includes:

  • LLM calls: Model, provider, tokens, latency, errors
  • Agent sessions: Multi-step interactions grouped by user session
  • Custom context: User IDs, session metadata, business-specific attributes

All captured automatically when you instrument with kelet.configure().


Configuration

Set via environment variables:

export KELET_API_KEY=your_api_key    # Required
export KELET_PROJECT=production      # Required — create a project at console.kelet.ai
export KELET_API_URL=https://...     # Optional, defaults to api.kelet.ai

Or pass directly to configure():

kelet.configure(
    api_key="your_api_key",
    project="production",
    auto_instrument=True  # Auto-instruments pydantic-ai, Anthropic, OpenAI, LangChain/LangGraph, and LiteLLM; captures Google ADK OTEL spans
)

API Reference

Core Functions:

# Initialize SDK
kelet.configure(api_key=None, project=None, auto_instrument=True, span_processor=None)

# Group operations by session for failure correlation
# Works as context manager (sync + async) and decorator
with kelet.agentic_session(session_id="session-id", user_id="user-id", project="project-override", env="production"):  # user_id optional; project overrides global config
    result = await agent.run(...)

# Wrap a named agent invocation in an explicit OTEL span
# Works as context manager (sync + async) and decorator
async with kelet.agent(name="my-agent"):
    result = await llm_client.messages.create(...)

# Capture user feedback
await kelet.signal(
    kind=kelet.SignalKind.FEEDBACK,       # feedback | edit | event | metric | arbitrary
    source=kelet.SignalSource.HUMAN,      # human | label | synthetic
    score=0.0,                            # 0.0 to 1.0
    # raise_on_failure=True,             # Optional: re-raise request failures
)

# Access current context
session_id = kelet.get_session_id()
trace_id = kelet.get_trace_id()
user_id = kelet.get_user_id()
agent_name = kelet.get_agent_name()      # Set by kelet.agent()
metadata = kelet.get_metadata_kwargs()   # Set by agentic_session(**kwargs)

# Manual shutdown (automatic on exit)
kelet.shutdown()

Production-Ready

The SDK never disrupts your application:

  • Async: Telemetry exports in background, zero blocking
  • Fail-safe: Telemetry export and signal() delivery are best-effort by default
  • Visible: Delivery failures warn in logs; pass raise_on_failure=True to signal() to surface them
  • Graceful: If Kelet is down, your agent keeps running
  • Auto-flush: Spans exported automatically on process exit
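The "fail-safe by default, opt-in raise" contract described above can be pictured like this (a sketch of the pattern; `deliver` is a hypothetical network call, not an SDK function):

```python
import logging

logger = logging.getLogger("kelet.sketch")

def deliver(payload: dict):
    # Hypothetical network call; here it always fails for demonstration.
    raise ConnectionError("backend unreachable")

def send_signal(payload: dict, raise_on_failure: bool = False):
    try:
        deliver(payload)
    except Exception as exc:
        if raise_on_failure:
            raise  # caller explicitly asked to see delivery failures
        logger.warning("signal delivery failed: %s", exc)  # warn and move on

send_signal({"score": 0.0})  # logs a warning, never disrupts the app
```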

Alpha Status

Kelet is in alpha. What this means:

  • It works: Already analyzing thousands of production traces for early users
  • Not perfect: Won't catch every failure pattern yet, sometimes needs guidance
  • Improving fast: The AI learns from more production data every day
  • We need feedback: Help us make it better—tell us what it catches and what it misses

Even in alpha, Kelet does analysis that would take your team weeks to do manually.

The alternative? Spending 15-25 minutes per trace, across hundreds of failures, trying to spot patterns by hand. Most teams just don't do it, and ship broken agents.


Learn More

  • Website: kelet.ai
  • Early Access: We're onboarding teams with production AI agents
  • Support: GitHub Issues

Built for teams shipping mission-critical AI agents.


License

MIT License — see LICENSE.md for details.

