Kelet SDK - OpenTelemetry integration for AI observability

Project description

Kelet

Automated Root Cause Analysis for AI Agents

Agent failures take weeks to diagnose manually. Kelet runs deep diagnosis 24/7 and suggests targeted fixes.

Kelet workflow

Kelet analyzes production failures 24/7. Debugging a single trace manually takes 15-25 minutes, and finding patterns requires analyzing hundreds of traces: weeks of engineering time per root cause. Kelet does this automatically, surfacing issues like data imbalance, concept drift, prompt poisoning, and model laziness hidden in production noise.


What Kelet Does

Kelet runs 24/7 analyzing every production trace:

  1. Captures every interaction, user signal, and failure context automatically
  2. Analyzes hundreds of failures in parallel to detect repeatable patterns
  3. Identifies root causes (data issues, prompt problems, model behavior)
  4. Delivers targeted fixes, not just dashboards

Unlike observability tools that show you data, Kelet analyzes it and tells you what to fix.

Not magic: Kelet is in alpha. It won't catch everything yet and sometimes needs your guidance, but it's already doing analysis that would take weeks manually.

Three lines of code to start.

Installation

Using uv (recommended):

uv add kelet

Or using pip:

pip install kelet

Set your API key:

export KELET_API_KEY=your_api_key
export KELET_PROJECT=production  # Optional: organize traces by environment

Or configure in code:

kelet.configure(
    api_key="your_api_key",
    project="production"  # Groups traces by project/environment
)

Quick Start

import kelet

kelet.configure()  # Auto-instruments pydantic-ai, Anthropic SDK, OpenAI SDK, LangChain/LangGraph

# Your agent code works as-is - instrumentation is automatic
result = await agent.run("Book a flight to NYC")

# Optionally capture user feedback
await kelet.signal(
    kind=kelet.SignalKind.FEEDBACK,
    source=kelet.SignalSource.HUMAN,
    score=0.0,  # User unhappy? Kelet analyzes why.
)

That's it. Kelet now runs 24/7 analyzing every trace, clustering failure patterns, and identifying root causes—work that would take weeks manually.

Manual Session Grouping (Optional)

If your framework doesn't support session tracking, or you want custom session IDs:

with kelet.agentic_session(session_id="user-123-request-456"):
    result = await agent.run("Book a flight to NYC")

Also works as a decorator:

@kelet.agentic_session(session_id="user-123-request-456")
async def handle_request():
    result = await agent.run("Book a flight to NYC")

But most users don't need this—instrumentation captures sessions automatically from pydantic-ai and other supported frameworks.

Using Different Projects Under the Same Application

If your application hosts multiple independent root agent systems that belong to different Kelet projects (for example customer_support_prod and billing_prod), you can override the global project on a per-session basis using the project parameter on agentic_session:

import kelet

kelet.configure(api_key="your_api_key", project="default-project")

# Spans inside this session are attributed to "customer_support_prod"
async with kelet.agentic_session(session_id="sess-123", user_id="user-1", project="customer_support_prod"):
    result = await customer_support_agent.run("How do I return my order?")

# Spans inside this session are attributed to "billing_prod"
async with kelet.agentic_session(session_id="sess-456", user_id="user-2", project="billing_prod"):
    result = await billing_agent.run("Show my invoice for last month")

The project override is automatically propagated via W3C Baggage to any downstream services that use the Kelet SDK. Those services will stamp the correct kelet.project, session_id, and user_id on their spans without needing to call agentic_session themselves — the baggage carrier handles it transparently across process boundaries.
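As a rough illustration of the mechanism (not Kelet's internals), the propagation above behaves like a context-local key/value map, which is what W3C Baggage provides across process boundaries. The sketch below models it in-process with `contextvars`; `agentic_session` and `current_project` here are simplified stand-ins, not the SDK's real implementation:

```python
from contextlib import contextmanager
from contextvars import ContextVar
from typing import Optional

# Hypothetical stand-in for W3C Baggage: a context-local key/value map.
_baggage: ContextVar[dict] = ContextVar("baggage", default={})

@contextmanager
def agentic_session(session_id: str, project: Optional[str] = None):
    entries = {**_baggage.get(), "session_id": session_id}
    if project is not None:
        entries["kelet.project"] = project  # per-session project override
    token = _baggage.set(entries)
    try:
        yield
    finally:
        _baggage.reset(token)  # restore the outer context on exit

def current_project(default: str = "default-project") -> str:
    # Downstream code reads the propagated value without explicit plumbing.
    return _baggage.get().get("kelet.project", default)

with agentic_session("sess-123", project="customer_support_prod"):
    assert current_project() == "customer_support_prod"
assert current_project() == "default-project"  # override scoped to the session
```

In the real SDK the map travels over the wire as a Baggage header, so the same read-without-plumbing behavior holds in downstream services.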

Agent Spans (Optional)

Use kelet.agent() to create an explicit OTEL span wrapping a named agent invocation. All LLM calls inside become children of that span, making your trace tree readable.

async with kelet.agentic_session(session_id="sess-123", user_id="user-1"):
    async with kelet.agent(name="support-bot"):
        result = await anthropic_client.messages.create(...)

Also works as a decorator:

@kelet.agentic_session(session_id="sess-123")
@kelet.agent(name="support-bot")
async def handle(request):
    return await anthropic_client.messages.create(...)

Multiple agents in one session are supported — each gets its own labeled span:

async with kelet.agentic_session(session_id="sess-123"):
    async with kelet.agent(name="classifier"):
        label = await openai_client.chat.completions.create(...)
    async with kelet.agent(name="responder"):
        reply = await anthropic_client.messages.create(...)

Auto-Instrumentation

kelet.configure() automatically detects and instruments supported libraries — no extra code needed:

Library                  Install extra
Pydantic AI              (included)
Anthropic SDK            pip install kelet[anthropic]
OpenAI SDK               pip install kelet[openai]
LangChain / LangGraph    pip install kelet[langchain]

All four at once: pip install kelet[all]

If a library isn't installed, Kelet silently skips it — no errors.
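This kind of optional instrumentation usually comes down to a guarded import. A minimal sketch of the pattern (Kelet's actual detection logic may differ):

```python
import importlib
import importlib.util

def try_instrument(module_name: str) -> bool:
    """Instrument a library only if it is importable; otherwise skip quietly."""
    if importlib.util.find_spec(module_name) is None:
        return False  # library not installed: skip, raise no error
    module = importlib.import_module(module_name)
    # ... attach instrumentation hooks to `module` here ...
    return True

print(try_instrument("json"))               # stdlib module is present: True
print(try_instrument("not_a_real_library")) # missing: silently skipped, False
```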

Easy Feedback UI for React

Building a React frontend? Use the Kelet Feedback UI component for instant implicit and explicit feedback collection. See the live demo and documentation for the full integration guide.

Works with Your Observability Stack

Already using Logfire or another OTEL provider? Kelet integrates seamlessly:

import logfire
import kelet

logfire.configure()
logfire.instrument_pydantic_ai()

kelet.configure()  # Adds Kelet's processor to your existing setup

What Gets Captured

Kelet is built on OpenTelemetry and supports multiple semantic conventions for AI/LLM observability:

Semantic Convention          Supported Frameworks
GenAI Semantic Conventions   Pydantic AI, LiteLLM, Langfuse SDK
Vercel AI SDK                Next.js, Vercel AI
OpenInference                Arize Phoenix
OpenLLMetry / Traceloop      LangChain, LangGraph, LlamaIndex, OpenAI SDK, Anthropic SDK

Any framework that exports OpenTelemetry traces using the GenAI semantic conventions will work automatically.

Captured data includes:

  • LLM calls: Model, provider, tokens, latency, errors
  • Agent sessions: Multi-step interactions grouped by user session
  • Custom context: User IDs, session metadata, business-specific attributes

All captured automatically when you instrument with kelet.configure().


Configuration

Set via environment variables:

export KELET_API_KEY=your_api_key    # Required
export KELET_PROJECT=production      # Optional, defaults to "default"
export KELET_API_URL=https://...     # Optional, defaults to api.kelet.ai

Or pass directly to configure():

kelet.configure(
    api_key="your_api_key",
    project="production",
    auto_instrument=True  # Auto-instruments pydantic-ai, Anthropic, OpenAI, LangChain/LangGraph
)
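Both mechanisms can coexist; explicit arguments to configure() typically take precedence over environment variables. A small sketch of that resolution order (illustrative, not Kelet's exact logic):

```python
import os

def resolve_setting(explicit, env_var: str, default=None):
    """Explicit argument wins, then the environment variable, then the default."""
    if explicit is not None:
        return explicit
    return os.environ.get(env_var, default)

os.environ["KELET_PROJECT"] = "production"
print(resolve_setting(None, "KELET_PROJECT", "default"))       # production (from env)
print(resolve_setting("staging", "KELET_PROJECT", "default"))  # staging (explicit wins)
```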

API Reference

Core Functions:

# Initialize SDK
kelet.configure(api_key=None, project=None, auto_instrument=True)

# Group operations by session for failure correlation
# Works as context manager (sync + async) and decorator
with kelet.agentic_session(session_id="session-id", user_id="user-id", project="project-override"):  # project optional
    result = await agent.run(...)

# Wrap a named agent invocation in an explicit OTEL span
# Works as context manager (sync + async) and decorator
async with kelet.agent(name="my-agent"):
    result = await llm_client.messages.create(...)

# Capture user feedback
await kelet.signal(
    kind=kelet.SignalKind.FEEDBACK,       # feedback | edit | event | metric | arbitrary
    source=kelet.SignalSource.HUMAN,      # human | label | synthetic
    score=0.0,                            # 0.0 to 1.0
)

# Access current context
session_id = kelet.get_session_id()
trace_id = kelet.get_trace_id()
user_id = kelet.get_user_id()
agent_name = kelet.get_agent_name()  # Set by kelet.agent()

# Manual shutdown (automatic on exit)
kelet.shutdown()

Production-Ready

The SDK never disrupts your application:

  • Async: Telemetry exports in background, zero blocking
  • Fail-safe: Network errors handled silently, no exceptions raised
  • Graceful: If Kelet is down, your agent keeps running
  • Auto-flush: Spans exported automatically on process exit
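The bullets above describe a standard background-exporter design: enqueue spans on the hot path, ship them from a worker thread, and swallow transport errors. A simplified sketch of that shape (illustrative only, not Kelet's implementation):

```python
import queue
import threading

class FailSafeExporter:
    """Sketch of a non-blocking, fail-safe span exporter."""

    def __init__(self, send):
        self._send = send                      # network call; may raise
        self._queue: queue.Queue = queue.Queue()
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def export(self, span) -> None:
        self._queue.put(span)                  # returns immediately: zero blocking

    def _drain(self) -> None:
        while True:
            span = self._queue.get()
            if span is None:                   # shutdown sentinel
                break
            try:
                self._send(span)
            except Exception:
                pass                           # network errors swallowed: the app keeps running

    def shutdown(self) -> None:
        self._queue.put(None)                  # drain remaining spans, then stop
        self._worker.join(timeout=5)
```

Because `export()` only enqueues, a slow or unreachable backend never blocks the agent, and the sentinel-based `shutdown()` gives the auto-flush-on-exit behavior.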

Alpha Status

Kelet is in alpha. What this means:

  • It works: Already analyzing thousands of production traces for early users
  • Not perfect: Won't catch every failure pattern yet, sometimes needs guidance
  • Improving fast: The AI learns from more production data every day
  • We need feedback: Help us make it better—tell us what it catches and what it misses

Even in alpha, Kelet does analysis that would take your team weeks to do manually.

The alternative? Manually analyzing traces at 15-25 minutes each, across hundreds of failures, trying to spot patterns by hand. Most teams simply don't do it, and ship broken agents instead.


Learn More

  • Website: kelet.ai
  • Early Access: We're onboarding teams with production AI agents
  • Support: GitHub Issues

Built for teams shipping mission-critical AI agents.


License

MIT License — see LICENSE.md for details.
