
Kelet SDK - OpenTelemetry integration for AI observability


Kelet

Automated Root Cause Analysis for AI Agents

Agent failures take weeks to diagnose manually. Kelet runs deep diagnosis 24/7 and suggests targeted fixes.

[Diagram: Kelet workflow]

Kelet analyzes production failures around the clock. Debugging a single trace manually takes 15-25 minutes, and finding patterns requires analyzing hundreds of traces; that's weeks of engineering time per root cause. Kelet does this automatically, surfacing issues such as data imbalance, concept drift, prompt poisoning, and model laziness hidden in production noise.


What Kelet Does

Kelet runs 24/7 analyzing every production trace:

  1. Captures every interaction, user signal, and failure context automatically
  2. Analyzes hundreds of failures in parallel to detect repeatable patterns
  3. Identifies root causes (data issues, prompt problems, model behavior)
  4. Delivers targeted fixes, not just dashboards

Unlike observability tools that show you data, Kelet analyzes it and tells you what to fix.

Not magic: Kelet is in alpha. It won't catch everything yet and sometimes needs your guidance. But it already performs analysis that would take weeks to do manually.

Three lines of code to start.

Installation

Using uv (recommended):

uv add kelet

Or using pip:

pip install kelet

Set your API key:

export KELET_API_KEY=your_api_key
export KELET_PROJECT=production  # Optional: organize traces by environment

Or configure in code:

kelet.configure(
    api_key="your_api_key",
    project="production"  # Groups traces by project/environment
)

Quick Start

import kelet

kelet.configure()  # Auto-instruments pydantic-ai and captures sessions

# Your agent code works as-is - instrumentation is automatic
result = await agent.run("Book a flight to NYC")

# Optionally capture user feedback
await kelet.signal(
    kind=kelet.SignalKind.FEEDBACK,
    source=kelet.SignalSource.HUMAN,
    score=0.0,  # User unhappy? Kelet analyzes why.
)

That's it. Kelet now runs 24/7 analyzing every trace, clustering failure patterns, and identifying root causes: work that would otherwise take weeks.

Manual Session Grouping (Optional)

If your framework doesn't support session tracking, or you want custom session IDs:

with kelet.agentic_session(session_id="user-123-request-456"):
    result = await agent.run("Book a flight to NYC")

Most users don't need this; instrumentation captures sessions automatically from pydantic-ai and other supported frameworks.
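
The SDK's internals are opaque to us, but conceptually a session-scoped context manager like kelet.agentic_session can be built on the standard library's contextvars, so the session ID stays correct across concurrent async tasks. A minimal sketch of that pattern (names and structure are illustrative, not the SDK's actual implementation):

```python
# Sketch of a session-scoped context manager built on contextvars.
# NOT Kelet's real implementation; it just illustrates the pattern
# that makes a session ID visible to telemetry emitted inside a block.
import contextvars
from contextlib import contextmanager

_session_id = contextvars.ContextVar("session_id", default=None)

@contextmanager
def agentic_session(session_id):
    """Expose session_id to anything that runs inside the with-block."""
    token = _session_id.set(session_id)
    try:
        yield
    finally:
        # Restore the previous value, even if the block raised.
        _session_id.reset(token)

def get_session_id():
    """Return the session ID of the current context, or None."""
    return _session_id.get()

with agentic_session("user-123-request-456"):
    inside = get_session_id()   # "user-123-request-456"
outside = get_session_id()      # None again after the block
```

Because contextvars values are per-task, two concurrently running agent requests each see their own session ID.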

Easy Feedback UI for React

Building a React frontend? Use the Kelet Feedback UI component for instant implicit and explicit feedback collection. See the live demo and documentation for the full integration guide.

Works with Your Observability Stack

Already using Logfire or another OTEL provider? Kelet integrates seamlessly:

import logfire
import kelet

logfire.configure()
logfire.instrument_pydantic_ai()

kelet.configure()  # Adds Kelet's processor to your existing setup

What Gets Captured

Kelet is built on OpenTelemetry and supports multiple semantic conventions for AI/LLM observability:

Semantic conventions and the frameworks that emit them:

  • GenAI Semantic Conventions: Pydantic AI, LiteLLM, Langfuse SDK
  • Vercel AI SDK: Next.js, Vercel AI
  • OpenInference: Arize Phoenix
  • OpenLLMetry / Traceloop: LangChain, LangGraph, LlamaIndex, OpenAI SDK, Anthropic SDK

Any framework that exports OpenTelemetry traces using one of these semantic conventions will work automatically.

Captured data includes:

  • LLM calls: Model, provider, tokens, latency, errors
  • Agent sessions: Multi-step interactions grouped by user session
  • Custom context: User IDs, session metadata, business-specific attributes

All captured automatically when you instrument with kelet.configure().
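
For a concrete picture of what "LLM calls" means on the wire, here is the shape of an LLM span under the OpenTelemetry GenAI semantic conventions. The attribute names are from that spec; the values are made up for illustration:

```python
# Example attributes on an LLM-call span under the OpenTelemetry
# GenAI semantic conventions. Any framework that emits spans shaped
# like this is picked up automatically. Values are illustrative.
llm_span_attributes = {
    "gen_ai.operation.name": "chat",            # kind of GenAI operation
    "gen_ai.system": "openai",                  # model provider
    "gen_ai.request.model": "gpt-4o",           # model that was requested
    "gen_ai.response.model": "gpt-4o-2024-08-06",
    "gen_ai.usage.input_tokens": 412,           # prompt tokens
    "gen_ai.usage.output_tokens": 128,          # completion tokens
}

# Token usage per call, e.g. for cost accounting:
total_tokens = (
    llm_span_attributes["gen_ai.usage.input_tokens"]
    + llm_span_attributes["gen_ai.usage.output_tokens"]
)  # 540
```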


Configuration

Set via environment variables:

export KELET_API_KEY=your_api_key    # Required
export KELET_PROJECT=production      # Optional, defaults to "default"
export KELET_API_URL=https://...     # Optional, defaults to api.kelet.ai

Or pass directly to configure():

kelet.configure(
    api_key="your_api_key",
    project="production",
    auto_instrument=True  # Instruments pydantic-ai automatically
)
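
Since both environment variables and configure() arguments are supported, it helps to be explicit about precedence: an explicit argument wins over the environment, which wins over the default. A small sketch of that resolution order (our sketch, not the SDK source):

```python
# Sketch of setting resolution: explicit argument > env var > default.
# resolve_setting is our illustrative helper, not a Kelet API.
import os

def resolve_setting(explicit, env_var, default):
    """Return the explicit value if given, else the env var, else the default."""
    if explicit is not None:
        return explicit
    return os.environ.get(env_var, default)

os.environ.pop("KELET_PROJECT", None)

# Explicit argument always wins:
a = resolve_setting("production", "KELET_PROJECT", "default")  # "production"

# Nothing set anywhere: fall back to the documented default.
b = resolve_setting(None, "KELET_PROJECT", "default")          # "default"

# Env var set, no explicit argument:
os.environ["KELET_PROJECT"] = "staging"
c = resolve_setting(None, "KELET_PROJECT", "default")          # "staging"
```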

API Reference

Core Functions:

# Initialize SDK
kelet.configure(api_key=None, project=None, auto_instrument=True)

# Group operations by session for failure correlation
with kelet.agentic_session(session_id="session-id"):
    result = await agent.run(...)

# Capture user feedback
await kelet.signal(
    kind=kelet.SignalKind.FEEDBACK,       # feedback | edit | event | metric | arbitrary
    source=kelet.SignalSource.HUMAN,      # human | label | synthetic
    score=0.0,                            # 0.0 to 1.0
)

# Access current context
session_id = kelet.get_session_id()
trace_id = kelet.get_trace_id()
user_id = kelet.get_user_id()

# Manual shutdown (automatic on exit)
kelet.shutdown()
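
Signal scores live on a 0.0-1.0 scale, so feedback collected on another scale (thumbs, 1-5 stars) should be normalized before calling kelet.signal. A hypothetical helper for that, not part of the SDK:

```python
# Hypothetical normalizer (not a Kelet API): map a 1..max_stars rating
# onto the 0.0-1.0 score scale that kelet.signal expects.
def to_signal_score(stars, max_stars=5):
    """Linearly map a star rating to [0.0, 1.0]."""
    if not 1 <= stars <= max_stars:
        raise ValueError(f"rating must be between 1 and {max_stars}")
    return (stars - 1) / (max_stars - 1)

worst = to_signal_score(1)   # 0.0 -> "user unhappy"
mid = to_signal_score(3)     # 0.5
best = to_signal_score(5)    # 1.0
```

Thumbs up/down maps the same way with max_stars=2: down becomes 0.0, up becomes 1.0.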

Production-Ready

The SDK never disrupts your application:

  • Async: Telemetry exports in background, zero blocking
  • Fail-safe: Network errors handled silently, no exceptions raised
  • Graceful: If Kelet is down, your agent keeps running
  • Auto-flush: Spans exported automatically on process exit
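
The fail-safe property above amounts to wrapping the export path so telemetry errors are swallowed instead of propagating into application code. Schematically (our sketch of the pattern, not Kelet's code):

```python
# Sketch of the fail-safe exporter pattern: any error raised while
# exporting spans is logged and swallowed, never re-raised into the
# application. Class names here are illustrative, not Kelet APIs.
import logging

logger = logging.getLogger("kelet-sketch")

class FailSafeExporter:
    """Wrap an exporter so telemetry failures never reach the app."""

    def __init__(self, inner):
        self._inner = inner

    def export(self, spans):
        try:
            self._inner.export(spans)
            return True
        except Exception as exc:  # telemetry must never break the agent
            logger.debug("telemetry export failed: %s", exc)
            return False

class FlakyExporter:
    """Stand-in for a backend that is currently unreachable."""

    def export(self, spans):
        raise ConnectionError("collector unreachable")

# The application keeps running even though the backend is down:
ok = FailSafeExporter(FlakyExporter()).export(["span-1"])  # False, no exception
```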

Alpha Status

Kelet is in alpha. What this means:

  • It works: Already analyzing thousands of production traces for early users
  • Not perfect: Won't catch every failure pattern yet, sometimes needs guidance
  • Improving fast: The AI learns from more production data every day
  • We need feedback: Help us make it better—tell us what it catches and what it misses

Even in alpha, Kelet does analysis that would take your team weeks to do manually.

The alternative? Manually spending 15-25 minutes per trace, across hundreds of failures, trying to spot patterns by hand. Most teams simply don't do it, and ship broken agents.


Learn More

  • Website: kelet.ai
  • Early Access: We're onboarding teams with production AI agents
  • Support: GitHub Issues

Built for teams shipping mission-critical AI agents.


License

MIT License — see LICENSE.md for details.
